Updating millions of records in Oracle


11-Jan-2020 18:41

“How to Update Millions of Records in a Table” is a question that was first asked more than a decade ago and whose answer has grown over the years, the last update to it being just a few days ago.

Think about the math involved in updating millions of rows on, for example, 1 million blocks (about 7.5 GB of table data). You will have to read each and every block into the buffer cache (I’ll assume a full table scan in this case). And all of that has to be written to disk before the commit as well. At this point, you’ve done 1 million block reads and 3 million block writes, and that doesn’t even count the REDO you’ve generated.

By using DDL instead, you will bypass all UNDO generation and have the opportunity, if appropriate, to skip REDO generation as well (by using NOLOGGING). At the very least, you will minimize the amount of REDO you generate. Additionally, you will bypass the inefficient buffer cache during this massive operation.
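The DDL approach described above can be sketched as follows. This is a minimal illustration, not the article's exact code: the table `big_table`, its columns, and the `UPPER(...)` transformation are all hypothetical stand-ins for whatever mass update you actually need.

```sql
-- Instead of: UPDATE big_table SET some_col = UPPER(some_col);
-- rebuild the table with direct-path DDL, applying the change during the copy.
-- DDL generates no UNDO for the new rows; NOLOGGING minimizes REDO (the exact
-- effect depends on ARCHIVELOG mode and any FORCE LOGGING setting).
CREATE TABLE big_table_new
  NOLOGGING
AS
SELECT pk_id,
       UPPER(some_col) AS some_col,   -- the "update", applied during the copy
       other_col
FROM   big_table;

-- Then re-create the indexes, constraints, grants, and triggers on the new
-- table, and swap it into place:
-- DROP TABLE big_table;
-- ALTER TABLE big_table_new RENAME TO big_table;
```

One caveat worth noting: because NOLOGGING writes are not in the redo stream, the new table cannot be recovered from archived logs, so take a fresh backup after such an operation, and be aware that a database set to FORCE LOGGING (common with standby databases) will ignore the NOLOGGING attribute.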