I wish every DBA would take proper care of their backups. If they did, there would be far less damage to people's nervous systems 😁 (easier said than done).
Nevertheless, there was a gap of roughly 100GB of lost archived redo log files, along with a backup that was 3 weeks old. I had to restore and open that Oracle database at any cost. Here is what I did:
1) I restored the database (controlfile, spfile, datafiles, etc.) from the level 0 backup and recovered it as far as possible using the level 1 backups. Needless to say, at that point the database was inconsistent (to put it mildly);
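That restore/recover step can be sketched in RMAN roughly like this (the autobackup source is an assumption; RECOVER will stop and error out when it hits the gap in the archived logs, which is expected here):

```sql
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE SPFILE FROM AUTOBACKUP;     -- or from an explicit backup piece
RMAN> STARTUP FORCE NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;                   -- pulls datafiles from the level 0 backup
RMAN> RECOVER DATABASE;                   -- applies level 1 incrementals and whatever
                                          -- archived logs survive, then fails at the gap
```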
2) I created a pfile, set the following parameters in it, and brought the database back to MOUNT state (right before opening with the RESETLOGS option):
"_allow_resetlogs_corruption" = TRUE
"_allow_error_simulation" = TRUE
undo_management = 'MANUAL'
3) ALTER DATABASE OPEN RESETLOGS;
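Steps 2 and 3 together look roughly like this (the pfile path is illustrative; the underscore parameters are unsupported and should only ever be used on a disposable copy of the database, for data extraction):

```sql
-- /tmp/init_forceopen.ora
*._allow_resetlogs_corruption=TRUE
*._allow_error_simulation=TRUE
*.undo_management='MANUAL'

-- then, in sqlplus / as sysdba:
SQL> STARTUP MOUNT PFILE='/tmp/init_forceopen.ora';
SQL> ALTER DATABASE OPEN RESETLOGS;
```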
In the end the database opened successfully (unexpectedly 😀), I recreated the undo tablespace, and extracted the data I needed.
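Recreating undo afterwards can be sketched like this (the tablespace names and datafile path are made up for illustration):

```sql
SQL> CREATE UNDO TABLESPACE undotbs2
       DATAFILE '/u01/oradata/DB/undotbs2_01.dbf' SIZE 2G AUTOEXTEND ON;
SQL> ALTER SYSTEM SET undo_tablespace = undotbs2 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET undo_management = AUTO SCOPE=SPFILE;
-- after a restart, drop the old (possibly corrupted) undo tablespace:
SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```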
That's it! Keep an eye on your backups!
PS1. You may need to recreate the controlfile during the recovery. A SQL dump of it taken right after the restoration may be useful, particularly when you need only a part of the database rather than the whole thing.
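One way to get such a SQL dump is the standard trace backup of the controlfile:

```sql
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/cf.sql';
-- cf.sql contains a CREATE CONTROLFILE statement that you can edit down
-- to list only the datafiles you actually need
```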
PS2. The gap between SCNs in the restored datafiles can be huge, and errors like ORA-00600: internal error code, arguments: [kcbzib_kcrsds_1], [], [], [], [], [], [], [], [], [], [], [] are possible. To overcome this, you may need to set the following event before opening the database with RESETLOGS:
# SCN surge; used instead of _minimum_giga_scn (from past versions).
# Level 1 increases the SCN by 1 million, level 3 by 3 million. Look at
# v$datafile_header for the datafiles and compare it with
# checkpoint_change# from v$database.
event="21307096 trace name context forever, level 2048"
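To size the required SCN bump, that comparison can be done with the database mounted:

```sql
SQL> SELECT file#, checkpoint_change# FROM v$datafile_header;
SQL> SELECT checkpoint_change# FROM v$database;
```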