Thursday, October 26, 2023

RMAN-06054: media recovery requesting unknown archived log

I wish every DBA took proper care of their backups. If they did, there would be far less damage to people's nervous systems 😁 (easier said than done).

Nevertheless, there was a roughly 100GB gap of lost archived redo log files on top of a 3-week-old backup. I had to restore and open that Oracle database at any cost. Here is what I did :

1) I restored the database (controlfile, spfile, datafiles etc.) from the level 0 backup and recovered it as far as possible using the level 1 backups. Needless to say, at that point the database was inconsistent (to put it mildly);

2) I created a pfile, set the following parameters in it, and brought the database back to mount state (right before opening it with the resetlogs option) :

"_allow_resetlogs_corruption"    = TRUE
"_allow_error_simulation"        = true
undo_management                  = 'MANUAL'

3) alter database open resetlogs ;

In the end the database opened successfully (unexpectedly 😀), I created a new undo tablespace and extracted the data I needed.
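For reference, the undo part looked roughly like this (just a sketch: the tablespace name, size and pfile path are placeholders, and OMF is assumed for the datafile clause) :

SQL> create undo tablespace UNDOTBS2 datafile size 2G autoextend on ;
SQL> shutdown immediate
rem edit the pfile : undo_management = 'AUTO', undo_tablespace = 'UNDOTBS2'
SQL> startup pfile='/path/to/pfile'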

That's it ! Keep an eye on your backups !

Monday, October 9, 2023

CLSRSC-318: Failed to start Oracle OHASD service. Died at crsinstall.pm line 3114. Oracle Linux 9 (OL9)

Caught this error during the upgrade of Oracle Restart (SIHA) from 19.19 to 21.11. Here is the log :

Performing root user operation.

The following environment variables are set as:
   ORACLE_OWNER= grid
   ORACLE_HOME=  /u01/siha_2111

Enter the full pathname of the local bin directory: [/usr/local/bin]: The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)  
[n]:    Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)  
[n]:    Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/siha_2111/crs/install/crsconfig_params
The log of current session can be found at:
 /u01/app/grid/crsdata/db-04/crsconfig/roothas_2023-10-09_11-36-24AM.log
2023/10/09 11:36:25 CLSRSC-595: Executing upgrade step 1 of 12: 'UpgPrechecks'.
2023/10/09 11:36:29 CLSRSC-595: Executing upgrade step 2 of 12: 'GetOldConfig'.
2023/10/09 11:36:31 CLSRSC-595: Executing upgrade step 3 of 12: 'GenSiteGUIDs'.
2023/10/09 11:36:31 CLSRSC-595: Executing upgrade step 4 of 12: 'SetupOSD'.
2023/10/09 11:36:31 CLSRSC-595: Executing upgrade step 5 of 12: 'PreUpgrade'.
2023/10/09 11:37:30 CLSRSC-595: Executing upgrade step 6 of 12: 'UpgradeAFD'.
2023/10/09 11:37:31 CLSRSC-595: Executing upgrade step 7 of 12: 'UpgradeOLR'.
clscfg: EXISTING configuration version 0 detected.
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
2023/10/09 11:37:34 CLSRSC-595: Executing upgrade step 8 of 12: 'UpgradeOCR'.
LOCAL ONLY MODE  
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node db-04 successfully pinned.
2023/10/09 11:37:36 CLSRSC-595: Executing upgrade step 9 of 12: 'CreateOHASD'.
2023/10/09 11:37:37 CLSRSC-595: Executing upgrade step 10 of 12: 'ConfigOHASD'.
2023/10/09 11:37:37 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2023/10/09 11:39:56 CLSRSC-214: Failed to start the resource 'ohasd' 

Died at /u01/siha_2111/crs/install/crsinstall.pm line 3114.

/var/log/messages contained many lines like the following :

db-04 clsecho: /etc/init.d/init.ohasd: Waiting for ohasd.bin PID 12851 to move. CGROUP

The cause is the Linux control groups setup (cgroups v2, which is the default in OL9). The solution: revert to the previous state (if possible), enable legacy (v1) cgroups on the kernel command line, and rerun the upgrade. Add systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the grub2 menu if you'd like to keep the setting after reboot.

# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rhgb numa=off transparent_hugepage=never crashkernel=1G-64G:448M,64G-:512M systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true

# grub2-mkconfig -o /boot/grub2/grub.cfg
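On OL9 with BLS entries enabled (GRUB_ENABLE_BLSCFG=true), the same kernel arguments can, as far as I know, also be appended to the existing boot entries with grubby, for example :

# grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller"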

That's it !


Friday, October 6, 2023

Error in invoking target 'irman ioracle' when installing Oracle 19c SIHA on Oracle Linux 9 (OL9)

Although Oracle Corp. hasn't yet certified Oracle 19c software on OL9, I decided to at least try installing Oracle Restart "Single Instance High Availability (SIHA)".

The expected message about the unsupported OS was bypassed by setting CV_ASSUME_DISTID in the shell before invoking ./gridSetup.sh :

$ export CV_ASSUME_DISTID=OL8.8

Then, after hitting the error from the title, I simply scp'ed /usr/lib64/libc_nonshared.a over from another OL8 server, retried the linking, and the installation went on pretty well with one exception at the end - Oracle CVU threw the following error :

INFO:  [Oct 6, 2023 3:08:33 PM] RPM Package Manager database ...FAILED (PRVG-13702)
INFO:  [Oct 6, 2023 3:08:33 PM] Post-check for Oracle Restart configuration was unsuccessful.  
INFO:  [Oct 6, 2023 3:08:33 PM] Failures were encountered during execution of CVU verification request "stage -post hacfg".
INFO:  [Oct 6, 2023 3:08:33 PM] RPM Package Manager database ...FAILED
INFO:  [Oct 6, 2023 3:08:33 PM] PRVG-13702 : RPM Package Manager database files are corrupt on nodes "aaa".

The culprit was existing RPM packages on the system signed with the obsolete SHA-1 hash algorithm. It is worth mentioning that this server had been gradually upgraded from OL6 to OL9 over its lifetime, so there were still some RPMs with SHA-1 signatures, which are no longer supported in OL9. The solution was to temporarily enable support for the old (legacy) signatures :

# update-crypto-policies --set LEGACY

and to check:

# update-crypto-policies --show
LEGACY
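To find out which installed packages are still signed with SHA-1, a query along these lines can help (a sketch; the exact signature string format depends on the rpm version) :

# rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE} %{SIGPGP:pgpsig}\n' | grep -i sha1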

The next CVU run finally succeeded and the installation process (including the root scripts for the upgrade part) finished without any errors.

Afterwards I ran :

# update-crypto-policies --set DEFAULT

to revert the change.

Wednesday, July 5, 2023

Example of setting up an sftp (ssh) session behind a proxy to a server on the internet on Linux

% sftp -o "ProxyCommand nc -X connect -x proxy_server_address:proxy_server_port %h %p" -P sftp_server_port username@sftp_server_address

% ssh -o "ProxyCommand nc -X connect -x proxy_server_address:proxy_server_port %h %p" -p ssh_server_port username@ssh_server_address
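The same proxy settings can also be kept in ~/.ssh/config, so a plain sftp/ssh call works afterwards (the host names and ports below are the same placeholders as above) :

Host sftp_server_address
    ProxyCommand nc -X connect -x proxy_server_address:proxy_server_port %h %p
    Port sftp_server_port
    User username

% sftp sftp_server_address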

Monday, May 22, 2023

dbca - [INS-04008] Invalid combination of arguments passed from command line. One or more mandatory dependent arguments are not passed for the argument: -useWalletForDBCredentials

You might run into this error when calling dbca in silent mode. The point is that the argument named in the error (-useWalletForDBCredentials) isn't mandatory, as the dbca help message shows :

 $ dbca -silent -createDatabase -help
       -createDatabase - Command to Create a database.
               -responseFile | (-gdbName,-templateName)
               -responseFile - <Fully qualified path for a response file>
               -gdbName <Global database name>
               -templateName <Specify an existing template in default location or the complete template path for DB Creation or provide a new template name for template creation>
               [-useWalletForDBCredentials <true | false> Specify true to load database credentials from wallet]
                       -dbCredentialsWalletLocation <Path of the directory containing the wallet files>
                       [-dbCredentialsWalletPassword <Password to open wallet with auto login disabled>]
               [-characterSet <Character set for the database>]

...


But there is a mandatory dependent argument which depends on a non-mandatory one 😀. I.e. the argument 

-dbCredentialsWalletLocation

, which comes right after 

-useWalletForDBCredentials

in the help, is actually required.

So, in case of such an error just include 

-dbCredentialsWalletLocation <existing_path>

in your dbca call, and if you want, even without -useWalletForDBCredentials 😀
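For illustration, a minimal silent dbca call that passes this check might look like the following (the database name, template and wallet path are placeholders, and the wallet is assumed to already contain the required credentials) :

$ dbca -silent -createDatabase \
    -gdbName testdb \
    -templateName General_Purpose.dbc \
    -useWalletForDBCredentials true \
    -dbCredentialsWalletLocation /u01/app/oracle/wallet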

 

Good luck !

Tuesday, May 9, 2023

Lots of INVALID objects in Oracle supplied schemas after PDB remote cloning to/from 19c; ORA-04023 error

I ran into a weird situation the other day. Imagine that you've made a successful upgrade of a CDB from 12.2 to 19c without any errors. But after cloning a new PDB over a database link (to or from the upgraded CDB) you get a warning like this in the alert log file :

PDB_TEST(19):***************************************************************
PDB_TEST(19):WARNING: Pluggable Database PDB_TEST with pdb id - 19 is
PDB_TEST(19):         altered with errors or warnings. Please look into
PDB_TEST(19):         PDB_PLUG_IN_VIOLATIONS view for more details.
PDB_TEST(19):***************************************************************

Moreover, there are hundreds of INVALID objects in Oracle-supplied schemas, e.g. the package bodies of DBMS_STATS, DBMS_MONITOR and other packages are invalid.

What are you gonna do ?

I took a look at the alert log file and found these other lines :

ORA-04063: package body "SYS.DBMS_AQADM_SYS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_AQADM_SYS"
ORA-06512: at line 1

I got the ORA-04023 error (Object SYS.AQ$_POST_INFO could not be validated or authorized) when trying to compile this package.

I used the following "method" to overcome this issue :

1. I dropped the object(s) that generated the ORA-04023 error (it was a PL/SQL type) and recreated the dictionary, checking for invalid objects.

2. After the dictionary recreation, I analyzed the spool file for ORA-04023 error(s), and if there were any, I went back to step 1.

Finally, I ended up with 4 objects to delete (DBMS_AQ depends on them). Here is the script :

 

rem get connected to PDB

set echo on timi on
spool drop_inv_types.out append

alter session set "_oracle_script" = true ;
drop type SYS.AQ$_REG_INFO force ;
drop type SYS.AQ$_POST_INFO force ;
drop type SYS.MSG_PROP_T force ;
drop type SYS.AQ$_SUBSCRIBER force ;

rem
rem restart PDB in upgrade mode and recreate data dictionary (catalog.sql and catproc.sql)
rem

shutdown immediate
startup upgrade

@?/rdbms/admin/catalog
@?/rdbms/admin/catproc

@?/rdbms/admin/utlrp

shutdown immediate

startup

spool off
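After the run, it's worth checking that nothing is left invalid (a simple generic check, not part of the original script) :

SQL> select owner, object_name, object_type from dba_objects where status = 'INVALID' order by owner, object_name ;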

To sum up - I still don't know the real cause of the issue; I bumped into it only once and it occurred only in one particular CDB. Therefore I can't say how to prevent it.

Hope it will help. Good Luck !!!

Thursday, March 2, 2023

Rolling upgrade of Oracle database from 12c to 19c - short outline

1. check unsupported objects on primary database :

SQL> select * from dba_rolling_unsupported ;

if you find any, export them (if possible, e.g. with expdp as sketched below, or handle them in some other way) and remove them from the database, then import them back after the upgrade if needed

SQL> select 'drop table '||owner||'.'||table_name||' ;' from dba_rolling_unsupported ;
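A possible export before dropping (the schema/table, directory and dump file names here are just placeholders) :

$ expdp system directory=DATA_PUMP_DIR dumpfile=unsupported_objs.dmp logfile=unsupported_objs.log tables=SCOTT.UNSUPPORTED_TAB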

2. check unsupported objects again

2.2 create tracking table on the primary

SQL> create table c##ddi.tracking_table (phase number, text varchar2(4000)) ;
SQL> insert into c##ddi.tracking_table values (1, 'Start') ;

3. although using the DG broker is supported during rolling upgrade (starting with version 12.2), I would recommend disabling the Data Guard broker on the primary and all standby databases (I hit some bugs with it) :

% dgmgrl sys/aaa
DGMGRL> disable configuration ;

SQL> alter system set dg_broker_start = false ;

3.1 configure the archivelog destination for the future primary database (needed since the Data Guard broker is disabled) :

SQL> alter system set log_archive_dest_2 = 'service="dg-bbb-test1-loc1" ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=60 db_unique_name="bbb_test1_loc1" net_timeout=30 valid_for=(online_logfile,all_roles)'

4. initialize rolling upgrade plan   

SQL> begin dbms_rolling.init_plan (future_primary => 'bbb_test1_loc2') ; end ;

5. query rolling plan  

SQL> select scope, name, curval from dba_rolling_parameters order by scope, name;

6. build the plan

SQL> begin dbms_rolling.build_plan ; end ;

6.1 verify the plan

SQL> select batchid,source,target,description,phase,status from dba_rolling_plan ;

7. start the plan

SQL> begin dbms_rolling.start_plan ; end ;

7.1 look through the events history :

SQL> select event_time,type,message from dba_rolling_events ;

8. insert data into the tracking table on the primary (the table created earlier in step 2.2)

SQL> insert into c##ddi.tracking_table values (2, 'plan started, bbb_test1_loc2 is now TLS') ;

8.1 query this table on the TLS; you should see the data there as well
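for example, the same simple query used later works here too :

SQL> select * from c##ddi.tracking_table ;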

9. UPGRADE TLS to 19c

9.1 prepare cfg file upg1.cfg like this :

global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade

#
# Database number 1 - Full DB/CDB upgrade
#
upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/bbb_test1_loc2
upg1.sid=bbb_test1_loc2
upg1.source_home=/u01/app/oracle/product/12.2/db_202201  # Path of the source ORACLE_HOME
upg1.target_home=/u01/app/oracle/product/19/db_1917  # Path of the target ORACLE_HOME
upg1.start_time=NOW                                       # Optional. [NOW | +XhYm (X hours, Y minutes after launch) | dd/mm/yyyy hh:mm:ss]
upg1.run_utlrp=yes                                  # Optional. Whether or not to run utlrp after upgrade
upg1.timezone_upg=yes                               # Optional. Whether or not to run the timezone upgrade
upg1.target_version=19                      # Oracle version of the target ORACLE_HOME.  Only required when the target Oracle database version is 12.2
upg1.remove_underscore_parameter=yes
upg1.restoration=no

do the other required stuff (configure the wallet to be opened automatically and so on; a sketch follows below)
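If the wallet in question is a TDE keystore, an auto-login keystore can be created roughly like this (the keystore path and password are placeholders; adjust the syntax to your setup) :

SQL> administer key management create auto_login keystore from keystore '/u01/app/oracle/admin/bbb_test1_loc2/wallet' identified by "keystore_password" ;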
 

9.2 run autoupgrade in analyze mode :

$ /u01/app/oracle/product/19/db_1917/jdk/bin/java -jar /mnt/tst/install/oracle/autoupgrade/autoupgrade.jar -restore_on_fail -config /export/home/oracle/migration/bbb_test1_loc2_to_19c/upg1.cfg -mode analyze

check logfiles carefully

9.3 run autoupgrade in deploy mode :

$ /u01/app/oracle/product/19/db_1917/jdk/bin/java -jar /mnt/tst/install/oracle/autoupgrade/autoupgrade.jar -restore_on_fail -config /export/home/oracle/migration/bbb_test1_loc2_to_19c/upg1.cfg -mode deploy

9.4 you might encounter an error like :

Error: UPG-1524

Cause: PDBs have been found that are either MOUNTED or RESTRICTED. The following PDBs need attention: [PDB1, PDB2]
For further details, see the log file located at /u01/app/oracle/cfgtoollogs/autoupgrade/bbb_test1_loc2/bbb_test1_loc2/101/autoupgrade_20230223_user.log]

the main cause is the "read only" start option configured in GI :

% srvctl config database
Database unique name: bbb_test1_loc2
Database name: bbbtest1
Oracle home: /u01/app/oracle/product/19/db_1917
Oracle user: oracle
Spfile: +DATAC7/BBB_TEST1_M8F/PARAMETERFILE/spfile.299.1129575269
Password file: +datac7/pw_files/orapwbbb_test1_loc2
Domain:
Start options: read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools:
Disk Groups: RECOC7,DATAC7
Mount point paths:
Services: service1_prim,service2_prim
Type: SINGLE
OSDBA group: dba
OSOPER group: dba
Database instance: bbb_test1_loc2
Configured nodes: loc2-p0l2-bbb-adm
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

fix :

% srvctl modify database -d $ORACLE_SID -startoption "open"

!!! BE AWARE OF SERVICES WHICH MIGHT START AFTER THE DATABASE BEING UPGRADED IS REOPENED !!! you'd better remove the services beforehand and recreate them after all the work has been completed :

% srvctl remove service ...
% srvctl add service ...

9.5 after the successful upgrade procedure on the TLS, manually restart SQL APPLY on the TLS (transient logical standby database)

SQL> alter database start logical standby apply immediate ;

9.6 make sure the log_archive_dest_n parameter(s) pointing back to the original primary database and to all standby databases have been set as well; they will be used after the switchover
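A quick way to verify what is configured (a generic query, not specific to this setup) :

SQL> select dest_id, destination, status from v$archive_dest where destination is not null ;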

9.7 insert a row into the tracking table (on the primary) :

SQL> insert into c##ddi.tracking_table values (3, 'TLS upgraded, bbb_test1_loc1 is still PRIMARY') ;
SQL> commit ;

check on the TLS whether the new data has appeared :

SQL> select * from c##ddi.tracking_table ;

10. SWITCHOVER

10.1 from primary database (12c) execute :

SQL> begin dbms_rolling.switchover ; end ;
/

10.2 check on the former primary, which must now be a logical standby (the transient logical standby should already be the primary) :

SQL> select * from v$dataguard_config ;

Now you can check the application and restart it if needed.


10.3 restart former primary database under new OH (19)
                                                        
% srvctl stop database -d $ORACLE_SID
% srvctl remove database -d $ORACLE_SID

switch to new ORACLE_HOME and add database using srvctl :

% srvctl add database -d bbb_test1_loc1 -o /u01/app/oracle/product/19/db_1917 -pwfile +datac7/pw_files/orapwbbb_test1_loc1 -spfile +datac7/sp_files/spfilebbb_test1_loc1.ora -role logical_standby -startoption mount -stopoption immediate -instance bbb_test1_loc1 -dbtype single -dbname bbbtest1 -policy automatic -node $(hostname)

% srvctl start database -d bbb_test1_loc1

10.4 you may encounter the error :

PRCR-1079 : Failed to start resource ora.bbb_test1_loc1.db
CRS-5017: The resource action "ora.bbb_test1_loc1.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00101: unknown parameter name '_gc_cpu_time'

in this case you'd better recreate the spfile and remove all underscore parameters left over from Oracle 12 :

SQL> create pfile='/export/home/oracle/pfile' from spfile='+datac7/sp_files/spfilebbb_test1_loc1.ora' ;
go to the host and edit the pfile (remove all underscore parameters)
SQL> create spfile='+datac7/sp_files/spfilebbb_test1_loc1.ora' from pfile='/export/home/oracle/pfile' ;

$ srvctl start database -d bbb_test1_loc1

former primary database must be started in mount state
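A quick sanity check after the start (a generic query) :

SQL> select database_role, open_mode from v$database ;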

10.5 if needed, copy files from old ORACLE HOME to new (*.ora files etc.)

10.6 on the new primary, consult upgrade plan and event log once again

SQL> select event_time,type,message from dba_rolling_events ;
SQL> select batchid,source,target,description,phase,status from dba_rolling_plan ;


11 FINISH phase

11.1 before running finish_plan, set log_archive_dest_state_n=enable for all standbys of new primary database
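for example (destination number 2 below is just an assumption; use your actual destination numbers) :

SQL> alter system set log_archive_dest_state_2 = 'enable' ;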

11.2 run from new primary database :
SQL> begin dbms_rolling.finish_plan ; end ;
/

SQL> select event_time,type,message from dba_rolling_events ;
SQL> select * from v$dataguard_config ;

11.3 insert a row into the tracking table :

SQL> insert into c##ddi.tracking_table values (4, 'FINISH') ;
SQL> commit ;

12 POSTUPGRADE tasks

12.1 modify the clusterware configuration according to the new roles (if not already done). you may need to add the custom services as they were on the former primary :

for new primary :
$ srvctl modify database -d $ORACLE_SID -startoption open -role primary

for former primary :
$ srvctl modify database -d $ORACLE_SID -role physical_standby
$ srvctl add service -d $ORACLE_SID -s service1_prim -pdb pdb1 -role primary -policy automatic -failovertype session -failovermethod basic -failoverretry 10 -failoverdelay 10
$ srvctl add service -d $ORACLE_SID -s service2_prim -pdb pdb2 -role primary -policy automatic -failovertype session -failovermethod basic -failoverretry 10 -failoverdelay 10

12.2 restore Data Guard Broker configuration on both databases :

SQL> alter system set dg_broker_start = true ;

% dgmgrl sys/aaa
DGMGRL> enable configuration ;

after enabling the configuration, the new member roles should be synchronized, but this error can occur :

Error: ORA-16700: The standby database has diverged from the primary database.

in this case, recreate the Data Guard configuration :

DGMGRL> remove configuration ;
Removed configuration
DGMGRL> show configuration ;
ORA-16532: Oracle Data Guard broker configuration does not exist
DGMGRL> create configuration bbbtest1_dg as primary database is bbb_test1_loc2 connect identifier is "dg-bbb-test1-loc2" ;

                                                  
DGMGRL> add database bbb_test1_loc1 as connect identifier is "dg-bbb-test1-loc1" ;

in case of the error "ORA-16698: member has a LOG_ARCHIVE_DEST_n parameter with SERVICE attribute set", remove the log_archive_dest_n parameter from the database being added :

SQL> alter system set log_archive_dest_2='' scope=memory ;
SQL> alter system reset log_archive_dest_2 ;

DGMGRL> show configuration ;
DGMGRL> enable configuration ;

12.3 if needed, do switchover back to the former primary database

connect to dg broker (primary database) using tns :

$ dgmgrl sys@dg-bbb-test1-loc2 as sysdba
DGMGRL> switchover to bbb_test1_loc1 ;

12.4 it has been noticed that a new spfile (with the default name, as the autoupgrade.jar utility creates it) was created for the upgraded database (the transient logical standby); it contains lots of underscore parameters. after the switchover this needs to be fixed :

SQL> create pfile='/export/home/oracle/pfile' from spfile ;

edit pfile if needed (remove unnecessary underscore parameters) and recreate spfile with required custom name (if needed):

SQL> create spfile='+datac7/sp_files/spfilebbb_test1_loc2.ora' from pfile='/export/home/oracle/pfile' ; -- do it on the running instance in order to update clusterware registry

$ srvctl stop database -d $ORACLE_SID
$ srvctl start database -d $ORACLE_SID -o nomount

SQL> alter database mount ;
SQL> alter database open read only ; -- opened read only required on standby to enable optimizer fixes

enable optimizer fixes on the standby and primary databases :

SQL> begin dbms_optim_bundle.enable_optim_fixes ('ON','BOTH') ; END ;
/

restart physical standby :
$ srvctl stop database -d $ORACLE_SID
$ srvctl start database -d $ORACLE_SID

12.5 RUN BACKUP OF LEVEL 0 of the new PRIMARY
                                                      
12.6 set some parameters (recommended by Oracle Migration Team)

alter system set deferred_segment_creation=false ;
alter system set "_cursor_obsolete_threshold"=1024 scope=spfile ;
alter system set "_enable_ptime_update_for_sys"=true ;
alter system set optimizer_adaptive_plans=true ;

12.7 check database directory objects and do all the other post-upgrade tasks (gather dictionary stats after about 10 days of operation, increase compatible, etc.)

12.8 check that database parameters are in sync between primary and standby databases
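one simple generic approach is to spool the non-default parameters on each side and diff the spools :

SQL> select name, value from v$parameter where isdefault = 'FALSE' order by name ;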

12.9 reconfigure databases in Oracle Enterprise Manager to new Oracle Home if needed

Friday, January 27, 2023

PostgreSQL : pg_archivecleanup doesn't delete any archived wal file

I bumped into a situation the other day related to removing archived WAL files that are no longer needed for recovery from the backup. After making a backup with pg_basebackup, the pg_archivecleanup command was used like this :

$ pg_archivecleanup -d -n -x .gz /arch/archive 000000010000005C000000ED.00000028.backup

The response was :

pg_archivecleanup: keeping WAL file "/arch/archive/000000010000005C000000ED" and later

And nothing more happened. The directory /arch/archive contained 24k files, but pg_archivecleanup didn't consider them at all.

The culprit of this behavior was a prefix in the archived WAL filenames. Their names included an 'archive' prefix, and pg_archivecleanup expects bare WAL segment names :

archive000000010000005E000000D6.gz
archive000000010000005E000000D5.gz
archive000000010000005E000000D4.gz
archive000000010000005E000000D3.gz
archive000000010000005E000000D2.gz
archive000000010000005E000000D1.gz
archive000000010000005E000000D0.gz
archive000000010000005E000000CF.gz
archive000000010000005E000000CE.gz

To make the cleanup work you'd better rename them :

% cd ${archive_dir} && find -type f -name "archive*" -exec rename archive "" '{}' \;

After that the cleanup worked as expected :

$ pg_archivecleanup -d -n -x .gz /arch/archive 000000010000005C000000ED.00000028.backup

dry run cleanup execution :
pg_archivecleanup: keeping WAL file "/arch/archive/000000010000005C000000ED" and later
/arch/archive/000000010000001000000080.gz
pg_archivecleanup: file "/arch/archive/000000010000001000000080.gz" would be removed
/arch/archive/000000010000000100000029.gz
pg_archivecleanup: file "/arch/archive/000000010000000100000029.gz" would be removed
/arch/archive/00000001000000500000003D.gz
...

P.S. When the -x switch is used, compressed (as well as uncompressed) archived WAL files are considered for removal.
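P.P.S. The prefix problem can also be avoided at the source by archiving under the bare WAL segment name, e.g. with an archive_command along these lines in postgresql.conf (a sketch assuming gzip compression, as in this setup; adjust the path) :

archive_command = 'gzip < %p > /arch/archive/%f.gz'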

That's it ! Good Luck !