Monday, May 26, 2014

How to clean up orphaned Data Pump jobs from DBA_DATAPUMP_JOBS

If you stop a Data Pump job, if the job ends abnormally (an orphaned job), or if you use the undocumented parameter KEEP_MASTER=Y, the master table remains in the database.

Although this topic is about cleaning up orphaned Data Pump jobs, it is good to know several things before doing the cleanup.

1) You can identify orphaned Data Pump jobs from the STATE column of the view dba_datapump_jobs, which is based on gv$datapump_job, obj$, com$, and user$. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. If a new Data Pump job is started, a new entry is created that has no relation to the old Data Pump jobs.

2) A new Data Pump job started without an explicit job name gets a system-generated name. Data Pump checks dba_datapump_jobs for existing jobs and then derives a unique new system-generated job name.
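
For example, an export started without an explicit JOB_NAME picks the next free system-generated name (a sketch; the credentials, directory and dump file here are placeholders):

expdp arju/password schemas=ARJU directory=DATA_PUMP_DIR dumpfile=arju.dmp
...
Starting "ARJU"."SYS_EXPORT_SCHEMA_01": ...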

3) Data Pump jobs are different from jobs created with DBMS_JOBS and are maintained differently. Jobs created with DBMS_JOBS use their own processes. Data Pump jobs use a master process and one or more worker processes.
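
To see the master and worker processes of a running Data Pump job, you can join dba_datapump_sessions to v$session on the session address (a sketch using standard dictionary views; the column formatting is illustrative):

COL program FORMAT a40
SELECT s.sid, s.serial#, s.program, d.owner_name, d.job_name
FROM dba_datapump_sessions d, v$session s
WHERE d.saddr = s.saddr;

The PROGRAM column distinguishes the master process (DM00) from the worker processes (DW0n).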

4) What happens if you drop the master table while a Data Pump export or import operation is running is discussed below.

During an export, dropping the master table aborts the export process.

During an import, dropping the master table aborts the import process and leaves an incomplete import.

If the Data Pump job has completed and the master table still exists (common if you run an export with KEEP_MASTER=Y), then it is safe to drop the master table.

The step-by-step cleanup of orphaned Data Pump jobs is discussed below.

Step 01: Check the orphaned datapump jobs.
sqlplus / as sysdba
SET lines 140
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a12 
COL operation LIKE owner_name
COL job_mode LIKE owner_name
SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs;

OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE        ATTACHED_SESSIONS
---------- -------------------- ---------- ---------- ------------ -----------------
ARJU       SYS_EXPORT_SCHEMA_01 EXPORT     SCHEMA     NOT RUNNING                  0
ARJU       SYS_EXPORT_SCHEMA_02 EXPORT     SCHEMA     NOT RUNNING                  0

Step 02: Check the STATE column. For orphaned jobs the state will be NOT RUNNING and ATTACHED_SESSIONS will be 0, so from the output we can say both are orphaned jobs. Note that a stopped job that the owner intends to restart also shows NOT RUNNING, so confirm the job is really unwanted before cleaning it up.
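
Before dropping anything it is worth confirming that each job entry really has a master table behind it, a sketch using the standard dictionary views:

SELECT o.owner, o.object_name, o.object_type, o.status
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner = j.owner_name
AND o.object_name = j.job_name;

If a row comes back with OBJECT_TYPE = TABLE, the master table exists and can be dropped as in step 03.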

Step 03: Drop the master table.
DROP TABLE ARJU.SYS_EXPORT_SCHEMA_01;
DROP TABLE ARJU.SYS_EXPORT_SCHEMA_02;

Step 04: Check for existing data pump jobs by re-running the query from step 01. If the objects are in the recycle bin, purge them from the recycle bin.
SQL> SELECT owner_name, job_name, operation, job_mode,
   state, attached_sessions
   FROM dba_datapump_jobs;

OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE        ATTACHED_SESSIONS
---------- -------------------- ---------- ---------- ------------ -----------------
ARJU       BIN$xMNQdACzQ6yl22kj EXPORT     SCHEMA     NOT RUNNING                  0
          9U0B8A==$0
ARJU       BIN$BmUy4r5MSX+ojxFk EXPORT     SCHEMA     NOT RUNNING                  0
          sw8ocg==$0

SQL> PURGE TABLE ARJU.SYS_EXPORT_SCHEMA_01;

Table purged.

SQL> PURGE TABLE ARJU.SYS_EXPORT_SCHEMA_02;

Table purged.

Check again whether there are any orphaned jobs.
SQL> SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs;

no rows selected

Step 05: At this stage you should no longer see any orphaned jobs that had a master table. If there are still jobs listed in dba_datapump_jobs, run a cleanup like the one below.
SET serveroutput on
SET lines 100
DECLARE
  job1 NUMBER;
BEGIN
  -- Attach to the orphaned job by job name and owner; returns a job handle
  job1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_SCHEMA_01','ARJU');
  -- Stop the job and detach; the entry is removed from dba_datapump_jobs
  DBMS_DATAPUMP.STOP_JOB (job1);
END;
/

DECLARE
  job2 NUMBER;
BEGIN
  -- Same cleanup for the second orphaned job
  job2 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_SCHEMA_02','ARJU');
  DBMS_DATAPUMP.STOP_JOB (job2);
END;
/
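
Instead of the PL/SQL blocks you can also attach to an orphaned export job with the expdp client and kill it interactively (a sketch; substitute the job owner's credentials and the job name from step 01):

expdp arju/password attach=SYS_EXPORT_SCHEMA_01

Export> KILL_JOB
Are you sure you wish to stop this job ([yes]/no): yes

KILL_JOB detaches all sessions and drops the master table, so the entry disappears from dba_datapump_jobs.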

Related Documents

IMPDP Stuck with wait event - "wait for unread message on broadcast channel"

I was trying to run data import using impdp and the job was stuck with wait event "wait for unread message on broadcast channel".
According to Oracle this is an idle wait event, which means Oracle is waiting for something to happen before the session can proceed.
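
To see which sessions are sitting on this wait event, a query like the following helps (a sketch against v$session):

SELECT sid, serial#, program, seconds_in_wait
FROM v$session
WHERE event = 'wait for unread message on broadcast channel';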

I googled and checked Metalink but couldn't really find a solution, and my job was stuck indefinitely.
From one of the forums online I figured out that an old defunct job in the database can cause new impdp sessions to get stuck.

Hence I queried dba_datapump_jobs and found out that I have an old job in "NOT RUNNING" state:

SQL> select * from dba_datapump_jobs;

OWNER_NAME JOB_NAME           OPERATION JOB_MODE STATE       DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
---------- ------------------ --------- -------- ----------- ------ ----------------- -----------------
SYSTEM     NEW1               IMPORT    TABLE    EXECUTING        4                 1                 3
SYSTEM     SYS_IMPORT_FULL_01 IMPORT    FULL     NOT RUNNING      0                 0                 0

Checked to find the underlying object:

SQL> SELECT a.object_id, a.object_type, a.owner, a.object_name, a.status
FROM dba_objects a, dba_datapump_jobs j
WHERE a.owner=j.owner_name AND a.object_name=j.job_name
and j.job_name='SYS_IMPORT_FULL_01';

 OBJECT_ID OBJECT_TYPE OWNER  OBJECT_NAME        STATUS
---------- ----------- ------ ------------------ -------
    984119 TABLE       SYSTEM SYS_IMPORT_FULL_01 VALID

15:02:47 SQL> select owner,object_name,subobject_name, object_type,last_ddl_time from dba_objects where object_id=984119
/

OWNER  OBJECT_NAME        SUBOBJECT_NAME OBJECT_TYPE LAST_DDL_TIME
------ ------------------ -------------- ----------- -------------------
SYSTEM SYS_IMPORT_FULL_01                TABLE       13-08-2010 19:59:50

Dropped the datapump job table:

15:02:59 SQL> drop table SYSTEM.SYS_IMPORT_FULL_01;

Table dropped.

And then my impdp job "NEW1" started progressing:

15:04:03 SQL> select * from dba_datapump_jobs;

OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE     DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
---------- -------- --------- -------- --------- ------ ----------------- -----------------
SYSTEM     NEW1     IMPORT    TABLE    EXECUTING      4                 1                 6

NOTE: Please make sure that the old job is NOT RUNNING before dropping its master table.
Secondly, if it's a known job you can also attach to it:
impdp username/password attach=JOB_NAME
and then issue kill_job at the impdp prompt, as shown below. This should also clear the job's entry from dba_datapump_jobs.
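
For example (the job name and credentials here come from the listing above):

impdp system/password attach=SYS_IMPORT_FULL_01

Import> kill_job
Are you sure you wish to stop this job ([yes]/no): yes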

Source: http://apunhiran.blogspot.com/2011/05/impdp-stuck-with-wait-event-wait-for.html

Wednesday, May 7, 2014

Archivelog retention policy changes in RMAN 11g or RMAN-08591: WARNING: invalid archived log deletion policy

In the past, meaning at least until 10.2.x, it used to be a bit clumsy to set the archivelog deletion policy to "applied on standby" on both the primary and the standby database. It involved setting an underscore parameter, as per Metalink Note 728053.1 "Configure RMAN to purge archivelogs after applied on standby". For reference, the parameter was _log_deletion_policy, and you had to bounce the database for it to take effect.

SQL> alter system set "_log_deletion_policy"='ALL' scope=spfile;
Only then could you avoid the error message. Now with 11g that's a lot easier.

Consider the following example, where we set the deletion policy for the standby. Please note that this was entirely possible with 10.2 as well without causing any error (see above note). Connecting to the standby database first.

[oracle@rhel5 ~]$ rman target sys/secretpwd@stdby

Recovery Manager: Release 11.2.0.1.0 - Production on Mon Oct 5 19:40:15 2009

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1225706675)

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name STDBY are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_stdby.f'; # default

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO applied on standby;

new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
new RMAN configuration parameters are successfully stored

RMAN> exit

Recovery Manager complete.
Let’s try the same with the primary database.
[oracle@rhel5 ~]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Mon Oct 5 19:42:09 2009

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1225706675)

RMAN> configure archivelog deletion policy to applied on standby;

new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
new RMAN configuration parameters are successfully stored
RMAN-08591: WARNING: invalid archived log deletion policy
Did you notice the warning? That was supposed to be possible. Now let’s have a closer look at what it says:
RMAN> host;

[oracle@rhel5 ~]$ oerr rman 8591
8591, 3, "WARNING: invalid archived log deletion policy"
// *Cause: An invalid ARCHIVELOG DELETION POLICY was supplied. The archived
//         log deletion policy was APPLIED but there was no mandatory
//         archived log destinations.
// *Action: One of the following:
//          1) Change archived log deletion policy using CONFIGURE command
//          2) Make one or more of standby destination as MANDATORY.
Aha! So maybe all we need is to make the standby database a mandatory target! Let’s try:
[oracle@rhel5 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Mon Oct 5 19:43:05 2009

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> select value from v$parameter where name = 'log_archive_dest_2';
VALUE
------------------------------------------------------------------------------
service=stdby lgwr async valid_for=(all_logfiles,primary_role)

SQL> alter system set log_archive_dest_2=
 2  'service=stdby lgwr async valid_for=(online_logfiles,primary_role) mandatory';

System altered.

SQL> exit
So that made this destination mandatory.
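You can verify the new binding from v$archive_dest; the output below is what you should expect rather than a capture from the original session:

SQL> select destination, binding from v$archive_dest where dest_id = 2;

DESTINATION                    BINDING
------------------------------ ---------
stdby                          MANDATORY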
Update 110823: Some readers have pointed out I should have made the local destination mandatory instead. This did not work (the test below was performed on 11.2.0.2; my 11.2.0.1 system was scrapped in the meantime). In this test I left the standby destination at its default value (optional) and changed the local destination to be mandatory:
SQL> alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST mandatory';

System altered.

SQL> select destination,binding from v$archive_dest
  2  where dest_id = 1;

DESTINATION                    BINDING
------------------------------ ---------
USE_DB_RECOVERY_FILE_DEST      MANDATORY
Now back in RMAN I tried to set the deletion policy:
[oracle@rac11gr2drnode1 ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Tue Aug 23 21:04:51 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1279071946)

RMAN> configure archivelog deletion policy to applied on standby;

using target database control file instead of recovery catalog
old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
new RMAN configuration parameters are successfully stored
RMAN-08591: WARNING: invalid archived log deletion policy

RMAN>
As you can see there is no point using a mandatory local destination for this archivelog deletion policy. Back to what I was going to explain anyway, I changed the standby destination to “mandatory” and the local destination to “optional” to complete the test.
Be warned though: changing a remote destination from optional to mandatory means the primary eventually halts if it can't talk to it. A different approach to archivelog clearing, as sketched below, might be more worth your while.
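One such alternative is to leave every destination optional and tie archivelog deletion to backup status instead (a sketch, not part of the original test; adjust the device type and retention window to your environment):

RMAN> configure archivelog deletion policy to backed up 1 times to device type disk;
RMAN> delete noprompt archivelog all completed before 'sysdate-1';

With this policy, DELETE ARCHIVELOG only removes logs that have already been backed up, so no mandatory destination is required.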
Just to complete the example: with the standby destination set to mandatory, I can change the deletion policy successfully:
RMAN> configure archivelog deletion policy to applied on standby;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
new RMAN configuration parameters are successfully stored

RMAN>
The last message said it all: new configuration successfully stored, and I promise I haven't touched an underscore parameter. However, be warned again that the primary now has a very strong dependency on the network connection to your standby; this is a risk for a production outage.

Sunday, May 4, 2014

Oracle Database 11g Certified Master Exam (OCM) NEW

What You Will Learn:
The Oracle Database 11g Certified Master Exam (OCM) credential is the industry's most advanced Database Administrator certification. This exam is based on Oracle Database 11g Release 2 (32-bit) and includes the use of Oracle Real Application Clusters (Oracle RAC). The operating system is Oracle Enterprise Linux Release 5.4 (32-bit).
Oracle Certified Master Credential Requirements
Obtaining the Oracle Certified Master credential requires passing an onsite practical exam that’s conducted at designated Oracle University facilities in each global region. You’ll be tested on your ability to perform database administration tasks in a live database environment. Skill sets are timed and require a high degree of Oracle database knowledge to successfully complete them.
Pre-Requisites for Registration
We highly recommend that you have a minimum of 3 to 4 years of experience as an Oracle Database Administrator. 5 years or more is more typical of qualified candidates.
Before Registering for the 11g OCM Exam:
  • Complete PRIOR CERTIFICATION - Oracle Database 11g Administrator Certified Professional (OCP).
  • Complete 2 of the advanced or specialty courses to meet the hands-on course requirements.
  • Gain considerable hands-on expertise fulfilling all of the exam objectives.
  • Get comfortable with the basic Linux command language.
  • Visit the Oracle Certification Program Website to view all the details about the OCM exam, including the minimum recommended skills.
Oracle Testing ID is Required
Please note that you must provide your 9-character Oracle Testing ID to initiate the OCM exam. The Oracle Testing ID is the unique identifier associated with a candidate's Oracle exam history at Prometric and Pearson Vue.
We strongly recommend that OCM candidates locate their Oracle Testing ID at least one week prior to their exam date. To set up a web account with Pearson Vue and confirm an Oracle Testing ID, visit http://www.pearsonvue.com/oracle

Follow these instructions to obtain your Oracle ID.

Audience
  • Database Administrators
  • Support Engineers
  • Technical Consultants

Course Topics

Server Configuration


  • Create the database, Determine and set sizing parameters for database structures, Create and manage temporary, permanent, and undo tablespaces
  • Stripe data files across multiple physical devices and locations, Configure the database environment to support optimal data access performance, Create and manage database configuration files
  • Create and manage bigfile tablespaces, Create and manage a tablespace that uses an NFS-mounted file system file
  • Create and manage multiple network configuration files, Create and configure a listener, Configure the database instance to support shared server connections
  • Set up network tracing, Manage Oracle network processes
  • Configure the network environment to allow connections to multiple databases
  • Use configurationless connections, Use OPatch to install a patch
  • Use Grid Infrastructure to manage Oracle databases and other resources, Use the Enterprise Manager Configuration Assistant (EMCA) utility

Enterprise Manager Grid Control

  • Install and Patch Enterprise Manager Grid Control software, Configure the Enterprise Manager repository, Create Enterprise Manager Grid Control users
  • Use Enterprise Manager to modify a database configuration, Configure Enterprise Manager to modify a database availability
  • Create and manage jobs, Create and monitor alerts, Create notifications, Implement Grid Control and Database Control
  • Choose the appropriate tablespace type for the intended use
  • Create Scheduler jobs, Create schedules, Assign jobs to windows, Create programs, Create job classes
  • Install the Enterprise Manager Grid Control infrastructure
  • Deploy Enterprise Manager Grid Control agents
  • Configure Grid Control for business requirements

Managing Database Availability

  • Maintain recovery catalogs, Configure Recovery Manager
  • Use Recovery Manager to perform database backups
  • Use Recovery Manager to perform complete database restore and recovery operations
  • Configure RMAN
  • Create different types of RMAN backups to cater for different performance and retention requirements
  • Set Flashback Database parameters
  • Configure a Fast Recovery Area
  • Perform various recovery operations using Flashback technology

Data Management

  • Manage Materialized Views to improve rewrite and refresh performance
  • Configure and manage distributed materialized views
  • Create and Manage encrypted tablespaces, Manage Transport of tablespaces across platforms
  • Configure a schema to support a star transformation query, Administer external tables
  • Implement Data Pump export and import jobs for data transfer, Implement Data Pump to and from remote databases
  • Configure and use parallel execution for queries
  • Use SQL*Loader
  • Administer, manage and tune parallel execution

Data Warehouse Management

  • Administer partitioned tables and indexes using appropriate methods and keys
  • Perform partition maintenance operations
  • Maintain indexes on a partitioned table
  • Implement securefile LOB
  • Create and manage LOB segments
  • Implement fine-grained access control
  • Create and manage contexts
  • Administer flashback data archive and schema evolution

Performance Management

  • Administer Resource Manager, Use Result Cache, Use multi column statistics
  • Gather statistics on a specific table without invalidating cursors
  • Use partitioned indexes
  • Administer and tune schema object to support various access methods
  • Interpret execution plan, Use SQL tuning tools and features
  • Use SQL Tuning Advisor, Use SQL Access Advisor
  • Use SQL Performance Analyzer, Configure baseline templates
  • Use SQL Plan Management feature, Implement instance caging

Grid Infrastructure and ASM

  • Install Oracle Grid Infrastructure
  • Create ASM Disk Groups
  • Create and manage an ASM instance
  • Implement ASM failure groups
  • Create an ACFS file system
  • Start, stop, configure and administer Oracle Grid Infrastructure

Real Application Clusters

  • Install the Oracle Database 11gR2 software
  • Configure ASM for the shared disks and create a clustered database
  • Configure archiving
  • Configure Services using both Manual and Policy Managed Methods

Data Guard

  • Create Physical Standby Database with real-time apply
  • Configure the data guard environment to reduce overheads of fast incremental backups on the primary database
  • Configure the Observer
  • Switchover and switch back
  • Configure connect time failover
  • Convert the standby to a snapshot standby
  • Configure the archivelog deletion policy for the Data Guard configuration

Related Training

Required Prerequisites:


  • Candidates should have a minimum of 3 to 4 years of Oracle database administrator experience.
  • Complete PRIOR CERTIFICATION -Oracle Database 11g Administrator Certified Professional (OCP)
  • Complete 2 of the Advanced or Specialty courses to meet the Hands On Course Requirements.
  • Every candidate must have their 9-character Oracle Testing ID to initiate the OCM exam.

Suggested Prerequisites:

  • Familiarity navigating through online Oracle documentation
  • Proficient with using Mozilla browser software
  • Proficient with SQL
  • Advanced knowledge and use of Oracle Enterprise Server technology