Tuesday, November 26, 2013

How do we identify the High Water Mark (HWM)?

Option 1 

We can use the DBMS_SPACE.UNUSED_SPACE procedure.
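A minimal sketch of how it can be called from SQL*Plus (the owner and table name SCOTT.EMP are assumptions); the blocks below the HWM are the total allocated blocks minus the unused blocks reported above it:

set serveroutput on
declare
  l_total_blocks  number;
  l_total_bytes   number;
  l_unused_blocks number;
  l_unused_bytes  number;
  l_last_file_id  number;
  l_last_block_id number;
  l_last_block    number;
begin
  -- owner, table name and segment type are passed positionally
  dbms_space.unused_space('SCOTT', 'EMP', 'TABLE',
                          l_total_blocks, l_total_bytes,
                          l_unused_blocks, l_unused_bytes,
                          l_last_file_id, l_last_block_id, l_last_block);
  dbms_output.put_line('Total blocks allocated : ' || l_total_blocks);
  dbms_output.put_line('Unused blocks above HWM: ' || l_unused_blocks);
  dbms_output.put_line('Blocks below the HWM   : ' || (l_total_blocks - l_unused_blocks));
end;
/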

How do we reset the HWM? 

Option 1 
Alter table emp move tablespace <tablespace_name>; 

This option requires rebuilding the indexes, since they become invalid/unusable after the move. The downside is that rebuilding the indexes is additional overhead, and users cannot use the application or run reports while the indexes are being rebuilt. 
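For example (the tablespace and index names are assumptions):

sql>alter table emp move tablespace users;
sql>select index_name, status from user_indexes where table_name = 'EMP';
sql>alter index emp_pk rebuild;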

Option 2 

1. Export the data 
2. Truncate the table 
3. Import the table 
4. Analyze the table 
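A minimal sketch of these steps, assuming the SCOTT.EMP table and the legacy exp/imp utilities (Data Pump would work just as well):

$ exp scott/tiger tables=emp file=emp.dmp
sql>truncate table emp;
$ imp scott/tiger tables=emp file=emp.dmp ignore=y
sql>analyze table emp compute statistics;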

Option 3 
1. Copy the table data 
2. Truncate the original table 
3. Insert the data back 
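A minimal sketch, using a helper copy table (the name emp_copy is an assumption):

sql>create table emp_copy as select * from emp;
sql>truncate table emp;
sql>insert into emp select * from emp_copy;
sql>commit;
sql>drop table emp_copy;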

Option 4 
Use the DBMS_REDEFINITION package to copy (reorganize) the table online. 
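A rough sketch of the online redefinition (SCOTT.EMP and the interim table name EMP_INT are assumptions; the CONS_USE_PK option requires a primary key on the table):

create table scott.emp_int as select * from scott.emp where 1 = 0;

exec DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT','EMP',DBMS_REDEFINITION.CONS_USE_PK);
exec DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT','EMP','EMP_INT');

-- copy indexes, triggers, constraints and grants to the interim table
declare
  l_errors pls_integer;
begin
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT','EMP','EMP_INT',
      copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS, num_errors => l_errors);
end;
/

exec DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT','EMP','EMP_INT');
drop table scott.emp_int;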

Option 5 
In Oracle 10g we have a better option to reset the high water mark: segment shrink. We need to enable row movement before we shrink the space, and we do not need to rebuild the indexes after resetting the HWM. This is the best option to reset the HWM, but the feature is not available in Oracle 9i. 


sql>alter table emp enable row movement; 

Table altered. 


 sql>alter table emp shrink space compact;
 
Table altered.
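Note that SHRINK SPACE COMPACT only defragments the rows and defers moving the HWM. To actually lower the HWM and release the unused space, run the shrink again without the COMPACT keyword (segment shrink also requires the tablespace to use Automatic Segment Space Management):

sql>alter table emp shrink space;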

Is count(col) better than count(*)?


One question that surfaces sometimes in my courses (especially in Performance Tuning) is whether there is a benefit in avoiding count(*) in favor of count(col). The short answer is: No.
Dealing with Oracle DBAs and developers has taught me, though, that they are usually not satisfied with the short answer; they require some kind of proof – which is a good thing, in my view. So here we go:
SQL> select count(*) from sales;

 COUNT(*)
----------
 14701488

Elapsed: 00:00:33.30
SQL> select count(cust_id) from sales;

COUNT(CUST_ID)
--------------
 14701488

Elapsed: 00:00:04.06
SQL> select count(*) from sales;

 COUNT(*)
----------
 14701488

Elapsed: 00:00:34.49
SQL> select count(cust_id) from sales;

COUNT(CUST_ID)
--------------
 14701488

Elapsed: 00:00:04.20
I think the above observation is responsible for the appearance of the myth that count(col) is superior. In fact, right now it is faster (about 10 times!) than count(*). I did the two selects twice to show that caching does not have much to say here. Unfortunately, the second select with count(col) is faster, but not necessarily correct! Watch it:
SQL> update sales set cust_id=null where rownum<2;
1 row updated.
Elapsed: 00:00:00.23
SQL> commit;
Commit complete.
Elapsed: 00:00:00.01
SQL> select count(*) from sales;
 COUNT(*)
----------
 14701488
Elapsed: 00:00:34.84
SQL> select count(cust_id) from sales;
COUNT(CUST_ID)
--------------
 14701487
Elapsed: 00:00:03.73
The count(cust_id) is still faster by far – but it shows a wrong result, should you be interested in the number of rows of the table. You probably see by now the point I am after: there is an index on the cust_id column that is used for count(cust_id), but not for count(*). If the counted column does not contain any NULL values, the result is identical, but the runtime is shorter. The origin of the myth! The point is: if you declare the indexed column as NOT NULL, the optimizer knows that it can use the index for count(*) as well:
SQL> update sales set cust_id=1  where rownum<2;
1 row updated.
Elapsed: 00:00:00.10
SQL> commit;
Commit complete.
Elapsed: 00:00:00.10
SQL> alter table sales modify (cust_id NOT NULL);
Table altered.
Elapsed: 00:00:38.72
SQL> set autotrace on explain
SQL> select count(*) from sales;
 COUNT(*)
----------
 14701488
Elapsed: 00:00:03.81
Execution Plan
----------------------------------------------------------
Plan hash value: 2989616816
-----------------------------------------------------------------------------------
| Id  | Operation             | Name              | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                   |     1 |  8499   (1)| 00:01:42 |
|   1 |  SORT AGGREGATE       |                   |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| SALES_CUST_ID_IDX |    14M|  8499   (1)| 00:01:42 |
-----------------------------------------------------------------------------------
SQL> alter table sales modify (cust_id NULL);
Table altered.
Elapsed: 00:00:00.27
SQL> select count(*) from sales;
 COUNT(*)
----------
 14701488
Elapsed: 00:00:36.00
Execution Plan
----------------------------------------------------------
Plan hash value: 1047182207
--------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |     1 | 19398   (1)| 00:03:53 |
|   1 |  SORT AGGREGATE    |       |     1 |            |          |
|   2 |   TABLE ACCESS FULL| SALES |    14M| 19398   (1)| 00:03:53 |
--------------------------------------------------------------------
As a general rule of thumb, you should always declare columns NOT NULL if you know that NULL values cannot appear in those columns, in order to make the optimizer aware of that important piece of information as well. By the way, if you do count(1) instead of count(*), the outcome is the same:
SQL>  alter table sales modify (cust_id NOT NULL);
Table altered.
SQL> select count(1) from sales;
 COUNT(1)
----------
 14701488
Elapsed: 00:00:03.12
Execution Plan
----------------------------------------------------------
Plan hash value: 2989616816
-----------------------------------------------------------------------------------
| Id  | Operation             | Name              | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                   |     1 |  8499   (1)| 00:01:42 |
|   1 |  SORT AGGREGATE       |                   |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| SALES_CUST_ID_IDX |    14M|  8499   (1)| 00:01:42 |
-----------------------------------------------------------------------------------

Friday, November 22, 2013

Database Monthly Maintenance Tasks

Primary Tasks:
  • Perform an initial database health check, identifying any database vulnerabilities and deficiencies.
  • Configure, automate, and maintain a reliable database backup solution using Oracle’s RMAN backup utility.
  • Set up monitoring services that alert me (and others if necessary) when a database or server event occurs that needs to be addressed.
  • Perform routine, weekly database maintenance to ensure the health of the database.
  • Perform database recovery and emergency troubleshooting.
  • Diagnose and repair performance problems within the database.
  • Apply critical patch updates to the database each quarter.
  • Perform minor version upgrades (ex: 11.2.0.1 to 11.2.0.2).
  • Perform major version upgrades (ex: 10g to 11g).
  • Answer database related questions in support of developers and system administrators so they can better perform their jobs.

Additional Tasks: 
  • Provide database design support
  • Installation and setup of a new Oracle database
  • Installation and setup of standby databases for disaster recovery
  • Major schema modifications to existing databases
  • Data migration between Oracle databases or from other commercial databases (SQL Server, MS Access, MySQL, flat files, etc…) to Oracle
  • SQL and PL/SQL Tuning

Use Data Guard and Configure the DB_ULTRA_SAFE Initialization Parameter to Detect and Prevent Data Block Corruptions

Use Data Guard and configure the DB_ULTRA_SAFE initialization parameter on both the primary and standby systems for the most comprehensive data corruption prevention and detection.
  • On the primary database, set the DB_ULTRA_SAFE=DATA_AND_INDEX initialization parameter to prevent and detect data corruptions in a timely manner, and thus provide critical data protection and high availability for the Oracle Database.
The DB_ULTRA_SAFE initialization parameter also controls other data protection behavior in Oracle Database, such as requiring ASM to perform sequential mirror write I/Os.
Table 2-1 describes the values that the DB_ULTRA_SAFE parameter automatically assigns to the DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, or DB_LOST_WRITE_PROTECT parameters.
Table 2-1 Parameter Values Set by the DB_ULTRA_SAFE Initialization Parameter
When you set DB_ULTRA_SAFE to DATA_AND_INDEX (recommended), then:
  • DB_BLOCK_CHECKING is set to FULL.
  • DB_LOST_WRITE_PROTECT is set to TYPICAL.
  • DB_BLOCK_CHECKSUM is set to FULL.
When you set DB_ULTRA_SAFE to DATA_ONLY, then:
  • DB_BLOCK_CHECKING is set to MEDIUM.
  • DB_LOST_WRITE_PROTECT is set to TYPICAL.
  • DB_BLOCK_CHECKSUM is set to FULL.

Note:
When you set the DB_ULTRA_SAFE parameter, it automatically integrates and controls the behavior (described in Table 2-1) of the following initialization parameters:
  • DB_BLOCK_CHECKING detects and prevents data block corruptions.
    Block checking prevents memory and data corruptions, but it incurs some performance overhead on every block change. For many applications, the block changes are a small percentage compared to the blocks read (typically less than five percent), so the overall effect of enabling block checking is small.
  • DB_BLOCK_CHECKSUM detects redo and data block corruptions and can prevent most corruptions from happening on the physical standby database.
    Redo and data block checksums detect corruptions on the primary database and protect the standby database. This parameter requires minimal CPU resources.
  • DB_LOST_WRITE_PROTECT detects stray and lost writes.
    Lost write protection enables a physical standby database to detect lost write corruptions on both the primary and physical standby database.
However, if you explicitly set the DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, or DB_LOST_WRITE_PROTECT parameters in the initialization parameter file, then the DB_ULTRA_SAFE parameter has no effect and no changes are made to the parameter values. Thus, if you specify the DB_ULTRA_SAFE parameter, do not explicitly set these underlying parameters.
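For example, on the primary database this could look as follows (a minimal sketch; DB_ULTRA_SAFE is a static parameter, so it is written to the spfile and takes effect after an instance restart):

SQL> alter system set db_ultra_safe=data_and_index scope=spfile;

-- after the restart, check the derived settings
SQL> show parameter db_block_checking
SQL> show parameter db_block_checksum
SQL> show parameter db_lost_write_protect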
  • On physical standby databases, specify the DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT parameters:
    • Set DB_BLOCK_CHECKSUM=FULL
      If DB_BLOCK_CHECKSUM is set to FULL, then both disk corruption and in-memory corruption are detected and the block is not written to disk, thus preserving the integrity of the physical standby database. This parameter has minimal effect on Redo Apply performance.
    • Set DB_LOST_WRITE_PROTECT=TYPICAL
      Lost write protection prevents corruptions (due to stray or lost writes on the primary) from being propagated and applied to the standby database. Setting this parameter has a negligible effect on the standby database. Moreover, setting the DB_LOST_WRITE_PROTECT initialization parameter is recommended over employing the HARD solution, because HARD does not provide full stray and lost write protection and redo application validation.
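For example, on the physical standby (a minimal sketch; both parameters can be changed dynamically with ALTER SYSTEM):

SQL> alter system set db_block_checksum=FULL scope=both;
SQL> alter system set db_lost_write_protect=TYPICAL scope=both;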
  • A standby database is a database that is decoupled from the primary database and on which redo data is checked and verified. Redo Apply and SQL Apply processes perform another layer of validation on the standby database that can detect stray or lost writes and corrupted blocks caused by hardware, software, or network issues. Most of these issues cannot be detected on the primary database or may remain hidden on the primary database for a long period.
  • Enable the DB_BLOCK_CHECKING initialization parameter.
    Consider setting the DB_BLOCK_CHECKING parameter only on the primary database. Enabling DB_BLOCK_CHECKING on the standby database incurs a much higher overhead and can dramatically reduce Redo Apply performance. Testing is recommended to measure the effect on your environment.
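For example, on the primary database (a minimal sketch; remember that explicitly setting this parameter means DB_ULTRA_SAFE no longer controls it):

SQL> alter system set db_block_checking=FULL scope=both;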

Oracle Database 11g Release 2 (11.2.0.4) - New Features

Oracle announced that Oracle Database 11g Release 2 (11.2.0.4) is available. The release notes can be found here.

This is the list of new features that are available in the new patch set (11.2.0.4):

1. Oracle Data Redaction is now part of 11gR2
2. The Trace File Analyzer and Collector
3. RACcheck is now included
4. The OPTIMIZER_DYNAMIC_SAMPLING initialization parameter supports the new value of 11
5. Total Recall: there is a new OPTIMIZE DATA clause when creating or altering a flashback data archive
6. The DES, RC4, and MD5 algorithms are desupported
7. New sqlnet.ora parameter SSL_EXTENDED_KEY_USAGE
8. New PrimaryLostWriteAction Property
9. New init.ora parameter ENABLE_GOLDENGATE_REPLICATION: when set to TRUE, the RDBMS services used by Oracle GoldenGate are enabled

Script that shows tablespace usage in %, blocks used and empty, etc.

Code:
SELECT RPAD(t.name, 18, ' ') tablespace,
       LPAD(TO_CHAR(CEIL((SUM(s.blocks) * COUNT(*)) /
                         (SUM(f.blocks) * POWER(COUNT(DISTINCT f.file#), 2)) * 100)), 3) pct,
       LPAD(TO_CHAR(TRUNC(SUM(f.blocks) * t.blocksize * COUNT(DISTINCT f.file#) /
                          (COUNT(*) * 1024 * 1024 * 1024), 2)), 6) vol_g,
       LPAD(TO_CHAR(SUM(f.blocks) * t.blocksize * COUNT(DISTINCT f.file#) /
                    (COUNT(*) * 1024 * 1024)), 8) volume_m,
       TRUNC(SUM(s.blocks) * t.blocksize /
             (1024 * 1024 * COUNT(DISTINCT f.file#)), 2) taken_m,
       TRUNC((SUM(f.blocks) * t.blocksize * COUNT(DISTINCT f.file#) /
              (COUNT(*) * 1024 * 1024))
           - (NVL(SUM(s.blocks), 0) * t.blocksize /
              (1024 * 1024 * COUNT(DISTINCT f.file#))), 2) remain_m
FROM   sys.seg$ s, sys.ts$ t, sys.file$ f
WHERE  s.ts# (+) = t.ts#
AND    f.ts# = t.ts#
AND    f.status$ = 2
GROUP BY t.name, t.blocksize
ORDER BY 1;



Code:
 
TABLESPACE         PCT VOL_G  VOLUME_M    TAKEN_M   REMAIN_M
------------------ --- ------ -------- ---------- ----------
SYSAUX             610    .11      120     731.06    -611.06
SYSTEM             232    .29      300     695.62    -395.62
UNDOTBS1             3    .02       25        .62      24.37
UNDOTBS2             3    .02       25        .62      24.37
USERS                8      0        5        .37       4.62

Calculate the buffer cache hit ratio in the database. Make sure it is more than 80%.

Code:
column "phys_reads" format 999,999,999 
column "phy_writes" format 999,999,999 
-- statistic names are used instead of hard-coded statistic# values,
-- because the statistic numbers differ between Oracle versions
select a.value + b.value "logical_reads",
       c.value "phys_reads",
       round(100 * ((a.value + b.value) - c.value) /
             (a.value + b.value)) "BUFFER HIT RATIO"
from   v$sysstat a, v$sysstat b, v$sysstat c
where  a.name = 'db block gets'
and    b.name = 'consistent gets'
and    c.name = 'physical reads';
Code:
 
logical_reads   phys_reads BUFFER HIT RATIO
------------- ------------ ----------------
   1.3177E+12      839,472              100
SQL>

How to retrieve the hash value for a password in Oracle 11g

As we all know, Oracle has decided not to show the hash value for the password in the dba_users view.

The problem:
Saving your passwords to a file as below does not work anymore:


SELECT 'ALTER USER ' || username || ' identified by values '''|| password || ''';' FROM dba_users;


This is necessary, in our case, for releases, DEV/QA refreshes, etc…

The solution:
I first wrote the following query:


MYUSER@mydb> SELECT substr (trim(DBMS_METADATA.get_ddl ('USER','MYUSER')),
2 REGEXP_INSTR(trim(DBMS_METADATA.get_ddl ('USER','MYUSER')),'''',1,1,0,'m'), 20
3 ) PASSWD
4 FROM DUAL;
PASSWD
--------------------------------------------------------------------------------
'2YYYYY4DXXXXXX'


Note.- The hash value has been modified for security reasons.
I was working under the assumption that Oracle will not change the algorithm any time soon (currently the length is 16 characters), but I also found it “dangerous” to make this assumption.
So my next query is:

select REGEXP_SUBSTR(DBMS_METADATA.get_ddl ('USER','MYUSER'), '''[^'']+''') PASSWD from dual;

which I find more elegant.

MYUSER@MYDB> select REGEXP_SUBSTR(DBMS_METADATA.get_ddl ('USER','MYUSER'), '''[^'']+''') PASSWD from dual;
PASSWD
--------------------------------------------------------------------------------
'XXXXX4YYYYYZZZZZZ'
The code we need to use from now on is:

select 'alter user '||username||' identified by values '||REGEXP_SUBSTR(DBMS_METADATA.get_ddl ('USER',USERNAME), '''[^'']+''')||';' from dba_users;

How to copy SQL Profile from one database to another one

The problem:
I need to copy SQL Profiles from one database to another database for stability and testing.

The solution:

1.- Create a staging table on the source database to store the SQL Profiles to be copied:

MYUSER@MYDB> EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (table_name => 'PROFILE_STGTAB');

2.- Copy SQL Profiles from SYS to the staging table:


MYUSER@MYDB> EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (profile_category => '%', staging_table_name => 'PROFILE_STGTAB');


As I needed to copy all the SQL Profiles in my database, the ‘%’ value for profile_category was the best option.

3.- Export the staging table. 
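For example, with Data Pump (a sketch; the directory object, dump file name, and credentials are assumptions):

$ expdp myuser/mypassword directory=DATA_PUMP_DIR dumpfile=profile_stgtab.dmp tables=PROFILE_STGTAB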

4.- Create the staging table on the destination database:


MYUSER@MYDB> EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (table_name => 'PROFILE_STGTAB');


5.- Import the data on the destination database. 
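For example, matching the export above (a sketch):

$ impdp myuser/mypassword directory=DATA_PUMP_DIR dumpfile=profile_stgtab.dmp tables=PROFILE_STGTAB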

6.- Create the SQL Profiles on the destination database using the data stored in the staging table:


MYUSER@MYDB> EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(replace => TRUE, staging_table_name => 'PROFILE_STGTAB');


I used replace => TRUE as I needed to have the same SQL Profiles on both databases.

Note.- Please, refer to http://download.oracle.com/docs/cd/B...n.htm#CACFCAEC for a full list of parameters and options.