
Find Redo Log Size / Switch Frequency / Location in Oracle

What are redo logs?

In Oracle, the term "redo log" refers to a critical component of the database system that helps ensure data durability and recoverability. The redo log consists of a set of files, known as redo log files or redo logs, which store a record of changes made to the database.

Whenever a transaction modifies data in the database, Oracle generates redo entries, also known as redo records, that capture the before and after images of the modified data. These redo records are written to the redo log files in a sequential manner. The redo log files provide a means to recover the database to a consistent state in the event of a system failure or a database crash.

Redo logs serve two primary purposes:

1. Recovery: The redo log files are crucial for database recovery operations. In case of a failure, Oracle can use the redo log files to reapply the changes made by committed transactions that were not yet written to the data files, thus ensuring data consistency and integrity.

2. Redo Generation: The redo log files also play a role in maintaining the durability of the database. As changes are made to the database, the redo log files capture these modifications, allowing Oracle to recreate or "redo" those changes if necessary.

In summary, the redo log in Oracle is a fundamental component that helps ensure data integrity and recoverability by storing a record of changes made to the database. It plays a crucial role in database recovery and provides durability by capturing redo entries for all modifications made to the database.

Find Redo Log Size

To find the size of the redo log in Oracle, you can query the database dictionary views. Specifically, you can retrieve the redo log size information from the `V$LOG` view. Here's an example query:
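A minimal example (the `GROUP#`, `THREAD#`, and `BYTES` columns are standard in `V$LOG`):

```sql
-- Size of each online redo log group; V$LOG has one row per group
SELECT GROUP#, THREAD#, BYTES
FROM   V$LOG;
```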

 


This query will return the group number (`GROUP#`), thread number (`THREAD#`), and size in bytes (`BYTES`) of each redo log group in the database.

Note that the `V$LOG` view provides information about individual redo log groups, and the size of the redo log is typically the sum of the sizes of all the groups. So, if you want to calculate the total size of the redo log, you can use the following query:
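For example, summing across all groups:

```sql
-- Total redo log size across all groups, in bytes
SELECT SUM(BYTES) AS TOTAL_REDO_SIZE
FROM   V$LOG;
```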

 


This query will give you the total size of the redo log in bytes. You can divide the result by an appropriate unit (e.g., 1024 for kilobytes, 1024*1024 for megabytes) to obtain the size in a more readable format.

Keep in mind that the size of the redo log can vary depending on the configuration and settings of your Oracle database.


Redo Log Switch Frequency

To determine the redo log switch frequency in Oracle, you can query the database dictionary views to gather information about the redo log switches. Redo log switches occur when a filled redo log file is switched with an empty one to ensure continuous logging of database changes. Here's an example query:
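One simple form, counting all switches recorded in the view:

```sql
-- Number of log switches retained in V$LOG_HISTORY
SELECT COUNT(*) AS SWITCH_COUNT
FROM   V$LOG_HISTORY;
```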

 


This query retrieves the count of redo log switches from the `V$LOG_HISTORY` view. The `V$LOG_HISTORY` view provides a historical record of redo log switches that have occurred in the database.

You can also calculate the redo log switch frequency over a specific period of time by considering the timestamps of the redo log switches. Here's an example query that calculates the average redo log switch frequency per day:
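A sketch of such a query (`FIRST_TIME` is a DATE, so subtracting the earliest timestamp from the latest gives a difference in days):

```sql
-- Average log switches per day over the retained history
SELECT COUNT(*) / (MAX(FIRST_TIME) - MIN(FIRST_TIME)) AS SWITCHES_PER_DAY
FROM   V$LOG_HISTORY;
```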

 

 

This query divides the count of redo log switches by the time difference between the earliest and latest redo log switch timestamps to obtain the average frequency per day.

Note that the `V$LOG_HISTORY` view retains historical information for a limited period, which is determined by the database configuration. Therefore, if you need to analyze redo log switch frequency over a longer duration, you might need to consider other methods, such as log file monitoring tools or auditing features provided by Oracle or third-party tools.

 

Redo Log Location in Oracle

In Oracle, the redo log files are typically stored in a specific directory known as the "log file directory" or "log file location." The exact location of the redo log files depends on the configuration of the database and the operating system.

To find the location of the redo log files, you can query the `V$LOGFILE` view, which provides information about the redo log file configuration. Here's an example query:
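A minimal example using the standard `GROUP#` and `MEMBER` columns:

```sql
-- File system location of every redo log member, with its group
SELECT GROUP#, MEMBER
FROM   V$LOGFILE;
```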

 

 

This query retrieves the file paths (`MEMBER`) of the redo log files in the database. The `V$LOGFILE` view contains information about the redo log file members, including their locations.

Each redo log file member has a specific path associated with it. The path can be an absolute file system path or a relative path within the database directory structure, depending on the configuration. By querying the `V$LOGFILE` view, you can obtain the exact location of each redo log file member.

Additionally, note that the redo log file locations are defined when the log groups themselves are created (via CREATE DATABASE or ALTER DATABASE ADD LOGFILE). If the database uses Oracle Managed Files, the `DB_CREATE_ONLINE_LOG_DEST_n` initialization parameters control the directories in which new online redo log members are created.

Remember to adjust the query or examine the parameter values for all redo log groups in case there are multiple groups configured in your Oracle database.

 

Redo Log File Status Descriptions

 

UNUSED – Online redo log has never been written to. This is the state of a redo log that was just added, or just after a RESETLOGS, when it is not the current redo log.

CURRENT – Current redo log. This implies that the redo log is active. The redo log could be open or closed.

ACTIVE – Log is active but is not the current log. It is needed for crash recovery. It may be in use for block recovery. It may or may not be archived.

CLEARING – Log is being re-created as an empty log after an ALTER DATABASE CLEAR LOGFILE statement. After the log is cleared, the status changes to UNUSED.

CLEARING_CURRENT – Current log is being cleared of a closed thread. The log can stay in this status if there is some failure in the switch such as an I/O error writing the new log header.

INACTIVE – Log is no longer needed for instance recovery. It may be in use for media recovery. It might or might not be archived.
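To see which of the statuses above each of your redo log groups is currently in, you can query `V$LOG` (a sketch; `STATUS` and `ARCHIVED` are standard columns):

```sql
-- Current status of each online redo log group
SELECT GROUP#, THREAD#, STATUS, ARCHIVED
FROM   V$LOG
ORDER  BY GROUP#;
```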

 

 

Oracle 19c new features

Oracle 19c, released in 2019, introduced several new features and enhancements across various areas of the database. Here are some notable features introduced in Oracle 19c:

  1. Automatic Indexing: Oracle 19c introduced the Automatic Indexing feature, which uses machine learning algorithms to identify and create indexes on tables automatically. This feature can improve query performance by automatically creating and maintaining indexes based on usage patterns.

  2. Real-Time Statistics: Oracle 19c enhanced the statistics gathering process by introducing real-time statistics. Instead of relying solely on scheduled statistics collection jobs, real-time statistics allow the optimizer to use more accurate and up-to-date statistics during query optimization, resulting in better query plans.

  3. Hybrid Partitioned Tables: With Oracle 19c, you can create hybrid partitioned tables that combine the benefits of both partitioning and non-partitioned tables. This allows for more flexible data management and improved performance for specific use cases.

  4. Multitenant Database Improvements: Oracle Multitenant, introduced in earlier versions, received several enhancements in 19c. These include increased capacity limits for pluggable databases (PDBs), improved cross-container operations, and simplified management operations for PDBs.

  5. Automatic Data Optimization: Oracle 19c introduced Automatic Data Optimization (ADO), which allows for the automatic compression and movement of data between different storage tiers based on usage patterns and policies. ADO enables cost-effective data lifecycle management and improves storage efficiency.

  6. Real Application Clusters (RAC) Improvements: Oracle 19c brought enhancements to Real Application Clusters (RAC), including better workload management and performance with the introduction of application continuity and the capability to prioritize resource allocation for specific workloads.

  7. Database In-Memory Improvements: The In-Memory column store feature, introduced in earlier versions, received performance and usability enhancements in Oracle 19c. This includes improved in-memory join performance, support for larger In-Memory column stores, and the ability to dynamically track usage statistics for In-Memory objects.

  8. Security Enhancements: Oracle 19c introduced several security enhancements, such as the ability to manage user privileges through a role commonality feature, support for password-less authentication using external services, and enhancements to Oracle Data Redaction for sensitive data protection.

These are just a few of the key features and enhancements introduced in Oracle 19c. Oracle regularly releases updates and patches, so it's always recommended to consult the official Oracle documentation and release notes for the most up-to-date information on features and enhancements in a specific version.
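As an illustration of the first feature above, automatic indexing is controlled through the `DBMS_AUTO_INDEX` package (a sketch; check the 19c documentation for the full parameter list):

```sql
-- Enable automatic indexing: candidate indexes are created and made visible
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Alternatively, only report candidate indexes without using them
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'REPORT ONLY');
```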

 

Oracle Memory architecture


Oracle database uses several memory structures to manage and optimize database operations. These memory structures are collectively referred to as the System Global Area (SGA) and the Program Global Area (PGA). Here are the main memory structures in Oracle:

System Global Area (SGA):

  1. Database Buffer Cache: The buffer cache holds copies of data blocks read from data files. It reduces disk I/O by caching frequently accessed data in memory, improving query performance.

  2. Redo Log Buffer: The redo log buffer stores changes made to the database before they are written to the redo log files. It ensures that all changes are recorded for recovery and provides high-performance transaction logging.

  3. Shared Pool: The shared pool consists of the Library Cache and the Data Dictionary Cache. The Library Cache stores SQL statements, execution plans, and other shared SQL and PL/SQL code. The Data Dictionary Cache stores information about database objects, user privileges, and other metadata.

  4. Large Pool: The large pool is an optional memory area used for large-scale allocations and I/O buffers for backup and restore operations, parallel execution, and session memory.

  5. Java Pool: The Java pool stores Java objects and bytecode for Java stored procedures and other Java-related operations.

  6. Streams Pool: The Streams pool is used by Oracle Streams, a feature for data replication and messaging. It stores buffered messages and other Streams-related data.

Program Global Area (PGA):

  1. Stack Space: The stack space is allocated for each session or process in the database. It contains session-specific data, including variables, parameters, and cursor state information.

  2. Private SQL Area: The private SQL area stores information specific to each SQL statement being executed, such as bind variables, query execution plans, and runtime buffers.

  3. Sorting Area: The sorting area is used for sorting operations, such as ORDER BY and GROUP BY clauses. It stores temporary data during sorting operations.

  4. Session Memory: Session memory includes various session-specific memory structures, such as session parameters, session cursors, and session-specific work areas.

These memory structures collectively manage and optimize the database's performance and resource utilization. The sizes of these memory areas can be configured and tuned based on the system's requirements and workload characteristics to ensure optimal performance and efficient memory usage in the Oracle database.
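To see the current sizes of these areas on your own instance, you can query the standard dynamic performance views (a sketch):

```sql
-- High-level SGA component sizes, in bytes
SELECT NAME, VALUE FROM V$SGA;

-- Aggregate PGA target and current allocation
SELECT NAME, VALUE, UNIT
FROM   V$PGASTAT
WHERE  NAME IN ('aggregate PGA target parameter', 'total PGA allocated');
```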

 

Oracle Undo Tablespace

In Oracle, the undo tablespace is a crucial component of the database that is used to manage and store undo information. Undo data represents the changes made to the database, such as modifications or deletions, that are necessary to roll back transactions or provide read consistency.

Here are some key points about the undo tablespace in Oracle:

  1. Purpose of Undo Tablespace: The undo tablespace is primarily used to provide transactional consistency and support various Oracle features like read consistency, flashback queries, and transaction rollback. It stores the before-images of the data blocks affected by transactions.

  2. Rollback Segments vs. Undo Tablespaces: In earlier versions of Oracle, rollback segments were used to manage undo data. However, starting with Oracle 9i, the undo tablespace was introduced as a more efficient and flexible alternative to manage undo information.

  3. Automatic Undo Management: Oracle introduced the concept of Automatic Undo Management (AUM) to simplify the administration of undo tablespaces. With AUM, the DBA does not need to manually manage rollback segments; instead, Oracle automatically manages the undo space allocation and retention.

  4. Undo Retention: Undo retention refers to the period for which undo data is retained in the undo tablespace. It determines the availability of consistent read data for queries and provides the timeframe during which a transaction can be rolled back. The undo retention is controlled by the UNDO_RETENTION parameter.

  5. Undo Tablespace Size: The size of the undo tablespace depends on the workload and the retention requirements of the system. The DBA needs to monitor the size of the undo tablespace and adjust it accordingly to prevent issues like ORA-01555 (snapshot too old) or ORA-30036 (unable to extend segment).

  6. Multiple Undo Tablespaces: Starting with Oracle 11g, multiple undo tablespaces can be created to provide better manageability, performance, and availability. Multiple undo tablespaces can be used for different purposes or to separate undo segments for specific applications or tablespaces.

  7. Flashback Features: The undo tablespace plays a crucial role in providing flashback features such as Flashback Query, Flashback Transaction, and Flashback Table. These features utilize the undo information to view past data or undo specific transactions.

The undo tablespace is an essential component in Oracle databases, responsible for maintaining the integrity, consistency, and concurrency of transactions. It enables features like read consistency, transaction rollback, and flashback queries, providing a reliable and efficient environment for data management and recovery.
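A quick way to check the undo configuration described above (a sketch using standard views and parameters):

```sql
-- Undo tablespaces and whether undo retention is guaranteed
SELECT TABLESPACE_NAME, RETENTION
FROM   DBA_TABLESPACES
WHERE  CONTENTS = 'UNDO';

-- Current undo settings (UNDO_MANAGEMENT, UNDO_RETENTION, UNDO_TABLESPACE)
SHOW PARAMETER UNDO
```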

 

Oracle Background Process

In Oracle Database, background processes are essential components that handle various tasks to support the functioning and performance of the database system. These processes run in the background and perform critical activities such as memory management, I/O operations, recovery, and monitoring. Here are some important background processes in Oracle:

  1. System Monitor (SMON): SMON is responsible for instance recovery, which ensures the database is in a consistent state after an instance crash or failure. It recovers the database by applying undo and redo logs to roll forward or roll back transactions.

  2. Process Monitor (PMON): PMON is responsible for process cleanup and process recovery. It detects and resolves failed and terminated processes, releasing system resources associated with them. PMON also performs process recovery during instance recovery.

  3. Database Writer (DBWn): DBWn processes (DBW0, DBW1, etc.) are responsible for writing modified database buffers from the database buffer cache to data files on disk. They ensure that dirty buffers are periodically written to disk, reducing the risk of data loss during a system failure.

  4. Log Writer (LGWR): LGWR writes redo log buffers to the redo log files on disk. It ensures that changes made to the database are durably stored in the redo logs before committing transactions. LGWR plays a crucial role in database recovery and maintaining data integrity.

  5. Checkpoint (CKPT): CKPT is responsible for signaling the DBWn processes to perform a checkpoint. A checkpoint flushes modified database buffers to disk and updates the control file and data file headers. It helps in reducing the instance recovery time during a crash or failure.

  6. Archiver (ARCn): The archiver process (ARCn) copies online redo log files to archive destinations for backup and recovery purposes. It ensures the availability of redo logs beyond the point of online log switching and enables point-in-time recovery.

  7. Dispatcher (Dnnn): Dispatchers are used in Shared Server configurations, where multiple client connections are served by a smaller number of dedicated server processes. Dispatchers receive client requests and direct them to the appropriate dedicated server process.

  8. Job Queue Processes (CJQn): Job Queue processes manage and execute scheduled jobs in the database. They handle tasks such as running stored procedures, executing PL/SQL blocks, or launching external programs as part of scheduled jobs.

These are some of the important background processes in Oracle Database. Each process plays a critical role in maintaining the integrity, availability, and performance of the database system.
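You can list the background processes actually running in your instance with a query like this (`PADDR <> '00'` filters out processes that are not started):

```sql
-- Background processes currently running in this instance
SELECT NAME, DESCRIPTION
FROM   V$BGPROCESS
WHERE  PADDR <> '00'
ORDER  BY NAME;
```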

 

DDL for all tablespaces in an Oracle database.

SQL> select 'create tablespace ' || df.tablespace_name || chr(10)
     || ' datafile ''' || df.file_name || ''' size ' || df.bytes
     || decode(autoextensible,'N',null, chr(10) || ' autoextend on maxsize '
     || maxbytes)
     || chr(10)
     || 'default storage ( initial ' || initial_extent
     || decode(next_extent, null, null, ' next ' || next_extent)
     || ' minextents ' || min_extents
     || ' maxextents ' || decode(max_extents,'2147483645','unlimited',max_extents)
     || ') ;' as TBS_DDL
     from dba_data_files df, dba_tablespaces t
     where df.tablespace_name = t.tablespace_name
/

TBS_DDL
----------------------------------------------------------------------
create tablespace SYSTEM
 datafile '/oradata/vijay/system01.dbf' size 838860800
 autoextend on maxsize 34359721984
default storage ( initial 65536 minextents 1 maxextents unlimited) ;

create tablespace SYSAUX
 datafile '/oradata/vijay/sysaux01.dbf' size 503316480
 autoextend on maxsize 34359721984
default storage ( initial 65536 minextents 1 maxextents unlimited) ;

create tablespace UNDOTBS1
 datafile '/oradata/vijay/undotbs01.dbf' size 62914560
 autoextend on maxsize 34359721984
default storage ( initial 65536 minextents 1 maxextents unlimited) ;

create tablespace USERS
 datafile '/oradata/vijay/users01.dbf' size 5242880
 autoextend on maxsize 34359721984
default storage ( initial 65536 minextents 1 maxextents unlimited) ;


How to get your own SID in Oracle

SQL> SELECT SYS_CONTEXT ('USERENV', 'CURRENT_SCHEMA') AS username, sys_context('USERENV','SID') "My SID" from dual;

USERNAME       My SID
-------------- --------
SHAAN          138
 

 

SQL> select sys_context('USERENV','SID') "My SID" from dual;

My SID
--------
138

 

Last DDL and DML Date/Time of Any Table


 

SQL> alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

Session altered.

 

SQL> select (select last_ddl_time from dba_objects where object_name='EMP' and owner='SHAAN') "DDL Time",
     decode(maxscn,0,'N/A',scn_to_timestamp(maxscn)) "DML Time"
     from (select nvl(max(ora_rowscn),0) maxscn from SHAAN.EMP);

DDL Time             DML Time
-------------------- ----------------------------------------
05-OCT-2022 22:30:16 05-OCT-22 10.31.21.000000000 PM


SQL> col "Owner Object" for a30
SQL> set lines 200 pages 1000
SQL> select (select owner||'.'||object_name from dba_objects where object_name='EMP' and owner='SHAAN') "Owner Object",
     (select created from dba_objects where object_name='EMP' and owner='SHAAN') "Created Time",
     (select last_ddl_time from dba_objects where object_name='EMP' and owner='SHAAN') "DDL Time",
     decode(maxscn,0,'N/A',scn_to_timestamp(maxscn)) "DML Time"
     from (select nvl(max(ora_rowscn),0) maxscn from SHAAN.EMP);

Owner Object                   Created Time         DDL Time             DML Time
------------------------------ -------------------- -------------------- ----------------------------------------
SHAAN.EMP                      05-OCT-2022 22:30:16 05-OCT-2022 22:30:16 05-OCT-22 10.31.21.000000000 PM

 

ORA-65096: invalid common user or role name

ORA-65096: invalid common user or role name means you tried to create a regular (local) user while connected to the CDB root, where user names must begin with C##; local users should be created in a PDB.

 

This can be avoided by setting the hidden parameter "_ORACLE_SCRIPT"=true.

 

Since this is a hidden parameter, it is advisable to use it only under the direction of Oracle Support.

 

SQL> create user shaan identified by shaan123 default tablespace users quota unlimited on users;
create user shaan identified by shaan123 default tablespace users quota unlimited on users
            *
ERROR at line 1:
ORA-65096: invalid common user or role name



SQL> alter session set "_ORACLE_SCRIPT"=true;

Session altered.

SQL> create user shaan identified by shaan123 default tablespace users quota unlimited on users;

User created.

SQL> grant connect, resource to shaan;

Grant succeeded.

SQL> conn shaan/shaan123
Connected.
SQL> show user
USER is "SHAAN"
 

ORA-01578: ORACLE data block corrupted

Recently we saw the below error in our alert log.


Errors in file /oracle/app/diag/rdbms/VIJAY/trace/VIJAY_ora_70454.trc (incident=2300411):
ORA-01578: ORACLE data block corrupted (file # 182, block # 12483)
ORA-01110: data file 182: '/oradata1/VIJAY/audsys_ts_01.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option


To fix the above error, we followed the steps below:


SQL> select owner,segment_name,segment_type,tablespace_name from dba_extents 
     where file_id=182 and 12483 between block_id AND block_id+blocks-1;
 
OWNER           SEGMENT_NAME          SEGMENT_TYPE       TABLESPACE_NAME
--------------- ------------------- ------------------ -----------------
JAIDBA         AUD$UNIFIED            TABLE              AUDSYS_TS

 

SQL> select count(*) from JAIDBA.AUD$UNIFIED;
select count(*) from JAIDBA.AUD$UNIFIED
       *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 182, block # 12483)
ORA-01110: data file 182: '/oradata1/VIJAY/audsys_ts_01.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

  

SQL> BEGIN
       DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
         SCHEMA_NAME => 'JAIDBA',
         OBJECT_NAME => 'AUD$UNIFIED',
         OBJECT_TYPE => dbms_repair.table_object,
         FLAGS       => dbms_repair.SKIP_FLAG);
     END;
     /

 

PL/SQL procedure successfully completed.

 

SQL> select skip_corrupt
     from dba_tables
     where owner = 'JAIDBA'
     and table_name = 'AUD$UNIFIED';

SKIP_COR
--------
ENABLED

 

SQL> alter table JAIDBA.AUD$UNIFIED move;
 
Table altered.

 

SQL> select count(*) from JAIDBA.AUD$UNIFIED;
 
  COUNT(*)
----------
         0


Restore a full database on a different server with the same name.

 

Here I am taking the example of database ORA11G, which is hosted on oel1.oracle.com, and restoring it on oel2.oracle.com with the same name.

We assume the two servers have different file system layouts.

  • Copy the pfile and password file from oel1.oracle.com to the $ORACLE_HOME/dbs location on oel2.oracle.com, and modify the pfile for the current server (controlfile locations, adump location, etc.); also remove all hidden parameters from the pfile.
  • Restore the controlfile that was taken after the level 0 backup to oel2.oracle.com, at the location defined in the pfile.
  • Now create a script to restore the database.

  • In the script below we use 'SET NEWNAME' because the two servers have different file systems.
  • At the end of the script we use 'SWITCH DATAFILE ALL;' to update the controlfile with the new datafile locations.
  • The RECOVER DATABASE command restores archive logs from tape to disk at the location specified by log_archive_dest_1, then starts recovery.
  • Channels are RMAN's parallel workers for restoring the datafiles.


export ORACLE_SID=DB11G
rman target / <<! >DB11G_rman_restore.log
run {
sql 'alter session set optimizer_mode=rule';
allocate channel a1 type 'sbt_tape';
allocate channel a2 type 'sbt_tape';
allocate channel a3 type 'sbt_tape';
allocate channel a4 type 'sbt_tape';
send 'NB_ORA_SERV=denver, NB_ORA_CLIENT=ora11gdb-g1';
SET NEWNAME FOR DATAFILE 4 to '/u01/app/oracle/oradata/DB11G/datafile/o1_mf_users_8yobsg8j_.dbf';
SET NEWNAME FOR DATAFILE 3 to '/u01/app/oracle/oradata/DB11G/datafile/o1_mf_undotbs1_8yobsg7w_.dbf';
SET NEWNAME FOR DATAFILE 2 to '/u01/app/oracle/oradata/DB11G/datafile/o1_mf_sysaux_8yobsg7o_.dbf';
SET NEWNAME FOR DATAFILE 1 to '/u01/app/oracle/oradata/DB11G/datafile/o1_mf_system_8yobsg0w_.dbf';
SET NEWNAME FOR DATAFILE 5 to '/u01/app/oracle/oradata/DB11G/datafile/vijay_ts_01.dbf';
RESTORE DATABASE;
SWITCH DATAFILE ALL;
RECOVER DATABASE;
}
Exit;

  • Start up and mount the database, then find the maximum archive log sequence from V$LOG_HISTORY. This tells us up to which sequence recovery is needed.

    SQL> startup nomount;
    ORACLE instance started.

    Total System Global Area  413372416 bytes
    Fixed Size                  2213896 bytes
    Variable Size             306186232 bytes
    Database Buffers          100663296 bytes
    Redo Buffers                4308992 bytes
    SQL> alter database mount;

    Database altered.


    SQL> SELECT MAX(SEQUENCE#) FROM V$LOG_HISTORY;

    MAX(SEQUENCE#)
    --------------
                39

  • Now the database is ready for restoration. Run the restore script prepared in step 3.
 
 
[oracle@oel2 script]$ nohup sh restore_db11g.sh &
[1] 24665
[oracle@oel2 script]$ nohup: ignoring input and appending output to `nohup.out'
 
  • Once restore and recovery complete, log in to SQL*Plus and rename the redo log files before executing the RESETLOGS command. If they are not renamed to the TST11G directory, OPEN RESETLOGS will look in the ORA11G directory to create the redo log files, which can fail or corrupt the database.

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------
/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_3_90ob65o7_.log
/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_2_90ob64x1_.log
/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_1_90ob64bj_.log

SQL> alter database rename file '/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_3_90ob65o7_.log' to '/u01/app/oracle/oradata/TST11G/onlinelog/o1_mf_3_90ob65o7_.log';

Database altered.

SQL> alter database rename file '/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_2_90ob64x1_.log' to '/u01/app/oracle/oradata/TST11G/onlinelog/o1_mf_2_90ob64x1_.log';

Database altered.

SQL> alter database rename file '/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_1_90ob64bj_.log' to '/u01/app/oracle/oradata/TST11G/onlinelog/o1_mf_1_90ob64bj_.log';

Database altered.

SQL> alter database open resetlogs;

Database altered.

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ---------------------------------
ORA11G           oel2.oracle.com

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
ORA11G    READ WRITE

SQL> select count(*) from v$recover_file;
 
COUNT(*)
----------
         0


