Oracle RAC SCAN IP

In Oracle Real Application Clusters (RAC), SCAN stands for Single Client Access Name. It is a single network name, backed by one or more virtual IP addresses (typically three), that clients use to connect to the database service regardless of the number of nodes or instances in the cluster. SCAN simplifies client connectivity and load balancing in a RAC environment.

Each SCAN IP is associated with a SCAN listener; the SCAN listeners (normally three) run on whichever nodes currently host the SCAN VIPs. When a client connects through the SCAN, a SCAN listener hands the connection off to a local listener on one of the available nodes, distributing the workload across multiple instances.

Here are some key points about SCAN IP in Oracle RAC:

  1. Simplified Client Connectivity: Instead of specifying individual node addresses or instance names, clients can connect to the SCAN IP to access the RAC database. This simplifies the client configuration and eliminates the need to update client connection details when the cluster topology changes.

  2. Load Balancing: The SCAN listener performs load balancing by distributing client connections across the available nodes, directing each new connection request to the least-loaded instance so the workload is spread evenly across the cluster.

  3. Transparent Node Addition/Removal: With the SCAN IP, adding or removing nodes from the cluster does not impact client connectivity. Clients continue to connect to the SCAN IP, and the SCAN listener automatically redirects the connections to the appropriate nodes, even if the cluster topology changes.

  4. High Availability: The SCAN IP enhances high availability by providing a single, highly available entry point for client connections. If a node or SCAN listener fails, the SCAN VIP is automatically relocated to another node, ensuring continuous client connectivity.

  5. SCAN VIPs and SCAN Listeners: The SCAN consists of virtual IP addresses (SCAN VIPs) and corresponding SCAN listeners. Each SCAN VIP is assigned to one of the nodes in the cluster, and a SCAN listener runs alongside each SCAN VIP to handle the client connections arriving on it.

By using the SCAN IP in Oracle RAC, clients can connect to the database service seamlessly, without the need to manage individual node addresses. The SCAN IP provides a centralized and load-balanced access point to the RAC database, enhancing scalability, availability, and ease of client connectivity.
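
For example, a client tnsnames.ora entry only needs the SCAN name; the host name and service name below are illustrative assumptions, not values from a real cluster:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl_svc)
    )
  )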

Oracle Data Guard Background Processes


In Oracle Data Guard, background processes play a vital role in managing and maintaining the standby database and ensuring data protection and availability. These processes run on the primary and standby databases to facilitate data replication, synchronization, and failover operations. Here are some important background processes in Oracle Data Guard:

  1. Log Writer (LGWR): The LGWR process writes redo log records to the online redo log files on the primary database. In a Data Guard configuration, LGWR works with the redo transport processes (see below) to ship that redo to the standby database(s).

  2. Archiver (ARCn): The archiver process (ARCn) archives the online redo logs on the primary database and can also ship archived redo log files to the standby database(s). These archived logs are essential for keeping the standby database in sync with the primary database.

  3. Managed Recovery Process (MRP0): The Managed Recovery Process applies redo received from the primary database to a physical standby database. It continuously applies redo from the standby redo logs (or archived redo logs) to keep the standby up to date with the primary.

  4. Data Guard Broker Processes: The Data Guard Broker is a management framework for configuring, monitoring, and controlling Data Guard configurations. Its background process, DMON (Data Guard Broker Monitor), runs on each database in the configuration and is driven through the DGMGRL command-line interface. Together they handle the configuration and management tasks of the Data Guard setup.

  5. Redo Transport Services (LNSn and RFS): Redo transport services move redo data from the primary database to the standby database(s). The Log Network Server (LNSn) on the primary database transmits redo to the Remote File Server (RFS) process on the standby, which writes it to the standby redo logs or archived logs for the apply processes to consume.

  6. RFS (Remote File Server) Process: The RFS process on the standby receives the redo (archived redo log and standby redo log traffic) sent from the primary database and writes it to the standby's log files; the apply processes then apply that redo to the standby database.

  7. Data Guard Net Services (LNSn and NSSn): These processes handle the network communication between the primary and standby databases. They run on the primary database (in newer releases the NSSn process handles synchronous redo transport), shipping redo efficiently to the RFS process on the standby along with the other communication required for Data Guard operations.

These are some of the key background processes involved in Oracle Data Guard. They work together to ensure data replication, synchronization, and failover capabilities, providing high availability and data protection for Oracle databases.
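
On a standby database you can watch these processes at work with a query such as the one below (in recent releases V$DATAGUARD_PROCESS supersedes V$MANAGED_STANDBY):

SQL> select process, status, thread#, sequence#, block#
     from v$managed_standby;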

 

Oracle RAC Background Processes



In Oracle Real Application Clusters (RAC), background processes play a crucial role in managing the shared resources, coordination, and synchronization among the multiple instances that make up the cluster. These processes are specific to RAC and work together to ensure high availability, scalability, and fault tolerance. Here are some important background processes in Oracle RAC:

  1. Global Cache Service (GCS): The GCS, serviced by the LMSn background processes, coordinates access to data blocks held in the buffer caches of the different instances. It handles block requests, grants or denies access, and manages the distributed locking needed for data concurrency.

  2. Global Enqueue Service (GES): The GES, serviced mainly by the LMD processes, handles lock management for non-data resources such as library cache objects, dictionary cache entries, and sequences across the RAC instances. It coordinates the acquisition and release of these enqueues across multiple instances.

  3. Global Resource Directory (GRD): The GRD is not a single process but a distributed in-memory structure, maintained by the GCS and GES processes, that records the ownership and location of cached data blocks and enqueues. It provides a cluster-wide view of the resources and their statuses across the RAC instances.

  4. Cache Fusion: Cache Fusion is the mechanism, driven by the LMSn processes, that transfers data blocks between the instances' local caches over the cluster interconnect. It enables efficient data sharing and avoids unnecessary disk I/O by letting instances obtain each other's data blocks directly.

  5. Listener Processes: The SCAN and local listener processes accept incoming connection requests from clients and route them to the appropriate RAC instance. They handle the initial connection establishment and enable clients to connect to any available instance in the RAC cluster.

  6. LMS Process: The Global Cache Service process (LMS, historically Lock Manager Server) is the workhorse behind GCS and Cache Fusion. It ships data blocks between instances and manages the distributed locks that guarantee data consistency and integrity in the RAC environment.

  7. Recovery and Reconfiguration: When an instance fails, the GCS and GES resources it mastered are remastered across the surviving instances, and the cached data blocks and enqueues are recovered and redistributed to ensure data availability and consistency.

  8. LMON Process: The Global Enqueue Service Monitor (LMON) process monitors the health and availability of the GES and GCS processes. It detects failures and takes appropriate actions to recover or reconfigure the resources in case of failures or reconfiguration events.

These are some of the key background processes in Oracle RAC. They work together to provide high availability, scalability, and efficient resource management in a clustered database environment.
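
You can confirm which of these RAC background processes are running on each instance with a query against GV$BGPROCESS, for example:

SQL> select inst_id, name, description
     from gv$bgprocess
     where paddr <> '00'
     and name like 'LM%'
     order by inst_id, name;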

MySQL Background Processes


 

In MySQL, background activity supports the functioning and performance of the database system. Strictly speaking, MySQL is a single-process, multi-threaded server, so the "background processes" described here are background threads running inside mysqld. They run continuously and handle activities such as memory management, I/O operations, query execution, and monitoring. Here are some important ones:

  1. MySQL Server Process: The MySQL server process, also known as the mysqld process, is the main process that handles client connections, query execution, and overall management of the MySQL server. It coordinates with other background processes to perform different tasks.

  2. InnoDB Buffer Pool Threads: InnoDB is the default storage engine in MySQL, and it uses a buffer pool to cache frequently accessed data pages in memory. Background threads help manage the buffer pool, handling tasks such as read-ahead and flushing modified pages back to disk.

  3. InnoDB Log Writer: The InnoDB Log Writer process (also called the InnoDB Log Flush or Log IO Thread) writes the changes made to the InnoDB redo log files. It ensures that the redo log records are durably stored on disk, providing transaction durability and crash recovery capabilities.

  4. InnoDB Page Cleaner: The InnoDB Page Cleaner process is responsible for the asynchronous flushing of dirty pages from the buffer pool to disk. It helps in maintaining a balance between data modifications and background flushing, optimizing I/O operations and database performance.

  5. MySQL Replication Threads: In a replication setup, MySQL uses background threads to manage replication between the source (master) and replica (slave) servers. These include the binary log dump thread on the source, and the replication I/O thread and SQL (applier) thread on the replica, which receive and apply the replicated changes.

  6. MySQL Event Scheduler: The MySQL Event Scheduler is a background process that manages the execution of scheduled events defined in the database. It triggers and runs events at specified times or intervals, enabling automation of various database tasks.

  7. MySQL Enterprise Monitor Agent: The MySQL Enterprise Monitor Agent is an optional background process used in MySQL Enterprise Edition. It collects performance and status data from the MySQL server and sends it to the MySQL Enterprise Monitor for monitoring, analysis, and alerting.

  8. MySQL Thread Pool: The MySQL Thread Pool (available as a server plugin) manages client connections and thread reuse. It optimizes thread creation and handling, reducing the overhead of creating and destroying a thread for each client connection.

These are some of the important background threads and components in MySQL. Each plays a crucial role in ensuring the efficient operation, performance, and reliability of the MySQL database server.
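
These threads can be listed from the Performance Schema, for example:

mysql> SELECT name, processlist_state
       FROM performance_schema.threads
       WHERE type = 'BACKGROUND';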

 

Oracle Background Processes


 

In Oracle Database, background processes are essential components that handle various tasks to support the functioning and performance of the database system. These processes run in the background and perform critical activities such as memory management, I/O operations, recovery, and monitoring. Here are some important background processes in Oracle:

  1. System Monitor (SMON): SMON performs instance recovery, which brings the database back to a consistent state after an instance crash or failure. It rolls the database forward by applying redo, then rolls back uncommitted transactions using undo.

  2. Process Monitor (PMON): PMON is responsible for process cleanup and process recovery. It detects and resolves failed and terminated processes, releasing system resources associated with them. PMON also performs process recovery during instance recovery.

  3. Database Writer (DBWn): DBWn processes (DBW0, DBW1, etc.) are responsible for writing modified database buffers from the database buffer cache to data files on disk. They ensure that dirty buffers are periodically written to disk, reducing the risk of data loss during a system failure.

  4. Log Writer (LGWR): LGWR writes redo log buffers to the redo log files on disk. It ensures that changes made to the database are durably stored in the redo logs before committing transactions. LGWR plays a crucial role in database recovery and maintaining data integrity.

  5. Checkpoint (CKPT): CKPT is responsible for signaling the DBWn processes to perform a checkpoint. A checkpoint flushes modified database buffers to disk and updates the control file and data file headers. It helps in reducing the instance recovery time during a crash or failure.

  6. Archiver (ARCn): The archiver process (ARCn) copies online redo log files to archive destinations for backup and recovery purposes. It ensures the availability of redo logs beyond the point of online log switching and enables point-in-time recovery.

  7. Dispatcher (Dnnn): Dispatchers are used in Shared Server configurations, where many client connections are serviced by a small pool of shared server processes. Dispatchers receive client requests and place them on a queue to be picked up by the next available shared server process.

  8. Job Queue Processes (CJQn): Job Queue processes manage and execute scheduled jobs in the database. They handle tasks such as running stored procedures, executing PL/SQL blocks, or launching external programs as part of scheduled jobs.

These are some of the important background processes in Oracle Database. Each process plays a critical role in maintaining the integrity, availability, and performance of the database system.
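
The background processes currently running in an instance can be listed from V$BGPROCESS, for example:

SQL> select name, description
     from v$bgprocess
     where paddr <> '00'
     order by name;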

 

PostgreSQL Background Processes

 


In PostgreSQL, background processes are responsible for performing various tasks to support the functioning of the database system. These processes run continuously in the background and handle tasks such as maintenance, monitoring, and background operations. Here are some important background processes in PostgreSQL:

  1. Autovacuum: The autovacuum process is responsible for managing the automatic maintenance of database tables. It identifies and removes dead tuples (unused rows) from tables, updates statistics, and performs other essential maintenance tasks to optimize the performance of the database.

  2. Checkpointer: The checkpointer process writes dirty (modified) database buffers from memory to disk in a controlled manner. It helps in reducing the amount of time required for database recovery in case of a system crash and ensures that changes are durably stored on disk.

  3. Background Writer: The background writer process writes dirty database buffers to disk ahead of demand, so that backend processes rarely have to wait for a buffer to be cleaned before reusing it. It smooths the I/O load on the server by asynchronously writing modified data to disk.

  4. WAL Writer: The Write-Ahead Log (WAL) writer process writes the WAL buffers to the disk. The WAL is a critical component of PostgreSQL's crash recovery mechanism, ensuring durability and consistency of transactions.

  5. Startup Process: The startup process is responsible for database startup and crash recovery. It coordinates with other background processes to perform necessary tasks during the database startup process.

  6. Archiver: The archiver process is responsible for managing the archiving of the Write-Ahead Log (WAL) segments. It copies the WAL files to a designated archive location for backup and point-in-time recovery purposes.

  7. Replication Processes: In a replication setup, PostgreSQL uses dedicated background processes: the WAL sender (walsender) on the primary streams WAL to the replicas, and the WAL receiver (walreceiver) on each replica receives it, where the startup process applies it.

  8. Background Workers: PostgreSQL allows the creation of custom background worker processes to perform specific tasks. These background workers can be created by extensions or custom applications to handle additional functionality beyond the core database processes.

These are some of the important background processes in PostgreSQL, and each plays a crucial role in ensuring the stability, performance, and durability of the database system.
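
On PostgreSQL 10 and later, these background processes are visible from inside the database, for example:

postgres=# SELECT pid, backend_type
           FROM pg_stat_activity
           WHERE backend_type <> 'client backend';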

DDL for all tablespaces in an Oracle database

SQL> select 'create tablespace ' || df.tablespace_name || chr(10)
  || ' datafile ''' || df.file_name || ''' size ' || df.bytes
  || decode(autoextensible,'N',null, chr(10) || ' autoextend on maxsize ' || maxbytes)
  || chr(10)
  || 'default storage ( initial ' || initial_extent
  || decode(next_extent, null, null, ' next ' || next_extent)
  || ' minextents ' || min_extents
  || ' maxextents ' || decode(max_extents,'2147483645','unlimited',max_extents)
  || ') ;' as TBS_DDL
  from dba_data_files df, dba_tablespaces t
  where df.tablespace_name = t.tablespace_name
/

TBS_DDL
----------------------------------------------------------------------
create tablespace SYSTEM
 datafile '/oradata/vijay/system01.dbf' size 838860800
 autoextend on maxsize 34359721984
default storage ( initial 65536 minextents 1 maxextents unlimited) ;

create tablespace SYSAUX
 datafile '/oradata/vijay/sysaux01.dbf' size 503316480
 autoextend on maxsize 34359721984
default storage ( initial 65536 minextents 1 maxextents unlimited) ;

create tablespace UNDOTBS1
 datafile '/oradata/vijay/undotbs01.dbf' size 62914560
 autoextend on maxsize 34359721984
default storage ( initial 65536 minextents 1 maxextents unlimited) ;

create tablespace USERS
 datafile '/oradata/vijay/users01.dbf' size 5242880
 autoextend on maxsize 34359721984
default storage ( initial 65536 minextents 1 maxextents unlimited) ;
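
Alternatively, DBMS_METADATA can generate the exact current DDL for each tablespace:

SQL> set long 100000 longchunksize 100000 pagesize 0
SQL> select dbms_metadata.get_ddl('TABLESPACE', tablespace_name)
     from dba_tablespaces;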


How to get self SID in Oracle

SQL> SELECT SYS_CONTEXT ('USERENV', 'CURRENT_SCHEMA') AS username, sys_context('USERENV','SID') "My SID" from dual;

USERNAME       My SID
-------------- --------
SHAAN          138
 

 

SQL> select sys_context('USERENV','SID') "My SID" from dual;

My SID
--------
138
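
The SID can also be fed back into V$SESSION to see the full session record for your own session:

SQL> select sid, serial#, username, program
     from v$session
     where sid = sys_context('USERENV','SID');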

 

Last DDL and DML Date/Time of Any Table


SQL> alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

Session altered.

 

SQL> select (select last_ddl_time from dba_objects where object_name='EMP' and owner='SHAAN') "DDL Time",
            decode(maxscn,0,'N/A',scn_to_timestamp(maxscn)) "DML Time"
     from (select nvl(max(ora_rowscn),0) maxscn from SHAAN.EMP);

DDL Time             DML Time
-------------------- ----------------------------------------
05-OCT-2022 22:30:16 05-OCT-22 10.31.21.000000000 PM


SQL> col "Owner Object" for a30
SQL> set lines 200 pages 1000
SQL> select (select owner||'.'||object_name from dba_objects where object_name='EMP' and owner='SHAAN') "Owner Object",
            (select created from dba_objects where object_name='EMP' and owner='SHAAN') "Created Time",
            (select last_ddl_time from dba_objects where object_name='EMP' and owner='SHAAN') "DDL Time",
            decode(maxscn,0,'N/A',scn_to_timestamp(maxscn)) "DML Time"
     from (select nvl(max(ora_rowscn),0) maxscn from SHAAN.EMP);

Owner Object                   Created Time         DDL Time             DML Time
------------------------------ -------------------- -------------------- ----------------------------------------
SHAAN.EMP                      05-OCT-2022 22:30:16 05-OCT-2022 22:30:16 05-OCT-22 10.31.21.000000000 PM
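
Note that ORA_ROWSCN is tracked at the block level by default, so the "DML Time" is approximate, and SCN_TO_TIMESTAMP only maps SCNs that are still within the retained mapping window (typically the last few days), raising ORA-08181 for older changes. For row-level SCN tracking the table must be created with ROWDEPENDENCIES; a sketch (the table name here is illustrative):

SQL> create table shaan.emp_rd (empno number, ename varchar2(30)) rowdependencies;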

 

ORA-65096: invalid common user or role name

 ORA-65096: invalid common user or role name means you are logged into the root container (CDB$ROOT), where user names must begin with C##, when you probably intended to create a local user inside a PDB.

 

This can be avoided by setting the hidden parameter "_ORACLE_SCRIPT"=true.

 

Since this is a hidden parameter, it is advisable to use it only under the direction of Oracle Support.

 

SQL> create user shaan identified by shaan123 default tablespace users quota unlimited on users;
create user shaan identified by shaan123 default tablespace users quota unlimited on users
            *
ERROR at line 1:
ORA-65096: invalid common user or role name



SQL> alter session set "_ORACLE_SCRIPT"=true;

Session altered.

SQL> create user shaan identified by shaan123 default tablespace users quota unlimited on users;

User created.

SQL> grant connect, resource to shaan;

Grant succeeded.

SQL> conn shaan/shaan123
Connected.
SQL> show user
USER is "SHAAN"
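
The supported alternatives are to create a common user with the C## prefix, or to switch into a PDB and create a local user there; a sketch (the PDB name PDB1 is an assumption):

SQL> create user c##shaan identified by shaan123 container=all;

SQL> alter session set container = PDB1;
SQL> create user shaan identified by shaan123 default tablespace users quota unlimited on users;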
 

ORA-01578: ORACLE data block corrupted

 Recently we saw the below error in our alert log.


Errors in file /oracle/app/diag/rdbms/VIJAY/trace/VIJAY_ora_70454.trc (incident=2300411):
ORA-01578: ORACLE data block corrupted (file # 182, block # 12483)
ORA-01110: data file 182: '/oradata1/VIJAY/audsys_ts_01.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option


To fix the above error, we followed the steps below:


SQL> select owner,segment_name,segment_type,tablespace_name from dba_extents 
     where file_id=182 and 12483 between block_id AND block_id+blocks-1;
 
OWNER           SEGMENT_NAME          SEGMENT_TYPE       TABLESPACE_NAME
--------------- ------------------- ------------------ -----------------
JAIDBA         AUD$UNIFIED            TABLE              AUDSYS_TS

 

SQL> select count(*) from JAIDBA.AUD$UNIFIED;
select count(*) from JAIDBA.AUD$UNIFIED
       *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 182, block # 12483)
ORA-01110: data file 182: '/oradata1/VIJAY/audsys_ts_01.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

  

SQL> BEGIN
       DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
         SCHEMA_NAME => 'JAIDBA',
         OBJECT_NAME => 'AUD$UNIFIED',
         OBJECT_TYPE => dbms_repair.table_object,
         FLAGS       => dbms_repair.SKIP_FLAG);
     END;
     /

 

PL/SQL procedure successfully completed.

 

SQL> select skip_corrupt
     from dba_tables
     where owner = 'JAIDBA'
     and table_name = 'AUD$UNIFIED';

SKIP_COR
--------
ENABLED

 

SQL> alter table JAIDBA.AUD$UNIFIED move;
 
Table altered.

 

SQL> select count(*) from JAIDBA.AUD$UNIFIED;
 
  COUNT(*)
----------
         0
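
Note that ALTER TABLE ... MOVE leaves any indexes on the table in an UNUSABLE state, so they should be rebuilt afterwards. It is also worth confirming the extent of the corruption with RMAN; a minimal check using the datafile number from the error above:

RMAN> backup validate check logical datafile 182;

SQL> select file#, block#, blocks, corruption_type
     from v$database_block_corruption;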


Stop/Start RAC services

Stop RAC services

Stop Listener:

$ srvctl stop listener -n ol7-122-rac1

$ srvctl status listener -n ol7-122-rac1

 

Stop database:

$ srvctl stop database -d VIJAY

$ srvctl status database -d VIJAY

 

Stop ASM:

$ srvctl stop asm -n ol7-122-rac1 -f

$ srvctl status asm -n ol7-122-rac1

 

Stop nodeapps:

$ srvctl stop nodeapps -n ol7-122-rac1 -f

$ srvctl stop nodeapps -n ol7-122-rac2 -f

 

Stop crs:

# crsctl stop crs

# crsctl check cluster -all

 

Start RAC Services:

Start crs:

crsctl start crs

crsctl start res ora.crsd -init

crsctl check cluster -all

Start nodeapps:

srvctl start nodeapps -n ol7-122-rac1/2

srvctl status nodeapps -n  ol7-122-rac1/2

 

Start ASM:

srvctl start asm -n ol7-122-rac1/2

srvctl status asm -n ol7-122-rac1/2

 

Start database:

srvctl start database -d VIJAY

 

Start listener:

srvctl start listener -n ol7-122-rac1/2

srvctl status listener -n ol7-122-rac1/2
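
After starting everything, the overall state of the cluster resources can be verified with:

$ crsctl stat res -t

$ crsctl check cluster -all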


Backup OCR

1. OCR Dumps

[root@ol7-122-rac1 bin]# ./ocrconfig -export ocr_backup_`date +%Y%m%d`.dmp

PROT-58: successfully exported the Oracle Cluster Registry contents to file 'ocr_backup_20220922.dmp'

2. OCR Backups

[root@ol7-122-rac1 bin]# ./ocrconfig -manualbackup

ol7-122-rac1     2022/09/22 11:43:13     +DATA:/ol7-122-cluster/OCRBACKUP/backup_20220922_114313.ocr.308.1116070995     0

[root@ol7-122-rac1 bin]# ./ocrconfig -showbackup

ol7-122-rac2     2022/09/20 01:48:38     +DATA:/ol7-122-cluster/OCRBACKUP/backup00.ocr.288.1115862509     0

ol7-122-rac2     2022/09/20 01:48:38     +DATA:/ol7-122-cluster/OCRBACKUP/day.ocr.289.1115862519     0

ol7-122-rac2     2022/09/20 01:48:38     +DATA:/ol7-122-cluster/OCRBACKUP/week.ocr.290.1115862521     0

ol7-122-rac1     2022/09/22 11:43:13     +DATA:/ol7-122-cluster/OCRBACKUP/backup_20220922_114313.ocr.308.1116070995     0
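
If the OCR ever has to be restored from one of these backups, the same tool is used with the cluster stack stopped on all nodes; a sketch using the manual backup taken above:

# crsctl stop crs -f

# ocrconfig -restore +DATA:/ol7-122-cluster/OCRBACKUP/backup_20220922_114313.ocr.308.1116070995

# crsctl start crs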

Restore full database on different server with same name.

 

Here I am taking the example of database ORA11G, hosted on oel1.oracle.com, and restoring it on oel2.oracle.com with the same name.

Assuming the two servers have different file system layouts.

  • Copy the pfile and password file from oel1.oracle.com to the $ORACLE_HOME/dbs location on oel2.oracle.com, and modify the pfile for the current server (control file locations, adump location, etc.); also remove all hidden parameters from the pfile.
  • Restore the controlfile that was taken after the level 0 backup to oel2.oracle.com, at the location defined in the pfile.
  • Now create a script to restore the database.

  • In the script below we use 'SET NEWNAME' because the file systems differ.
  • At the end of the script, 'SWITCH DATAFILE ALL;' updates the control file with the new datafile locations.
  • The RECOVER DATABASE command restores the required archive logs from tape to disk at the location set in log_archive_dest_1, and then starts recovery.
  • Channels are RMAN's parallel workers; allocating several lets the datafiles be restored concurrently.


export ORACLE_SID=DB11G
rman target / <<! >DB11G_rman_restore.log
run {
sql 'alter session set optimizer_mode=rule';
allocate channel a1 type 'sbt_tape';
allocate channel a2 type 'sbt_tape';
allocate channel a3 type 'sbt_tape';
allocate channel a4 type 'sbt_tape';
send 'NB_ORA_SERV=denver, NB_ORA_CLIENT=ora11gdb-g1';
SET NEWNAME FOR DATAFILE 4 to '/u01/app/oracle/oradata/DB11G/datafile/o1_mf_users_8yobsg8j_.dbf';
SET NEWNAME FOR DATAFILE 3 to '/u01/app/oracle/oradata/DB11G/datafile/o1_mf_undotbs1_8yobsg7w_.dbf';
SET NEWNAME FOR DATAFILE 2 to '/u01/app/oracle/oradata/DB11G/datafile/o1_mf_sysaux_8yobsg7o_.dbf';
SET NEWNAME FOR DATAFILE 1 to '/u01/app/oracle/oradata/DB11G/datafile/o1_mf_system_8yobsg0w_.dbf';
SET NEWNAME FOR DATAFILE 5 to '/u01/app/oracle/oradata/DB11G/datafile/vijay_ts_01.dbf';
RESTORE DATABASE;
SWITCH DATAFILE ALL;
RECOVER DATABASE;
}
exit
!

  • Startup mount, and then find the maximum archive log sequence from V$LOG_HISTORY. This tells us up to what sequence recovery is needed.

    SQL> startup nomount;
    ORACLE instance started.

    Total System Global Area  413372416 bytes
    Fixed Size                  2213896 bytes
    Variable Size             306186232 bytes
    Database Buffers          100663296 bytes
    Redo Buffers                4308992 bytes
    SQL> alter database mount;

    Database altered.


    SQL> SELECT MAX(SEQUENCE#) FROM V$LOG_HISTORY;

    MAX(SEQUENCE#)
    --------------
                39
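
If you want the restore to stop explicitly at this point, the run block can be bounded with SET UNTIL SEQUENCE (one past the maximum found above; thread 1 is an assumption for a single-instance database):

run {
  set until sequence 40 thread 1;
  restore database;
  recover database;
}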

  • Now the database is ready for restoration. Run the restore script that we prepared above.
 
 
[oracle@oel2 script]$ nohup sh restore_db11g.sh &
[1] 24665
[oracle@oel2 script]$ nohup: ignoring input and appending output to `nohup.out'
 
  • Once restore and recovery complete, log in to SQL*Plus and rename the redo log files into the new directory (TST11G here, as shown below) before executing the resetlogs command; otherwise, at resetlogs Oracle will look for the old ORA11G directory to create the redo log files, and this may cause database corruption.

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------
/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_3_90ob65o7_.log
/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_2_90ob64x1_.log
/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_1_90ob64bj_.log

SQL> alter database rename file '/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_3_90ob65o7_.log' to '/u01/app/oracle/oradata/TST11G/onlinelog/o1_mf_3_90ob65o7_.log';

Database altered.

SQL> alter database rename file '/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_2_90ob64x1_.log' to '/u01/app/oracle/oradata/TST11G/onlinelog/o1_mf_2_90ob64x1_.log';

Database altered.

SQL> alter database rename file '/u01/app/oracle/oradata/ORA11G/onlinelog/o1_mf_1_90ob64bj_.log' to '/u01/app/oracle/oradata/TST11G/onlinelog/o1_mf_1_90ob64bj_.log';

Database altered.

SQL> alter database open resetlogs;

Database altered.

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ---------------------------------
ORA11G           oel2.oracle.com

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
ORA11G    READ WRITE

SQL> select count(*) from v$recover_file;
 
COUNT(*)
----------
         0



  Below are the steps to follow for adding any new mount on you linux machine. [root@oem ~]# fdisk -l Disk /dev/sdb: 53.7 GB, 53687091200 by...