
Oracle RAC SCAN

 

Oracle RAC SCAN (Single Client Access Name) is a feature of Oracle Real Application Clusters (RAC) that provides a single virtual hostname for client connections to the cluster database. The SCAN simplifies client connectivity and load balancing by abstracting the underlying cluster configuration behind a single, stable endpoint.

Here are key points to understand about Oracle RAC SCAN:

  1. Simplified Client Connectivity: Instead of connecting to individual node names or VIP (Virtual IP) addresses, clients connect to the SCAN. The SCAN acts as a single virtual hostname that remains constant regardless of the cluster size or configuration changes.

  2. SCAN VIPs: The SCAN is normally associated with three SCAN VIPs (Virtual IP addresses), which are distributed across the nodes of the cluster; if the cluster has fewer than three nodes, a node may host more than one SCAN VIP. When clients resolve the SCAN name they receive these addresses and can connect to any of the SCAN VIPs.

  3. SCAN Listener: A SCAN listener runs on each node that currently hosts a SCAN VIP, so there are normally three SCAN listeners in the cluster. The SCAN listeners accept incoming client connection requests and redirect each connection to the local listener of the node and instance that will service it.

  4. Load Balancing: The SCAN listeners perform connection-time load balancing by distributing client connections across the available nodes and instances in the RAC cluster. Because each database instance registers with the SCAN listeners (via the REMOTE_LISTENER parameter), a connection request can be routed to the least-loaded instance, optimizing resource utilization.

  5. High Availability: The SCAN provides high availability for client connections. If a node or SCAN VIP fails, the SCAN VIP is automatically relocated to another node, ensuring uninterrupted client connectivity. Clients do not need to update their connection details in case of node failures.

  6. Transparent Node Addition/Removal: Adding or removing nodes from the RAC cluster does not impact client connectivity. Clients continue to connect to the SCAN, and the SCAN Listener dynamically adjusts the routing of connections to reflect the updated cluster configuration.

  7. SCAN Configuration: The SCAN is configured during the installation or configuration of Oracle RAC. It requires a SCAN hostname and a corresponding DNS entry pointing to the SCAN VIPs. Clients use the SCAN hostname to connect to the cluster database.

By utilizing the Oracle RAC SCAN feature, clients can connect to the cluster database without needing to be aware of the underlying cluster configuration. The SCAN provides a unified and load-balanced entry point, enhancing scalability, availability, and ease of client connectivity in Oracle RAC environments.
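
For example, once the SCAN is registered in DNS, a client needs only one connect string regardless of which node ends up servicing the session. The SCAN name ol7-122-scan and the service name VIJAY below are illustrative values (matching the test cluster used further down this page); substitute your own:

$ sqlplus system@"ol7-122-scan:1521/VIJAY"

The equivalent tnsnames.ora entry points at the SCAN rather than at any individual node:

VIJAY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ol7-122-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = VIJAY))
  )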

 

Oracle RAC SCAN IP


In Oracle Real Application Clusters (RAC), SCAN stands for Single Client Access Name. It is a single network name that resolves to a set of virtual IP addresses (normally three) and gives clients one stable name for connecting to the database service, regardless of the number of nodes or instances in the cluster. The SCAN simplifies client connectivity and load balancing in a RAC environment.

Each SCAN IP is associated with a SCAN listener. The SCAN listeners (normally three, spread across the cluster) accept client connections arriving on the SCAN IPs and route each connection to one of the available nodes in the cluster, distributing the workload across the instances.

Here are some key points about SCAN IP in Oracle RAC:

  1. Simplified Client Connectivity: Instead of specifying individual node addresses or instance names, clients can connect to the SCAN IP to access the RAC database. This simplifies the client configuration and eliminates the need to update client connection details when the cluster topology changes.

  2. Load Balancing: The SCAN listeners perform connection-time load balancing by distributing client connections across the available nodes. Because every instance registers its load with the SCAN listeners, each connection request is routed to the least-loaded node, spreading the workload evenly across the cluster.

  3. Transparent Node Addition/Removal: With the SCAN IP, adding or removing nodes from the cluster does not impact client connectivity. Clients continue to connect to the SCAN IP, and the SCAN listener automatically redirects the connections to the appropriate nodes, even if the cluster topology changes.

  4. High Availability: The SCAN IP enhances high availability by providing a single, highly available entry point for client connections. If a node or SCAN listener fails, the SCAN VIP is automatically relocated to another node, ensuring continuous client connectivity.

  5. SCAN VIP and SCAN Listener: Each SCAN IP consists of a virtual IP address (SCAN VIP) and a corresponding SCAN listener. A SCAN VIP is assigned to one node in the cluster at a time, and its SCAN listener runs on that same node to handle the client connections arriving on that SCAN IP.

By using the SCAN IP in Oracle RAC, clients can connect to the database service seamlessly, without the need to manage individual node addresses. The SCAN IP provides a centralized and load-balanced access point to the RAC database, enhancing scalability, availability, and ease of client connectivity.
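
To see how the SCAN is configured and where the SCAN VIPs and SCAN listeners are currently running, srvctl can be queried from any node (run as the Grid Infrastructure owner; the output depends on your environment):

$ srvctl config scan
$ srvctl config scan_listener

$ srvctl status scan
$ srvctl status scan_listener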

Oracle RAC Background Processes


In Oracle Real Application Clusters (RAC), background processes play a crucial role in managing the shared resources, coordination, and synchronization among the multiple instances that make up the cluster. These processes are specific to RAC and work together to ensure high availability, scalability, and fault tolerance. Here are some important background processes in Oracle RAC:

  1. Global Cache Service (GCS): The GCS, implemented mainly by the LMS background processes, coordinates access to data blocks held in the buffer caches of the different instances. It handles block requests, grants or denies access, and manages the distributed locking needed for data concurrency.

  2. Global Enqueue Service (GES): The GES, implemented mainly by the LMD and LCK background processes, handles lock management for non-data resources such as library cache objects, dictionary cache entries, and sequences across the RAC instances. It coordinates the acquisition and release of these enqueues across multiple instances.

  3. Global Resource Directory (GRD): The GRD is not a separate process but a distributed, in-memory directory maintained jointly by the GCS and GES across all instances. It records the ownership, mode, and location of cached data blocks and enqueues, providing a cluster-wide view of the resources and their status.

  4. Cache Fusion: Cache Fusion is the mechanism, carried out largely by the LMS processes over the cluster interconnect, that transfers data blocks between the instances' buffer caches. It enables efficient data sharing and avoids unnecessary disk I/O by letting an instance obtain a block directly from the cache of another instance.

  5. Network Listener Process: The Network Listener process listens for incoming connection requests from clients and routes them to the appropriate RAC instance. It handles the initial connection establishment and enables clients to connect to any available instance in the RAC cluster.

  6. LMS Process: The LMS (Global Cache Service Process; the name historically stood for Lock Manager Server) processes do most of the GCS work. They ship data blocks between instances for Cache Fusion and maintain the lock information that keeps shared data consistent across the RAC environment.

  7. GES/GCS Recovery: When an instance fails, the surviving instances remaster the GCS and GES resources that were owned by the failed instance and redistribute the cached data blocks and enqueues so that data remains available and consistent. LMON coordinates this reconfiguration, while SMON on a surviving instance performs the actual instance recovery from the redo logs.

  8. LMON Process: The Global Enqueue Service Monitor (LMON) process monitors the cluster-wide health of the GES and GCS. It detects instance and process failures and drives the reconfiguration of global resources when instances join or leave the cluster.

These are some of the key background processes in Oracle RAC. They work together to provide high availability, scalability, and efficient resource management in a clustered database environment.
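
On a running node these processes are visible at the operating system level. The grep pattern below assumes the default ora_<name>_<SID> naming convention for Oracle background processes:

$ ps -ef | egrep "ora_(lmon|lmd|lms|lck)" | grep -v grep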

Stop/Start RAC services

Stop RAC services

Stop Listener:

$ srvctl stop listener -n ol7-122-rac1

$ srvctl status listener -n ol7-122-rac1

 

Stop database:

$ srvctl stop database -d VIJAY

$ srvctl status database -d VIJAY
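
By default srvctl performs an immediate shutdown; a different shutdown mode can be requested with the -o flag (the values below are standard srvctl stop options):

$ srvctl stop database -d VIJAY -o immediate
$ srvctl stop database -d VIJAY -o abort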

 

Stop ASM:

$ srvctl stop asm -n ol7-122-rac1 -f

$ srvctl status asm -n ol7-122-rac1

 

Stop nodeapps:

$ srvctl stop nodeapps -n ol7-122-rac1 -f

$ srvctl stop nodeapps -n ol7-122-rac2 -f

 

Stop crs:

# crsctl stop crs

# crsctl check cluster -all
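
The crsctl stop/start crs commands act only on the local node and must be run as root on every node. Alternatively, the clusterware stack on all nodes can be stopped with a single command (note that, unlike stop crs, this leaves the Oracle High Availability Services daemon running):

# crsctl stop cluster -all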

 

Start RAC Services:

Start crs:

# crsctl start crs

# crsctl start res ora.crsd -init

# crsctl check cluster -all

Start nodeapps:

$ srvctl start nodeapps -n ol7-122-rac1
$ srvctl start nodeapps -n ol7-122-rac2

$ srvctl status nodeapps -n ol7-122-rac1
$ srvctl status nodeapps -n ol7-122-rac2

 

Start ASM:

$ srvctl start asm -n ol7-122-rac1
$ srvctl start asm -n ol7-122-rac2

$ srvctl status asm -n ol7-122-rac1
$ srvctl status asm -n ol7-122-rac2

 

Start database:

$ srvctl start database -d VIJAY
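
To confirm the instances are open on both nodes after the start, reuse the status command from the stop section above:

$ srvctl status database -d VIJAY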

 

Start listener:

$ srvctl start listener -n ol7-122-rac1
$ srvctl start listener -n ol7-122-rac2

$ srvctl status listener -n ol7-122-rac1
$ srvctl status listener -n ol7-122-rac2
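
Once everything is back up, a consolidated view of all cluster resources and their current states can be taken with crsctl (run from the Grid Infrastructure home):

# crsctl stat res -t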


Backup OCR

 


1. OCR Dumps

[root@ol7-122-rac1 bin]# ./ocrconfig -export ocr_backup_`date +%Y%m%d`.dmp

PROT-58: successfully exported the Oracle Cluster Registry contents to file 'ocr_backup_20220922.dmp'

2. OCR Backups

[root@ol7-122-rac1 bin]# ./ocrconfig -manualbackup

ol7-122-rac1     2022/09/22 11:43:13     +DATA:/ol7-122-cluster/OCRBACKUP/backup_20220922_114313.ocr.308.1116070995     0

[root@ol7-122-rac1 bin]# ./ocrconfig -showbackup

ol7-122-rac2     2022/09/20 01:48:38     +DATA:/ol7-122-cluster/OCRBACKUP/backup00.ocr.288.1115862509     0

ol7-122-rac2     2022/09/20 01:48:38     +DATA:/ol7-122-cluster/OCRBACKUP/day.ocr.289.1115862519     0

ol7-122-rac2     2022/09/20 01:48:38     +DATA:/ol7-122-cluster/OCRBACKUP/week.ocr.290.1115862521     0

ol7-122-rac1     2022/09/22 11:43:13     +DATA:/ol7-122-cluster/OCRBACKUP/backup_20220922_114313.ocr.308.1116070995     0
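
Should the OCR ever need to be restored, the same ocrconfig utility is used as root: -restore takes a physical backup listed by -showbackup, while -import reads a logical export produced by -export. The full restore procedure in the Oracle documentation also involves stopping Clusterware on all nodes first; the commands below only illustrate the syntax against the backups created above, with ocrcheck verifying the registry afterwards.

[root@ol7-122-rac1 bin]# ./ocrconfig -restore +DATA:/ol7-122-cluster/OCRBACKUP/backup_20220922_114313.ocr.308.1116070995

[root@ol7-122-rac1 bin]# ./ocrconfig -import ocr_backup_20220922.dmp

[root@ol7-122-rac1 bin]# ./ocrcheck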
