Consistent naming is not required for Oracle ASM devices, but LUNs used for the OCR and VOTE functions of Oracle RAC environments must have the same device names on all RAC systems. If the names of the OCR and VOTE devices differ between nodes, create a new device for each of these functions on each of the RAC nodes, as follows:
First, check the PVIDs of each disk that is to be used as an OCR or VOTE device on all the RAC nodes. For example, if you're setting up a RAC cluster consisting of 2 nodes, called node1 and node2, check the disks as follows:
root@node1 # lspv | grep vpath | grep -i none
vpath6 00f69a11a2f620c5 None
vpath7 00f69a11a2f622c8 None
vpath8 00f69a11a2f624a7 None
vpath13 00f69a11a2f62f1f None
vpath14 00f69a11a2f63212 None
root@node2 /root # lspv | grep vpath | grep -i none
vpath4 00f69a11a2f620c5 None
vpath5 00f69a11a2f622c8 None
vpath6 00f69a11a2f624a7 None
vpath9 00f69a11a2f62f1f None
vpath10 00f69a11a2f63212 None
As you can see, vpath6 on node1 is the same disk as vpath4 on node2. You can determine this by looking at the PVID in the second column, which identifies the same physical disk on both nodes.
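A quick way to pair the devices across nodes is to sort each node's candidate disks by PVID and compare the two lists. A minimal sketch (the /tmp file names are arbitrary, and it assumes both nodes see exactly the same set of shared disks):
root@node1 # lspv | grep vpath | grep -i none | awk '{print $2, $1}' | sort > /tmp/pvids.node1
root@node2 # lspv | grep vpath | grep -i none | awk '{print $2, $1}' | sort > /tmp/pvids.node2
Copy one file to the other node (for example with scp) and place the lists side by side; lines carrying the same PVID identify the same physical disk on both nodes:
root@node1 # paste /tmp/pvids.node1 /tmp/pvids.node2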
Check the major and minor numbers of each device:
root@node1 # cd /dev
root@node1 # lspv|grep vpath|grep None|awk '{print $1}'|xargs ls -als
0 brw------- 1 root system 47, 6 Apr 28 18:56 vpath6
0 brw------- 1 root system 47, 7 Apr 28 18:56 vpath7
0 brw------- 1 root system 47, 8 Apr 28 18:56 vpath8
0 brw------- 1 root system 47, 13 Apr 28 18:56 vpath13
0 brw------- 1 root system 47, 14 Apr 28 18:56 vpath14
root@node2 # cd /dev
root@node2 # lspv|grep vpath|grep None|awk '{print $1}'|xargs ls -als
0 brw------- 1 root system 47, 4 Apr 29 13:33 vpath4
0 brw------- 1 root system 47, 5 Apr 29 13:33 vpath5
0 brw------- 1 root system 47, 6 Apr 29 13:33 vpath6
0 brw------- 1 root system 47, 9 Apr 29 13:33 vpath9
0 brw------- 1 root system 47, 10 Apr 29 13:33 vpath10
Now, on each node, set up a consistent naming convention for the OCR and VOTE devices. For example, if you wish to set up 2 OCR and 3 VOTE devices:
On server node1:
# mknod /dev/ocr_disk01 c 47 6
# mknod /dev/ocr_disk02 c 47 7
# mknod /dev/voting_disk01 c 47 8
# mknod /dev/voting_disk02 c 47 13
# mknod /dev/voting_disk03 c 47 14
On server node2:
# mknod /dev/ocr_disk01 c 47 4
# mknod /dev/ocr_disk02 c 47 5
# mknod /dev/voting_disk01 c 47 6
# mknod /dev/voting_disk02 c 47 9
# mknod /dev/voting_disk03 c 47 10
This results in a consistent naming convention for the OCR and VOTE devices on both nodes:
root@node1 # ls -als /dev/*_disk*
0 crw-r--r-- 1 root system 47, 6 May 13 07:18 /dev/ocr_disk01
0 crw-r--r-- 1 root system 47, 7 May 13 07:19 /dev/ocr_disk02
0 crw-r--r-- 1 root system 47, 8 May 13 07:19 /dev/voting_disk01
0 crw-r--r-- 1 root system 47, 13 May 13 07:19 /dev/voting_disk02
0 crw-r--r-- 1 root system 47, 14 May 13 07:20 /dev/voting_disk03
root@node2 # ls -als /dev/*_disk*
0 crw-r--r-- 1 root system 47, 4 May 13 07:20 /dev/ocr_disk01
0 crw-r--r-- 1 root system 47, 5 May 13 07:20 /dev/ocr_disk02
0 crw-r--r-- 1 root system 47, 6 May 13 07:21 /dev/voting_disk01
0 crw-r--r-- 1 root system 47, 9 May 13 07:21 /dev/voting_disk02
0 crw-r--r-- 1 root system 47, 10 May 13 07:21 /dev/voting_disk03
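Note that, depending on the Oracle release, the installer may also require specific ownership and permissions on these device files before they can be used as OCR and voting disks. A typical example (assuming the Oracle software owner is oracle and its group is dba; check the Oracle installation guide for your release for the exact requirements):
# chown root:dba /dev/ocr_disk*
# chown oracle:dba /dev/voting_disk*
# chmod 660 /dev/ocr_disk* /dev/voting_disk*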
How do you test if Oracle TDP (RMAN) is working properly?
# tdpoconf showenv
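The tdpoconf showenv command only verifies the Data Protection for Oracle environment. For a more complete test, run a small RMAN backup through the sbt_tape channel. A minimal sketch (the TDPO_OPTFILE path is an example; point it at your own tdpo.opt file):
# rman target / <<EOF
run {
  allocate channel t1 type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  backup current controlfile;
  release channel t1;
}
EOF
If the channel allocates and the backup completes without errors, the RMAN/TSM integration is working.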
The traditional method for making an Oracle database capable of 24x7 operation is to create an HACMP cluster in an active-standby configuration. In case of a failure of the active system, HACMP lets the standby system take over the resources and start Oracle, thus resuming operation. This takeover is done with a downtime of approximately 5 to 15 minutes; however, the impact on the business applications is more severe and can lead to interruptions of up to one hour.
Another way to achieve high availability of databases is to use a special version of the Oracle database software called Real Application Clusters (RAC). In a RAC cluster, multiple systems (instances) are active, sharing the workload, and provide a near always-on database operation. On AIX, the Oracle RAC software relies on IBM's HACMP software to achieve high availability for the hardware and the operating system. For storage it utilizes a concurrent file system called GPFS (General Parallel File System), an IBM product. Oracle RAC 9 uses GPFS and HACMP; with RAC 10 you no longer need HACMP and GPFS.
HACMP is used for network down notifications. Put all network adapters of one node on a single switch, and put every node on a different switch. HACMP only manages the public and private network service adapters; there are no standby, boot, or management adapters in a RAC HACMP cluster. Each node uses just a single hostname; Oracle RAC and GPFS do not support hostname takeover or IPAT (IP address takeover). There are no disks, volume groups, or resource groups defined in an HACMP RAC cluster. In fact, HACMP is only necessary for event handling for Oracle RAC.
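You can verify this with the standard HACMP utilities; in a RAC HACMP cluster, the topology should show only the public and private service networks, and the resource group listing should come back empty:
# /usr/es/sbin/cluster/utilities/cltopinfo
# /usr/es/sbin/cluster/utilities/clRGinfo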
Name your HACMP RAC clusters in such a way that you can easily recognize the cluster as a RAC cluster, for example by using a naming convention that starts with RAC_.
On every GPFS node of an Oracle RAC cluster, a GPFS daemon (mmfs) is active. These daemons need to communicate with each other; this is done via the public network, not via the private network.
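You can check that the GPFS daemon is up on all nodes as follows (mmgetstate is available on recent GPFS levels; otherwise look for the mmfsd process):
# mmgetstate -a
# ps -ef | grep mmfsd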
Cache Fusion
Via SQL*Net, an Oracle block is read into memory. If a second node in an HACMP RAC cluster requests the same block, it first checks whether it already has the block stored locally in its own cache. If not, it uses a private dedicated network to ask whether another node has the block in cache. If no node does, the block is read from disk. This mechanism is called Cache Fusion, and the private network is known as the Oracle RAC interconnect.
This is why on RAC HACMP clusters each node uses an extra, private network adapter to communicate with the other nodes, for Cache Fusion purposes only. All other communication, including the communication between the GPFS daemons on each node and the communication from Oracle clients, goes via the public network adapter. The throughput on the private network adapter can be twice as high as on the public network adapter.
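You can see which network each instance actually uses for the interconnect by querying the cluster interconnects view (available from Oracle 9.2 onwards), for example:
$ sqlplus -s "/ as sysdba" <<EOF
select inst_id, name, ip_address from gv\$cluster_interconnects;
EOF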
Oracle RAC uses its own private network for Cache Fusion. If this network is not available, or if one node cannot access the private network, the private network is no longer used and the public network is used instead. When the private network returns to normal operation, RAC falls back to the private network. Oracle RAC uses the cllsif utility of HACMP for this purpose.
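The cllsif utility lists the adapter and network configuration known to HACMP, which is what RAC consults to determine the available networks:
# /usr/es/sbin/cluster/utilities/cllsif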