Oracle RAC Database Concepts

** My own notes
Hardware requirements per machine
- 2 or more network interfaces (at least 1 private, 1 public)
- If there are more interfaces, use the extras as redundant private interconnects
- 1 virtual IP (VIP)
- In total, at least 3 IP addresses per machine: public, private, VIP (see the sample /etc/hosts after this list)
- Failover mode is recommended for interface redundancy (not load balancing)
- On Windows, install normally with one private interface; to use teaming, configure a virtual team adapter for the private interconnect, for example in load balancing & failover mode (configured for load balance), or in primary/standby mode (a new interface)
- If you want to change the IP or the teaming setup later, stop the database and CRS on that node first, then make the change (CRS on the other nodes does not need to be stopped)
- On Windows, if you want teaming, create the team first and install CRS using the single (teamed) private interface
- On Windows, if you want to set up teaming after CRS is already installed, stop the database and CRS on that node, then change it (CRS on the other nodes does not need to be stopped)
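For example, a minimal /etc/hosts layout for a two-node cluster could look like this (the node names linux1/linux2 match the commands later in this post; the addresses themselves are illustrative):
$ cat /etc/hosts
# Public interfaces (e.g. eth0)
192.168.1.100   linux1
192.168.1.101   linux2
# Private interconnect (e.g. eth1)
192.168.2.100   linux1-priv
192.168.2.101   linux2-priv
# Virtual IPs (must be on the public subnet)
192.168.1.200   linux1-vip
192.168.1.201   linux2-vip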
Cluster Ready Services (CRS)
- CRS is available on all platforms
- If you use OCFS, you do not need third-party clusterware; CRS is enough
- If no third-party clusterware is used, CRS provides the cluster layer
- If third-party clusterware is used, install it first and then install CRS on top of it
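Once CRS is installed, it can be verified from any node; for example (exact output varies by release):
$ crsctl check crs     # checks the CSS, CRS, and EVM daemons
$ crs_stat -t          # tabular list of all registered CRS resources
$ olsnodes -n          # lists the cluster node names and numbers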
Cluster file system
- For Standard Edition, the Oracle database files must use ASM, but the OCR, voting disk, and archive logs can be on a cluster file system (CFS)
- For Standard Edition, example cluster file locations (see the verification commands after this list):
/oracle/cdata/ocr.dbf, /oracle/cdata/voting.dbf
/oracle/oradata/<+ASM>/spfile<+ASM>.ora
/oracle/oradata/<DBNAME>/spfile<DBNAME>.ora
/oracle/archive/<DBNAME>/
- OCFS (Oracle Cluster File System, on Linux and Windows)
- OCFS on Linux: install the RPMs first, before installing CRS
- OCFS on Linux can hold a shared software home, shared database files, and the CRS cluster disks
- OCFS on Windows becomes available after installing CRS
- OCFS on Windows cannot hold a shared software home, but it can hold shared database files and the CRS cluster disks
- Veritas CFS on UNIX and GFS on Linux: install them first, then install CRS
- A third-party CFS requires the matching third-party platform clusterware
- ASM and raw devices can be used on all platforms
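To confirm where the OCR and voting disk actually live (for example, on the CFS paths shown above), the clusterware provides two checks:
$ ocrcheck                   # reports the OCR location and integrity
$ crsctl query css votedisk  # lists the configured voting disk paths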
Oracle database
- The DB name is unique across the cluster
- service_name = DB name by default, which is unique
- instance = redo thread (each instance has its own thread number)
- instance_name = ORACLE_SID (by default)
- Each instance has its own redo thread and undo, but the control files, datafiles, SPFILE, and archive log destination are shared (see the query after this list)
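The instance-to-thread mapping can be checked from any node; a quick query (instance name orcl1 as used in the examples below):
$ export ORACLE_SID=orcl1
$ sqlplus / as sysdba
SQL> SELECT inst_id, instance_name, thread# FROM gv$instance;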
Listener
- Always use the Net Configuration Assistant (netca) for cluster listener setup, because it configures the cluster environment correctly
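After netca has run, the listener is managed as one of the node applications, so its state can be checked together with the VIP, GSD, and ONS:
$ srvctl status nodeapps -n linux1
$ lsnrctl status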
Step Installation of Oracle RAC 10g Environment
- Do not create a team for the private network card
- Disable any other private cards (keep only one)
- Configure DNS for the public, private, and VIP names
- Configure authentication for the remote shell (user equivalence) on UNIX
- Install the CRS software; on Linux one shared software home is possible on OCFS (or on a CFS on other platforms), but on Windows OCFS cannot hold a single shared software home
- OCFS can hold the OCR, voting disk, SPFILE, archive logs, and external backups (redo logs and datafiles may be on ASM)
- Configure VIP, ONS, and GSD (the VIP uses the public interface)
- Install the DB software
- For Standard Edition (4 CPUs/host), ASM storage must be used to keep the datafiles and redo logs
- For Enterprise Edition, CFS, ASM, or raw devices can be used to keep the datafiles and redo logs
- Create the DB
- If the DB is created with ASM, ASM is configured on every node: the service name is +ASM, but each node's instance is +ASM<number>
- If you cannot connect to ASM (to mount the disks on all instances), you may need to configure a tnsnames entry to connect
- Archive logs can go to a CFS by enabling archiving with a CFS path (see the sketch after this list)
- Use a common (shared) archive location, which is easier to control than OMF
- The TAF policy is BASIC (see the service example after this list)
- Redo log file and group names may carry the thread number
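A sketch of the archive-log and redo points above (the paths, group number, and size are illustrative; switching the database into ARCHIVELOG mode itself is a separate step that in 10g requires all but one instance to be down):
$ sqlplus / as sysdba
SQL> -- common archive location on the CFS, same path for every instance
SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=/oracle/archive/ORCL' SCOPE=both SID='*';
SQL> -- redo log named after its thread (thread 2) and group (group 4)
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 4
  2  ('/oracle/oradata/ORCL/redo_t2_g4.log') SIZE 50M;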
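And for the BASIC TAF policy, a service can be registered with srvctl and matched by a tnsnames.ora entry; a minimal sketch (the service name oltp is hypothetical):
$ srvctl add service -d orcl -s oltp -r orcl1,orcl2 -P BASIC
$ srvctl start service -d orcl -s oltp
A matching client alias would carry the failover mode:
OLTP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = oltp)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )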
Problem of Oracle RAC 10g Environment
- If all private interconnects fail, one machine will crash/dump (TOC)
- If the public interface fails, the VIP is taken over by another node
- After the public interface is repaired, run crs_stop on the VIP resource on the other node and restart CRS with crs_start -all to relocate the VIP back to the original machine (no need to start anything with srvctl, because CRS starts the instance and database automatically; otherwise restart the instance with srvctl). See the example after this list.
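For example, if the VIP of linux1 has failed over to linux2 (10g VIP resources are named ora.<node>.vip):
$ crs_stat -t               # confirm where ora.linux1.vip is currently running
$ crs_stop ora.linux1.vip   # stop the failed-over VIP (add -f if dependents hold it)
$ crs_start -all            # restart resources; the VIP relocates back to linux1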
Stopping the Oracle RAC 10g Environment
The first step is to stop the Oracle instance. When the instance (and related services) is down, then bring down the ASM instance. Finally, shut down the node applications (Virtual IP, GSD, TNS Listener, and ONS).
$ export ORACLE_SID=orcl1
$ emctl stop dbconsole
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop asm -n linux1
$ srvctl stop nodeapps -n linux1

Starting the Oracle RAC 10g Environment
The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and ONS). When the node applications are successfully started, then bring up the ASM instance. Finally, bring up the Oracle instance (and related services) and the Enterprise Manager Database console.
$ export ORACLE_SID=orcl1
$ srvctl start nodeapps -n linux1
$ srvctl start asm -n linux1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole

Start/Stop All Instances with SRVCTL
Start/stop all the instances and their enabled services. I have included this step just for fun as a way to bring down all instances!
$ srvctl start database -d orcl
$ srvctl stop database -d orcl
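Either way, the result can be verified with a status check across the whole database and the node applications:
$ srvctl status database -d orcl
$ srvctl status nodeapps -n linux1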
 
 
HA concept
System downtime
- Planned
  - Data changes
  - System changes
- Unplanned
  - Data failures
  - Human failures
 
