
ORACLE 12C R2 Real Application Cluster Installation Guide

朱海清

StarTimes Software Technology Co., Ltd

Minimum ASM Disk Space Requirements

Compared with the previous release, 12c R2 significantly increases the disk space the OCR requires.

For convenience, the following sizes are used, per redundancy level:

External: 1 volume x 40 GB

Normal: 3 volumes x 30 GB

High: 5 volumes x 25 GB

Flex: 3 volumes x 30 GB

The OCR, voting files, and MGMT database are usually placed in a single disk group with Normal redundancy, i.e. at least 3 ASM disks and 80 GB in total.

Operating System Installation

During OS installation, select "Server with GUI" and "Compatibility Libraries"; nothing else needs to be selected.

Use CentOS 7, RHEL 7, or Oracle Linux 7.

Install the Oracle preinstall package

yum install -y oracle-rdbms-server-12cR1-preinstall

Create users and groups

The oracle user and the dba and oinstall groups were already created by the preinstall package in the previous step.

The uid and gid of the oracle and grid users must be identical on every RAC node, so it is best to specify them explicitly when creating the accounts.

groupadd --gid 54323 asmdba

groupadd --gid 54324 asmoper

groupadd --gid 54325 asmadmin

groupadd --gid 54326 oper

groupadd --gid 54327 backupdba

groupadd --gid 54328 dgdba

groupadd --gid 54329 kmdba

usermod --uid 54321 --gid oinstall --groups dba,oper,asmdba,asmoper,backupdba,dgdba,kmdba oracle

useradd --uid 54322 --gid oinstall --groups dba,asmadmin,asmdba,asmoper grid
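Before going further it is worth confirming that the ids really do match across nodes, since the installer fails late on a mismatch. A minimal sketch; the helper name is ours, and the ssh loop in the comment assumes the rac01/rac02 hostnames used later in this guide:

```shell
#!/bin/sh
# check_uid <user> <expected-uid> <actual-uid>: prints OK or MISMATCH.
check_uid() {
    if [ "$3" = "$2" ]; then
        printf '%s uid OK (%s)\n' "$1" "$3"
    else
        printf '%s uid MISMATCH: expected %s, got %s\n' "$1" "$2" "$3"
        return 1
    fi
}

# On a live cluster you would feed it each node's output, e.g.:
#   for n in rac01 rac02; do check_uid oracle 54321 "$(ssh "$n" id -u oracle)"; done
check_uid oracle 54321 54321
check_uid grid   54322 54322
```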

Installation directories

mkdir -p /u01/app/12.2.0/grid

mkdir -p /u01/app/grid

mkdir -p /u01/app/oracle

chown -R grid:oinstall /u01

chown oracle:oinstall /u01/app/oracle

chmod -R 775 /u01/

User environment variables

grid environment variables

cat <<'EOF' >> /home/grid/.bash_profile

ORACLE_SID=+ASM1

ORACLE_HOME=/u01/app/12.2.0/grid

PATH=$ORACLE_HOME/bin:$PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

export ORACLE_SID CLASSPATH ORACLE_HOME LD_LIBRARY_PATH PATH

EOF

On node 2, set ORACLE_SID=+ASM2.

oracle environment variables

cat <<'EOF' >> /home/oracle/.bash_profile

ORACLE_SID=starboss1

ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1

ORACLE_HOSTNAME=rac01

PATH=$ORACLE_HOME/bin:$PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

export ORACLE_SID ORACLE_HOME ORACLE_HOSTNAME PATH LD_LIBRARY_PATH CLASSPATH

EOF

On node 2, set ORACLE_SID=starboss2 and ORACLE_HOSTNAME=rac02.
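Rather than hand-editing each node's profile, the per-node values can be derived from the hostname. A minimal sketch, assuming the rac01/rac02 node names and starboss SIDs used in this guide:

```shell
#!/bin/sh
# Map a cluster node name to its ORACLE_SID (mapping taken from this guide).
node_sid() {
    case "$1" in
        rac01) echo starboss1 ;;
        rac02) echo starboss2 ;;
        *)     echo "unknown node: $1" >&2; return 1 ;;
    esac
}

# Example: pick the SID for the node the script runs on:
#   ORACLE_SID=$(node_sid "$(hostname -s)")
node_sid rac01   # starboss1
```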

Configure systemd-logind

Oracle requires RemoveIPC=no so that logind does not remove the grid/oracle users' shared memory segments when they log out:

# vi /etc/systemd/logind.conf

RemoveIPC=no

# systemctl daemon-reload

# systemctl restart systemd-logind

Load the pam_limits module

echo "session required pam_limits.so" >> /etc/pam.d/login

Disable SELinux

setenforce 0

vi /etc/sysconfig/selinux    # set SELINUX=disabled

Disable the firewall

# systemctl stop firewalld && systemctl disable firewalld

Modify ulimit settings

cat <<EOF >> /etc/security/limits.d/99-oracle-limits.conf

oracle soft nproc 16384

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

oracle soft stack 10240

oracle hard stack 32768

grid soft nproc 16384

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

grid soft stack 10240

grid hard stack 32768

EOF

Create a custom ulimit profile script

cat <<'EOF' >> /etc/profile.d/oracle-grid.sh

if [ "$USER" = "oracle" ]; then
    if [ "$SHELL" = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
if [ "$USER" = "grid" ]; then
    if [ "$SHELL" = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

EOF

Resize the shared memory filesystem

Add the following line to /etc/fstab; adjust the size to your environment, since it depends on physical memory and MEMORY_TARGET.

echo "shm /dev/shm tmpfs size=12g 0 0" >> /etc/fstab

After the change, simply remount shm:

mount -o remount /dev/shm
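Because a too-small /dev/shm will later fail the MEMORY_TARGET check, it is worth extracting and eyeballing the size= option before relying on it. A small sketch; the helper name is ours, not a standard tool:

```shell
#!/bin/sh
# shm_size <fstab-line>: print the size= option of a tmpfs entry.
shm_size() {
    printf '%s\n' "$1" | sed -n 's/.*size=\([^, ]*\).*/\1/p'
}

line='shm /dev/shm tmpfs size=12g 0 0'
shm_size "$line"    # prints: 12g

# Against the live system you would check the actual mount instead:
#   grep /dev/shm /proc/mounts
```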

Multipathing

# yum install device-mapper-multipath

# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/

Get the SCSI id

# /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda

# vi /etc/multipath.conf

multipaths {
    multipath {
        wwid 36000d316
        alias vol01
    }
    multipath {
        wwid 36000d315
        alias vol02
    }
}

# systemctl start multipathd

# multipath -ll
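Because every node must carry identical alias definitions, generating the multipath{} stanzas from a single wwid/alias list is less error-prone than editing each file by hand. A sketch; the helper is hypothetical and the wwids are the (truncated) examples from above:

```shell
#!/bin/sh
# mp_stanza <wwid> <alias>: emit one multipath{} stanza for multipath.conf.
mp_stanza() {
    printf 'multipath {\n    wwid %s\n    alias %s\n}\n' "$1" "$2"
}

# Emit stanzas for every volume, ready to paste into the multipaths{} section:
mp_stanza 36000d316 vol01
mp_stanza 36000d315 vol02
```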

Configure the disks

ASMLib method

Install ASMLib

Oracle Linux 7

yum install -y kmod-oracleasm

CentOS 7

kmod-oracleasm is in the CentOS base repository; oracleasmlib must be downloaded from Oracle OTN and oracleasm-support from the Oracle Linux yum server, then installed locally:

yum install -y kmod-oracleasm

yum localinstall -y oracleasmlib-*.el7.x86_64.rpm oracleasm-support-*.el7.x86_64.rpm

Other versions can be downloaded from:

/technetwork/server-storage/linux/asmlib/

ASM disk configuration

12c R2 requires more disk group space than 12c R1.

[root@rac01 ~]# /etc/init.d/oracleasm configure -i

Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library

driver. The following questions will determine whether the driver is

loaded on boot and what permissions it will have. The current values

will be shown in brackets ('[]'). Hitting <ENTER> without typing an

answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [ OK ]

Scanning the system for Oracle ASMLib disks: [ OK ]

[root@rac01 ~]# reboot

Create a primary partition on each shared disk with fdisk:

[root@rac01 ~]# fdisk /dev/sdd

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

Device does not contain a recognized partition table

Building a new DOS disklabel with disk identifier 0x86f899a0.

Command (m for help): n

Partition type:

p primary (0 primary, 0 extended, 4 free)

e extended

Select (default p): p

Partition number (1-4, default 1):

First sector (2048-39976959, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-39976959, default 39976959):

Using default value 39976959

Partition 1 of type Linux and of size 19.1 GiB is set

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

Create the ASM disks on any one cluster node:

[root@rac01 ~]# /etc/init.d/oracleasm createdisk OCR01 /dev/sdd1

Marking disk "OCR01" as an ASM disk: [ OK ]

[root@rac01 ~]# /etc/init.d/oracleasm createdisk OCR02 /dev/sde1

Marking disk "OCR02" as an ASM disk: [ OK ]

[root@rac01 ~]# /etc/init.d/oracleasm createdisk OCR03 /dev/sdf1

Marking disk "OCR03" as an ASM disk: [ OK ]

[root@rac01 ~]# /etc/init.d/oracleasm createdisk DATA01 /dev/sdb1

Marking disk "DATA01" as an ASM disk: [ OK ]

[root@rac01 ~]# /etc/init.d/oracleasm createdisk DATA02 /dev/sdc1

Marking disk "DATA02" as an ASM disk: [ OK ]

Then run on each node:

[root@rac01 ~]# /etc/init.d/oracleasm scandisks

[root@rac01 ~]# /etc/init.d/oracleasm listdisks

Note:

To wipe a disk and redeploy ASM, use dd, e.g.:

dd if=/dev/zero of=/dev/sdb1 bs=8192 count=128000
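dd has no undo, so it is safer to print the command and double-check the target device before running it. A trivial guard sketch; the helper name is ours:

```shell
#!/bin/sh
# wipe_cmd <device>: build (but do not run) the header-wipe command from above.
wipe_cmd() {
    printf 'dd if=/dev/zero of=%s bs=8192 count=128000\n' "$1"
}

wipe_cmd /dev/sdb1    # review the output, then run it manually as root
```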

UDEV method

Reference: /articles/linux/udev-scsi-rules-configuration-in-oracle-linux

Confirm that the required udev package is installed on all RAC nodes:

[root@rh2 ~]# rpm -qa|grep udev


CentOS 6/Oracle Linux 6/RHEL 6

1. Obtain each block device's unique identifier via scsi_id, assuming LUNs sdc-sdp already exist on the system:

for i in c d e f g h i j k l m n o p ; do
    echo "sd$i" "`scsi_id -g -u -s /block/sd$i` ";
done

sdc 1IET_00010001

sdd 1IET_00010002

sde 1IET_00010003

sdf 1IET_00010004

The list above maps each block device name to its unique identifier.

2. Create the necessary UDEV configuration file.

First change to the configuration directory:

[root@rh2 ~]# cd /etc/udev/rules.d

Then create a rules file in this directory with the following content:

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010001", NAME="ocr1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010002", NAME="ocr2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010003", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010004", NAME="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"

RESULT is the output of /sbin/scsi_id -g -u -s %p; fill in the unique identifiers obtained above, in order.

OWNER is normally grid, GROUP is asmadmin, and MODE is the device permission, for which 0660 is appropriate.

NAME is the device name after UDEV mapping. It is advisable to create a dedicated diskgroup for the OCR and voting disks, and to name that diskgroup's devices in an easily recognized form (ocr1, ocr2, ... here).

The remaining disks can be named after their actual purpose or their disk group.

3. Copy the rules file to the other nodes:

[root@rh2 rules.d]# scp <rules-file> other_node:/etc/udev/rules.d

4. Reload the udev rules and start udev on every node (or simply reboot the servers):

[root@rh2 rules.d]# /sbin/udevcontrol reload_rules

[root@rh2 rules.d]# /sbin/start_udev

Starting udev: [ OK ]

5. Check that the devices are in place:

[root@rh2 rules.d]# cd /dev

[root@rh2 dev]# ls -l ocr*

brw-rw---- 1 grid asmadmin 8, 32 Jul 10 17:31 ocr1

brw-rw---- 1 grid asmadmin 8, 48 Jul 10 17:31 ocr2

[root@rh2 dev]# ls -l asm-disk*

brw-rw---- 1 grid asmadmin 8, 64 Jul 10 17:31 asm-disk1

brw-rw---- 1 grid asmadmin 8, 80 Jul 10 17:31 asm-disk2

brw-rw---- 1 grid asmadmin 8, 96 Jul 10 17:31 asm-disk3

brw-rw---- 1 grid asmadmin 8, 112 Jul 10 17:31 asm-disk4

CentOS 7/Oracle Linux 7/RHEL 7

Get the block device ids:

# /usr/lib/udev/scsi_id -g -u -d /dev/sdb1

14f504e46494c45526a75744363422d796357662d4b436a65

# /usr/lib/udev/scsi_id -g -u -d /dev/sdc1

14f504e46494c455254535a7a414d2d62494b6f2d5a6f6a42

# /usr/lib/udev/scsi_id -g -u -d /dev/sdd1

14f504e46494c45526566324e626c2d4770654c2d6b443064

# /usr/lib/udev/scsi_id -g -u -d /dev/sde1

14f504e46494c455266326e7547552d384953442d6135576a

# /usr/lib/udev/scsi_id -g -u -d /dev/sdf1

14f504e46494c4552774263526f742d534a75392d36374f69

Create the scsi_id parameter file:

# echo "options=-g" > /etc/scsi_id.config

Then create a rules file under /etc/udev/rules.d with the following content:

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45526a75744363422d796357662d4b436a65", SYMLINK+="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c455254535a7a414d2d62494b6f2d5a6f6a42", SYMLINK+="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45526566324e626c2d4770654c2d6b443064", SYMLINK+="asm-disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c455266326e7547552d384953442d6135576a", SYMLINK+="asm-disk4", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c4552774263526f742d534a75392d36374f69", SYMLINK+="asm-disk5", OWNER="grid", GROUP="asmadmin", MODE="0660"

Reload the block device partition tables:

# /sbin/partprobe /dev/sdb1

# /sbin/partprobe /dev/sdc1

# /sbin/partprobe /dev/sdd1

# /sbin/partprobe /dev/sde1

# /sbin/partprobe /dev/sdf1

Test udev

# /sbin/udevadm test /block/sdb/sdb1

# /sbin/udevadm test /block/sdc/sdc1

# /sbin/udevadm test /block/sdd/sdd1

# /sbin/udevadm test /block/sde/sde1

# /sbin/udevadm test /block/sdf/sdf1

Reload the udev rules

# /sbin/udevadm control --reload-rules

Check that the symlinks were created:

[root@udev ~]# ls -l /dev/asm-disk*

lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk1 -> sdb1

lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk2 -> sdc1

lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk3 -> sdd1

lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk4 -> sde1

lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk5 -> sdf1

Disable ntp

/sbin/service ntpd stop

chkconfig ntpd off

mv /etc/ntp.conf /etc/ntp.conf.orig

rm /var/run/ntpd.pid

Stop the avahi-daemon service

systemctl stop avahi-dnsconfd

systemctl stop avahi-daemon

systemctl disable avahi-dnsconfd

systemctl disable avahi-daemon

IP configuration

Without a DNS server, name resolution relies on the hosts file, so only one SCAN IP can be configured; clients then always reach a single RAC node and no load balancing across SCAN addresses is possible.

#public, connected to the business switch, bonded

192.168.245.134 rac01

192.168.245.140 rac02

#private, direct-attached interconnect (heartbeat), bonded

10.0.1.1 rac01-priv

10.0.1.2 rac02-priv

#virtual

192.168.245.136 rac01-vip

192.168.245.142 rac02-vip

#scan-ip,oracle rac service

192.168.245.135 rac-cluster-scan
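The installer's cluster verification will fail on any name that does not resolve, so a quick pre-check of every entry above can save a failed run. A sketch assuming the node names from this guide:

```shell
#!/bin/sh
# check_names: report whether each cluster name resolves (via getent hosts).
check_names() {
    for h in "$@"; do
        if getent hosts "$h" >/dev/null 2>&1; then
            echo "$h OK"
        else
            echo "$h UNRESOLVED"
        fi
    done
}

check_names rac01 rac02 rac01-priv rac02-priv rac01-vip rac02-vip rac-cluster-scan
```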

Install the cvuqdisk package

The package is in the rpm folder of the database installation zip.

rpm -ivh cvuqdisk*.rpm

Install GI

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), GI uses an image-based installation: the zip Oracle ships is essentially a ready-made ORACLE_HOME that you extract in place.

# su - grid

$ cd /u01/app/12.2.0/grid

$ unzip /oracle_soft/linuxx64_12201_grid_home.zip

$ ./gridSetup.sh

Choose "Configure Oracle Grid Infrastructure for a New Cluster" and click Next.

At step 4 the screen may only show the local node; click Add to add the other node, entering the node name and VIP name configured in the hosts file.

Under "SSH connectivity", enter the grid user's password and click Setup; the installer configures passwordless SSH for the grid user automatically.

Assign each network interface according to the earlier plan.

Step 8 creates the storage disks for the OCR and voting files; if no disks appear, click "Change Discovery Path" to rediscover them.

If DNS and GNS are not configured in the environment, the prerequisite check will report DNS-related errors; since names are resolved via the hosts file here, these can be skipped.

Click Install. GI begins installing and copies the files to the other nodes to install them in parallel.

When prompted to run the root scripts, they must first be run one at a time on the local node; only after they succeed there can they be run on the other nodes in parallel.

[root@rac01 ~]# sh /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[root@rac01 ~]# sh /u01/app/12.2.0/grid/root.sh

Performing root user operation.

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Creating /etc/oratab

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

/u01/app/grid/crsdata/rac01/crsconfig/rootcrs_rac01_2017-08-16_

2017/08/16 14:48:16 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.

2017/08/16 14:48:16 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2017/08/16 14:48:59 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2017/08/16 14:48:59 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.

2017/08/16 14:49:04 CLSRSC-363: User ignored prerequisites during installation

2017/08/16 14:49:04 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.

2017/08/16 14:49:05 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.

2017/08/16 14:49:06 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.

2017/08/16 14:49:12 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.

2017/08/16 14:49:13 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.

2017/08/16 14:49:13 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.

2017/08/16 14:49:44 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.

2017/08/16 14:49:51 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.

2017/08/16 14:49:52 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.

2017/08/16 14:49:57 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.

2017/08/16 14:50:12 CLSRSC-330: Adding Clusterware entries to file 'e'

2017/08/16 14:50:35 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.

2017/08/16 14:50:40 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

2017/08/16 14:51:20 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.

2017/08/16 14:51:25 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-2672: Attempting to start '' on 'rac01'

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start 'nitor' on 'rac01'

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of 'nitor' on 'rac01' succeeded

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2672: Attempting to start 'n' on 'rac01'

CRS-2676: Start of 'n' on 'rac01' succeeded

CRS-2676: Start of '' on 'rac01' succeeded

Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/ for details.

2017/08/16 14:52:52 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade grid oinstall'

CRS-2672: Attempting to start '' on 'rac01'

CRS-2672: Attempting to start 'e' on 'rac01'

CRS-2676: Start of 'e' on 'rac01' succeeded

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-4256: Updating the profile

Successful addition of voting disk 252d21a926494fd5bfdcbc163b9fd646.

Successful addition of voting disk 6f00d3b3ba454f14bfc15f10a6466e3e.

Successful addition of voting disk 5aed4ef45df94ff1bf4934d8883d39a3.

Successfully replaced voting disk group with +DATA.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 252d21a926494fd5bfdcbc163b9fd646 (/dev/oracleasm/disks/OCR03) [DATA]

2. ONLINE 6f00d3b3ba454f14bfc15f10a6466e3e (/dev/oracleasm/disks/OCR02) [DATA]

3. ONLINE 5aed4ef45df94ff1bf4934d8883d39a3 (/dev/oracleasm/disks/OCR01) [DATA]

Located 3 voting disk(s).

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2673: Attempting to stop 'e' on 'rac01'

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2677: Stop of 'e' on 'rac01' succeeded

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2673: Attempting to stop 'r_' on 'rac01'

CRS-2677: Stop of 'r_' on 'rac01' succeeded

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2673: Attempting to stop '' on 'rac01'

CRS-2677: Stop of '' on 'rac01' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed

CRS-4133: Oracle High Availability Services has been stopped.

2017/08/16 14:54:18 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.

CRS-4123: Starting Oracle High Availability Services-managed resources

CRS-2672: Attempting to start '' on 'rac01'

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start 'nitor' on 'rac01'

CRS-2676: Start of 'nitor' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2672: Attempting to start 'n' on 'rac01'

CRS-2676: Start of 'n' on 'rac01' succeeded

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start 'r_' on 'rac01'

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2676: Start of 'r_' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start 'e' on 'rac01'

CRS-2676: Start of 'e' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-6023: Starting Oracle Cluster Ready Services-managed resources

CRS-6017: Processing resource auto-start for servers: rac01

CRS-6016: Resource auto-start has completed for server rac01

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources

CRS-4123: Oracle High Availability Services has been started.

2017/08/16 14:57:01 CLSRSC-343: Successfully started Oracle Clusterware stack

2017/08/16 14:57:01 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.

CRS-2672: Attempting to start '1LSNR_' on 'rac01'

CRS-2676: Start of '1LSNR_' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

CRS-2672: Attempting to start '' on 'rac01'

CRS-2676: Start of '' on 'rac01' succeeded

2017/08/16 15:01:35 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

2017/08/16 15:04:36 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Configure the data disk group with asmca

After GI is installed, use asmca to create the ASM disk group that will hold the business database, in preparation for installing Oracle Database.

# su - grid

$ /u01/app/12.2.0/grid/bin/asmca

In the ASM configuration assistant, open the Disk Groups page: the already-mounted OCR disk group is listed, and the ASM instances on both nodes show as UP.

Click Create to create the ASM disk group for the business database, name it DATA, select the disks created earlier, and click OK to finish.

Install Oracle Database

With grid installed, the next step is to install the Oracle database software and create the business database instance.

Upload linuxx64_12201_database.zip to any directory on rac01, unzip it, and start runInstaller as the oracle user.

Once installation starts, Oracle copies the software to the remaining nodes and installs them in parallel.

At step 5 the default is "Policy managed"; unless there is a special requirement, choose "Admin managed".

At step 6, first enter the oracle password, then click Setup; the program configures passwordless SSH for the oracle user on every node. When done, click Next.

With large amounts of memory it is generally better not to enable Automatic Memory Management; just adjust the memory available to Oracle.

Select the disk group created earlier with asmca, then set the SYS password.

Since DNS and GNS are not in use, the two related prerequisite errors can be ignored.

Cluster status after installation

[grid@rac01 ~]$ crsctl status res -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
(Output abridged: the listener, network, ASM, ADVM, and disk-group resources are
ONLINE/STABLE on rac01 and rac02, apart from a couple of resources that are
OFFLINE by default; the ASM instances report Started,STABLE and the two database
instances report Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE.)

Appendix

Starting and stopping the RAC database cluster

The RAC stack is currently fully automatic: when the operating system boots, the ASM devices are mounted and the databases start with them.

To start or stop the database manually, follow the instructions below.

Starting and stopping Oracle database instances

Listener:

[root@RAC01 ~]$ srvctl start listener    -- start the listener

[root@RAC01 ~]$ srvctl stop listener     -- stop the listener

Database:

[root@RAC01 ~]$ srvctl start database -d starboss    -- start the database

[root@RAC01 ~]$ srvctl stop database -d starboss     -- stop the database

or

[root@RAC01 ~]$ srvctl stop database -d starboss -o immediate                 -- stop the database

[root@RAC01 ~]$ srvctl start database -d starboss -o open/mount/'read only'   -- start to open, mount, or read-only mode

Starting and stopping the Oracle RAC cluster

This stops the databases along with all other cluster services (ASM instances, VIPs, listeners, and the RAC high-availability stack):

[root@rac01 ~]$ crsctl start cluster -all    -- start

[root@rac01 ~]$ crsctl stop cluster -all     -- stop

Increase swap size

[root@rac02 grid]# free -m

total used free shared buff/cache available

Mem: 11757 136 5078 8 6542 11539

Swap: 6015 0 6015

[root@rac02 grid]# mkdir /swap

[root@rac02 grid]# dd if=/dev/zero of=/swap/swap bs=1024 count=6291456    # 1 KB per block; 6291456 blocks = 6 GB

6291456+0 records in

6291456+0 records out

6442450944 bytes (6.4 GB) copied, 8.93982 s, 721 MB/s

[root@rac02 grid]# /sbin/mkswap /swap/swap

Setting up swapspace version 1, size = 6291452 KiB

no label, UUID=35c98431-eb56-4ad7-99cd-d3414cce75ca

[root@rac02 grid]# /sbin/swapon /swap/swap

swapon: /swap/swap: insecure permissions 0644, 0600 suggested.
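Two follow-ups the transcript leaves implicit, both standard practice rather than part of the original text: tighten the file mode to silence the insecure-permissions warning, and add an fstab entry so the swap file survives a reboot.

```shell
#!/bin/sh
# The fstab line that makes /swap/swap permanent.
fstab_entry='/swap/swap swap swap defaults 0 0'
printf '%s\n' "$fstab_entry"

# On the real host, as root:
#   chmod 600 /swap/swap
#   printf '%s\n' "$fstab_entry" >> /etc/fstab
```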

[root@rac02 grid]# free -m

              total        used        free      shared  buff/cache   available
Mem:          11757         141        5074           8        6542       11534
Swap:         12159           0       12159

Check the voting disks

[grid@rac01 ~]$ crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 95b79a3ef6274fdebfe1d1323f0cc829 (/dev/oracleasm/disks/OCR03) [OCR]

2. ONLINE 404499d583f04f15bf24c89a4269bbe9 (/dev/oracleasm/disks/OCR02) [OCR]

3. ONLINE 6e010b265aee4f15bfd1d4260ab5ac9c (/dev/oracleasm/disks/OCR01) [OCR]

Located 3 voting disk(s).

RAC service check

Run the following command as the grid user on any node:

[grid@rac01 ~]$ crsctl check cluster -all

**************************************************************

rac01:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

rac02:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

Manually relocating the SCAN to another node

/u01/app/12.2.0/grid/bin/srvctl relocate scan_listener -i 1 -n rac02

After this completes, both the scan_listener and the scan_vip move to the specified node.

Enable EM Express access

SQL> exec DBMS_XDB_CONFIG.SETHTTPSPORT(5501);

SQL> exec DBMS_XDB_CONFIG.SETHTTPPORT(5500);

