Sunday, 30 January 2011
Thursday, 27 January 2011
Modifying NFS resource hosts options in VCS
I have an NFS share (/apps/Wst) under Veritas cluster control, exported with ro=host1:host2. I need to add one more host (host3) to this list.
Below is the procedure.
1. Get the name of the Share-type resource associated with this mount point.
root@node1# hares -display -type Share|grep /apps/Wst
intProc_wst_share ArgListValues node1 PathName 1 /apps/Wst Options 1 ro=host1:host2
intProc_wst_share ArgListValues node2 PathName 1 /apps/Wst Options 1 ro=host1:host2
intProc_wst_share PathName global /apps/Wst
We got it. The resource name is intProc_wst_share.
2. Make the VCS configuration writable.
root@node1# haconf -makerw
3. Add host3 to the list using the command below.
root@node1# hares -modify intProc_wst_share Options ro=host1:host2:host3
4. Mark the resource non-critical so that restarting it does not affect any parent or child resources.
root@node1# hares -modify intProc_wst_share Critical 0
5. Save the configuration and sync it to all nodes of the cluster.
root@node1# haconf -dump -makero
6. Stop and start the intProc_wst_share resource.
root@node1# hares -offline intProc_wst_share -sys node1; hares -online intProc_wst_share -sys node1
7. The third host (host3) is now visible in the showmount output, and the share can also be mounted on host3.
root@node1# showmount -e
export list for node1:
/apps/Wst ro=host1,host2,host3
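The whole procedure above can be condensed into a small dry-run script. This is only a sketch: the current Options value is hard-coded here (in practice it could be fetched with `hares -value`), and every VCS command is echoed rather than executed.

```shell
# Dry-run sketch of the share-modification procedure.
# Remove the leading "echo" on each line to run it on a real VCS node.
RES=intProc_wst_share          # Share-type resource name from step 1
CUR_OPTS="ro=host1:host2"      # in practice: hares -value "$RES" Options
NEW_HOST=host3
NEW_OPTS="${CUR_OPTS}:${NEW_HOST}"

echo haconf -makerw
echo hares -modify "$RES" Options "$NEW_OPTS"
echo hares -modify "$RES" Critical 0
echo haconf -dump -makero
echo hares -offline "$RES" -sys node1
echo hares -online "$RES" -sys node1
```

Appending `:host3` to the existing value keeps the earlier hosts intact, which is why the full Options string is rebuilt rather than typed from scratch.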
Extending VCS Filesystem with new LUNs
Target: my VCS cluster has been provided six LUNs from EMC storage for extending six existing VxVM filesystems. All the filesystems are part of the proddg disk group.
/dev/vx/dsk/proddg/oradatavol_data04 /oracle/sales/data04
/dev/vx/dsk/proddg/oradatavol_data05 /oracle/sales/data05
/dev/vx/dsk/proddg/oradatavol_index /oracle/sales/index
/dev/vx/dsk/proddg/oradatavol_data03 /oracle/sales/data03
/dev/vx/dsk/proddg/oradatavol_data /oracle/sales/data
/dev/vx/dsk/proddg/oradatavol_data01 /oracle/sales/data01
LUN IDs of the assigned EMC disks: 0a02 09db 09d5 09ee 09e2 09fa
Below are the steps for carrying out this activity.
1. Configure and probe both fibre paths on both nodes.
root@node1 # cfgadm -al|grep fc
c1 fc-fabric connected configured unknown
c2 fc-fabric connected configured unknown
root@node1 # cfgadm -c configure c1 c2
2. Run vxdctl enable so that VxVM discovers the new disks.
root@node1 # vxdctl enable
3. Confirm the new disks appear in the vxdisk list output with nolabel status.
root@node1 # vxdisk list|grep nolabel
emc0_0a02 auto - - nolabel
emc0_09db auto - - nolabel
emc0_09d5 auto - - nolabel
emc0_09ee auto - - nolabel
emc0_09e2 auto - - nolabel
emc0_09fa auto - - nolabel
4. Get the physical path of each disk so it can be labeled.
root@node1 # vxdisk list emc0_09db emc0_09d5 emc0_09ee emc0_09e2 emc0_09fa|grep c2
c2t5006048452A814A7d49s2 state=enabled
c2t5006048452A814A7d48s2 state=enabled
c2t5006048452A814A7d51s2 state=enabled
c2t5006048452A814A7d50s2 state=enabled
c2t5006048452A814A7d52s2 state=enabled
[The above steps must be done on all nodes.]
5. Label each of the above disks using the Solaris format command.
6. The following steps must be done on the VxVM master node (this applies only to shared disk groups). The master host can be listed using the command below; here the master is node2.
root@node2 # vxdctl -c mode
mode: enabled: cluster active - MASTER
master: node2
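Identifying the master can be scripted by parsing the `vxdctl -c mode` output. The sketch below parses a captured sample of that output (so it runs anywhere) rather than invoking vxdctl itself:

```shell
# Sketch: determine the CVM master by parsing `vxdctl -c mode` output.
# A captured sample is parsed here; on a real node you would pipe
# `vxdctl -c mode` into the awk instead.
sample="mode: enabled: cluster active - MASTER
master: node2"
master=$(printf '%s\n' "$sample" | awk '/^master:/ {print $2}')
echo "run the disk-group steps on: $master"
```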
7. Initialise each disk with Veritas.
root@node2 # vxdisk -f init emc0_0a02
root@node2 # vxdisk -f init emc0_09db
root@node2 # vxdisk -f init emc0_09d5
root@node2 # vxdisk -f init emc0_09ee
root@node2 # vxdisk -f init emc0_09e2
root@node2 # vxdisk -f init emc0_09fa
8. Add the disks to proddg. (proddg_xxx is a unique disk alias that I assigned to each new disk; henceforth we will use the alias names.)
root@node2 # vxdg -g proddg adddisk proddg_113=emc0_0a02
root@node2 # vxdg -g proddg adddisk proddg_114=emc0_09db
root@node2 # vxdg -g proddg adddisk proddg_115=emc0_09d5
root@node2 # vxdg -g proddg adddisk proddg_116=emc0_09ee
root@node2 # vxdg -g proddg adddisk proddg_117=emc0_09e2
root@node2 # vxdg -g proddg adddisk proddg_118=emc0_09fa
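With six disks, the alias numbering is easy to get wrong by hand. A small loop can generate the sequential alias=disk pairs and echo the adddisk commands as a dry run (remove the `echo` to execute on the master node):

```shell
# Sketch: assign sequential proddg_NNN aliases to the new disks and
# emit the vxdg adddisk commands (echoed as a dry run).
disks="emc0_0a02 emc0_09db emc0_09d5 emc0_09ee emc0_09e2 emc0_09fa"
n=113                              # first free alias number in proddg
pairs=""
for d in $disks; do
    pair="proddg_${n}=${d}"
    pairs="$pairs $pair"
    echo vxdg -g proddg adddisk "$pair"   # drop "echo" to execute
    n=$((n + 1))
done
```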
9. Make sure vxdisk list shows all six new disks as part of proddg with status online shared.
root@node2 # vxdisk list|egrep 'emc0_0a02|emc0_09db|emc0_09d5|emc0_09ee|emc0_09e2|emc0_09fa'
emc0_0a02 auto:cdsdisk proddg_113 proddg online shared
emc0_09db auto:cdsdisk proddg_114 proddg online shared
emc0_09d5 auto:cdsdisk proddg_115 proddg online shared
emc0_09ee auto:cdsdisk proddg_116 proddg online shared
emc0_09e2 auto:cdsdisk proddg_117 proddg online shared
emc0_09fa auto:cdsdisk proddg_118 proddg online shared
10. Now let us extend each volume by 100G onto its corresponding disk. If no disk name is specified, vxresize will grow the volume onto any available disk in the disk group.
root@node2 # vxresize -g proddg -F vxfs oradatavol_index +100G proddg_113
root@node2 # vxresize -g proddg -F vxfs oradatavol_data +100G proddg_114
root@node2 # vxresize -g proddg -F vxfs oradatavol_data05 +100G proddg_115
root@node2 # vxresize -g proddg -F vxfs oradatavol_data04 +100G proddg_116
root@node2 # vxresize -g proddg -F vxfs oradatavol_data01 +100G proddg_117
root@node2 # vxresize -g proddg -F vxfs oradatavol_data03 +100G proddg_118
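Since each volume is pinned to its own disk, the resize step can also be driven from a volume:disk mapping. This sketch echoes the vxresize commands as a dry run (drop the `echo` to execute on the master node):

```shell
# Sketch: grow each volume by 100G onto its dedicated disk.
# Each entry is "volume:disk"; commands are echoed, not executed.
pairs="oradatavol_index:proddg_113
oradatavol_data:proddg_114
oradatavol_data05:proddg_115
oradatavol_data04:proddg_116
oradatavol_data01:proddg_117
oradatavol_data03:proddg_118"
for p in $pairs; do
    vol=${p%%:*}                  # text before the colon
    dsk=${p##*:}                  # text after the colon
    echo vxresize -g proddg -F vxfs "$vol" +100G "$dsk"
done
```

Keeping the mapping in one place makes it obvious which disk feeds which volume, which is exactly the detail that caused the alias mix-ups above.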
11. Now all volumes have been resized to the new size.
root@node2 # df -h -F vxfs
........
/dev/vx/dsk/proddg/oradatavol_data04 1000G 863G 136G 87% /oracle/sales/data04
/dev/vx/dsk/proddg/oradatavol_data05 1000G 856G 143G 86% /oracle/sales/data05
/dev/vx/dsk/proddg/oradatavol_index 500G 384G 115G 77% /oracle/sales/index
/dev/vx/dsk/proddg/oradatavol_data03 1000G 819G 179G 83% /oracle/sales/data03
/dev/vx/dsk/proddg/oradatavol_data 1000G 858G 141G 86% /oracle/sales/data
/dev/vx/dsk/proddg/oradatavol_data01 1000G 839G 160G 84% /oracle/sales/data01
.........