SDD to SDDPCM Upgrade on Cluster Servers
1. Check the state of the cluster services on both the nodes (the services themselves are stopped later, in step 4).
[root@Node 2:/usr/es/sbin/cluster/utilities]$ lssrc -ls clstrmgrES
Current state: ST_STABLE
sccsid = "@(#)36 1.135.1.91 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 53haes_r550, 0845B_hacmp550 10/21/08 13:31:47"
i_local_nodeid 1, i_local_siteid -1, my_handle 4
ml_idx[3]=0 ml_idx[4]=1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 10
local node vrmf is 5500
cluster fix level is "0"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId - 3 NodeName - node1
PgSpFree = 4191043 PvPctBusy = 0 PctTotalTimeIdle = 24.991928
DNP Values for NodeId - 4 NodeName - node2
PgSpFree = 4191026 PvPctBusy = 0 PctTotalTimeIdle = 24.975261
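Run the same check on the other node as well. As an additional quick check, ‘lssrc -g cluster’ lists the status of all cluster subsystems on a node:
lssrc -g cluster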
2. Capture the current configuration on both the nodes for reference, and transfer the output files to another server.
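The capture commands below assume the target directory already exists; create it on both the nodes first:
mkdir -p /tmp/SDD110911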
lsvpcfg >> /tmp/SDD110911/`hostname`_lsvpcfg.110911
datapath query essmap >> /tmp/SDD110911/`hostname`_datapath.110911
df -gt >> /tmp/SDD110911/`hostname`_df.110911
lsfs >> /tmp/SDD110911/`hostname`_lsfs.110911
lspv >> /tmp/SDD110911/`hostname`_lspv.110911
[root@Node 2:/usr/es/sbin/cluster/utilities]$ lsvpcfg
vpath14 (Avail pv hbvg) 75NL8311663 = hdisk16 (Avail ) hdisk36 (Avail )
vpath20 (Avail pv EAI1_VG00_N) 75NL83114F4 = hdisk42 (Avail ) hdisk47 (Avail )
vpath21 (Avail pv EAI1_VG01_N) 75NL83114F5 = hdisk43 (Avail ) hdisk48 (Avail )
vpath22 (Avail pv EAI1_VG02_N) 75NL83115F5 = hdisk44 (Avail ) hdisk49 (Avail )
vpath23 (Avail pv EAI2_VG00_N) 75NL8311650 = hdisk45 (Avail ) hdisk50 (Avail )
vpath24 (Avail pv EAI2_VG01_N) 75NL831174F = hdisk46 (Avail ) hdisk51 (Avail )
3. Take a mksysb backup and run the OS check script from the NIM master (or from wherever the check script is configured).
4. Stop the cluster services through smitty cl_stop on both the nodes.
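After the stop completes, ‘lssrc -ls clstrmgrES’ on each node should no longer report ST_STABLE (a stopped cluster manager typically shows ST_INIT):
lssrc -ls clstrmgrES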
5. Stop the “sddsrv” service.
[root@Node 2:/usr/es/sbin/cluster/utilities]$ stopsrc -s sddsrv
0513-044 The sddsrv Subsystem was requested to stop.
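Confirm the subsystem is inoperative before continuing:
lssrc -s sddsrv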
6. Remove the fcs* devices shown in the output of ‘lsdev -Cc adapter’.
[root@Node 2:/usr/es/sbin/cluster/utilities]$ rmdev -Rdl fcs0
fcnet0 deleted
hdisk16 deleted
hdisk42 deleted
hdisk43 deleted
hdisk44 deleted
hdisk45 deleted
hdisk46 deleted
fscsi0 deleted
fcs0 deleted
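If the server has more than one Fibre Channel adapter, repeat the removal for each remaining adapter. Here the second path of each vpath (hdisk36 and hdisk47 to hdisk51) presumably sits behind a second adapter; assuming it is fcs1:
rmdev -Rdl fcs1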
7. Remove the device “dpo”.
[root@Node 2:/usr/es/sbin/cluster/utilities]$ lsdev |grep -i dpo
dpo Available Data Path Optimizer Parent
[root@Node 2:/usr/es/sbin/cluster/utilities]$ rmdev -Rdl dpo
vpath14 deleted
vpath20 deleted
vpath21 deleted
vpath22 deleted
vpath23 deleted
vpath24 deleted
dpo deleted
[root@Node 2:/usr/es/sbin/cluster/utilities]$ lspv
hdisk0 00c3f5b4e1ab7e0a rootvg active
hdisk1 00c3f5b4a4750b8d rootvg active
8. Remove the SDD filesets ‘devices.sdd.53.rte’ and ‘devices.fcp.disk.ibm.rte’ using smitty install_remove.
[root@Node 2:/usr/es/sbin/cluster/utilities]$ smitty install_remove
Remove Installed Software
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* SOFTWARE name [devices.sdd.53.rte]
PREVIEW only? (remove operation will NOT occur) no
REMOVE dependent software? no
EXTEND file systems if space needed? no
DETAILED output? yes
[root@Node 2:/usr/es/sbin/cluster/utilities]$ smitty install_remove
Remove Installed Software
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* SOFTWARE name [devices.fcp.disk.ibm.rte]
PREVIEW only? (remove operation will NOT occur) no
REMOVE dependent software? no
EXTEND file systems if space needed? no
DETAILED output? yes
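The same two removals can also be done from the command line instead of the smitty screens above:
installp -u devices.sdd.53.rte devices.fcp.disk.ibm.rte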
9. Download the SDDPCM and MPIO filesets from the IBM website and transfer them to the server where the activity is going to be performed.
10. Install the SDDPCM and MPIO filesets using smitty install.
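A command-line alternative, assuming the downloaded filesets were copied to /tmp/sddpcm (the directory is only an example):
installp -aXY -d /tmp/sddpcm all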
11. Reboot the machine.
12. Check the new configuration with the ‘pcmpath query essmap’ or ‘lspcmcfg’ command.
If required, manually export and import the volume groups between both the nodes. During the import, make sure the PVIDs of each volume group are the same (refer to the ‘lspv’ output taken before the activity).
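A minimal sketch of the re-import for one volume group, assuming its PVID now appears on hdisk42 after the reboot (the hdisk number will differ; match it against the saved ‘lspv’ output first):
exportvg EAI1_VG00_N
importvg -y EAI1_VG00_N hdisk42
varyoffvg EAI1_VG00_N
The varyoffvg at the end leaves the volume group offline so that HACMP can manage it.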
13. smitty hacmp > Extended Configuration > Extended Topology Configuration > Configure HACMP Communication Interfaces/Devices > Remove Communication Interfaces/Devices (now remove the disk heartbeat device, which is listed as a vpath device).
14. Use the smitty screens captured below to reach the removal panel:
[root@Node 2:/tmp]$
HACMP for AIX
Move cursor to desired item and press Enter.
Initialization and Standard Configuration
> Extended Configuration
System Management (C-SPOC)
Problem Determination Tools
Extended Configuration
Move cursor to desired item and press Enter.
Discover HACMP-related Information from Configured Nodes
> Extended Topology Configuration
Extended Resource Configuration
Extended Cluster Service Settings
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration
Export Definition File for Online Planning Worksheets
Import Cluster Configuration from Online Planning Worksheets File
Extended Verification and Synchronization
HACMP Cluster Test Tool
Extended Topology Configuration
Move cursor to desired item and press Enter.
Configure an HACMP Cluster
Configure HACMP Nodes
Configure HACMP Sites
Configure HACMP Networks
> Configure HACMP Communication Interfaces/Devices
Configure HACMP Persistent Node IP Label/Addresses
Configure HACMP Global Networks
Configure HACMP Network Modules
Configure Topology Services and Group Services
Show HACMP Topology
Configure HACMP Communication Interfaces/Devices
Move cursor to desired item and press Enter.
Add Communication Interfaces/Devices
Change/Show Communication Interfaces/Devices
> Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System Settings
Remove the Vpath devices (diskhb)
15. Run “Discover HACMP-related Information from Configured Nodes”, then run “Extended Verification and Synchronization”.
16. smitty hacmp > Extended Configuration > Extended Topology Configuration > Configure HACMP Communication Interfaces/Devices > Add Communication Interfaces/Devices > Add Discovered Communication Interface and Devices > Communication Devices (select the hdisk device for the disk heartbeat).
17. Start the cluster services on the primary node and verify the cluster status.
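Verify that ‘lssrc -ls clstrmgrES’ again reports Current state: ST_STABLE (as in step 1) before bringing in the secondary node:
lssrc -ls clstrmgrES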
18. Start the cluster services on the secondary node.
19. Inform the application/database team to start their services.
20. Get the confirmation status from the application/database team.