An HACMP cluster can be managed by 1) SMIT or 2) WebSMIT.
Configuration tasks:
- Configure the cluster topology
- Configure HACMP resources and resource groups using the Standard or Extended Configuration path
- Verify and synchronize the HACMP configuration
- Test the cluster
Configuring HACMP Using the Standard Configuration Path
The Initialization and Standard Configuration SMIT menu can be used to add components to the cluster and the ODM.
Prerequisites and default settings of this path are -
- Connectivity must already be established between the nodes (a quick check is sketched below)
- IP aliasing is used as the default mechanism for binding service IP labels/addresses to the network interface
- Once the basic configuration is complete, customize it using the Extended Configuration path
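A quick pre-configuration connectivity check, sketched with hypothetical node names nodea and nodeb:
# Verify basic IP reachability of the remote node
ping -c 2 nodeb
# Confirm the cluster communication daemon is running locally
lssrc -s clcomdES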
AIX files modified by HACMP -
/etc/hosts (all cluster IP addresses are added)
DNS and NIS are disabled during HACMP-related name resolution, so the cluster IP addresses must be resolvable locally (see the illustration below)
/etc/inittab (configured for IP address takeover)
/etc/services (clcomd listens on port 6191)
/etc/snmpd.conf
/etc/syslog.conf
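For illustration only, assuming hypothetical addresses and a service label appsvc, the locally resolvable entries look like this:
# /etc/hosts (hypothetical values)
# 192.168.10.1    nodea
# 192.168.10.2    nodeb
# 192.168.10.10   appsvc
# Check the clcomd port registration added to /etc/services
grep clcomd /etc/services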
Start and Shutdown Scripts
/usr/es/sbin/cluster/utilities/clstart, which is called by the /usr/es/sbin/cluster/etc/rc.cluster script, invokes the AIX SRC to start the cluster daemons.
System Management (C-SPOC) -> Manage HACMP Services -> Start Cluster Services runs the corresponding version of the script, which starts cluster services on each node:
/usr/es/sbin/cluster/sbin/cl_start
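A sketch of starting and then checking cluster services:
# Start cluster services via the SMIT fastpath
smitty clstart
# Confirm the cluster subsystems are active afterwards
lssrc -g cluster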
/usr/sbin/shutdown calls /usr/es/sbin/cluster/etc/rc.shutdown, which subsequently calls /etc/rc.shutdown on the other nodes.
/usr/es/sbin/cluster/utilities/clexit.rc
/usr/es/sbin/cluster/etc/clinfo.rc is invoked by the clinfo daemon whenever a node or network event occurs; a copy of this file must exist on all nodes.
/usr/es/sbin/cluster/etc/rhosts file
clcomd runs on each node to manage inter-node communication.
If the rhosts file is empty (as it is upon installation), clcomd accepts the first connection, makes an entry in the file, and validates the addresses.
If it is not empty, clcomd compares the incoming address with the addresses/labels found in the HACMP configuration and then the rhosts file, and only allows listed connections. If the file is absent, there will be no communication.
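A sketch of seeding the rhosts file by hand on a new node, using hypothetical addresses; clcomd is restarted so it rereads the file:
# One cluster IP address or label per line (hypothetical values)
echo "192.168.10.1" >> /usr/es/sbin/cluster/etc/rhosts
echo "192.168.10.2" >> /usr/es/sbin/cluster/etc/rhosts
# Restart the communication daemon
stopsrc -s clcomdES
startsrc -s clcomdES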
Log Files
/var/hacmp/adm/cluster.log - error logs and HACMP-related events
/var/hacmp/adm/history/cluster.mmddyy - messages generated by HACMP scripts
/var/hacmp/clcomd/clcomd.log - time-stamped messages generated by the cluster communication daemon (2 MB)
/var/hacmp/clverify/current/nodename/* - logs from the current execution of cluster verification
/var/hacmp/clverify/pass/nodename/* - logs from the last time verification passed
/var/hacmp/clverify/pass.prev/nodename/* - logs from the second-to-last time verification passed
/var/hacmp/clverify/fail/nodename/* - logs from the last time verification failed
/var/hacmp/clcomd/clcomddiag.log - communication messages when tracing is turned on (18 MB)
/var/hacmp/clverify/clverify.log - verbose output during verification
/var/hacmp/log/autoverify.log - logging for automatic cluster verification
/var/hacmp/log/clavan.log - application availability logs (start/stop scripts)
/var/hacmp/log/clinfo.log, clinfo.log.n [n=1,2...7] - clinfo logging; the file is installed on both clients and servers
/var/hacmp/cl_testtool.log - output from Cluster Test Tool runs
/var/hacmp/clconfigassist.log - log file for the Cluster Configuration Assistant
/var/hacmp/log/clstrmgr.debug, clstrmgr.debug.n [n=1,2...7] - time-formatted messages from cluster manager activity, used by IBM support
/var/hacmp/log/clstrmgr.debug.long, clstrmgr.debug.long.n - high-level logging between HACMP and RSCT
/var/hacmp/log/cspoc.log - logging of the execution of C-SPOC commands on the local node
/var/hacmp/log/hacmp.out - output generated by event scripts; for verbose output, set the debug level to high
/var/hacmp/log/migration.log - logging while the cluster manager is in a migration state
/var/hacmp/log/oracleesa.log - Oracle Smart Assist log
/var/hacmp/log/sa.log - Smart Assist log
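When chasing a failed event, hacmp.out is usually the first log to read; a minimal sketch:
# Follow event script output live during a cluster event
tail -f /var/hacmp/log/hacmp.out
# Scan the event history for errors
grep -i error /var/hacmp/adm/cluster.log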
Monitoring
HACMP checks whether another instance of the application is already running.
/var/hacmp/log/clavan.log
/usr/es/sbin/cluster/utilities/clavan
clRGinfo - information about resource groups
clRGinfo -p displays the node that has the highest priority
clRGinfo -v displays the resource group's startup, fallover and fallback preferences
/usr/es/sbin/cluster/utilities/clRGinfo -v
cldisp - displays an application-centric view of the cluster configuration
cltopinfo - displays topology information
cspoc.log - /var/hacmp/log/cspoc.log
ps -ef | grep clinfoES
lssrc -g cluster
lssrc -s clinfoES
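These commands combine into a quick status sweep (a sketch, runnable on any node):
# One-shot cluster health check
lssrc -g cluster
/usr/es/sbin/cluster/utilities/cltopinfo
/usr/es/sbin/cluster/utilities/clRGinfo -v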
Manually vary on a VG in passive mode:
varyonvg -n -c -P <vgname>
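A sketch of the passive varyon and a follow-up check, assuming a hypothetical VG named appvg:
# Vary on in passive mode (LVM metadata readable, no application access)
varyonvg -n -c -P appvg
# Confirm the VG state
lsvg appvg | grep -i state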
Data Collection Utilities
snap -e (/usr/sbin/rsct/bin/phoenix.snap)
snap -e relies on clcomdES
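Typical usage; by default snap writes its output under /tmp/ibmsupt:
# Collect HACMP problem-determination data
snap -e
# Review the collected data
ls -l /tmp/ibmsupt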
RSCT command for testing disk heartbeat:
/usr/sbin/rsct/bin/dhb_read
dhb_read -p devicename
dhb_read -p devicename -r (receive mode)
dhb_read -p devicename -t (transmit mode)
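To test a disk heartbeat path, run the receiver on one node and the transmitter on the other against the same shared disk (hdisk2 is a hypothetical name):
# Node A: receive heartbeats on the shared disk
/usr/sbin/rsct/bin/dhb_read -p hdisk2 -r
# Node B: transmit heartbeats on the same disk
/usr/sbin/rsct/bin/dhb_read -p hdisk2 -t
# A healthy path should report that the link is operating normally on both nodes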
Assumptions -
Hostnames are used as node names.
Configure a basic 2-node cluster:
- Configure the topology
- Configure the cluster resources
- Configure the resource group
- Put resources in groups
- Adjust log viewing and management
- Verify and sync the cluster configuration
- Display the cluster configuration
- Make adjustments
- Test the cluster
Maintaining Cluster Information Services
clinfoES daemon
Starting clinfo on a client:
/usr/es/sbin/cluster/etc/rc.cluster or startsrc -s clinfoES
If NetView is in use, clinfo cannot be enabled to receive traps.
If the cluster does not stop cleanly with the clstop command, AIX executes /usr/es/sbin/cluster/utilities/clexit.rc to halt the system so that data does not get corrupted.
Never use kill -9 on clstrmgr.
Edit /etc/cluster/hacmp.term to change the default action after an abnormal exit; clexit.rc checks this file.
If AIX is shut down with -F or -r there is no takeover; with -h, takeover starts.
Adding a Node to the Cluster -
smit hacmp -> Extended Config -> Extended Topology Config -> Configure HACMP Nodes -> Add a Node to the HACMP Cluster
SMIT displays the Add a Node panel; enter the node name and communication path.
To sync, return to Extended Config -> select the Extended Verification and Synchronization option.
On the new node, start cluster services to integrate it into the cluster.
Add the node to a resource group and then sync again.
The node calls the node_up_local script, which calls the cl_mode3 script to activate the concurrent-capable VG in concurrent access mode. The cl_mode3 script calls varyonvg with the -c flag.
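A sketch of confirming that the new node has joined:
# Cluster manager state on the new node (expect ST_STABLE once joined)
lssrc -ls clstrmgrES
# The node should now appear in the topology
/usr/es/sbin/cluster/utilities/cltopinfo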
Removing a Cluster Node -
- Stop cluster services on the node (usually done with the Move Resource Groups option)
- On another node, enter "smit hacmp"
- Extended Config -> Extended Topology Config -> Configure HACMP Nodes -> Remove a Node in the HACMP Cluster
- Select the node to remove, press Enter
- On the local node, run the Extended Config -> Extended Verification and Synchronization option to sync the cluster
Upon sync, the node definition is removed.
The node calls the node_down_local script, which calls the cl_deactivate_vgs script to run varyoffvg.
Changing the Cluster Name -
smit hacmp -> Extended Config -> Extended Topology Config -> Configure an HACMP Cluster -> Add/Change/Show an HACMP Cluster -> enter the name change
Listing Users on All Cluster Nodes -
smit cl_admin -> Security and User Management -> Users in an HACMP Cluster -> List Users in the Cluster
Leave the resource group field blank to list users on all nodes.
Use cl_chpasswd to change a user's password across the cluster.
Cluster Disk Replacement (root privileges) -
The disk must be mirrored.
Replace the disk with one that has a PVID; run chdev on all nodes (see the sketch below).
smit cl_admin -> HACMP LVM -> Cluster Disk Replacement -> select the disk to be replaced (source disk) -> select the replacement disk (new disk)
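A sketch of the PVID step, assuming the replacement disk is hdisk3 (hypothetical):
# Assign a PVID to the new disk; repeat on every node so all nodes see it
chdev -l hdisk3 -a pv=yes
# Verify the PVID
lspv | grep hdisk3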
Adding a Filesystem to an HACMP Cluster LV -
smit cl_admin
HACMP LVM -> Shared Filesystems -> Add a JFS to a Previously Defined LV
Creating a Shared FS with C-SPOC -
The VG need not be varied on for creating the FS.
smitty cl_admin
HACMP LVM -> Shared Filesystems -> Add a JFS; all other nodes will then run importvg -L (see the sketch below)
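importvg -L refreshes a node's definition of an already-imported VG without a full export/import; a sketch with hypothetical names:
# On each other node, pick up the new filesystem definition
importvg -L appvg hdisk2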
Creating a Shared VG with C-SPOC -
All disks should be properly attached and must have PVIDs.
-> smit cl_admin -> HACMP LVM -> Shared VG -> Create a Shared VG, or Create a Shared VG with Data Path Devices
-> SMIT displays a list of cluster nodes
-> select 2 or more nodes (the system compiles a list of free disks that are not part of any VG)
-> select the disks; SMIT displays the Add a VG panel
Enter the PVIDs (of the selected disks), VG name, PP size, VG major number (see the sketch below), and whether to enable cross-site LVM mirroring.
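The VG major number must be free on every node; a sketch of checking beforehand:
# List free major numbers; run on each node and choose one free everywhere
lvlstmajor
# Confirm every candidate disk already has a PVID
lspv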
Adding an LV to a Cluster Using C-SPOC -
smit cl_admin
Cluster LVM -> Shared LV -> Add a Shared LV
SMIT shows the resource groups and associated shared VGs.
-> select the PV; the Add a Shared LV panel appears with fields:
- resource group name, VG name, reference node, number of LPs, LV name, LV type
Mirroring a VG Using C-SPOC -
-> smit cl_admin -> HACMP LVM -> Shared VG -> Mirror a Shared VG
-> select entries from the list of nodes and PVs
-> SMIT displays the Mirror a Shared VG panel; press Enter
-> fields: resource group name, VG name, reference node, volumes (hdisk names), number of copies
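For reference, the underlying AIX operation corresponds to mirrorvg; running it directly is shown only as an illustration, with a hypothetical VG name:
# Manual equivalent on the owning node: keep 2 copies of each LV in appvg
mirrorvg -c 2 appvg
# Verify the copies
lsvg -l appvg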