Support matrix / life cycle for IBM PowerHA (with a typical 3-year lifecycle):
| Version | AIX 5.1 | AIX 5.2 | AIX 5.3 | AIX 6.1 | AIX 7.1 | AIX 7.2 | Release Date | End Of Support |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HACMP 5.1 | Yes | Yes | Yes | No | No | No | 7/11/2003 | 9/1/2006 |
| HACMP 5.2 | Yes | Yes | Yes | No | No | No | 7/16/2004 | 9/30/2007 |
| HACMP 5.3 | No | ML4+ | ML2+ | Yes | No | No | 8/12/2005 | 9/30/2009 |
| HACMP 5.4.0 | No | TL8+ | TL4+ | No | No | No | 7/28/2006 | 9/30/2011 |
| HACMP 5.4.1 | No | TL8+ | TL4+ | Yes | Yes | No | 9/11/2007 | 9/30/2011 |
| PowerHA 5.5 | No | No | TL7+ | TL2 SP1+ | Yes | No | 11/14/2008 | 4/30/2012 |
| PowerHA 6.1 | No | No | TL9+ | TL2 SP1+ | Yes | No | 10/20/2009 | 4/30/2015 |
| PowerHA 7.1.0 | No | No | No | TL6+ | Yes | No | 9/10/2010 | 9/30/2014 |
| PowerHA 7.1.1 | No | No | No | TL7 SP2+ | TL1 SP2+ | No | 9/10/2010 | 4/30/2015 |
| PowerHA 7.1.2 | No | No | No | TL8 SP1+ | TL2 SP1+ | No | 10/3/2012 | 4/30/2016 |
| PowerHA 7.1.3 | No | No | No | TL9 SP1+ | TL3 SP1+ | No | 10/7/2013 | 4/30/2018 |
| PowerHA 7.2.0 | No | No | No | TL9 SP5+ | TL3 SP5+ / TL4 SP1+ | TL0 SP1+ | 12/4/2015 | 4/30/2019 |
| PowerHA 7.2.1 | No | No | No | No | TL3+ | TL0 SP1+ | 12/16/2016 | 4/30/2020 |
| PowerHA 7.2.2 | No | No | No | No | TL4+ | TL0 SP1+ | 12/15/2017 | TBD |
Source: PowerHA for AIX Version Compatibility Matrix

In PowerHA version 7.2.2, you can use a graphical user interface (GUI) to monitor your cluster environment.
The PowerHA GUI provides the following advantages over the PowerHA command line:
- Monitor the status of all clusters, sites, nodes, and resource groups in your environment.
- Scan event summaries and read a detailed description of each event. If an event occurred because of an error or issue in your environment, you can read suggested solutions to fix the problem.
- Search and compare log files, in a format that makes it easy to identify important information.
- View cluster properties, such as the PowerHA SystemMirror version, the names of sites and nodes, and repository disk information.
Check out a video that provides an overview for the PowerHA GUI at https://www.youtube.com/watch?v=d_QVvh2dcCM.
Information on how to install and start using it can be found on the IBM website.
The following procedure describes how to perform a command-line based upgrade of the Hardware Management Console (HMC) from version V8 R8.6.0 SP1 to V8 R8.7.0 SP1. This involves these two steps:
- First, upgrade to version V8 R8.7.0 (also known as MH01704).
- Next, update V8 R8.7.0 to Service Pack 1.
For the sake of this procedure, let's assume that you have the following two systems available, and that they can ping each other:
- The HMC, called hmc01, at IP address 172.16.52.100.
- A separate AIX system, called aix01, at IP address 172.16.52.101.
The separate AIX system will be used as a network source for the installation software. We'll put the HMC software upgrade/update files on that system, and then tell the HMC to fetch the software from it using SFTP. This works as long as SSH connectivity is correctly set up on the AIX system. Also, let's assume the root password on the AIX system has been set to P@ssw0rd (which, of course, is not a good root password).
So, let's first perform the upgrade to version V8 R8.7.0. This is based on
https://www-01.ibm.com/support/docview.wss?uid=nas8N1020108. Download files img2a, img3a, base.img, disk1.img and hmcnetworkfiles.sum into a folder on the separate AIX system. You may download these files directly from
ftp://ftp.software.ibm.com/software/server/hmc/network/v8870/x86/ to the AIX system.
You can use FTP to download the files, by logging in anonymously to the IBM FTP server and using any password (it says to specify your complete email address, but in fact anything you type will be fine). For example:
# ftp ftp.software.ibm.com
Connected to dispsd-40-www3.boulder.ibm.com.
220 ProFTPD 1.3.5b Server (proftpd)
Name (ftp.software.ibm.com:root): anonymous
331 Anonymous login ok, send your complete email address as your password
Password:
230 Anonymous access granted, restrictions apply
ftp> bin
200 Type set to I
ftp> cd software/server/hmc/network/v8870/x86/
250 CWD command successful
ftp> prompt
Interactive mode off.
ftp> mget *
200 PORT command successful
150 Opening BINARY mode data connection for img3a (34015945 bytes)
...
Downloading these files may take a while as they are several gigabytes in size.
Or, if you have wget installed on the AIX system, the following command can be used to get the individual files, for example:
# wget ftp://ftp.software.ibm.com/software/server/hmc/network/v8870/x86/*
Now that you have downloaded all the required files, for example in folder /HMC on the AIX system, make sure that the files can be read by everyone:
# chmod -R 755 /HMC
# chown -R root.system /HMC
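Before pointing the HMC at the AIX system, the presence and permissions of the downloaded files can be sanity-checked with a small script. This is a minimal sketch that uses a temporary directory with empty placeholder files for illustration; on the real AIX system, set dir=/HMC and drop the placeholder line.

```shell
# Verify the five HMC network-install files exist and are readable.
# Placeholder files in a temporary directory are used for illustration;
# on the real system, set dir=/HMC and remove the "touch" line.
dir=$(mktemp -d)
files="img2a img3a base.img disk1.img hmcnetworkfiles.sum"
for f in $files; do touch "$dir/$f"; done   # placeholders only
chmod -R 755 "$dir"

missing=0
for f in $files; do
    [ -r "$dir/$f" ] || { echo "missing or unreadable: $f"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all HMC install files present and readable"
rm -rf "$dir"
```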
Then, log in to the command line of the HMC:
# ssh -l hscroot 172.16.52.100
hscroot@172.16.52.100's password:
Last login: Wed Jan 17 22:24:49 2018
hscroot@hmcw01:~>
For this to work, you obviously need to know the password for the hscroot account on the HMC, and you need to have remote SSH access enabled on the HMC. If you need to enable remote SSH access, log in with a web browser to the GUI of the HMC at https://172.16.52.100 and change the remote access setting there (we won't cover how to do this in this procedure).
On the HMC, run the following command to save the upgrade data to disk:
# saveupgdata -r disk
Then, tell it to go download the upgrade files through SFTP from the AIX server:
# getupgfiles -r sftp -h 172.16.52.101 -u root --passwd 'P@ssw0rd' -d /HMC
Note here how the root password of the AIX system is set to P@ssw0rd, and that the files will be downloaded from the /HMC folder on the AIX system. Also note that with newer OpenSSH levels on AIX, root may not be allowed to start an SFTP session to the AIX system remotely; in that case, it is better to use a different user account (other than root) to download the files. Any account will do, as long as it has access to the files in the /HMC folder (or whichever other folder you downloaded the HMC network installation files to).
Downloading these files to the HMC may take a while. If you want, you can start up an additional SSH session to the HMC (log in to the HMC in a separate window), and then run the following command to monitor the progress of the files download to file system /hmcdump:
# monhmc -r disk 1
After a while, the prompt will be returned.
Then, set up the system for an altdisk boot:
# chhmc -c altdiskboot -s enable --mode upgrade
Then, reboot the system to initiate the upgrade:
# hmcshutdown -r -t now
This upgrade may take a while, like 15 minutes or so, depending on the size of the upgrade and model of the HMC. You may set up a simple ping to the HMC, so you can monitor when it shows back up online after the upgrade:
# ping 172.16.52.100
Once it starts pinging again, you may start up a new SSH session to the HMC. Please note that even though you can log back in to the HMC, the upgrade may not yet be entirely complete. Use the following command on the HMC to test whether the upgrade is complete:
# lshmc -V
If this command returns "A connection to the Command Server failed", the upgrade is not yet complete. Wait a while before proceeding, and repeat the lshmc command after a few minutes. Once it properly outputs the version information, you may proceed. For example:
hscroot@hmc01:~> lshmc -V
"version= Version: 8
Release: 8.7.0
Service Pack: 0
HMC Build level 1709071101
","base_version=V8R8.7.0
"
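The wait-and-retry step can be sketched as a small polling loop. wait_for_hmc is a hypothetical helper name (lshmc is the real HMC command); you would run something like this from your SSH session on the HMC:

```shell
# Poll until the HMC command server answers again after the upgrade reboot.
# wait_for_hmc is a hypothetical helper; lshmc is the real HMC command.
wait_for_hmc() {
    until lshmc -V >/dev/null 2>&1; do
        sleep 60    # command server not up yet; try again in a minute
    done
    echo "HMC command server is up"
}
```

Calling wait_for_hmc then simply blocks until "lshmc -V" succeeds.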
At this point, the upgrade to version V8 R8.7.0 is complete, and you can proceed with the next step: updating the HMC to Service Pack 1 (also known as MH01725).
This service pack can be downloaded from
IBM Fix Central. On this site, search for your HMC model. For example, if you have a 7042 model HMC, type in the search window: "7042". Then select V8R8.7.0, and then download only MH01725. Do not download MH01704 (we already completed that step above). You'll have to download an update ISO image (for example: HMC_Update_V8R870_SP1_x86.iso), and 4 MH01725* files. Put these files in a separate folder on the AIX system, for example in /SP1.
On the HMC, run the following command to start the update:
# updhmc -t sftp -h 172.16.52.101 -u root --passwd 'P@ssw0rd' -f /SP1/HMC_Update_V8R870_SP1_x86.iso -r
This command will initiate the update, and the HMC will reboot by itself. This step may take another 15 minutes or so. Once the HMC is available again after the reboot, you can verify that the update is complete by running the "lshmc -V" command; it should now report that Service Pack 1 is installed.
For example:
hscroot@hmc01:~> lshmc -V
"version= Version: 8
Release: 8.7.0
Service Pack: 1
HMC Build level 1712090351
MH01725 - HMC 870 Service Pack 1 Release [x86_64]
","base_version=V8R8.7.0
"
At this point, both the upgrade and update are complete. You may want to log in to the GUI of the HMC using a web browser, check for any alert messages, and close them out. Usually, the upgrade/update of an HMC triggers a few alert messages, and there's no need for IBM to respond to them (if you're using the call-home feature of the HMC), as you already know that these messages occurred during the upgrade/update.
Please also note that with this release (V8 R8.7.0), there is no longer a classic interface, so the web-based GUI of the HMC may look somewhat different to you, if you're used to using the classic web-based GUI.
If using SFTP isn't an option, for example because SFTP isn't allowed or available on any server, you can also transfer the ISO image to the HMC first, and then run the update from the HMC itself.
This works as follows, assuming you want to update the HMC with fix MH01752:
First, download the ISO image from IBM Fix Central. You'll notice that for fix MH01752, the ISO image has a filename called MH01752_x86.iso. Transfer this file over to the hardware management console, assuming here that your HMC is called "hmc01":
# scp MH01752_x86.iso hscroot@hmc01:~
Now the ISO image file is in the home directory of user hscroot on the HMC. If you log in through SSH to the HMC and just do an "ls", you'll see the file right there.
Next, issue the update from the HMC command line. Be sure to use the "-c" option as well, as that will tell the HMC to delete the ISO image file once the update has been completed:
# updhmc -t disk -f /home/hscroot/MH01752_x86.iso -r -c
That's it: that will update the HMC using the local ISO image file on the HMC itself.
Configuring NTP on CentOS 6 (and similar versions) involves a number of steps, especially if you want to have it configured correctly and securely. Here's a quick guide on how to do it:
First of all, you have to determine the IP addresses of the NTP servers you are going to use. You may have to contact your network administrator to obtain them. Ensure that you get at least two time server IP addresses.
Then, install the NTP packages and verify that they are installed:
# yum -y install ntp ntpdate
# rpm -q ntp ntpdate
Edit file /etc/ntp.conf and ensure that option "broadcastclient" is commented out (which it is by default with a new installation).
Enable ntp and ntpdate at system boot time:
# chkconfig ntpd on
# chkconfig ntpdate on
Ensure that file /etc/ntp/step-tickers is empty. This makes sure that if ntpdate is run, it will use one of the time servers configured in /etc/ntp.conf.
# cp /dev/null /etc/ntp/step-tickers
Add two time servers to /etc/ntp.conf, or use any of the pre-configured time servers in this file. Comment out the pre-configured servers, if you are using your own time servers.
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 1.2.3.4
server 5.6.7.8
Do not copy the example above. Use the IP addresses for each time server that you've received from your network administrator instead.
Enable NTP slewing, so the clock is adjusted gradually instead of in sudden big jumps when the server's time is off, by adding "-x" to the OPTIONS line in /etc/sysconfig/ntpd. Also add "SYNC_HWCLOCK=yes" to /etc/sysconfig/ntpdate, so the hardware clock is synchronized along with any time changes.
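After these edits, the relevant lines look like the sketch below. The exact default OPTIONS line varies per release; the point is to append "-x" to whatever options are already there:

```shell
# /etc/sysconfig/ntpd -- "-x" makes ntpd slew (gradually adjust) the clock;
# "-g" shown here as an example of a pre-existing option to keep
OPTIONS="-g -x"

# /etc/sysconfig/ntpdate -- also sync the hardware clock after stepping
SYNC_HWCLOCK=yes
```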
Stop the NTP service, if it is running:
# service ntpd stop
Start the ntpdate service (this will synchronize the system clock and the hardware clock):
# service ntpdate start
Now, start the time service:
# service ntpd start
Wait a few minutes for the server to synchronize its time with the time servers. This may take anywhere between a few and 15 minutes. Then check the status of the time synchronization:
# ntpq -p
# ntpstat
The asterisk in front of the time server name in the "ntpq -p" output indicates that the client has reached time synchronization with that particular time server.
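That check can also be scripted by looking for a line that starts with an asterisk. A minimal sketch, using captured sample output for illustration; on a real system, replace the ntpq_output variable with the actual output of ntpq -pn:

```shell
# Determine sync status from "ntpq -p" style output: a leading '*' marks
# the peer the client is synchronized to. Sample data for illustration.
ntpq_output='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*1.2.3.4         .GPS.            1 u   33   64  377    0.512   -0.021   0.010
+5.6.7.8         .PPS.            1 u   21   64  377    1.100    0.034   0.015'

if printf '%s\n' "$ntpq_output" | grep -q '^\*'; then
    status=synchronized
else
    status="not synchronized"
fi
echo "$status"
```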
Done!
Whenever you have to connect through SSH to a lot of different servers, and you create a command for it like this:
# for h in $SERVER_LIST; do ssh $h "uptime"; done
You may run into a prompt that stops your command, especially when a new server is added to $SERVER_LIST, like this:
The authenticity of host 'myserver (1.2.3.4)' can't be established.
RSA key fingerprint is .....
Are you sure you want to continue connecting (yes/no)?
And you'll have to type "yes" every time this prompt is encountered.
So, how do you automate this, and not have to type "yes" with every new host?
The answer is to disable strict host key checking with the ssh command like this:
ssh -oStrictHostKeyChecking=no $h uptime
Please note that you should only do this with hosts that you're familiar with, and/or are in trusted networks, as it bypasses a security question.
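The loop can be wrapped in a small helper so the option isn't forgotten. nssh_all is a hypothetical helper name; BatchMode=yes additionally prevents ssh from stopping to ask for a password, which suits unattended loops:

```shell
# Run a command on every host in $SERVER_LIST without the host-key prompt.
# Only use this for hosts on trusted networks, as it skips a security check.
nssh_all() {
    for h in $SERVER_LIST; do
        ssh -o StrictHostKeyChecking=no -o BatchMode=yes "$h" "$@"
    done
}
```

For example: SERVER_LIST="web01 web02"; nssh_all uptime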
Security Enhanced Linux, or SELinux for short, is enabled by default on Red Hat Enterprise Linux (and similar) systems.
To determine the status of SELinux, simply run:
# sestatus
There may be times when it is necessary to disable SELinux. For example, when a Linux system is not Internet-facing, you may not need to have SELinux enabled.
From the command line, you can edit the /etc/sysconfig/selinux file. This file is a symbolic link to file /etc/selinux/config.
By default, option SELINUX will be set to enforcing in this file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
By changing it to "permissive", SELinux will still log policy violations but no longer enforce them, which effectively disables it (use "disabled" to turn SELinux off entirely). The change in this file takes effect after a reboot:
SELINUX=permissive
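Note that this file only controls the mode at boot time; at runtime, getenforce shows the current mode and "setenforce 0" switches to permissive immediately. Reading the configured boot-time mode can also be scripted. A sketch using a sample of the config file text; on a real system, read /etc/selinux/config instead:

```shell
# Extract the configured SELinux mode. Sample config text for illustration;
# on a real system use: awk -F= '/^SELINUX=/{print $2}' /etc/selinux/config
cfg='# This file controls the state of SELinux on the system.
SELINUX=permissive
SELINUXTYPE=targeted'

mode=$(printf '%s\n' "$cfg" | awk -F= '/^SELINUX=/{print $2}')
echo "configured mode: $mode"
```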
Red Hat Enterprise Linux 7 and similar Linux distributions have a new command to set the hostname of the system easily: hostnamectl. For example, to set the hostname of a RHEL 7 system to "flores", run:
# hostnamectl set-hostname flores
The hostnamectl command provides some other interesting features.
For example, it can be used to set the deployment type of the system, such as "development" or "production" or anything else you'd like to give it (as long as it's a single word). You can do so, for example by setting it to "production", by running:
# hostnamectl set-deployment production
Another option is to set the location of the system (and here you can use multiple words):
# hostnamectl set-location "third floor rack A12 U24"
To retrieve all this information, use hostnamectl as well to query the status:
root@(enemigo) selinux # hostnamectl status
Static hostname: flores
Icon name: computer-laptop
Chassis: laptop
Deployment: production
Location: third floor rack A12 U24
Machine ID: 4d8158f54d5166ff374bb372599351c4
Boot ID: ae8e7dccf14a492984fb5462c4da2aa2
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-693.2.2.el7.x86_64
Architecture: x86-64
A Red Hat Enterprise Linux system should have a single default gateway defined. However, it does sometimes occur that a system has multiple default gateways. Here's how to detect multiple default gateways and how to get rid of them:
First, check the number of default gateways defined, by running the netstat command and looking for entries that start with 0.0.0.0:
# netstat -nr | grep ^0.0.0.0
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 em1
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 em2
In the example above, there are 2 default gateway entries, one to 192.168.0.1, and another one to 192.168.1.1.
Quite often, more than one default gateway will be defined on a RHEL system if there are multiple network interfaces present and a GATEWAY entry is defined in each of the network interface files in /etc/sysconfig/network-scripts/ifcfg-*:
# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-*
ifcfg-em1:GATEWAY=192.168.0.1
ifcfg-em2:GATEWAY=192.168.1.1
On a system with multiple network interfaces, it is best to define the default gateway in file /etc/sysconfig/network instead. This is the global network file. Put the following entries in this file, assuming your default gateway is 192.168.0.1 and the network interface to be used for the default gateway is em1:
GATEWAY=192.168.0.1
GATEWAYDEV=em1
Next, remove any GATEWAY entries in any of the ifcfg-* files in /etc/sysconfig/network-scripts.
Finally, restart the network service:
# service network restart
This should resolve multiple default gateways, and the output of the netstat command should now only show one single entry with 0.0.0.0.
Note: If the netstat command is not available on the system, you may also determine the number of default gateways, by running:
# ip route show | grep ^default
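The check can be scripted the same way. A minimal sketch using captured sample output; on a real system, replace the routes variable with $(ip route show):

```shell
# Count default routes: more than one usually indicates conflicting
# GATEWAY entries. Sample "ip route show" output for illustration.
routes='default via 192.168.0.1 dev em1
default via 192.168.1.1 dev em2
192.168.0.0/24 dev em1 proto kernel scope link src 192.168.0.10'

n=$(printf '%s\n' "$routes" | grep -c '^default')
echo "default gateways found: $n"
```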
A few tricks for the ping command to thoroughly test your network connectivity and check how much time a ping request takes:
Decrease the interval between ping requests from the default of 1 second, using the -i option, to send, for example, 10 ping requests every second. As a test, to ping 192.168.0.1 ten times a second, run:
# ping -i .1 192.168.0.1
You can also go up to 1/100th of a second:
# ping -i .01 192.168.0.1
To increase the default payload size of 56 bytes (64 bytes including the ICMP header), use the -s option. For example, to send 1 KB with every ping request, run:
# ping -s 1024 192.168.0.1
Or combine the -i and -s options:
# ping -s 1024 -i .01 192.168.0.1
This is a quick NFS configuration using RHEL, without too many concerns about security, fine-tuning, or access control. In our scenario, there are two hosts:
- NFS Server, IP 10.1.1.100
- NFS Client, IP 10.1.1.101
First, start with the NFS server:
On the NFS server, run the below commands to begin the NFS server installation:
[nfs-server] # yum install nfs-utils rpcbind
Next, for this procedure, we export an arbitrary directory called /opt/nfs. Create /opt/nfs directory:
[nfs-server] # mkdir -p /opt/nfs
Edit the /etc/exports file (which is the NFS exports file) to add the below line, exporting folder /opt/nfs to client 10.1.1.101:
/opt/nfs 10.1.1.101(no_root_squash,rw)
Here, "rw" exports the directory read-write, and "no_root_squash" lets root on the client act as root on the export (convenient for testing, but a security risk in production).
Next, make sure to open port 2049 on your firewall to allow client requests:
[nfs-server] # firewall-cmd --zone=public --add-port=2049/tcp --permanent
[nfs-server] # firewall-cmd --reload
Start the rpcbind and NFS server daemons in this order:
[nfs-server] # service rpcbind start; service nfs start
Check the NFS server status:
[nfs-server] # service nfs status
Redirecting to /bin/systemctl status nfs.service
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled;
vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
order-with-mounts.conf
Active: active (exited) since Tue 2017-11-14 09:06:21 CST; 1h 14min ago
Main PID: 2883 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Next, export all the file systems configured in /etc/exports:
[nfs-server] # exportfs -rav
And check the currently exported file systems:
[nfs-server] # exportfs -v
Next, continue with the NFS client:
Install the required packages:
[nfs-client] # yum install nfs-utils rpcbind
[nfs-client]# service rpcbind start
Create a mount point directory on the client, for example /mnt/nfs:
[nfs-client] # mkdir -p /mnt/nfs
Discover the NFS exported file systems:
[nfs-client] # showmount -e 10.1.1.100
Export list for 10.1.1.100:
/opt/nfs 10.1.1.101
Mount the previously NFS exported /opt/nfs directory:
[nfs-client] # mount 10.1.1.100:/opt/nfs /mnt/nfs
Test the correctness of the setup between the NFS server and the NFS client by creating a file in the NFS mounted directory on the client side:
[nfs-client] # cd /mnt/nfs/
[nfs-client] # touch testfile
[nfs-client] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
Move to the server side and check if the testfile file exists:
[nfs-server] # cd /opt/nfs/
[nfs-server] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
At this point it is working, but it is not set up permanently (as in: it will be gone when either the NFS server or NFS client is rebooted). To ensure it keeps working even after a reboot, perform the following steps:
On the NFS server side, to have the NFS server service enabled at system boot time, run:
[nfs-server] # systemctl enable nfs-server
On the NFS client side, add an entry to the /etc/fstab file that will ensure the NFS file system is mounted at boot time:
10.1.1.100:/opt/nfs /mnt/nfs nfs4 soft,intr,nosuid 0 0
The options for the NFS file systems are as follows:
- soft = No hard mounting, avoids hanging file access commands on the NFS client, if the NFS servers is unavailable.
- intr = Allow NFS requests to be interrupted if the NFS server goes down or can't be reached.
- nosuid = This prevents remote users from gaining higher privileges by running a setuid program.
If you need to know on the NFS server side, which clients are using the NFS file system, you can use the netstat command, and search for both the NFS server IP address and port 2049:
[nfs-server] # netstat -an | grep 10.1.1.100:2049
This will tell you the established connections for each of the clients, for example:
tcp 0 0 10.1.1.100:2049 10.1.1.101:757 ESTABLISHED
In the example above you can see that IP address 10.1.1.101 on port 757 (NFS client) is connected to port 2049 on IP address 10.1.1.100 (NFS server).
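Extracting just the client addresses from that output can be done with awk. A sketch with sample netstat output for illustration; on the real server, pipe "netstat -an | grep :2049" into the awk command instead:

```shell
# Print the IP address of each established NFS client connection.
# $5 is the foreign (client) address column in "netstat -an" output.
netstat_out='tcp        0      0 10.1.1.100:2049         10.1.1.101:757          ESTABLISHED
tcp        0      0 10.1.1.100:2049         10.1.1.102:923          ESTABLISHED'

clients=$(printf '%s\n' "$netstat_out" |
    awk '$NF == "ESTABLISHED" { split($5, a, ":"); print a[1] }')
echo "$clients"
```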