This is a quick and dirty method of setting up an LPP source and SPOT of AIX 5.3 TL10 SP2, without having to swap DVDs in and out of the AIX host machine. All you need are the actual AIX 5.3 TL10 SP2 DVDs from IBM, a Windows host, and access to your NIM server. The same process works for every AIX level, and has been tested with versions up to AIX 7.2.
If you have actual AIX DVDs that IBM sent to you, create ISO images of the DVDs on Windows, e.g. by using MagicISO. Or, go to Entitled Software Support and download the ISO images there.
SCP these ISO image files over to the AIX NIM server, e.g. by using WinSCP.
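For example, if you prefer the command line over the WinSCP GUI, a transfer from any host with an OpenSSH client could look like this (the image file names and the host name "nimserver" are just examples):
# scp aix53-tl10-sp2-dvd1.iso aix53-tl10-sp2-dvd2.iso root@nimserver:/tmp/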
We need a way to access the data in the ISO images on the NIM server, and to extract the filesets from it (see IBM Wiki).
For AIX 5 systems and older:
Create a logical volume that is big enough to hold the data of one DVD. Check with "lsvg rootvg" if you have enough space in rootvg and what the PP size is. In our example it is 64 MB. Thus, to hold an ISO image of roughly 4.7 GB, we would need roughly 80 LPs of 64 MB.
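Before creating the logical volume, you can verify the PP size and the free space in rootvg, for example:
# lsvg rootvg | grep "PP SIZE"
# lsvg rootvg | grep "FREE PPs"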
# /usr/sbin/mklv -y testiso -t jfs rootvg 80
Create a filesystem on it:
# /usr/sbin/crfs -v jfs -d testiso -m /testiso -An -pro -tn -a frag=4096 -a nbpi=4096 -a ag=8
Create a location where to store all of the AIX filesets on the server:
# mkdir /sw_depot/5300-10-02-0943-full
Copy the ISO image to the logical volume:
# /usr/bin/dd if=/tmp/aix53-tl10-sp2-dvd1.iso of=/dev/rtestiso bs=1m
# chfs -a vfs=cdrfs /testiso
Mount the testiso filesystem and copy the data:
# mount /testiso
# bffcreate -d /testiso -t /sw_depot/5300-10-02-0943-full all
# umount /testiso
Repeat the above five steps for both DVDs. You'll end up with a folder of at least 4 GB.
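Spelled out for the second DVD, the sequence looks like this (assuming the second image is called /tmp/aix53-tl10-sp2-dvd2.iso):
# /usr/bin/dd if=/tmp/aix53-tl10-sp2-dvd2.iso of=/dev/rtestiso bs=1m
# chfs -a vfs=cdrfs /testiso
# mount /testiso
# bffcreate -d /testiso -t /sw_depot/5300-10-02-0943-full all
# umount /testiso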
Delete the iso logical volume:
# rmfs -r /testiso
# rmlv testiso
For AIX 6.1 and AIX 7 systems:
Significant changes have been made in AIX 6.1 and AIX 7 that add new support for NIM. In particular, there is now the capability to use the loopmount command to mount ISO images as filesystems. As an example:
# loopmount -i aixv7-base.iso -m /aix -o "-V cdrfs -o ro"
The above mounts the AIX 7 base ISO as a filesystem called /aix.
So instead of going through the trouble of creating a logical volume, creating a file system, copying the ISO image to the logical volume, and mounting it (which is what you would have done on AIX 5 and before), you can do all of this with a single loopmount command.
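A sketch of the complete copy step using loopmount, following the same depot layout as the AIX 5 example above (the ISO file name, mount point and depot directory are just examples):
# mkdir /aix
# loopmount -i /tmp/aix72-base-dvd1.iso -m /aix -o "-V cdrfs -o ro"
# bffcreate -d /aix -t /sw_depot/aix72-full all
# umount /aix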
Make sure to delete any left-over ISO images:
# rm -rf /tmp/aix53-tl10-sp2-dvd*iso
Define the LPP source (from the NIM A to Z Redbook):
# mkdir /export/lpp_source/LPPaix53tl10sp2
# nim -o define -t lpp_source -a server=master -a location=/export/lpp_source/LPPaix53tl10sp2 -a source=/sw_depot/5300-10-02-0943-full LPPaix53tl10sp2
Check with:
# lsnim -l LPPaix53tl10sp2
Rebuild the .toc:
# nim -Fo check LPPaix53tl10sp2
For newer AIX releases, e.g. AIX 7.1 and AIX 7.2, you may get a warning like:
Warning: 0042-354 c_mk_lpp_source: The lpp_source is missing a
bos.vendor.profile which is needed for the simages attribute. To add
a bos.vendor.profile to the lpp_source run the "update" operation
with "-a recover=yes" and specify a "source" that contains a
bos.vendor.profile such as the installation CD. If your master is not
at level 5.2.0.0 or higher, then manually copy the bos.vendor.profile
into the installp/ppc directory of the lpp_source.
If this happens, you can either do exactly what it says: copy the installp/ppc/bos.vendor.profile file from your source DVD ISO image into the installp/ppc directory of the LPP source. Or, you can remove the entire LPP source, copy the installp/ppc/bos.vendor.profile from the DVD ISO image into the directory that contains the full AIX software set (in the example above: /sw_depot/5300-10-02-0943-full), and then re-create the LPP source. That should avoid the warning.
If you ignore this warning, you'll notice that the next step (creating a SPOT from the LPP source) will fail.
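As a sketch of the first approach, assuming the installation ISO is mounted on /aix via loopmount (adjust the path if you used the /testiso approach) and using the example LPP source name from above, copy the file into the LPP source and rebuild the .toc:
# cp /aix/installp/ppc/bos.vendor.profile /export/lpp_source/LPPaix53tl10sp2/installp/ppc/
# nim -Fo check LPPaix53tl10sp2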
Define a SPOT from the LPP source:
# nim -o define -t spot -a server=master -a location=/export/spot/SPOTaix53tl10sp2 -a source=LPPaix53tl10sp2 -a installp_flags=-aQg SPOTaix53tl10sp2
Check the SPOT:
# nim -o check SPOTaix53tl10sp2
# nim -o lppchk -a show_progress=yes SPOTaix53tl10sp2
If you get the following message when you open a vterm:
The session is reserved for physical serial port communication.
Then this may be caused by the fact that your system is still in MDC, or manufacturing default configuration mode. It can easily be resolved:
- Power down your frame.
- Power it back up to standby status.
- Then, when activating the default LPAR, choose "exit the MDC".
The compare_report command is a very useful utility to compare the software installed on two systems, for example to make sure the same software is installed on both nodes of a PowerHA cluster.
First, create the necessary reports:
# ssh node2 "lslpp -Lc" > /tmp/node2
# lslpp -Lc > /tmp/node1
Next, generate the report. There are four interesting options: -l, -h, -m and -n:
- -l Generates a report of base system installed software that is at a lower level.
- -h Generates a report of base system installed software that is at a higher level.
- -m Generates a report of filesets not installed on the other system.
- -n Generates a report of filesets not installed on the base system.
For example:
# compare_report -b /tmp/node1 -o /tmp/node2 -l
#(baselower.rpt)
#Base System Installed Software that is at a lower level
#Fileset_Name:Base_Level:Other_Level
bos.msg.en_US.net.ipsec:6.1.3.0:6.1.4.0
bos.msg.en_US.net.tcp.client:6.1.1.1:6.1.4.0
bos.msg.en_US.rte:6.1.3.0:6.1.4.0
bos.msg.en_US.txt.tfs:6.1.1.0:6.1.4.0
xlsmp.msg.en_US.rte:1.8.0.1:1.8.0.3
# compare_report -b /tmp/node1 -o /tmp/node2 -h
#(basehigher.rpt)
#Base System Installed Software that is at a higher level
#Fileset_Name:Base_Level:Other_Level
idsldap.clt64bit62.rte:6.2.0.5:6.2.0.4
idsldap.clt_max_crypto64bit62.rte:6.2.0.5:6.2.0.4
idsldap.cltbase62.adt:6.2.0.5:6.2.0.4
idsldap.cltbase62.rte:6.2.0.5:6.2.0.4
idsldap.cltjava62.rte:6.2.0.5:6.2.0.4
idsldap.msg62.en_US:6.2.0.5:6.2.0.4
idsldap.srv64bit62.rte:6.2.0.5:6.2.0.4
idsldap.srv_max_cryptobase64bit62.rte:6.2.0.5:6.2.0.4
idsldap.srvbase64bit62.rte:6.2.0.5:6.2.0.4
idsldap.srvproxy64bit62.rte:6.2.0.5:6.2.0.4
idsldap.webadmin62.rte:6.2.0.5:6.2.0.4
idsldap.webadmin_max_crypto62.rte:6.2.0.5:6.2.0.4
AIX-rpm:6.1.3.0-6:6.1.3.0-4
# compare_report -b /tmp/node1 -o /tmp/node2 -m
#(baseonly.rpt)
#Filesets not installed on the Other System
#Fileset_Name:Base_Level
Java6.sdk:6.0.0.75
Java6.source:6.0.0.75
Java6_64.samples.demo:6.0.0.75
Java6_64.samples.jnlp:6.0.0.75
Java6_64.source:6.0.0.75
WSBAA70:7.0.0.0
WSIHS70:7.0.0.0
# compare_report -b /tmp/node1 -o /tmp/node2 -n
#(otheronly.rpt)
#Filesets not installed on the Base System
#Fileset_Name:Other_Level
xlC.sup.aix50.rte:9.0.0.1
AIX-rpm is a "virtual" package which reflects what has been installed on the system by installp. It is created by the /usr/sbin/updtvpkg script when the rpm.rte is installed, and can be run anytime the administrator chooses (usually after installing something with installp that is required to satisfy some dependency by an RPM package).
Since AIX-rpm has to have some sort of version number, it simply reflects the level of bos.rte on the system where /usr/sbin/updtvpkg is being run. It's just informational - nothing should be checking the level of AIX-rpm.
AIX doesn't just automatically run /usr/sbin/updtvpkg every time that something gets installed or deinstalled because on some slower systems with lots of software installed, /usr/sbin/updtvpkg can take a LONG time.
If you want to run the command manually:
# /usr/sbin/updtvpkg
If you get an error similar to "cannot read header at 20760 for lookup" when running updtvpkg, run an rpm --rebuilddb:
# rpm --rebuilddb
Once you have run updtvpkg, you can run rpm -qa to see the new AIX-rpm package.
A very good article about migrating AIX from version 5.3 to 6.1 can be found on the following page of IBM developerWorks:
http://www.ibm.com/developerworks/aix/library/au-migrate_nimadm/index.html?ca=drs
For a smooth nimadm process, make sure that you clean up as many filesets on your server as possible (get rid of the things you no longer need). The more filesets that need to be migrated, the longer the process will take. Also make sure that openssl/openssh is up-to-date on the server to be migrated; the migration is likely to break if you have old versions installed.
A gigabit Ethernet connection between the NIM server and the server to be upgraded is also very useful, as the nimadm process copies the client rootvg over to the NIM server and back.
The log file for a nimadm process can be found on the NIM server in /var/adm/ras/alt_mig.
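For reference, a typical nimadm invocation from the NIM master looks roughly like this (a sketch only; the client name aixclient01, the cache volume group nimadmvg, the NIM resources and the target disk hdisk1 are examples, so substitute your own):
# nimadm -j nimadmvg -c aixclient01 -s SPOTaix61tl05sp03 -l LPPaix61tl05sp03 -d hdisk1 -Y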
For example, if you wish to add the bos.alt_disk_install.rte fileset to a SPOT:
List the available spots:
# lsnim -t spot | grep 61
SPOTaix61tl05sp03 resources spot
SPOTaix61tl03sp07 resources spot
List the available lpp sources:
# lsnim -t lpp_source | grep 61
LPPaix61tl05sp03 resources lpp_source
LPPaix61tl03sp07 resources lpp_source
Check if the SPOT already has this fileset:
# nim -o showres SPOTaix61tl05sp03 | grep -i bos.alt
No output is shown. The fileset is not part of the SPOT. Check if the LPP source has the fileset:
# nim -o showres LPPaix61tl05sp03 | grep -i bos.alt
bos.alt_disk_install.boot_images 6.1.5.2 I N usr
bos.alt_disk_install.rte 6.1.5.1 I N usr,root
Install the first fileset (bos.alt_disk_install.boot_images) in the SPOT. The other fileset is a prerequisite of the first fileset and will be installed automatically as well.
# nim -o cust -a filesets=bos.alt_disk_install.boot_images -a lpp_source=LPPaix61tl05sp03 SPOTaix61tl05sp03
Note: Use the -F option to force a fileset into the SPOT, if needed (e.g. when the SPOT is in use for a client).
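For example, the forced variant of the command above would look like this (only needed if NIM refuses the customization because the SPOT is allocated):
# nim -Fo cust -a filesets=bos.alt_disk_install.boot_images -a lpp_source=LPPaix61tl05sp03 SPOTaix61tl05sp03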
Check if the SPOT now has the fileset installed:
# nim -o showres SPOTaix61tl05sp03 | grep -i bos.alt
bos.alt_disk_install.boot_images
bos.alt_disk_install.rte 6.1.5.1 C F Alternate Disk Installation
This is how to translate a hardware address to a physical location:
The command lscfg shows the hardware addresses of all hardware. For example, the following command will give you more detail on an individual device (e.g. ent1):
# lscfg -pvl ent1
ent1 U788C.001.AAC1535-P1-T2 2-Port 10/100/1000 Base-TX PCI-X Adapter
2-Port 10/100/1000 Base-TX PCI-X Adapter:
Network Address.............001125C5E831
ROM Level.(alterable).......DV0210
Hardware Location Code......U788C.001.AAC1535-P1-T2
PLATFORM SPECIFIC
Name: ethernet
Node: ethernet@1,1
Device Type: network
Physical Location: U788C.001.AAC1535-P1-T2
This ent1 device is an 'Internal Port'. If we check ent2 on the same box:
# lscfg -pvl ent2
ent2 U788C.001.AAC1535-P1-C13-T1 2-Port 10/100/1000 Base-TX PCI-X
2-Port 10/100/1000 Base-TX PCI-X Adapter:
Part Number.................03N5298
FRU Number..................03N5298
EC Level....................H138454
Brand.......................H0
Manufacture ID..............YL1021
Network Address.............001A64A8D516
ROM Level.(alterable).......DV0210
Hardware Location Code......U788C.001.AAC1535-P1-C13-T1
PLATFORM SPECIFIC
Name: ethernet
Node: ethernet@1
Device Type: network
Physical Location: U788C.001.AAC1535-P1-C13-T1
This is a device on a PCI I/O card.
For a physical address like U788C.001.AAC1535-P1-C13-T1:
- U788C.001.AAC1535 - This part identifies the 'system unit/drawer'. If your system is made up of several drawers, then look on the front and match the ID to this section of the address. Now go round the back of the server.
- P1 - This is the PCI bus number. You may only have one.
- C13 - Card Slot C13. They are numbered on the back of the server.
- T1 - This is port 1 of 2 that are on the card.
Your internal ports won't have the Card Slot numbers, just the T number, representing the port. This should be marked on the back of your server. E.g.: U788C.001.AAC1535-P1-T2 means unit U788C.001.AAC1535, PCI bus P1, port T2 and you should be able to see T2 printed on the back of the server.
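To get a quick overview of all Ethernet adapters and their physical location codes in one go, you can also run something like the following (ent is just an example; lscfg without arguments lists every device with its location code):
# lscfg | grep ent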
In this section, we will configure the NIM master and create some basic installation resources:
- Ensure that Volume 1 of the AIX DVD is in the drive.
- Install the NIM master fileset:
# installp -agXd /dev/cd0 bos.sysmgt.nim
- Configure NIM master:
# smitty nim_config_env
Set fields as follows:
- "Primary Network Interface for the NIM Master": selected interface
- "Input device for installation images": "cd0"
- If you have already set up an /export file system, you may choose not to create new file systems for /export/lpp_source and /export/spot; it is up to you.
- Select to prepend the level to the LPP_SOURCE and SPOT names, so you can identify the level of AIX that was used to create the LPP_SOURCE and SPOT.
- "Remove all newly added NIM definitions if the operation fails": "yes"
- Press Enter.
- Exit when complete.
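As an alternative to SMIT, the nim_master_setup command can perform roughly the same initial configuration from the command line. This is only a sketch; check the options available at your AIX level before using it:
# nim_master_setup -a device=/dev/cd0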
You may run into an issue here where it says that the SPOT cannot be created because the LPP_SOURCE is missing the simages (short for system images) attribute, which in turn is caused by the bos.vendor.profile file missing from the LPP_SOURCE. This means the LPP_SOURCE doesn't include everything required to create the SPOT. It looks like a bug, because bos.vendor.profile can be found on the AIX media, but somehow it is not copied to the target LPP_SOURCE folder while the LPP_SOURCE is created. It has been seen in AIX 7.1 TL4. If you run into this, do the following:
- Check if bos.vendor.profile exists on the installation media. It should be in the installp/ppc folder.
- If so, rerun the steps above (starting with smitty nim_config_env), and while the LPP_SOURCE is being created, copy the bos.vendor.profile file yourself from the AIX installation media to the LPP_SOURCE target folder. For example, if your installation folder is /aix (assuming you have mounted the first AIX installation ISO image using loopmount on mount point /aix; and assuming you are using AIX 7.1 TL4), then run:
# cp /aix/installp/ppc/bos.vendor.profile /export/lpp_source/710-04lpp_source1/installp/ppc/bos.vendor.profile
- Initialize each NIM client:
# smitty nim_mkmac
Enter the host name of the appropriate LPAR. Set fields as follows:
- "Kernel to use for Network Boot": "mp"
- "Cable Type": "tp"
- Press Enter.
- Exit when complete.
A more extensive document about setting up NIM can be found here:
http://www-01.ibm.com/support/docview.wss?context=SWG10q1=setup+guide&uid=isg3T1010383
A useful command to update software on your AIX server is install_all_updates. It is similar to running smitty update_all, but it works from the command line. The only thing you need to provide is the directory name, for example:
# install_all_updates -d .
This installs all the software updates from the current directory. Of course, you will have to make sure the current directory actually contains software updates. Don't worry about generating a Table Of Contents (.toc) file in this directory, because install_all_updates generates one for you.
By default, install_all_updates will apply the filesets; use -c to commit them instead. Also by default, it will expand any file systems as needed; use -x to prevent this behavior. It will install any requisites by default (use -n to prevent this). You can use -p to run a preview, and -s to skip the recommended maintenance or technology level verification at the end of the install_all_updates output. You may have to use the -Y option to agree to all license agreements.
To install all available updates from the CD-ROM, agree to all license agreements, and skip the recommended maintenance or technology level verification, run:
# install_all_updates -d /cdrom -Y -s
Use this procedure to quickly configure an HACMP cluster, consisting of 2 nodes and disk heartbeating.
Prerequisites:
Make sure you have the following in place:
- Have the IP addresses and host names of both nodes, and for a service IP label. Add these to the /etc/hosts files on both nodes of the new HACMP cluster (see the example /etc/hosts entries after this list).
- Make sure you have the HACMP software installed on both nodes. Just install all the filesets of the HACMP CD-ROM, and you should be good.
- Make sure you have this entry in /etc/inittab (as one of the last entries):
clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit
- In case you're using EMC SAN storage, make sure you configure your disks correctly as hdiskpower devices. Or, if you're using a mksysb image, you may want to follow the EMC ODM cleanup procedure.
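For example, the /etc/hosts entries on both nodes might look like this (the host names and addresses below are made up; use your own):
10.0.0.11   node01
10.0.0.12   node02
10.0.0.20   node01svc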
Steps:
- Create the cluster and its nodes:
# smitty hacmp
Initialization and Standard Configuration
Configure an HACMP Cluster and Nodes
Enter a cluster name and select the nodes you're going to use. It is vital here to have the host names and IP addresses correctly entered in the /etc/hosts file of both nodes.
- Create an IP service label:
# smitty hacmp
Initialization and Standard Configuration
Configure Resources to Make Highly Available
Configure Service IP Labels/Addresses
Add a Service IP Label/Address
Enter an IP Label/Address (press F4 to select one), and enter a Network name (again, press F4 to select one).
- Set up a resource group:
# smitty hacmp
Initialization and Standard Configuration
Configure HACMP Resource Groups
Add a Resource Group
Enter the name of the resource group. It's a good habit to make sure that a resource group name ends with "rg", so you can recognize it as a resource group. Also, select the participating nodes. For the "Fallback Policy", it is a good idea to change it to "Never Fallback". This way, when the primary node in the cluster comes up, and the resource group is up-and-running on the secondary node, you won't see a failover occur from the secondary to the primary node.
Note: The order of the nodes is determined by the order you select the nodes here. If you put in "node01 node02" here, then "node01" is the primary node. If you want to have this any other way, now is a good time to correctly enter the order of node priority.
- Add the Service IP/Label to the resource group:
# smitty hacmp
Initialization and Standard Configuration
Configure HACMP Resource Groups
Change/Show Resources for a Resource Group (standard)
Select the resource group you've created earlier, and add the Service IP/Label.
- Run a verification/synchronization:
# smitty hacmp
Extended Configuration
Extended Verification and Synchronization
Just hit [ENTER] here. Resolve any issues that may come up from this synchronization attempt. Repeat this process until the verification/synchronization process returns "Ok". It's a good idea here to select "Automatically correct errors".
- Start the HACMP cluster:
# smitty hacmp
System Management (C-SPOC)
Manage HACMP Services
Start Cluster Services
Select both nodes to start. Make sure to also start the Cluster Information Daemon.
- Check the status of the cluster:
# clstat -o
# cldump
Wait until the cluster is stable and both nodes are up.
Basically, the cluster is now up-and-running. However, during the Verification & Synchronization step, it will complain about not having a non-IP network. The next part describes how to set up a disk heartbeat network, which allows the nodes of the HACMP cluster to exchange heartbeat packets over a SAN disk. We're assuming here that you're using EMC storage. The process on other types of SAN storage is more or less similar, apart from some naming differences: e.g. SAN disks are called "hdiskpower" devices on EMC storage and "vpath" devices on IBM SAN storage.
First, look at the available SAN disk devices on your nodes, and select a small disk that won't be used to store any data, but only for the purpose of disk heartbeating. It is a good habit to request your SAN storage admin to zone a small LUN to both nodes of the HACMP cluster as a disk heartbeating device. Make a note of the PVID of this disk device. For example, if you choose to use device hdiskpower4:
# lspv | grep hdiskpower4
hdiskpower4 000a807f6b9cc8e5 None
So, we're going to set up the disk heartbeat network on device hdiskpower4, with PVID 000a807f6b9cc8e5:
- Create a concurrent volume group:
# smitty hacmp
System Management (C-SPOC)
HACMP Concurrent Logical Volume Management
Concurrent Volume Groups
Create a Concurrent Volume Group
Select both nodes to create the concurrent volume group on by pressing F7 for each node. Then select the correct PVID. Give the new volume group a name, for example "hbvg".
- Set up the disk heartbeat network:
# smitty hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Networks
Add a Network to the HACMP Cluster
Select "diskhb" and accept the default Network Name.
- Run a discovery:
# smitty hacmp
Extended Configuration
Discover HACMP-related Information from Configured Nodes
- Add the disk device:
# smitty hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Communication Interfaces/Devices
Add Communication Interfaces/Devices
Add Discovered Communication Interface and Devices
Communication Devices
Select the same disk device on each node by pressing F7.
- Run a Verification & Synchronization again, as described earlier above. Then check with clstat and/or cldump again, to check if the disk heartbeat network comes online.