When an Etherchannel has been configured on AIX, using a primary and a backup adapter for failover purposes, it is possible to force a failover between these adapters.
To do this, first check what the currently active adapter is within the Etherchannel. For example, if the Etherchannel is called ent4, run:
# entstat -d ent4 | grep "Active channel"
Active channel: backup adapter
As you can see, the Etherchannel is currently active on the backup adapter. To see the defined adapters for both primary and backup, run:
# lsattr -El ent4
Now, force it to fail over to the primary adapter:
# /usr/lib/methods/ethchan_config -f ent4
Please note that it is best to run this command from the console, as you may temporarily lose network connectivity while the Etherchannel is failing over. You may also notice "ETHERCHANNEL FAILOVER" entries being logged in the error report, which you can view with the errpt command.
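For example, to check the error report for these entries:
# errpt | grep -i ETHERCHANNEL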
Now run the entstat command again to determine the active channel:
# entstat -d ent4 | grep "Active channel"
Active channel: primary channel
The following command generates a random password from 9 random bytes; note that the base64 encoding of those 9 bytes results in a 12-character string:
# openssl rand -base64 9
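If you need a password of an exact length, one option (just a sketch; the character set and the length in the cut command are up to you) is to generate more random data and trim it:
# openssl rand -base64 32 | tr -dc 'A-Za-z0-9' | cut -c1-9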
A default sendmail configuration will do DNS queries for MX records. It does this even when it is set up to use a mail relay server for sending mail out. This will cause mail to fail if it is not able to look up the MX record, for example when the SMTP relay server is not known in DNS.
To set up sendmail so it does not query DNS for MX records when using a DS (smart relay) entry, put square brackets ("[ ]") around the hostname (or IP address) of the mail relay server configured in /etc/mail/sendmail.cf.
To do so, edit the sendmail configuration file:
# vi /etc/mail/sendmail.cf
Search for the DS entry. For example:
DSsmtp.unixhealthcheck.com
Change it to:
DS[smtp.unixhealthcheck.com]
Then save the configuration file, and refresh sendmail:
# refresh -s sendmail
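After the refresh, you can verify the active smart relay setting, and optionally send a test message (the recipient address below is just a placeholder) and watch the queue with mailq:
# grep "^DS" /etc/mail/sendmail.cf
DS[smtp.unixhealthcheck.com]
# echo "test" | mail -s "test message" user@example.com
# mailq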
There are two commands that can be used to add a route on an AIX system.
The first one is route, which can be used to temporarily add a route to an AIX system. Meaning: if the system is rebooted after the route has been added, the route will be lost again after the reboot.
The second one is chdev -l inet0, which can be used to permanently add a route on an AIX system. When this command is used, the route will persist across reboots, because the command writes the route information into the ODM of AIX.
Let's say you need to add a route on a system to network 10.0.0.0, which uses a netmask of 255.255.255.0 (or "24" in short mask notation), and the gateway used to reach this network is 192.168.0.1. Obviously, adjust this to your own situation.
To temporarily add a route on a system for this network, use the following route command:
# route add -net 10.0.0.0 -netmask 255.255.255.0 192.168.0.1
After running this command, you can use the netstat -nr command to confirm that the route has indeed been set up:
# netstat -nr | grep 192.168.0.1
10.0.0/24          192.168.0.1      UG      0      0   en1   -   -
To remove that route again, simply change the route command from "add" to "delete":
# route delete -net 10.0.0.0 -netmask 255.255.255.0 192.168.0.1
Again, confirm with the netstat -nr command that the route has indeed been removed.
Now, as mentioned earlier, the route command only adds a route temporarily (until the next reboot). To make things permanent, use the chdev command. This command takes the following form:
chdev -l inet0 -a route=net,-netmask,[your-netmask-goes-here],-static,[your-network-address-goes-here],[your-gateway-goes-here]
For example:
# chdev -l inet0 -a route=net,-netmask,255.255.255.0,-static,10.0.0.0,192.168.0.1
inet0 changed
This time, again, you can confirm with the netstat -nr command that the route has been set up. But now you can also confirm that the route has been added to the ODM, by using this command:
# lsattr -El inet0 -a route | grep 192.168.0.1
route net,-netmask,255.255.255.0,-static,10.0.0.0,192.168.0.1 Route True
At this point, you can reboot the system, and you'll notice that the route is still there, by repeating the netstat -nr and lsattr -El inet0 commands.
To remove this permanent route from the AIX system, simply change the chdev command above from "route" to "delroute":
# chdev -l inet0 -a delroute=net,-netmask,255.255.255.0,-static,10.0.0.0,192.168.0.1
inet0 changed
Finally, again confirm using the netstat -nr and lsattr -El inet0 commands that the route has indeed been removed.
IBM has implemented a new feature for JFS2 filesystems to prevent simultaneous mounting within PowerHA clusters.
While PowerHA can give multiple systems concurrent access to volume groups, mounting a JFS2 filesystem on multiple nodes simultaneously will cause filesystem corruption. Such simultaneous mounts can also cause a system crash, when the system detects a conflict between data or metadata in the filesystem and the in-memory state of the filesystem. The only exception is mounting the filesystem read-only, in which case files and directories can't be changed.
In AIX 7100-01 and 6100-07 a new feature called "Mount Guard" has been added to prevent simultaneous or concurrent mounts. If a filesystem appears to be mounted on another server, and the feature is enabled, AIX will prevent mounting on any other server. Mount Guard is not enabled by default, but is configurable by the system administrator. The option is not allowed to be set on base OS filesystems such as /, /usr, /var etc.
To turn on Mount Guard on a filesystem you can permanently enable it via /usr/sbin/chfs:
# chfs -a mountguard=yes /mountpoint
/mountpoint is now guarded against concurrent mounts.
The same option is used with crfs when creating a filesystem.
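For example, to create a new filesystem with Mount Guard enabled right away (the volume group name and size are just examples):
# crfs -v jfs2 -g datavg -m /mountpoint -a size=1G -a mountguard=yes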
To turn off mount guard:
# chfs -a mountguard=no /mountpoint
/mountpoint is no longer guarded against concurrent mounts.
To determine the mount guard state of a filesystem:
# lsfs -q /mountpoint
Name Nodename Mount Pt VFS Size Options Auto Accounting
/dev/fslv -- /mountpoint jfs2 4194304 rw no no
(lv size: 4194304, fs size: 4194304, block size: 4096, sparse files: yes,
inline log: no, inline log size: 0, EAformat: v1, Quota: no, DMAPI:
no, VIX: yes, EFS: no, ISNAPSHOT: no, MAXEXT: 0, MountGuard: yes)
The /usr/sbin/mount command will not show the mount guard state.
When a filesystem is protected against concurrent mounting, and a second mount attempt is made you will see this error:
# mount /mountpoint
mount: /dev/fslv on /mountpoint:
Cannot mount guarded filesystem.
The filesystem is potentially mounted on another node
After a system crash, the filesystem may still be flagged as mounted and refuse to mount. In this case the guard state can be temporarily overridden with the "noguard" option of the mount command:
# mount -o noguard /mountpoint
mount: /dev/fslv on /mountpoint:
Mount guard override for filesystem.
The filesystem is potentially mounted on another node.
Reference:
http://www-01.ibm.com/support/docview.wss?uid=isg3T1018853

This is a quick and dirty method of setting up an LPP source and SPOT of AIX 5.3 TL10 SP2, without having to swap DVDs in and out of the AIX host machine. What you basically need is the actual AIX 5.3 TL10 SP2 DVDs from IBM, a Windows host, and access to your NIM server. This process works for basically every AIX level, and has been tested with versions up to AIX 7.2.
If you have actual AIX DVDs that IBM sent to you, create ISO images of the DVDs through Windows, e.g. by using MagicISO. Or, go to Entitled Software Support and download the ISO images there.
SCP these ISO image files over to the AIX NIM server, e.g. by using WinSCP.
We need a way to access the data in the ISO images on the NIM server, and to extract the filesets from them (see the IBM Wiki).
For AIX 5 systems and older:
Create a logical volume that is big enough to hold the data of one DVD. Check with "lsvg rootvg" if you have enough space in rootvg and what the PP size is. In our example it is 64 MB. Thus, to hold an ISO image of roughly 4.7 GB, we would need roughly 80 LPs of 64 MB.
# /usr/sbin/mklv -y testiso -t jfs rootvg 80
Create filesystem on it:
# /usr/sbin/crfs -v jfs -d testiso -m /testiso -An -pro -tn -a frag=4096 -a nbpi=4096 -a ag=8
Create a location where to store all of the AIX filesets on the server:
# mkdir /sw_depot/5300-10-02-0943-full
Copy the ISO image to the logical volume:
# /usr/bin/dd if=/tmp/aix53-tl10-sp2-dvd1.iso of=/dev/rtestiso bs=1m
# chfs -a vfs=cdrfs /testiso
Mount the testiso filesystem and copy the data:
# mount /testiso
# bffcreate -d /testiso -t /sw_depot/5300-10-02-0943-full all
# umount /testiso
Repeat the above 5 steps for both DVDs. You'll end up with a folder of at least 4 GB.
Delete the iso logical volume:
# rmfs -r /testiso
# rmlv testiso
When you're using AIX 7 / AIX 6.1:
Significant changes have been made in AIX 7 and AIX 6.1 that add new support for NIM. In particular, there is now the capability to use the loopmount command to mount ISO images as filesystems. As an example:
# loopmount -i aixv7-base.iso -m /aix -o "-V cdrfs -o ro"
The above mounts the AIX 7 base iso as a filesystem called /aix.
So instead of going through the trouble of creating a logical volume, creating a file system, copying the ISO image to the logical volume, and mounting it (which is what you would have done on AIX 5 and before), you can do all of this with a single loopmount command.
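From there, you can extract the filesets into a software depot directory in the same way as shown above for AIX 5 (the directory name is just an example):
# mkdir -p /sw_depot/aix7-base-full
# bffcreate -d /aix -t /sw_depot/aix7-base-full all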
Make sure to delete any left-over ISO images:
# rm -rf /tmp/aix53-tl10-sp2-dvd*iso
Define the LPP source (from the NIM A to Z redbook):
# mkdir /export/lpp_source/LPPaix53tl10sp2
# nim -o define -t lpp_source -a server=master -a location=/export/lpp_source/LPPaix53tl10sp2 -a source=/sw_depot/5300-10-02-0943-full LPPaix53tl10sp2
Check with:
# lsnim -l LPPaix53tl10sp2
Rebuild the .toc:
# nim -Fo check LPPaix53tl10sp2
For newer AIX releases, e.g. AIX 7.1 and AIX 7.2, you may get a warning like:
Warning: 0042-354 c_mk_lpp_source: The lpp_source is missing a
bos.vendor.profile which is needed for the simages attribute. To add
a bos.vendor.profile to the lpp_source run the "update" operation
with "-a recover=yes" and specify a "source" that contains a
bos.vendor.profile such as the installation CD. If your master is not
at level 5.2.0.0 or higher, then manually copy the bos.vendor.profile
into the installp/ppc directory of the lpp_source.
If this happens, you can either do exactly what it says: copy the installp/ppc/bos.vendor.profile file from your source DVD ISO image into the installp/ppc directory of the LPP source. Or, you can remove the entire LPP source, copy the installp/ppc/bos.vendor.profile from the DVD ISO image into the directory that contains the full AIX software set (in the example above: /sw_depot/5300-10-02-0943-full), and then re-create the LPP source. That should avoid the warning.
If you ignore this warning, then you'll notice that the next step (create a SPOT from the LPP source) will fail.
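For the first option, assuming the DVD ISO is still mounted (here on /aix, as in the loopmount example above), the copy could look like this, followed by rebuilding the .toc of the LPP source:
# cp /aix/installp/ppc/bos.vendor.profile /export/lpp_source/LPPaix53tl10sp2/installp/ppc/
# nim -Fo check LPPaix53tl10sp2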
Define a SPOT from the LPP source:
# nim -o define -t spot -a server=master -a location=/export/spot/SPOTaix53tl10sp2 -a source=LPPaix53tl10sp2 -a installp_flags=-aQg SPOTaix53tl10sp2
Check the SPOT:
# nim -o check SPOTaix53tl10sp2
# nim -o lppchk -a show_progress=yes SPOTaix53tl10sp2
The dsh (distributed shell) is a very useful (and powerful) utility that can be used to run commands on multiple servers at the same time. By default it is not installed on AIX, but you can install it yourself:
First, install the dsm filesets. DSM is short for Distributed Systems Management, and these filesets include the dsh command. The filesets can be found on the AIX installation media. Install the following two filesets (dsm.core and dsm.dsh); once installed, they will show up like this:
# lslpp -l | grep -i dsm
dsm.core 7.1.4.0 COMMITTED Distributed Systems Management
dsm.dsh 7.1.4.0 COMMITTED Distributed Systems Management
Next, we'll need to set up some environment variables that are used by dsh. The best way to do this is to put them in the .profile of the root user (~root/.profile), so you won't have to set these environment variables manually every time you log in:
# cat .profile
alias bdf='df -k'
alias cls="tput clear"
stty erase ^?
export TERM=vt100
# For DSH
export DSH_NODE_RSH=/usr/bin/ssh
export DSH_NODE_LIST=/root/hostlist
export DSH_NODE_OPTS="-q"
export DSH_REMOTE_CMD=/usr/bin/ssh
export DCP_NODE_RCP=/usr/bin/scp
export DSH_CONTEXT=DEFAULT
In the output from .profile above, you'll notice that the variable DSH_NODE_LIST is set to /root/hostlist. You can change this to any file name you like. DSH_NODE_LIST points to a text file with server names in it (one per line) that the dsh command will use: every host name in that file will have the command run on it when you use dsh. So, if you put 3 host names in the file and then run a dsh command, that command will be executed on those 3 hosts in parallel.
Note: You may also use the environment variable WCOLL instead of DSH_NODE_LIST.
So, create file /root/hostlist (or any file that you've configured for environment variable DSH_NODE_LIST), and add host names in it. For example:
# cat /root/hostlist
host1
host2
host3
Next, you'll have to set up ssh keys for every host in the hostlist file. The dsh command uses ssh to run commands, so you'll have to enable password-less ssh communication from the host where you've installed dsh (let's call that the source host) to all the hosts where you want to run commands using dsh (we'll call those the target hosts).
To set this up, follow these steps:
- Run "ssh-keygen -t rsa" as user root on the source and all target hosts.
- Next, copy the contents of ~root/.ssh/id_rsa.pub from the source host into file ~root/.ssh/authorized_keys on all the target hosts (see the sketch after this list).
- Test if you can ssh from the source host to all the target hosts, by running "ssh host1 date" for each target host. If you're using DNS, and have fully qualified domain names configured for your hosts, test with the fully qualified domain name instead, for example "ssh host1.domain.com", because dsh will also resolve host names through DNS and use those instead of the short host names. The first time you ssh from the source host to a target host you will be asked a question; answer "yes" to add an entry to the known_hosts file.
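A minimal way to distribute the key, assuming the example host names used above and that ssh-keygen has already created the .ssh directory on each target host, is to run the following on the source host (you will be asked for the root password of each target host once):
# ssh-keygen -t rsa
# for host in host1 host2 host3; do cat ~/.ssh/id_rsa.pub | ssh $host 'cat >> .ssh/authorized_keys'; done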
Now log out from the source host and log back in again as root. Since you've just set environment variables in root's .profile, that file needs to be read again, which happens when root logs in.
At this point, you should be able to issue a command on all the target hosts, at the same time. For example, to run the "date" command on all the servers:
# dsh date
Also, you can now copy files using dcp (notice the similarity between ssh and dsh, and scp and dcp), for example to copy a file /etc/exclude.rootvg from the source host to all the target hosts:
# dcp /etc/exclude.rootvg /etc/exclude.rootvg
Note: dsh and dcp are very powerful for running commands on, or copying files to, multiple servers. However, keep in mind that they can be very destructive as well. A command such as "dsh halt -q" will halt all the servers at the same time. So you may want to triple-check any dsh or dcp command before actually running it. That is, if you value your job, of course.
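One more example: to compare the AIX level of all hosts in the list, you can pipe the dsh output through dshbak, a small utility that should be included with the dsm filesets and that groups the output per host (verify it is present on your system):
# dsh oslevel -s | dshbak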
The following procedure can be used to copy the printer configuration from one AIX system to another AIX system. This has been tested using different AIX levels, and has worked great. This is particularly useful if you have more than just a few printer queues configured, and configuring all printer queues manually would be too cumbersome.
- Create a full backup of your system, just in case something goes wrong.
- Run lssrc -g spooler and check if qdaemon is active. If not, start it with startsrc -s qdaemon.
- Copy /etc/qconfig from the source system to the target system.
- Copy /etc/hosts from the source system to the target system, but be careful to not lose important entries in /etc/hosts on the target system (e.g. the hostname and IP address of the target system should be in /etc/hosts).
- On the target system, refresh the qconfig file by running: enq -d
- On the target system, remove all files in /var/spool/lpd/pio/@local/custom, /var/spool/lpd/pio/@local/dev and /var/spool/lpd/pio/@local/ddi.
- Copy the contents of /var/spool/lpd/pio/@local/custom on the source system to the target system into the same folder.
- Copy the contents of /var/spool/lpd/pio/@local/dev on the source system to the target system into the same folder.
- Copy the contents of /var/spool/lpd/pio/@local/ddi on the source system to the target system into the same folder.
- Create the following script, called newq.sh, and run it:
#!/bin/ksh
# Recreate the virtual printer definitions from the copied custom files.
let counter=0

# Restore the default SMIT printer definition files.
cp /usr/lpp/printers.rte/inst_root/var/spool/lpd/pio/@local/smit/* \
   /var/spool/lpd/pio/@local/smit

cd /var/spool/lpd/pio/@local/custom
chmod 775 /var/spool/lpd/pio/@local/custom

# Each file in the custom directory is named "queuename:queuedevice".
for FILE in `ls` ; do
   let counter="$counter+1"
   chmod 664 $FILE
   QNAME=`echo $FILE | cut -d':' -f1`
   DEVICE=`echo $FILE | cut -d':' -f2`
   echo $counter : chvirprt -q $QNAME -d $DEVICE
   chvirprt -q $QNAME -d $DEVICE
done
- Test and confirm printing is working.
- Remove file newq.sh.
If you have an LPAR that is not booting from your NIM server, and you're certain the IP configuration on the client is correct (for example, because a ping test completes successfully), then you should have a look at the bootp process on the NIM server as a possible cause of the issue.
To do this, you can put bootp into debug mode. Edit file /etc/inetd.conf, and comment out the bootps entry with a hash mark (#). This prevents bootpd from being started by inetd in response to a bootp request. Then refresh the inetd daemon to pick up the changes to file /etc/inetd.conf:
# refresh -s inetd
Now check if any bootpd processes are still running. If necessary, use kill -9 to stop them, and check again that no bootpd processes remain. Now that bootp has stopped, go ahead and bring up another PuTTY window on your NIM master. You'll need another window, because putting bootp into debug mode is going to lock the window while it is active. Run the following command in that window:
# bootpd -d -d -d -d -s
Now you can retry booting the LPAR from your NIM master, and you should see information scrolling by, showing what is going on.
Afterwards, once you've identified the issue, make sure to stop the bootpd process (just hit ctrl-c), change file /etc/inetd.conf back the way it was, and run refresh -s inetd to pick up the change again.
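To summarize, the cleanup might look like this (the grep is just a quick way to confirm no bootpd process is left running after you hit ctrl-c):
# ps -ef | grep bootpd
Then remove the hash mark from the bootps entry in /etc/inetd.conf again, and refresh inetd:
# vi /etc/inetd.conf
# refresh -s inetd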