OpenSSL on AIX can be impacted by the Heartbleed bug. Only OpenSSL 1.0.1e (IBM AIX VRMFs 1.0.1.500 and 1.0.1.501) is vulnerable to the Heartbleed bug (CVE-2014-0160). All OpenSSL 0.9.8.xxxx and 12.9.8.xxxx levels are NOT vulnerable to this CVE.
IBM released OpenSSL 1.0.1g by the end of April 2014, which is the official fix.
The following is information about an ifix that was made available by IBM. The ifix is just a workaround; IBM currently recommends upgrading to OpenSSL 1.0.1.511 instead (see below).
- This is a workaround compiled with the feature turned off.
- This is not OS dependent. It only depends on the OpenSSL level.
Below are the download and install/uninstall instructions.
The OpenSSL ifix doesn't require a reboot. However, it is a shared library update, so any daemons that use the library, such as sshd, will need to be restarted. If you aren't sure which applications on your machine use OpenSSL, a reboot is recommended.
To download it, go to: https://testcase.software.ibm.com/ and log in as "Anonymous" (no password needed). Click on the "fromibm" folder, and then click on the "aix" folder. Scroll down the list until you find the following file and click on it to download:
0160_ifix.140409.epkg.Z
Once the download is complete, transfer the file to your AIX system. Log on to your AIX system, go to the directory where you put the file, and run the following command as the root user.
To preview the installation of 0160_ifix.140409.epkg.Z, please do the following:
# emgr -p -e 0160_ifix.140409.epkg.Z
To install the ifix, run the following:
# emgr -X -e 0160_ifix.140409.epkg.Z
If you need to uninstall the iFix for some reason, run the following command as root:
# emgr -r -L 0160_ifix.140409.epkg.Z
The following is more information, updated on June 13, 2014:
IBM has released several new levels of OpenSSL that address both the Heartbleed bug and several other recently identified security vulnerabilities.
We currently recommend downloading OpenSSL 1.0.1.511. This level can be used on AIX 5.3, 6.1 and 7.1. You can find OpenSSL in the IBM Web Download Pack at:
http://www-03.ibm.com/systems/power/software/aix/expansionpack/
Click on Downloads (on the right) and log in with your IBM user ID (or register for one if you don't already have one). Select openssl on the next page and click Continue at the bottom. Click Submit to accept IBM's privacy statement on the next page, and you will be forwarded to a list of possible downloads. Here, click on "Download using http" and select the OpenSSL image openssl-1.0.1.511.tar.Z. You probably also want to review the Readme listed beneath it.
You will download the openssl-1.0.1.511.tar.Z file. Transfer that onto your AIX systems into a separate folder.
Uncompress the file:
# gzip -d openssl-1.0.1.511.tar.Z
Now you will have a tar file.
Un-tar it:
# tar xf openssl-1.0.1.511.tar
That will give you folder openssl-1.0.1.511 within your current folder.
Go into that folder:
# cd openssl-1.0.1.511
Here you can find 3 filesets; run inutoc to generate the .toc file:
# ls
openssl.base openssl.license openssl.man.en_US
# inutoc .
Then install the filesets (this uses installp directly; update_all is the equivalent SMIT fast path):
# installp -acgXYd . all
Now, it should be installed. Before logging out, make sure you can access your system through ssh using a separate window.
For more information, see
http://heartbleed.com. Please ensure your UNIX Health Check level is up to date. Version 14.04.10 and up includes a check for your AIX systems to see if any are impacted by the Heartbleed bug.
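To check whether a given AIX system runs a vulnerable OpenSSL level, the logic above can be sketched in shell. This is a sketch: heartbleed_vulnerable is a made-up helper name, and the lslpp line in the comment shows how the installed level could be obtained on AIX.

```shell
#!/bin/sh
# Decide whether an openssl.base fileset level is Heartbleed-vulnerable.
# Per the article, only the 1.0.1e builds (VRMFs 1.0.1.500 and 1.0.1.501)
# are affected; 0.9.8.x and 12.9.8.x builds are not.
heartbleed_vulnerable() {
    case "$1" in
        1.0.1.500|1.0.1.501) return 0 ;;  # vulnerable
        *)                   return 1 ;;  # not vulnerable
    esac
}

# On AIX, you would feed it the installed level, for example:
#   level=$(lslpp -Lqc openssl.base | cut -d: -f3)
for level in 1.0.1.500 1.0.1.501 1.0.1.511 0.9.8.2504; do
    if heartbleed_vulnerable "$level"; then
        echo "$level: VULNERABLE"
    else
        echo "$level: ok"
    fi
done
```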
The chrctcp command is not documented in AIX, but you can still use it to do nice things, especially when you are scripting. Some examples are:
To enable xntpd in /etc/rc.tcpip, and to start xntpd:
# chrctcp -S -a xntpd
To disable xntpd in /etc/rc.tcpip, and to stop xntpd:
# chrctcp -S -d xntpd
To enable xntpd in /etc/rc.tcpip, but not start xntpd:
# chrctcp -a xntpd
To disable xntpd in /etc/rc.tcpip, but to not stop xntpd:
# chrctcp -d xntpd
So, instead of manually editing /etc/rc.tcpip, you can use chrctcp to enable (uncomment), disable (comment) some services, and start and stop them in a single command.
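Since chrctcp is undocumented, a thin wrapper can make scripts more readable. This is a sketch: rc_service is a made-up helper that only prints the chrctcp command it would run (drop the echo to actually execute it on AIX).

```shell
#!/bin/sh
# Sketch of a wrapper around the (AIX-only, undocumented) chrctcp command.
# action: enable|disable; now: yes|no (whether to also start/stop the daemon)
rc_service() {
    svc=$1 action=$2 now=$3
    flag=-a
    [ "$action" = "disable" ] && flag=-d
    if [ "$now" = "yes" ]; then
        echo chrctcp -S $flag "$svc"   # change /etc/rc.tcpip AND start/stop
    else
        echo chrctcp $flag "$svc"      # only change /etc/rc.tcpip
    fi
}

rc_service xntpd enable yes    # prints: chrctcp -S -a xntpd
rc_service xntpd disable no    # prints: chrctcp -d xntpd
```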
As an AIX admin, you may not always know what switches a certain server is connected to. If you have Cisco switches, here's an interesting method to identify the switch your server is connected to.
First, run ifconfig to look up the interfaces that are in use:
# ifconfig -a | grep en | grep UP | cut -f1 -d:
en0
en4
en8
Okay, so on this system interfaces en0, en4 and en8 are active. If you want to determine which switch en4 is connected to, run this command:
# tcpdump -nn -v -i en4 -s 1500 -c 1 'ether[20:2] == 0x2000'
tcpdump: listening on en4, link-type 1, capture size 1500 bytes
After a while, it will display the following information:
11:40:14.176810 CDP v2, ttl: 180s, checksum: 692 (unverified)
Device-ID (0x01), length: 22 bytes: 'switch1.host.com'
Version String (0x05), length: 263 bytes:
Cisco IOS Software, Catalyst 4500 L3 Switch Software
(cat4500e-IPBASEK9-M), Version 12.2(52)XO, RELEASE SOFTWARE
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2009 by Cisco Systems, Inc.
Compiled Sun 17-May-09 18:51 by prod_rel_team
Platform (0x06), length: 16 bytes: 'cisco WS-C4506-E'
Address (0x02), length: 13 bytes: IPv4 (1) 111.22.33.44
Port-ID (0x03), length: 18 bytes: 'GigabitEthernet2/7'
Capability (0x04), length: 4 bytes: (0x00000029):
Router, L2 Switch, IGMP snooping
VTP Management Domain (0x09), length: 2 bytes: ''
Native VLAN ID (0x0a), length: 2 bytes: 970
Duplex (0x0b), length: 1 byte: full
Management Addresses (0x16), length: 13 bytes: IPv4 (1)
111.22.33.44
unknown field type (0x1a), length: 12 bytes:
0x0000: 0000 0001 0000 0000 ffff ffff
47 packets received by filter
0 packets dropped by kernel
Note here that this will only work on Cisco switches, as it uses the Cisco Discovery Protocol (CDP).
The output above will help you determine that en4 is connected to a network switch called 'switch1.host.com' with IP address '111.22.33.44', on port 'GigabitEthernet2/7' (most likely port 7 on blade 2 of this switch).
If you run the same command on an Etherchanneled interface, keep in mind that it will only display the information of the active interface in the Etherchannel configuration. You may have to fail the Etherchannel over to the backup adapter to determine the switch information for that adapter.
If your LPAR has virtual Ethernet adapters, this will not work (the command will just hang). In that case, run the command on the VIOS.
Also note that you may need to run the command a couple of times, for tcpdump to discover the necessary information.
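Once you have captured a CDP packet, the interesting fields can be pulled out with awk. A minimal sketch: cdp_summary is a made-up helper, and the sample input is taken from the tcpdump output shown above.

```shell
#!/bin/sh
# Extract switch name, management IP and port from tcpdump's CDP decode.
# In real use you would capture the output first, for example:
#   tcpdump -nn -v -i en4 -s 1500 -c 1 'ether[20:2] == 0x2000' > cdp.out
cdp_summary() {
    awk -F"'" '
        /Device-ID/ { dev  = $2 }
        /Port-ID/   { port = $2 }
        /^[[:space:]]*Address .*IPv4/ { n = split($0, a, " "); ip = a[n] }
        END { print dev, ip, port }
    '
}

cdp_summary <<'EOF'
Device-ID (0x01), length: 22 bytes: 'switch1.host.com'
Address (0x02), length: 13 bytes: IPv4 (1) 111.22.33.44
Port-ID (0x03), length: 18 bytes: 'GigabitEthernet2/7'
EOF
# prints: switch1.host.com 111.22.33.44 GigabitEthernet2/7
```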
Another interesting way to use tcpdump is to discover which VLAN a network interface is connected to. For example, if you have two interfaces on an AIX system, you may want to configure them in an Etherchannel, or use one of them as a production interface and the other as a standby interface. In that case, it is important to know that both interfaces are in the same VLAN. Obviously, you can ask your network team to validate this, but it is also good to be able to validate it on the host side. You could also just configure an IP address on the interface and see if it works, but on production systems that may not always be possible.
The trick basically is to run tcpdump on an interface and check what network traffic can be discovered. For example, if you have two network interfaces, like these:
# netstat -ni | grep en[0,1]
en0 1500 link#2 0.21.5e.c0.d0.12 1426632806 0 86513680 0 0
en0 1500 10.27.18 10.27.18.64 1426632806 0 86513680 0 0
en1 1500 link#3 0.21.5e.c0.d0.13 20198022 0 7426576 0 0
en1 1500 10.27.130 10.27.130.10 20198022 0 7426576 0 0
In this case, interface en0 uses IP address 10.27.18.64, and is within the 10.27.18.x subnet. Interface en1 uses IP address 10.27.130.10, and is within the 10.27.130.x subnet (assuming both interfaces use a subnet mask of 255.255.255.0).
Now, if en0 is a production interface and you would like to confirm that en1, the standby interface, can be used to fail the production interface over to, you need to know that both interfaces are in the same VLAN. To determine that for en1, run tcpdump and check whether any network traffic in the 10.27.18 subnet (used by en0) can be seen (press CTRL-C after seeing such traffic to cancel the tcpdump command):
# tcpdump -i en1 -qn net 10.27.18
tcpdump: verbose output suppressed,
use -v or -vv for full protocol decode
listening on en1, link-type 1, capture size 96 bytes
07:27:25.842887 ARP, Request who-has 10.27.18.136
(ff:ff:ff:ff:ff:ff) tell 10.27.18.2, length 46
07:27:25.846134 ARP, Request who-has 10.27.18.135
(ff:ff:ff:ff:ff:ff) tell 10.27.18.2, length 46
07:27:25.917068 IP 10.27.18.2.1985 > 224.0.0.2.1985: UDP, length 20
07:27:25.931376 IP 10.27.18.3.1985 > 224.0.0.2.1985: UDP, length 20
^C
24 packets received by filter
0 packets dropped by kernel
After seeing this, you know for sure that on interface en1, even though it has an IP address in the 10.27.130.x subnet, network traffic for the 10.27.18.x subnet can be seen, and thus that failing the production IP address over from en0 to en1 should work just fine.
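The subnet used in the tcpdump filter can be derived from the production interface's IP address. A minimal sketch, assuming a 255.255.255.0 netmask as in the example above; subnet_filter is a made-up helper.

```shell
#!/bin/sh
# Build the tcpdump filter for the production subnet, assuming a /24
# (255.255.255.0) netmask: take the first three octets of the IP address.
subnet_filter() {
    echo "net $(echo "$1" | cut -d. -f1-3)"
}

prod_ip=10.27.18.64
echo "tcpdump -i en1 -qn $(subnet_filter "$prod_ip")"
# prints: tcpdump -i en1 -qn net 10.27.18
```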
You will encounter them from time to time: files with weird names, containing spaces, escape codes, or uncommon characters. These can often be very difficult to remove.
For example, files with a space at the end:
# touch "a file "
# ls
a file
It's not such a problem if you created the file yourself and you KNOW there is a space at the end. Otherwise, it can be quite difficult to remove:
# rm "a file"
rm: a file: A file or directory in the path name does not exist.
It can be even more ugly if there is a ^M in the file name:
# touch 'a^Mfile'
# ls a*
a
file
# ls file
ls: 0653-341 The file file does not exist.
And it quickly becomes horrible if there are unprintable characters in file names, or a combination of all of the above. Or how about a file called "-rf /"? Would you dare run the command "rm -rf /" on your system, not knowing whether it will wipe out all files, or just remove the file named "-rf /"?
So, if you have a file with an awkward filename, or simply don't know the name of a file because it contains unprintable characters, escape codes, slashes, spaces or tabs, how do you safely remove it?
Well, you can remove files by inode. First, discover the inode of a file:
# ls -alsi
12294 0 -rw-r--r-- 1 root system 0 May 07 15:38 a file
In the example above, the inode number is 12294. Then simply remove the file using the find command with its -exec option:
# find . -inum 12294 -ls
12294 0 -rw-r--r-- 1 root system 0 May 7 15:38 ./a file
# find . -inum 12294 -exec rm {} \;
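The two find commands can be combined into a small helper. A sketch: rm_by_inode is a made-up name, and -xdev keeps find from crossing into other filesystems, since inode numbers are only unique per filesystem.

```shell
#!/bin/sh
# Remove a file in the current directory tree by its inode number.
# -xdev: stay on this filesystem, because inode numbers are only
# unique within a single filesystem.
rm_by_inode() {
    find . -xdev -inum "$1" -exec rm {} \;
}

# Example: create a file with a trailing space, look up its inode, remove it.
touch "a file "
inode=$(ls -i "a file " | awk '{print $1}')
rm_by_inode "$inode"
ls "a file " 2>/dev/null || echo "file removed"
# prints: file removed
```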
A core dump is the process by which the current state of a program is preserved in a file before a program is ended because of an unexpected error.
Core dumps are usually associated with programs that have encountered an unexpected, system-detected fault, such as a segmentation fault or a severe user error. An application programmer can use the core dump to diagnose and correct the problem. The core files are binary files that are specific to the AIX operating system.
To generate a core file of a running program, you can use the gencore command. Before you do so, make sure that the system is set to allow applications to generate full core files. By default this is disabled, to avoid applications quickly filling up file systems.
# lsattr -E -l sys0 -a fullcore
fullcore false Enable full CORE dump True
# chdev -l sys0 -a fullcore=true
sys0 changed
Also check your ulimits to ensure that the user is allowed to generate large files, and check the available space in the file system where you want to write the core file.
Next, generate the core file of a running program, for example of a process with ID 65274068. Note that the gencore command creates a core file without terminating the process.
# gencore 65274068 /tmp/core_65274068
Once the core file has been generated, be sure to set fullcore back to false:
# chdev -l sys0 -a fullcore=false
sys0 changed
# lsattr -E -l sys0 -a fullcore
fullcore false Enable full CORE dump True
Now you can use the snapcore command to gather the core file, the program, and any libraries used by the program into one pax archive, which can be sent to a vendor for further analysis. Using the -d option of the snapcore command you specify where the archive will be written.
# file core_65274068
core_65274068: AIX core file fulldump 64-bit, user
# snapcore -d /tmp core_65274068 /path/to/the/program
Core file "core_65274068" created by "user"
pass1() in progress ....
Calculating space required .
Total space required is 4605936 kbytes ..
Checking for available space ...
Available space is 33787748 kbytes
pass1 complete.
pass2() in progress ....
Collecting fileset information .
Collecting error report of CORE_DUMP errors ..
Creating readme file ..
Creating archive file ...
Compressing archive file ....
pass2 completed.
Snapcore completed successfully. Archive created in /tmp.
Check the file:
# ls -l
-rw-rw-rw- 1 root system 12183573 Mar 22 08:50 core_65274068
-rw-r--r-- 1 root system 12594032 Mar 22 08:50 snapcore_663646.pax.Z
# file snapcore_663646.pax.Z
snapcore_663646.pax.Z: compressed data block compressed 16 bit
The resulting snapcore file can then be sent to Technical Support, where it can be uncompressed and untarred (tar can read pax archives).
Core files have the habit of being scattered all over the server, depending on which processes are running, what their working directories are, and which of them dump a core file. That is often very annoying, and you may have to use the find command to locate all the core files and clean them up.
There is a way to create a centralized repository for your core files, using some not very well known user settings.
First, create a location where you can store core files, for example, create a file system /corefiles with plenty of space:
# df -g /corefiles
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/corefilelv 19.92 8.56 58% 94 1% /corefiles
Now, change the default core settings to point to this location:
# chsec -f /etc/security/user -s default -a core_path=on -a core_pathname=/corefiles -a core_compress=on -a core_naming=on
The command above changes the default settings for the core files - for all users. You can obviously do the same for individual users instead, just change "default" to whatever user you want to set this for.
The four options in the chsec command above are:
- core_compress - Enables or disables core file compression. Valid values for this attribute are On and Off. If this attribute has a value of On, compression is enabled; otherwise, compression is disabled. The default value of this attribute is Off. This will help you save disk space.
- core_path - Enables or disables core file path specification. Valid values for this attribute are On and Off. If this attribute has a value of On, core files are placed in the directory specified by core_pathname (the feature is enabled); otherwise, core files are placed in the user's current working directory. The default value of this attribute is Off. You'll need to set this if you wish to specify a specific directory to store core files.
- core_pathname - Specifies a location to be used to place core files, if the core_path attribute is set to On. If this is not set and core_path is set to On, core files are placed in the user's current working directory. This attribute is limited to 256 characters. This is where you specify the directory to store core files.
- core_naming - Selects a core file naming strategy. Valid values for this attribute are On and Off. A value of On enables core file naming in the form core.pid.time, which is the same as what the CORE_NAMING environment variable does. A value of Off uses the default name of core. This will create core files with a name in the form core.pid.time, where pid is the process ID and time is ddhhmmss (dd = day of the month, hh = hours, mm = minutes, ss = seconds). You can leave out this option and instead set the environment variable CORE_NAMING to true.
Doing so, and after restarting any applications (or the whole server), your core files should now all be stored in /corefiles; that is, if you have any processes that generate core files, of course.
Note: The same can be achieved with the chcore command:
# chcore -c on -p on -l /corefiles -n on -d
Validate the settings as follows:
# grep -p default /etc/security/user | grep core
core_compress = on
core_path = on
core_naming = on
core_pathname = /corefiles
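The grep -p flag used above is AIX-specific. If you want the same check in portable shell (for example from a monitoring script), awk's paragraph mode can select the default stanza. A sketch: core_defaults is a made-up helper, and the sample stanza below is purely illustrative.

```shell
#!/bin/sh
# Portable equivalent of "grep -p default /etc/security/user | grep core":
# awk with RS= runs in paragraph mode, so /^default:/ selects the whole
# default stanza; grep then keeps only the core_* attributes.
core_defaults() {
    awk -v RS= '/^default:/' "$1" | grep core_
}

f=$(mktemp)
cat > "$f" <<'EOF'
default:
        core_compress = on
        core_path = on
        core_naming = on
        core_pathname = /corefiles

root:
        login = true
EOF
core_defaults "$f"   # prints the four core_* lines from the default stanza
rm -f "$f"
```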
On Linux, you may sometimes run into an issue where you can't change the permissions of a file, even though you're root and have access. For example:
# ls -asl authorized_keys
8 -rw------- 1 root root 6325 Sep 17 02:48 authorized_keys
# chmod 700 authorized_keys
chmod: changing permissions of `authorized_keys': Operation not
permitted
# whoami
root
This is usually caused by extended file system attributes, especially if the e2fsprogs package is installed. Two commands that come in handy here are /usr/bin/chattr and /usr/bin/lsattr.
The most common attributes are:
- A - When the file is accessed, its atime record is not modified. This avoids a certain amount of disk I/O.
- a - When this file is opened, it is opened in append only mode for writing.
- i - This file cannot be modified, renamed or deleted.
For example:
# lsattr authorized_keys
----i-------- authorized_keys
This shows that the immutable flag (i) is in place on the file, and thus the reason why the file can't be modified. To remove it, use chattr:
# chattr -i authorized_keys
# lsattr authorized_keys
------------- authorized_keys
Now any command that modifies the file will work:
# chmod 700 authorized_keys
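In a script, you may want to detect the immutable bit before attempting a change. A sketch: has_immutable is a made-up helper that parses a single line of lsattr output rather than calling lsattr itself.

```shell
#!/bin/sh
# Detect the immutable (i) flag in a line of "lsattr file" output.
has_immutable() {
    flags=${1%% *}          # first field, e.g. ----i--------
    case "$flags" in
        *i*) return 0 ;;    # immutable flag present
        *)   return 1 ;;
    esac
}

line='----i-------- authorized_keys'
if has_immutable "$line"; then
    echo "immutable flag set: run chattr -i first"
fi
# prints: immutable flag set: run chattr -i first
```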
Virtual clients running on an IVM (Integrated Virtualization Manager) have neither a directly attached serial console nor a virtual window that can be opened via an HMC. So how do you access the console?
You can log on to the VIOS that is serving the client whose console you want to access. Just log on to the VIOS and switch to the padmin user:
# su - padmin
Then run the lssyscfg command to list the available LPARs and their IDs on this VIOS:
# lssyscfg -r lpar -F name,lpar_id
Alternatively, you can log on to the IVM using a web browser and click on "View/Modify Partitions", which will also show the LPAR names and their IDs.
Use the ID of the LPAR you wish to access:
# mkvt -id [lparid]
This should open a console to the LPAR. If you receive the message "Virtual terminal is already connected", the session is already in use. If you are sure no one else is using it, you can use the rmvt command to force the session to close.
# rmvt -id [lparid]
After that you can try the mkvt command again.
When finished, log off and type "~." (tilde dot) to end the session. Sometimes this also closes the session to the VIOS itself, and you may need to log on to the VIOS again.
Not very well known is the machstat command in AIX, which can be used to display the status of the Power Status Register, and which can thus be helpful in identifying issues with either power or cooling.
# machstat -f
0 0 0
If it returns all zeroes, everything is fine. Anything else is not good. The first digit (the so-called EPOW event) indicates the type of problem:
| EPOW Event | Description |
| --- | --- |
| 0 | normal operation |
| 1 | non-critical cooling problem |
| 2 | non-critical power problem |
| 3 | severe power problem - halt system |
| 4 | severe problems - halt immediately |
| 5 | unhandled issue |
| 7 | unhandled issue |
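For monitoring, the table above can be turned into a small lookup. A sketch: epow_text is a made-up helper, and on AIX you would feed it the first field of the machstat -f output.

```shell
#!/bin/sh
# Map the first machstat -f digit (the EPOW event) to a description,
# following the table above. On AIX:  set -- $(machstat -f); epow_text "$1"
epow_text() {
    case "$1" in
        0)   echo "normal operation" ;;
        1)   echo "non-critical cooling problem" ;;
        2)   echo "non-critical power problem" ;;
        3)   echo "severe power problem - halt system" ;;
        4)   echo "severe problems - halt immediately" ;;
        5|7) echo "unhandled issue" ;;
        *)   echo "unknown EPOW event: $1" ;;
    esac
}

epow_text 0   # prints: normal operation
epow_text 3   # prints: severe power problem - halt system
```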
Another way to determine whether the system may have a power or cooling issue is by looking at an entry in the root user's crontab:
# crontab -l root | grep -i powerfail
0 00,12 * * * wall%rc.powerfail:2::WARNING!!! The system is now operating with a power problem. This message will be walled every 12 hours. Remove this crontab entry after the problem is resolved.
If a powerfail message is present in the crontab of user root, this may indicate that there is an issue to be looked into. Contact your IBM representative to check the system out. Afterwards, make sure to remove the powerfail entry from the root user's crontab.
Want to know which LVM commands were run on a system? Simply run the following command to get the LVM command history:
# alog -o -t lvmcfg
To filter out only the actual commands:
# alog -o -t lvmcfg | grep -v -E "workdir|exited|tellclvmd"
[S 06/11/13-16:52:02:236 lvmstat.c 468] lvmstat -v testvg
[S 06/11/13-16:52:02:637 lvmstat.c 468] lvmstat -v rootvg
[S 07/20/13-15:02:15:076 extendlv.sh 789] extendlv testlv 400
[S 07/20/13-15:02:33:199 chlv.sh 527] chlv -x 4096 testlv
[S 08/22/13-12:29:16:807 chlv.sh 527] chlv -e x testlv
[S 08/22/13-12:29:26:150 chlv.sh 527] chlv -e x fslv00
[S 08/22/13-12:29:46:009 chlv.sh 527] chlv -e x loglv00
[S 08/22/13-12:30:55:843 reorgvg.sh 590] reorgvg
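If you only want the commands themselves, without the timestamp prefix, a small sed filter does the trick. A sketch: lvm_commands is a made-up helper, fed here with one sample line from the alog output above.

```shell
#!/bin/sh
# Strip the "[S date source line]" prefix that alog -t lvmcfg puts in
# front of each logged LVM command, leaving just the command.
lvm_commands() {
    sed 's/^\[[^]]*\] //'
}

echo '[S 07/20/13-15:02:15:076 extendlv.sh 789] extendlv testlv 400' | lvm_commands
# prints: extendlv testlv 400
```

On AIX you would pipe the real log through it, for example: alog -o -t lvmcfg | grep -v -E "workdir|exited|tellclvmd" | lvm_commands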
The information for this LVM command history is stored in /var/adm/ras/lvmcfg.log, which is a circular log. You can check its location, size and verbosity by running:
# alog -t lvmcfg -L
#file:size:verbosity
/var/adm/ras/lvmcfg.log:51200:3
More detail can also be found in the lvmt log, by running:
# alog -t lvmt -o