Topics: AIX, System Admin

Export and import PuTTY sessions

PuTTY itself does not provide a means to export the list of sessions, nor a way to import the sessions from another computer. However, it is not so difficult, once you know that PuTTY stores the session information in the Windows Registry.

To export the PuTTY sessions, run:

regedit /e "%userprofile%\desktop\putty-sessions.reg" HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions
Or, to export all PuTTY settings (and not just the sessions), run:
regedit /e "%userprofile%\desktop\putty.reg" HKEY_CURRENT_USER\Software\SimonTatham
This will create either a putty-sessions.reg or putty.reg file on your Windows desktop. You can transfer these files over to another computer, and after installing PuTTY on the other computer, simply double-click on the reg file to have the Windows Registry entries added. Then, if you start up PuTTY, all the session information should be there.
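The exported file is a plain-text registry dump that you can inspect before importing. As an illustration, a session entry looks roughly like this (the session name "myserver" and its values below are made-up examples; the actual contents depend on your saved sessions):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions\myserver]
"HostName"="myserver.example.com"
"PortNumber"=dword:00000016
"Protocol"="ssh"
```

Note that registry dword values are hexadecimal: dword:00000016 is port 22.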

Topics: AIX, Storage, System Admin

Identifying a Disk Bottleneck Using filemon

This blog post describes the steps required to identify an I/O problem in the storage area network and/or disk arrays on AIX.

Note: Do not execute filemon with AIX 6.1 Technology Level 6 Service Pack 1 if WebSphere MQ is running. WebSphere MQ will abnormally terminate with this AIX release.

Running filemon: As a rule of thumb, a write to a cached fiber attached disk array should average less than 2.5 ms and a read from a cached fiber attached disk array should average less than 15 ms. To confirm the responsiveness of the storage area network and disk array, filemon can be utilized. The following example will collect statistics for a 90 second interval.

# filemon -PT 268435184 -O pv,detailed -o /tmp/filemon.rpt;sleep 90;trcstop

Run trcstop command to signal end of trace.
Tue Sep 15 13:42:12 2015
System: AIX 6.1 Node: hostname Machine: 0000868CF300
[filemon: Reporting started]
# [filemon: Reporting completed]

[filemon: 90.027 secs in measured interval]
Then, review the generated report (/tmp/filemon.rpt).
# more /tmp/filemon.rpt
.
.
.
------------------------------------------------------------------------
Detailed Physical Volume Stats   (512 byte blocks)
------------------------------------------------------------------------

VOLUME: /dev/hdisk11  description: XP MPIO Disk P9500   (Fibre)
reads:                  437296  (0 errs)
  read sizes (blks):    avg     8.0 min       8 max       8 sdev     0.0
  read times (msec):    avg   11.111 min   0.122 max  75.429 sdev   0.347
  read sequences:       1
  read seq. lengths:    avg 3498368.0 min 3498368 max 3498368 sdev     0.0
seeks:                  1       (0.0%)
  seek dist (blks):     init 3067240
  seek dist (%tot blks):init 4.87525
time to next req(msec): avg   0.206 min   0.018 max 461.074 sdev   1.736
throughput:             19429.5 KB/sec
utilization:            0.77

VOLUME: /dev/hdisk12  description: XP MPIO Disk P9500   (Fibre)
writes:                 434036  (0 errs)
  write sizes (blks):   avg     8.1 min       8 max      56 sdev     1.4
  write times (msec):   avg   2.222 min   0.159 max  79.639 sdev   0.915
  write sequences:      1
  write seq. lengths:   avg 3498344.0 min 3498344 max 3498344 sdev     0.0
seeks:                  1       (0.0%)
  seek dist (blks):     init 3067216
  seek dist (%tot blks):init 4.87521
time to next req(msec): avg   0.206 min   0.005 max 536.330 sdev   1.875
throughput:             19429.3 KB/sec
utilization:            0.72
.
.
.
In the above report, hdisk11 was the busiest disk on the system during the 90 second sample. The reads from hdisk11 averaged 11.111 ms. Since this is less than 15 ms, the storage area network and disk array were performing within scope for reads.

Also, hdisk12 was the second busiest disk on the system during the 90 second sample. The writes to hdisk12 averaged 2.222 ms. Since this is less than 2.5 ms, the storage area network and disk array were performing within scope for writes.
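Checking every disk in a long filemon report by hand is tedious, so the threshold test described above can be scripted. Here is a minimal sketch (the function name is made up; it assumes the report layout shown above, and the 15 ms / 2.5 ms rules of thumb from this article):

```shell
# Flag volumes in a filemon report whose average read time exceeds
# 15 ms, or whose average write time exceeds 2.5 ms.
check_filemon_report() {
  awk '
    /^VOLUME:/              { vol = $2 }
    /read times \(msec\):/  { if ($5 + 0 > 15)  print vol ": avg read " $5 " ms exceeds 15 ms" }
    /write times \(msec\):/ { if ($5 + 0 > 2.5) print vol ": avg write " $5 " ms exceeds 2.5 ms" }
  ' "$1"
}
```

For example, run check_filemon_report /tmp/filemon.rpt; no output means all volumes were within scope.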

Other methods to measure similar information:

You can use the topas command with the -D option to get an overview of the busiest disks on the system:
# topas -D
In the output, columns ART and AWT provide similar information. ART stands for the average time to receive a response from the hosting server for the read request sent. And AWT stands for the average time to receive a response from the hosting server for the write request sent.

You can also use the iostat command, using the -D (for drive utilization) and -l (for long listing mode) options:
# iostat -Dl 60
This will provide an overview over a 60 second period of your disks. The "avg serv" column under the read and write sections will provide you average service times for reads and writes for each disk.

An occasional peak value recorded on a system doesn't immediately mean there is a disk bottleneck. It requires longer periods of monitoring to determine if a certain disk is indeed a bottleneck for your system.

Topics: AIX, System Admin

Commands to create printer queues

Here are some commands to add a printer to an AIX system. Let's assume that the hostname of the printer is "printer", and that you've added an entry for this "printer" in /etc/hosts, or that you've added it to DNS, so it can be resolved to an IP address. Let's also assume that the queue you wish to make will be called "printerq", and that your printer can communicate on port 9100.

In that case, to create a generic printer queue, the command will be:

# /usr/lib/lpd/pio/etc/piomkjetd mkpq_jetdirect -p 'generic' -D asc \
-q 'printerq' -h 'printer' -x '9100'

In case you wish to set it up as a postscript printer, called "printerqps", then the command will be:
# /usr/lib/lpd/pio/etc/piomkjetd mkpq_jetdirect -p 'generic' -D ps \
-q 'printerqps' -h 'printer' -x '9100'
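Before creating a queue, it is worth confirming that the printer's hostname actually resolves, since the assumption above is that "printer" is in /etc/hosts or DNS. A small sketch (the helper name is made up; the host command may not be installed everywhere, hence the getent fallback):

```shell
# Check that a printer hostname resolves before creating a queue
# for it; prints a message and returns non-zero if it does not.
check_printer_host() {
  if host "$1" >/dev/null 2>&1 || getent hosts "$1" >/dev/null 2>&1; then
    echo "$1 resolves"
  else
    echo "$1 does not resolve" >&2
    return 1
  fi
}
```

For example: check_printer_host printer.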

Topics: AIX, Monitoring, Networking, Red Hat / Linux, Security, System Admin

Determining type of system remotely

If you run into a system that you can't access, but that is available on the network, and you have no idea what type of system it is, then there are a few tricks you can use to determine the type of system remotely.

The first one, is by looking at the TTL (Time To Live), when doing a ping to the system's IP address. For example, a ping to an AIX system may look like this:

# ping 10.11.12.82
PING 10.11.12.82 (10.11.12.82) 56(84) bytes of data.
64 bytes from 10.11.12.82 (10.11.12.82): icmp_seq=1 ttl=253 time=0.394 ms
...
TTL (Time To Live) is a value included in packets sent over networks that tells the recipient how long to hold or use the packet before discarding it. Initial TTL values differ between operating systems, so you can often determine the OS based on the TTL value in the ping replies. A detailed list of operating systems and their TTL values can be found here. Basically, a UNIX/Linux system uses an initial TTL of 64, Windows uses 128, and AIX/Solaris uses 254.

Now, in the example above, you can see "ttl=253". It's still an AIX system, but there's most likely a router in between, decreasing the TTL by one.
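That reasoning can be captured in a small helper (the function name and messages are made up; it allows for a few router hops below the common initial TTL values):

```shell
# Rough classifier: guess the OS family behind an IP from the TTL
# seen in ping replies (64 = UNIX/Linux, 128 = Windows,
# 254/255 = AIX/Solaris), allowing for routers decreasing the TTL.
guess_os_from_ttl() {
  if [ "$1" -gt 128 ]; then
    echo "AIX/Solaris (initial TTL 254/255)"
  elif [ "$1" -gt 64 ]; then
    echo "Windows (initial TTL 128)"
  else
    echo "UNIX/Linux (initial TTL 64)"
  fi
}
```

For the ping output above, guess_os_from_ttl 253 reports the AIX/Solaris range.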

Another good method is by using nmap. The nmap utility has a -O option that allows for OS detection:
# nmap -O -v 10.11.12.82 | grep OS
Initiating OS detection (try #1) against 10.11.12.82 (10.11.12.82)
OS details: IBM AIX 5.3
OS detection performed.
Okay, so it isn't a perfect method either. We ran the nmap command above against an AIX 7.1 system, and it came back as AIX 5.3 instead. And sometimes, you'll have to run nmap a couple of times, before it successfully discovers the OS type. But still, we now know it's an AIX system behind that IP.

Another option you may use is to query SNMP information. If the device is SNMP-enabled (it is running an SNMP daemon and it allows you to query SNMP information), then you may be able to run a command like this:
# snmpinfo -h 10.11.12.82 -m get -v sysDescr.0
sysDescr.0 = "IBM PowerPC CHRP Computer
Machine Type: 0x0800004c Processor id: 0000962CG400
Base Operating System Runtime AIX version: 06.01.0008.0015
TCP/IP Client Support  version: 06.01.0008.0015"
By the way, the example for SNMP above is exactly why UNIX Health Check generally recommends to disable SNMP, or at least to disallow providing such system information through SNMP, by updating the /etc/snmpdv3.conf file appropriately, because this information can be really useful to hackers. On the other hand, your organization may use monitoring that relies on SNMP, in which case it needs to be enabled. But even then, you still have the opportunity of changing the SNMP community name to something else (the default is "public"), which also limits the remote information gathering possibilities.
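As a sketch of that last suggestion: renaming the community in /etc/snmpdv3.conf comes down to replacing the default "public" entries (the community name "mycommunity" below is made up, and the exact layout of the default lines may differ per AIX level):

```
# /etc/snmpdv3.conf (fragment): rename the default "public" community.
# Both the COMMUNITY entry and the VACM_GROUP entry that references
# its security name need the new name.
VACM_GROUP group1 SNMPv1 mycommunity -
COMMUNITY  mycommunity mycommunity noAuthNoPriv 0.0.0.0 0.0.0.0 -
```

Afterwards, recycle the daemon with "stopsrc -s snmpd" followed by "startsrc -s snmpd" for the change to take effect.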

Topics: AIX, System Admin

Resolving IBM.DRM software errors

If you see several SRC_RSTRT errors in the error report regarding IBM.DRM or IBM.AuditRM, using identifiers CB4A951F or BA431EB7, and detecting module "srchevn.c", then you probably have a system that was cloned in the past from another system, and the RSCT software is still using the keys of the original system.

The solution is this:

# /usr/sbin/rsct/bin/rmcctrl -z 
# /usr/sbin/rsct/bin/rmcctrl -d 
# /usr/sbin/rsct/install/bin/recfgct -s 
# /usr/sbin/rsct/bin/rmcctrl -A 
# /usr/sbin/rsct/bin/rmcctrl -p 
This will generate new keys, and will solve the errors in the error report. Just to make sure, reboot your system, and they should no longer show up in the error report after the reboot.

Topics: Red Hat / Linux, System Admin

RHSM: Too many content sets for certificate

This is how to fix the subscription-manager error "Too many content sets for certificate Red Hat Enterprise Linux Server" by temporarily using RHN, and then reverting back to Red Hat Subscription Management after updating.

Step 1: Clean up the subscription-manager if needed:

# subscription-manager unsubscribe --all
# subscription-manager unregister
# subscription-manager clean
Step 2: Register to Red Hat Network (RHN) using rhn_register:
# rhn_register
Note: You will need your RH login and password to complete the wizard.

Step 3: Validate RHN registration of the system:
# yum repolist
Note: Look at Loaded plugins in the output and "rhnplugin" should be listed.

Step 4: Update the subscription-manager* and python-rhsm* packages:
# yum list updates subscription-manager* python-rhsm*
Note: The output may vary depending on your system and installed packages.

Example output below:
Updated Packages
python-rhsm.x86_64 1.12.5-2.el6 rhel-x86_64-server-6
subscription-manager.x86_64 1.12.14-9.el6_6 rhel-x86_64-server-6
subscription-manager-firstboot.x86_64 1.12.14-9.el6_6 rhel-x86_64-server-6
subscription-manager-gnome.x86_64 0.99.19.4-1.el6_3 rhel-x86_64-server-6
# yum update subscription-manager* python-rhsm*
Note: Answer the questions when prompted. Validate the updates were applied successfully by examining the output.

Step 5: Unregister from RHN in preparation to register with subscription-manager:
  1. In the online Red Hat Portal, login.
  2. Access Subscription Management.
  3. Access RHN Classic Management -> All Registered Systems.
  4. Click on System Entitlements (you need to see check boxes next to systems).
  5. Select the check box next to the system you are working on.
  6. Click the "Unentitle" button at bottom middle of page.
  7. Validate the entitlement has been removed for the system.
  8. Perform the below command on the system's CLI:
    # rm /etc/sysconfig/rhn/systemid
Step 6: Register system with subscription-manager:

Note: Validate that no subscriptions are showing active.
# subscription-manager list --available
Note: A message similar to below should be displayed.
This system is not yet registered. Try 'subscription-manager register --help' for more information.
Register the system using your credentials to RHSM:
# subscription-manager register --username=xxxxxx --password='xxxxxx'
Note: You will need your Red Hat Portal Username and Password for the account the system will be registered under. Make note of the ID under which the system is registered when this command returns.

Validate that the subscription-manager plugin is loaded:
# yum repolist
Look at Loaded plugins in the output where "subscription-manager" should be listed.

Validate that subscriptions are showing available now:
# subscription-manager list --available
Validate that the Subscription Name, SKU, Contract, Account and Pool ID are showing up correctly. Make note of the "Pool ID", which will be required to subscribe in the next step. Then register the system using one of the pools above:
# subscription-manager subscribe --pool='[POOL_ID_Number]'
Note: Where "[POOL_ID_Number]" should be obtained from the preceding task.

Make sure a message stating "Successfully attached a subscription for" the system is shown.

Step 7: Validate that the system is now consuming a subscription:
# subscription-manager list --consumed
Validate the Subscription Name, SKU, Contract, Account and Pool ID are correct.
# subscription-manager list
Note: The Status should show "Subscribed".

Step 8: Validate in Red Hat Portal that the new system shows up as well.

In Red Hat Portal:
  1. In the online Red Hat Portal, login.
  2. Access Subscription Management.
  3. Access Red Hat Subscription Management -> Subscriber Inventory -> Click on Systems.
  4. Examine the Systems inventory to validate the new system is now visible and shows a subscription attached.

Topics: AIX, Red Hat / Linux, Security, System Admin

System-wide separated shell history files for each user and session

Here's how you can set up your /etc/profile in order to create a separate shell history file for each user and each login session. This is very useful when you need to know who exactly ran a specific command at a point in time. For Red Hat Linux, put the updates in either /etc/profile or /etc/bashrc.

Put this in /etc/profile on all servers:

# HISTFILE
# execute only if interactive
if [ -t 0 -a "${SHELL}" != "/bin/bsh" ]
then
 d=`date "+%H%M.%m%d%y"`                                        # time/date stamp
 t=`tty | cut -c6-`                                             # tty name
 u=`who am i | awk '{print $1}'`                                # login account
 w=`who -ms | awk '{print $NF}' | sed "s/(//g" | sed "s/)//g"`  # origin host/IP
 y=`tty | cut -c6- | sed "s/\//-/g"`                            # tty, slashes replaced
 mkdir $HOME/.history.$USER 2>/dev/null
 export HISTFILE=$HOME/.history.$USER/.sh_history.$USER.$u.$w.$y.$d
 find $HOME/.history.$USER/.s* -type f -ctime +91 -exec rm {} \; 2>/dev/null

 H=`uname -n | cut -f1 -d'.'`
 mywhoami=`whoami`
 if [ ${mywhoami} = "root" ] ; then
  PS1='${USER}@(${H}) ${PWD##/*/} # '
 else
  PS1='${USER}@(${H}) ${PWD##/*/} $ '
 fi
fi

# Time out after 60 minutes
# Use readonly if you don't want users to be able to change it.
# readonly TMOUT=3600
TMOUT=3600
export TMOUT
When using ksh, put this in /etc/environment, to turn on time stamped history files:
# Added for extended shell history
EXTENDED_HISTORY=ON
When using bash, put this in /etc/bashrc, to enable time-stamped output when running the "history" command:
HISTTIMEFORMAT='%F %T '; export HISTTIMEFORMAT
This way, *every* user on the system will have a separate shell history in the .history directory of their home directory. Each shell history file name shows you which account was used to login, which account was switched to, on which tty this happened, and at what date and time this happened.

Shell history files are also time-stamped internally. For AIX, you can run "fc -t" to show the shell history time-stamped. For Red Hat, you can run: "history". Old shell history files are cleaned up after 3 months, because of the find command in the example above. Plus, user accounts will log out automatically after 60 minutes (3600 seconds) of inactivity, by setting the TMOUT variable to 3600. You can avoid running into a time-out by simply typing "read" or "\" followed by ENTER on the command line, or by adding "TMOUT=0" to a user's .profile, which essentially disables the time-out for that particular user.

One issue that you may now run into on AIX is that, because a separate history file is created for each login session, it becomes difficult to run "fc -t": the fc command will only list the commands from the current session, and not those written to a different history file. To overcome this issue, you can set the HISTFILE variable to the file you want to run "fc -t" for:
# export HISTFILE=.sh_history.root.user.10.190.41.116.pts-4.1706.120210
Then, to list the commands in this history file, start a new shell (so ksh picks up the HISTFILE variable), and run the "fc -t" command in it:
# ksh
# fc -t -10
This will list the last 10 commands for that history file.
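With one history file per session, it helps to first see which files exist before picking one to export as HISTFILE. A minimal helper sketch (the function name is made up; it follows the naming scheme from the /etc/profile snippet above):

```shell
# List this user's per-session history files, oldest first; each
# file name encodes the login account, the switched-to account,
# the origin host/IP, the tty, and a timestamp.
list_session_histories() {
  ls -rt "$HOME/.history.$USER"/.sh_history.* 2>/dev/null || true
}
```

Pick a file from the listing, export it as HISTFILE, and inspect it as shown above.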

Topics: Monitoring, PowerHA / HACMP

Cluster status webpage

How do you monitor multiple HACMP/PowerHA clusters? You're probably familiar with the clstat or xclstat commands. These are nice, but not sufficient when you have many clusters, as clstat can't be configured to monitor more than 8 clusters. It's also difficult to get an overview of ALL clusters in a SINGLE look with clstat. IBM included a clstat.cgi in HACMP 5 to show the cluster status on a webpage. This still doesn't provide an overview in a single look, as clstat.cgi shows a long listing of all clusters, and, just like clstat, it is limited to monitoring only 8 clusters.

The HACMP/PowerHA cluster status can be retrieved via SNMP (this is actually what clstat does too). Using the IP addresses of a cluster and the snmpinfo command, you can remotely retrieve cluster status information, and use that information to build a webpage. We've written a script for this purpose. By using colors for the status of the clusters and the nodes (green = ok, yellow = something is happening, red = error), you can get a quick overview of the status of all the HACMP/PowerHA clusters.


Per cluster you can see: the cluster name, the cluster ID, HACMP version and the status of the cluster and all its nodes. It will also show you where any resource groups are active.

You can download the script here. This is version 1.6. Untar the file that you download. There is a README in the package that will tell you how you can configure the script. This script has been tested with HACMP versions 4, 5, and 6, and up to PowerHA version 7.1.3.4.

Topics: Red Hat / Linux, System Admin

Install GNOME GUI on RHEL 7 Linux Server

If you have performed a RHEL 7 Linux Server installation and did not include the Graphical User Interface (GUI), you can add it later directly from the command line, using the yum command and selecting an appropriate installation group. To list all available installation groups on Red Hat 7 Linux, use:

# yum group list
From the above list, select the "Server with GUI" installation group:
# yum groupinstall 'Server with GUI'
Since the GNOME desktop environment is the default GUI on a RHEL 7 Linux system, the above command will install GNOME. Alternatively, you can run the below command to only install core GNOME packages:
# yum groupinstall 'X Window System' 'GNOME'
Once the installation is finished, you need to change the system's runlevel to runlevel 5. Changing the runlevel on RHEL 7 is done using the systemctl command. The below command will change from runlevel 3 to runlevel 5 on RHEL 7:
# systemctl enable graphical.target --force
Depending on your previous installations, you may need to accept the Red Hat License after you reboot your system. Once you have booted into your system, you can check the GNOME version using:
# gnome-shell --version
Source: http://linuxconfig.org/install-gnome-gui-on-rhel-7-linux-server.

Topics: Red Hat / Linux, System Admin

How to create Local Repositories in RHEL

This is a short procedure that will tell you how to set up a local repository (repo) for use by the yum command, to install packages from onto your system. In this procedure, we assume you have the RHEL installation DVD inserted into your virtual or physical drive.

Mount the drive:

# mkdir /cdrom
# mount /dev/cdrom /cdrom
Then create the repo file in /etc/yum.repos.d, called local.repo:
# cd /etc/yum.repos.d
# vi local.repo
[local]
name=Local Repo
baseurl=file:///cdrom
enabled=1
gpgcheck=0
protect=1
From now on you can use this local repository to install software, such as wireshark:
# yum install wireshark
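The steps above can be wrapped in a small sketch that writes the repo file, with the target directory and mount point as parameters (the function name is made up):

```shell
# Write a yum .repo file pointing at a locally mounted DVD.
# $1 = repo directory (normally /etc/yum.repos.d)
# $2 = mount point of the installation DVD (e.g. /cdrom)
make_local_repo() {
  cat > "$1/local.repo" <<EOF
[local]
name=Local Repo
baseurl=file://$2
enabled=1
gpgcheck=0
protect=1
EOF
}
```

For example: make_local_repo /etc/yum.repos.d /cdrom.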
