You may be familiar with suspending a process that is running in the foreground by pressing CTRL-Z. This suspends the process until you type "fg", at which point it resumes.
# sleep 400
After pressing CTRL-Z, you'll see:
[1] + Stopped (SIGTSTP) sleep 400
Then type "fg" to resume the process:
# fg
sleep 400
But what if you wish to suspend a process that is not attached to a terminal and is running in the background? This is where the kill command is useful. On AIX, signal 17 (SIGSTOP) suspends a process, and signal 19 (SIGCONT) resumes it. Note that these numeric values are AIX-specific; other platforms assign different numbers to these signals.
This is how it works: First look up the process ID you wish to suspend:
# sleep 400 &
[1] 8913102
# ps -ef | grep sleep
root 8913102 10092788 0 07:10:30 pts/1 0:00 sleep 400
root 14680240 10092788 0 07:10:34 pts/1 0:00 grep sleep
Then suspend the process with signal 17:
# kill -17 8913102
[1] + Stopped (SIGSTOP) sleep 400 &
To resume it again, send signal 19:
# kill -19 8913102
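Because the numeric signal values differ between platforms (17 is SIGSTOP on AIX but SIGCHLD on Linux), scripts are safer using the signal names. A minimal sketch, using a background sleep as the test subject:

```shell
# Suspend and resume a background process by signal name instead of
# number. The names STOP and CONT are portable; the numeric values
# (17/19 on AIX) are not.
sleep 400 &
pid=$!

kill -s STOP "$pid" && echo "suspended"
kill -s CONT "$pid" && echo "resumed"

kill "$pid"    # clean up the demo process
```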
The $RANDOM built-in of the Korn shell can be very useful. It generates random numbers in the range 0 through 32767, and every reference yields a new value:
# echo $RANDOM
19962
# echo $RANDOM
19360
The $RANDOM Korn shell built-in can also be used to generate numbers within a certain range, for example, if you want to run the sleep command for a random number of seconds.
To sleep between 1 and 600 seconds (up to 10 minutes):
# sleep $((RANDOM%600+1))
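The same modulo-plus-offset trick generalizes to any range. A small sketch (the bounds 10 and 20 are arbitrary examples; note that modulo introduces a slight bias toward the low end of the range, which rarely matters for scripting purposes):

```shell
# Generate a random integer between LOW and HIGH inclusive, using
# the ksh/bash built-in RANDOM (which itself yields 0-32767).
LOW=10
HIGH=20
value=$(( RANDOM % (HIGH - LOW + 1) + LOW ))
echo "$value"
```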
To know quickly how many virtual processors are active, run:
# echo vpm | kdb
For example:
# echo vpm | kdb
...
VSD Thread State.
CPU VP_STATE SLEEP_STATE PROD_TIME: SECS NSECS CEDE_LAT
0 ACTIVE AWAKE 0000000000000000 00000000 00
1 ACTIVE AWAKE 0000000000000000 00000000 00
2 ACTIVE AWAKE 0000000000000000 00000000 00
3 ACTIVE AWAKE 0000000000000000 00000000 00
4 DISABLED AWAKE 00000000503536C7 261137E1 00
5 DISABLED SLEEPING 0000000051609EAF 036D61DC 02
6 DISABLED SLEEPING 0000000051609E64 036D6299 02
7 DISABLED SLEEPING 0000000051609E73 036D6224 02
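To reduce that output to a single number, you can count the ACTIVE lines. The sketch below runs against a captured sample of the vpm output; on a live AIX system you would pipe `echo vpm | kdb` into the grep instead of using the heredoc:

```shell
# Count active virtual processors in (captured) vpm output.
# On AIX, the live equivalent would be:
#   echo vpm | kdb | grep -c ' ACTIVE '
grep -c ' ACTIVE ' <<'EOF'
  0     ACTIVE    AWAKE       0000000000000000  00000000  00
  1     ACTIVE    AWAKE       0000000000000000  00000000  00
  2     ACTIVE    AWAKE       0000000000000000  00000000  00
  3     ACTIVE    AWAKE       0000000000000000  00000000  00
  4     DISABLED  AWAKE       00000000503536C7  261137E1  00
  5     DISABLED  SLEEPING    0000000051609EAF  036D61DC  02
EOF
```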
There are two ways to read the diagnostics log file, located in /var/adm/ras/diag:
The first option uses the diag tool. Run:
# diag
Then hit ENTER and select "Task Selection", followed by "Display Previous Diagnostic Results" and "Display Previous Results".
The second option is to use diagrpt. Run:
# /usr/lpp/diagnostics/bin/diagrpt -s 010101
To display only the last entry, run:
# /usr/lpp/diagnostics/bin/diagrpt -o
To create a system backup of a Virtual I/O Server (VIOS), run the following commands (as user root):
# /usr/ios/cli/ioscli viosbr -backup -file vios_config_bkup \
-frequency daily -numfiles 10
# /usr/ios/cli/ioscli backupios -nomedialib -file /mksysb/$(hostname).mksysb -mksysb
The first command (viosbr) will create a backup of the configuration information to /home/padmin/cfgbackups. It will also schedule the command to run every day, and keep up to 10 files in /home/padmin/cfgbackups.
The second command is the mksysb equivalent for a Virtual I/O Server: backupios. This command will create the mksysb image in the /mksysb folder, and exclude any ISO repository in rootvg, as well as anything else excluded in /etc/exclude.rootvg.
It is useful to run the following commands before you create your (at least) weekly mksysb image:
# lsvg -o | xargs -i mkvgdata {}
# tar -cvf /sysadm/vgdata.tar /tmp/vgdata
Add these commands to your mksysb script, just before running the mksysb command. They run the mkvgdata command for each online volume group, which generates information about that volume group in /tmp/vgdata. The resulting output is then tar'd and stored in the /sysadm folder or file system. This allows information regarding your volume groups, logical volumes, and file systems to be included in your mksysb image.
To recreate the volume groups, logical volumes and file systems:
- Run:
# tar -xvf /sysadm/vgdata.tar
- Now edit /tmp/vgdata/{volume group name}/{volume group name}.data file and look for the line with "VG_SOURCE_DISK_LIST=". Change the line to have the hdisks, vpaths or hdiskpowers as needed.
- Run:
# restvg -r -d /tmp/vgdata/{volume group name}/{volume group name}.data
Make sure to remove the file systems with the rmfs command before running restvg, or it will not run correctly. Alternatively, run restvg once, run the exportvg command for the same volume group, and then run restvg again. There is also a "-s" flag for restvg that shrinks each file system to the minimum size needed; however, depending on when the vgdata was created, you could run out of space when restoring the contents of the file systems. Just something to keep in mind.
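The edit of the .data file can also be scripted. A hedged sketch of rewriting the VG_SOURCE_DISK_LIST line with sed; the file path and disk names here are made-up placeholders, so substitute the real /tmp/vgdata/{volume group name}/{volume group name}.data path and the disks present on the target system:

```shell
# Rewrite the source-disk list in a vgdata .data file before restvg.
# Placeholder file stands in for a real /tmp/vgdata/<vg>/<vg>.data.
datafile=/tmp/demo_vg.data
printf 'VG_SOURCE_DISK_LIST= hdisk1\n' > "$datafile"

# Replace the disk list with the disks available on this system.
sed 's/^VG_SOURCE_DISK_LIST=.*/VG_SOURCE_DISK_LIST= hdisk4 hdisk5/' \
    "$datafile" > "$datafile.new" && mv "$datafile.new" "$datafile"

cat "$datafile"
```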
Here's a quick way to remove all the printer queues from an AIX system:
/usr/lib/lpd/pio/etc/piolsvp -p | grep -v PRINTER | \
while read queue device rest ; do
echo $queue $device
rmquedev -q$queue -d$device
rmque -q$queue
done
What if you want to get the 7th line of a text file? For example, you can get the 7th line of the /etc/hosts file by using the head and tail commands, like this:
# head -7 /etc/hosts | tail -1
# Licensed Materials - Property of IBM
An even easier way is to use sed:
# sed -n 7p /etc/hosts
# Licensed Materials - Property of IBM
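awk can do the same, and telling it to exit after printing means a large file is not read to the end. A small self-contained sketch (the demo file and its contents are made up for illustration):

```shell
# Print only the 7th line of a file; 'exit' stops reading after it.
printf 'one\ntwo\nthree\nfour\nfive\nsix\nseven\neight\n' > /tmp/demo.txt
awk 'NR == 7 { print; exit }' /tmp/demo.txt
```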
Once you've successfully set up live partition mobility on a couple of servers, you may want to script the live partition mobility migrations, and at that time, you'll need the commands to perform this task on the HMC.
In the example below, we assume you have multiple managed systems, managed through one HMC. Without multiple managed systems, it would be difficult to move an LPAR from one managed system to another.
First of all, to watch the state of the LPAR that is to be migrated, you may want to start the nworms program, a small program that displays wriggling worms along with the serial number on your display. This allows you to see the serial number of the managed system that the LPAR is running on. The worms also change color as soon as the LPM migration has completed.
For example, to start nworms with 5 worms and an acceptable speed on a Power7 system, run:
# ./nworms 5 50000
Next, log on through ssh to your HMC, and see what managed systems are out there:
> lssyscfg -r sys -F name
Server1-8233-E8B-SN066001R
Server2-8233-E8B-SN066002R
Server3-8233-E8B-SN066003R
There are 3 managed systems in the example above.
Now list the status of the LPARs on the source system, assuming you want to migrate from Server1-8233-E8B-SN066001R, moving an LPAR to Server2-8233-E8B-SN066002R:
> lslparmigr -r lpar -m Server1-8233-E8B-SN066001R
name=vios1,lpar_id=3,migration_state=Not Migrating
name=vios2,lpar_id=2,migration_state=Not Migrating
name=lpar1,lpar_id=1,migration_state=Not Migrating
The example above shows there are 2 VIO servers and 1 LPAR on server Server1-8233-E8B-SN066001R.
Validate whether it is possible to move lpar1 to Server2-8233-E8B-SN066002R:
> migrlpar -o v -t Server2-8233-E8B-SN066002R -m \
Server1-8233-E8B-SN066001R --id 1
> echo $?
0
The example above shows a validation (-o v) to the target server (-t) from the source server (-m) for the LPAR with ID 1, which we know from the lslparmigr command is our LPAR lpar1. If the command returns a zero, the validation has completed successfully.
Now perform the actual migration:
> migrlpar -o m -t Server2-8233-E8B-SN066002R \
-m Server1-8233-E8B-SN066001R -p lpar1 &
This will take at least a couple of minutes, and the migration may take considerably longer, depending on the amount of memory of the LPAR.
To check the state:
> lssyscfg -r lpar -m Server1-8233-E8B-SN066001R -F name,state
Or to see the number of bytes transmitted and remaining to be transmitted, run:
> lslparmigr -r lpar -m Server1-8233-E8B-SN066001R -F name,migration_state,bytes_transmitted,bytes_remaining
Or to see the reference codes (which you can also see in the HMC GUI):
> lsrefcode -r lpar -m Server2-8233-E8B-SN066002R
lpar_name=lpar1,lpar_id=1,time_stamp=06/26/2012 15:21:24,
refcode=C20025FF,word2=00000000
lpar_name=vios1,lpar_id=2,time_stamp=06/26/2012 15:21:47,
refcode=,word2=03400000,fru_call_out_loc_codes=
lpar_name=vios2,lpar_id=3,time_stamp=06/26/2012 15:21:33,
refcode=,word2=03D00000,fru_call_out_loc_codes=
After a few minutes the lslparmigr command will indicate that the migration has been completed. And now that you know the commands, it's fairly easy to script the migration of multiple LPARs.
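A hedged sketch of what such a script could look like. It only prints the HMC commands (a dry run), so you can inspect them before executing anything on the HMC; the server and LPAR names are taken from the example above and would of course differ on your systems:

```shell
# Dry-run generator for LPM migrations: print a validate command and
# a migrate command per LPAR. Remove the echo to actually run the
# commands on the HMC.
src="Server1-8233-E8B-SN066001R"
tgt="Server2-8233-E8B-SN066002R"

for lpar in lpar1 ; do
    echo "migrlpar -o v -t $tgt -m $src -p $lpar"
    echo "migrlpar -o m -t $tgt -m $src -p $lpar"
done
```

Checking the exit status of the validate step before issuing the migrate, as shown earlier with `echo $?`, is the obvious refinement once the dry run looks right.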
The default value of hcheck_interval for VSCSI hdisks is 0, meaning that health checking is disabled. The hcheck_interval attribute of an hdisk can only be changed online if the volume group to which the hdisk belongs is not active. If the volume group is active, the ODM value of hcheck_interval can be altered in the CuAt class instead, as shown in the following example for hdisk0:
# chdev -l hdisk0 -a hcheck_interval=60 -P
The change will then be applied once the system is rebooted. However, it is possible to change the default value of the hcheck_interval attribute in the PdAt ODM class. As a result, you won't have to worry about its value anymore and newly discovered hdisks will automatically get the new default value, as illustrated in the example below:
# odmget -q 'attribute = hcheck_interval AND uniquetype = \
PCM/friend/vscsi' PdAt | sed 's/deflt = \"0\"/deflt = \"60\"/' \
| odmchange -o PdAt -q 'attribute = hcheck_interval AND \
uniquetype = PCM/friend/vscsi'