If you're trying to restore an mksysb through NIM and keep getting the same error on different systems:
0042-006 niminit (To-master) rcmd connection refused
This may be caused by the "nimesis" daemon not running on the NIM server. Make sure it's enabled in /etc/inittab on the NIM server:
# grep nim /etc/inittab
nim:2:wait:/usr/bin/startsrc -g nim >/dev/console 2>&1
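To check on the spot whether the daemon is running (rather than rebooting after fixing /etc/inittab), the AIX SRC commands can be used. A minimal sketch, assuming you are on the NIM master; the fallback message only exists so the fragment degrades gracefully on a non-AIX system:

```shell
# Count "active" lines for the nimesis subsystem; on a system without
# lssrc the command fails silently and STATE simply ends up as 0.
STATE=$(lssrc -s nimesis 2>/dev/null | grep -c active)
if [ "$STATE" -gt 0 ]
then
    echo "nimesis is active"
else
    # Start the whole nim group, as the /etc/inittab entry would at boot
    startsrc -g nim 2>/dev/null || echo "could not start the nim group"
fi
```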
The next part describes a problem where you want to search a file system for all its directories and start one backup session per directory found, but never more than 20 backup sessions at once. Normally you would use the "find" command with the "-exec" parameter to execute the backup command, but that could result in far more than 20 active backup sessions at once, which might overload the system.
You could write a script that first dumps the "find" output to a file, and then reads that file and starts 20 backups in parallel. But then no backup can start before the "find" command completes, which may take quite a long time, especially on a file system with a large number of files. So how do you run the "find" command and the backups in parallel? Solve this problem with a named pipe.
Create a pipeline:
# rm -f /tmp/pipe
# mknod /tmp/pipe p
Issue the find command:
# find [/filesystem] -type d > /tmp/pipe
So now you have a command that writes to the pipe, but can't continue until some other process reads from it.
Create another script that reads from the pipe and issues the backup sessions:
while read entry
do
   # Wait until fewer than 20 backup sessions are active
   while [ $(jobs -p | wc -l) -ge 20 ]
   do
      sleep 5
   done
   # Start a backup session in the background
   [backup-command] &
   echo "Started backup of $entry at $(date)"
done < /tmp/pipe
# Wait for all backup sessions to end
wait
echo "$(date): Backup complete"
Reading the pipe with a redirection on the loop (instead of "cat /tmp/pipe | while ...") keeps the loop in the current shell, so "jobs -p" reliably sees the background backup sessions.
This way, backup sessions are already being started while the "find" command is still running, so you don't have to wait for "find" to complete first.
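The whole technique can be sketched in a portable, self-contained form. Here "mkfifo" is the POSIX equivalent of "mknod ... p", the fixed directory list stands in for the "find" output, a short "sleep" stands in for [backup-command], and the limit is lowered from 20 to 3 so the sketch finishes quickly:

```shell
# Writer and reader run concurrently through a named pipe,
# with at most 3 dummy "backup" jobs active at once.
PIPE=/tmp/demo.pipe
OUT=/tmp/demo.out
rm -f "$PIPE" "$OUT"
mkfifo "$PIPE"                      # POSIX equivalent of: mknod /tmp/demo.pipe p

# Writer: stands in for "find [/filesystem] -type d > /tmp/pipe"
printf '%s\n' /dirA /dirB /dirC /dirD /dirE > "$PIPE" &

# Reader: one dummy backup per entry, never more than 3 at once
while read entry
do
    while [ "$(jobs -p | wc -l)" -ge 3 ]
    do
        sleep 1
    done
    { sleep 1; echo "backed up $entry" >> "$OUT"; } &   # dummy backup
done < "$PIPE"
wait
rm -f "$PIPE"
```

The writer blocks until the reader opens the pipe, which is exactly what lets both sides overlap in time.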
AIX 5.1 and higher includes a set of System V commands. E.g. the Sun Solaris command "ptree" has been included in AIX as "proctree". Information about all System V commands in AIX can be found in the AIX 5.x differences guides.
This command will show you the process tree of a specific user:
# proctree username
If you wish to restrict the maximum number of login sessions for a specific user, you can do this by modifying the .profile of that user:
A=$(w | grep "$LOGNAME" | wc -l)
if [ "$A" -ge 3 ] ; then
   exit
fi
This example restricts the number of logins to three. Make sure the user can't modify his/her own .profile by restricting access rights.
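The counting logic can be tried outside of a real .profile. In this sketch the output of "w" is simulated with a fixed list (the user name and session lines are made up), so you can see the check in isolation:

```shell
user=jdoe
# Stand-in for "w": pretend three sessions of user jdoe are active
fake_w() {
    printf '%s\n' 'jdoe  pts/0' 'jdoe  pts/1' 'jdoe  pts/2'
}
# Count the simulated sessions, exactly as the .profile fragment does
A=$(fake_w | grep "$user" | wc -l)
if [ "$A" -ge 3 ] ; then
    echo "login limit reached for $user"   # a real .profile would "exit" here
fi
```

In an actual .profile you would use "w" and the login shell's own $LOGNAME directly, as in the fragment above.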
To test the connection of a server with its time servers (e.g. to check that no firewall is blocking NTP communication), run the following command:
# ntpq -c peers [ip-address-of-timeserver]
Older pSeries systems (Power4) are equipped with environmental sensors. You can read the sensor values using:
# /usr/lpp/diagnostics/bin/uesensor -l
You can use these sensors to monitor your systems and your computer rooms. It isn't very difficult to create a script that reads these environmental sensors regularly and displays the values on a webpage that updates automatically. Newer systems (LPAR based) are not equipped with these environmental sensors. For PC systems, several products exist that attach to either an RJ45 or a parallel port and can be used to monitor temperatures.
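As a starting point, one iteration of such a monitoring script might look like the sketch below. The page location is an assumption, and the fallback line only exists so the sketch also runs on systems without uesensor:

```shell
# Dump the current sensor readings into a simple HTML page
PAGE=/tmp/sensors.html
{
    echo '<html><body><pre>'
    date
    /usr/lpp/diagnostics/bin/uesensor -l 2>/dev/null \
        || echo 'uesensor not available on this system'
    echo '</pre></body></html>'
} > "$PAGE"
# Run this from cron (e.g. every 5 minutes) to keep the page up to date.
```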

Sometimes situations occur where a logical volume is deleted, but the ODM is not up to date: for example, "lsvg -l" no longer shows the logical volume, but the "lslv" command can still display information about it. Not good.
To resolve this issue, first try:
# synclvodm -v [volume group name]
If that doesn't work, try the following (logical volume hd7 is used in the example below). First, save the ODM information of the logical volume:
# odmget -q name=hd7 CuDv | tee -a /tmp/CuDv.hd7.out
# odmget -q name=hd7 CuAt | tee -a /tmp/CuAt.hd7.out
If you mess things up, you can always use the following command to restore the ODM information:
# odmadd /tmp/[filename]
Delete the ODM information of the logical volume:
# odmdelete -o CuDv -q name=hd7
# odmdelete -o CuAt -q name=hd7
Then, remove the device entry of the logical volume in the /dev directory (if present at all).
For high disk performance on systems with SSA disks, it is wise to enable fast write on the disks. To check which disks are fast write enabled, type:
# smitty ssafastw
Fast write needs cache memory on the SSA adapter. Check your amount of cache memory on the SSA adapter:
# lscfg -vl ssax
Where 'x' is the number of your SSA adapter. 128 MB of SDRAM will suffice; having 128 MB of SDRAM ensures you can use the full 32 MB of fast write cache memory.
To enable fast write, the disk must not be in use: either vary the volume group offline, or take the disk out of the volume group. Use the following command to enable the fast write cache:
# smitty chgssardsk
To find the known recommended technology levels:
# oslevel -rq
To find all filesets lower than a certain technology level:
# oslevel -rl 5300-07
To find all filesets higher than a certain technology level:
# oslevel -rg 5200-05