When you suspect a performance problem, you can run PerfPMR, a tool generally used by IBM support personnel to resolve performance-related issues. The download site for this tool is:
ftp://ftp.software.ibm.com/aix/tools/perftools/perfpmr
When you wish to create a very large file for test purposes, try this command:
# dd if=/dev/zero bs=1024 count=2097152 of=./test.large.file
This will create a file consisting of 2097152 blocks of 1024 bytes, which is 2GB. You can change the count value to anything you like.
Be aware that if you wish to create files larger than 2GB, your file system needs to be created as a "large file enabled file system"; otherwise the upper file size limit under JFS is 2GB (a large file enabled JFS file system allows files up to roughly 64GB, and JFS2 allows far larger files). Also check the ulimit values of the user ID you use to create the large file: set the file size limit (fsize) to -1, which means unlimited. By default, the fsize limit in /etc/security/limits is set to 2097151, which stands for 2097151 blocks of 512 bytes = 1GB.
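For example, to check a user's current file size limit (ulimit -f reports it in 512-byte blocks) and to remove the limit for that user, you could run something like the following; the user name testuser is just an illustration:
# su - testuser -c "ulimit -f"
2097151
# chuser fsize=-1 testuser
The new fsize value takes effect the next time the user logs in.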
Another way to create a large file is:
# /usr/sbin/lmktemp ./test.large.file 2147483648
This will create a file of 2147483648 bytes (which is 1024 * 2097152 = 2GB).
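To verify the size of the resulting file in bytes (the output shown is illustrative):
# ls -l ./test.large.file
-rw-r--r--    1 root     system   2147483648 Jun 01 12:00 ./test.large.file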
You can use this large file for adapter throughput testing purposes:
Write large sequential I/O test:
# cd /BIG
# time /usr/sbin/lmktemp 2GBtestfile 2147483648
Divide 2048 by the number of seconds reported to get the write speed in MB/sec.
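For example, if the time command reports a real time of 16 seconds (an illustrative number), the write throughput is 2048 / 16 = 128 MB/sec.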
Read large sequential I/O test:
# umount /BIG
(This will flush the file from memory.)
# mount /BIG
# time cp 2GBtestfile /dev/null
Divide 2048 by the number of seconds reported to get the read speed in MB/sec.
Tip: Run nmon (select a for adapter) in another window. You will see the throughput for each adapter.
The next part describes a problem where you want to search a file system for all directories in it and start a backup session per directory found, but run no more than 20 backup sessions at once. Usually you would use the "find" command to locate those directories, with the "-exec" parameter to execute the backup command. In this case, however, that could result in far more than 20 active backup sessions at once, which might overload the system.
You could instead create a script that runs "find", dumps the output to a file first, and then reads that file and starts 20 backups in parallel. But then the backups can't start before the "find" command completes, which may take quite a long time, especially when run on a file system with a large number of files. So how do you run the "find" command and the backups in parallel? Solve this problem with a named pipe.
Create a named pipe:
# rm -f /tmp/pipe
# mknod /tmp/pipe p
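To verify that the named pipe was created, list it; the p in the first position of the mode field marks it as a pipe (the rest of the output will vary):
# ls -l /tmp/pipe
prw-r--r--    1 root     system            0 Jun 01 12:00 /tmp/pipe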
Issue the find command:
# find [/filesystem] -type d -exec echo {} \; > /tmp/pipe
Now you have a command that writes to the pipe, but it can't continue until some other process starts reading from the pipe.
Create another script that reads from the pipe and issues the backup sessions:
cat /tmp/pipe | while read entry
do
    # Wait until less than 20 backup sessions are active
    while [ $(jobs -p | wc -l | awk '{print $1}') -ge 20 ]
    do
        sleep 5
    done
    # Start a backup session for $entry in the background
    [backup-command] &
    echo "Started backup of $entry at $(date)"
done
# Wait for all remaining backup sessions to end
wait
echo "$(date): Backup complete"
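A minimal way to tie the pieces together, assuming the reader loop above is saved as an executable script called /tmp/backup_reader.sh (a hypothetical name; /filesystem is a placeholder as well): because each side of a named pipe blocks on open until the other side opens it, you can start the two in either order, for example in two separate windows:
# /tmp/backup_reader.sh
and:
# find /filesystem -type d -exec echo {} \; > /tmp/pipe
Note that the reader script relies on the Korn shell behavior (the default shell on AIX) that the while loop at the end of a pipeline runs in the current shell; this is what allows jobs and wait to see the backup sessions started in the background.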
This way, backup sessions are already being started while the "find" command is still executing, so you don't have to wait for the "find" command to complete first.