Sometimes you just need a single file from a mksysb image backup. It's really not that difficult to accomplish.
First of all, go to the directory that contains the mksysb image file:
# cd /sysadm/iosbackup
In this example, we're using the mksysb image of a Virtual I/O Server, created using iosbackup. This is basically the same as a mksysb image from a regular AIX system. The image file for this mksysb backup is called vio1.mksysb.
Then, try to locate the file you're looking for. For example, if you're looking for the file nimbck.ksh:
# restore -T -q -l -f vio1.mksysb | grep nimbck.ksh
New volume on vio1.mksysb:
Cluster size is 51200 bytes (100 blocks).
The volume number is 1.
The backup date is: Thu Jun 9 23:00:28 MST 2011
Files are backed up by name.
The user is padmin.
-rwxr-xr-x- 10 staff May 23 08:37 1801 ./home/padmin/nimbck.ksh
Here you can see the original file was located in /home/padmin.
Now recover that one single file:
# restore -x -q -f vio1.mksysb ./home/padmin/nimbck.ksh
x ./home/padmin/nimbck.ksh
Note that it is important to add the dot before the file name that needs to be recovered; the files in a mksysb image are stored with relative path names (starting with "./"), so without the dot the file won't be found. Your file is now restored to ./home/padmin/nimbck.ksh, which is relative to the current directory you're in right now:
# cd ./home/padmin
# ls -als nimbck.ksh
4 -rwxr-xr-x 1 10 staff 1801 May 23 08:37 nimbck.ksh
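The same approach works if you need a whole directory instead of a single file. A small sketch, using the -d flag of restore (which restores all files in a directory) to pull back the complete ./home/padmin directory from the same image:
# restore -xdqf vio1.mksysb ./home/padmin
Again, the leading dot is required, and the files end up relative to your current directory.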
The savevg command can be used to back up user volume groups. All logical volume information is archived, as well as JFS and JFS2 mounted file systems. However, this command cannot be used to back up raw logical volumes.
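For completeness, a minimal sketch of a savevg backup to a file, and the matching restvg restore (data2vg, /backup/data2vg.savevg and hdisk4 are hypothetical names here):
# savevg -f /backup/data2vg.savevg data2vg
# restvg -f /backup/data2vg.savevg hdisk4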
Save the contents of a raw logical volume onto a file using:
# dd if=/dev/lvname of=/file/system/lvname.dd
This will create a copy of logical volume "lvname" to a file "lvname.dd" in file system /file/system. Make sure that wherever you write your output file to (in the example above to /file/system) has enough disk space available to hold a full copy of the logical volume. If the logical volume is 100 GB, you'll need 100 GB of file system space for the copy.
If you want to test how this works, you can create a logical volume with a file system on top of it, and create some files in that file system. Then unmount the file system, and use dd to copy the logical volume as described above.
Then, throw away the file system using "rmfs -r", and after that has completed, recreate the logical volume and the file system. If you now mount the file system, you will see that it is empty. Unmount the file system again, and use the following dd command to restore your backup copy:
# dd if=/file/system/lvname.dd of=/dev/lvname
Then, mount the file system again, and you will see that the contents of the file system (the files you've placed in it) are back.
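Putting the whole test together, a minimal sketch could look like this (testvg, testlv, /testfs and /backup are hypothetical names, and the sizes are just examples):
# mklv -y testlv -t jfs2 testvg 1
# crfs -v jfs2 -d testlv -m /testfs -A no
# mount /testfs
# touch /testfs/file1 /testfs/file2
# umount /testfs
# dd if=/dev/testlv of=/backup/testlv.dd
# rmfs -r /testfs
# mklv -y testlv -t jfs2 testvg 1
# crfs -v jfs2 -d testlv -m /testfs -A no
# dd if=/backup/testlv.dd of=/dev/testlv
# mount /testfs
# ls /testfs
The last ls should show file1 and file2 again.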
There's a simple command to list information about a mksysb image, called lsmksysb:
# lsmksysb -lf mksysb.image
VOLUME GROUP: rootvg
BACKUP DATE/TIME: Mon Jun 6 04:00:06 MST 2011
UNAME INFO: AIX testaix1 1 6 0008CB1A4C00
BACKUP OSLEVEL: 6.1.6.0
MAINTENANCE LEVEL: 6100-06
BACKUP SIZE (MB): 49920
SHRINK SIZE (MB): 17377
VG DATA ONLY: no
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd5 boot 1 2 2 closed/syncd N/A
hd6 paging 32 64 2 open/syncd N/A
hd8 jfs2log 1 2 2 open/syncd N/A
hd4 jfs2 8 16 2 open/syncd /
hd2 jfs2 40 80 2 open/syncd /usr
hd9var jfs2 40 80 2 open/syncd /var
hd3 jfs2 40 80 2 open/syncd /tmp
hd1 jfs2 8 16 2 open/syncd /home
hd10opt jfs2 8 16 2 open/syncd /opt
dumplv1 sysdump 16 16 1 open/syncd N/A
dumplv2 sysdump 16 16 1 open/syncd N/A
hd11admin jfs2 1 2 2 open/syncd /admin
One of the best tools to look at LVM usage is lvmstat. It can report the bytes read from and written to logical volumes. Using that information, you can determine which logical volumes are used the most.
Gathering LVM statistics is not enabled by default:
# lvmstat -v data2vg
0516-1309 lvmstat: Statistics collection is not enabled for
this logical device. Use -e option to enable.
As you can see from the output here, it is not enabled, so you need to enable it for each volume group prior to running the tool, using:
# lvmstat -v data2vg -e
The following command takes a snapshot of LVM information every second for 10 intervals:
# lvmstat -v data2vg 1 10
This view shows the most utilized logical volumes on your system since you started the data collection. This is very helpful when drilling down to the logical volume layer when tuning your systems.
# lvmstat -v data2vg
Logical Volume iocnt Kb_read Kb_wrtn Kbps
appdatalv 306653 47493022 383822 103.2
loglv00 34 0 3340 2.8
data2lv 453 234543 234343 89.3
What are you looking at here?
- iocnt: the number of read and write requests.
- Kb_read: the total kilobytes read during the measured interval.
- Kb_wrtn: the total kilobytes written during the measured interval.
- Kbps: the amount of data transferred, in kilobytes per second.
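If you need to drill down one level deeper, lvmstat can also report on a single logical volume with the -l option. A small example, using the data2lv volume from the output above, sampling every second for 5 intervals:
# lvmstat -l data2lv 1 5
This breaks the I/O down per logical partition, which helps to spot hot spots within a logical volume.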
You can use the -d option for lvmstat to disable the collection of LVM statistics.
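For example, to switch off statistics collection for volume group data2vg again:
# lvmstat -v data2vg -d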
A common issue on AIX servers is that logical volumes are configured on a single disk, sometimes causing high disk utilization on a small number of disks in the system, and impacting the performance of the application running on the server.
If you suspect that this might be the case, first try to determine which disks are saturated on the server. Any disk that is in use more than 60% of the time should be considered. You can use commands such as iostat, sar -d, nmon and topas to determine which disks show high utilization. If they do, check which logical volumes are defined on that disk, for example on an IBM SAN disk:
# lspv -l vpath23
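To find the saturated disks in the first place, a short iostat sample is usually enough; for example, five samples at two-second intervals (the % tm_act column shows how busy each disk is):
# iostat -d 2 5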
It is always a good idea to spread a logical volume over multiple disks. That way, the logical volume manager will spread the disk I/O over all the disks that are part of the logical volume, utilizing the queue_depth of all those disks and greatly improving performance where disk I/O is concerned.
Let's say you have a logical volume called prodlv of 128 LPs, which is sitting on one disk, vpath408. To see the allocation of the LPs of logical volume prodlv, run:
# lslv -m prodlv
Let's also assume that you have a large number of disks in the volume group in which prodlv is configured. Disk I/O usually works best if you have a large number of disks in a volume group. For example, if you need 500 GB in a volume group, it is usually a far better idea to assign 10 disks of 50 GB to the volume group, instead of a single 500 GB disk. That gives you the possibility of spreading the I/O over 10 disks instead of only one.
To spread the disk I/O of prodlv over 8 disks instead of just one, you can create an extra logical volume copy on those 8 disks, and later on, when the logical volume is synchronized, remove the original copy (the one on the single disk vpath408). So, divide 128 LPs by 8, which gives you 16 LPs per disk: you can assign 16 LPs of logical volume prodlv to each of the 8 disks, giving it a total of 128 LPs.
First, check if the upper bound of the logical volume is set to at least 9, by running:
# lslv prodlv
The upper bound limit determines on how many disks a logical volume can be created. You'll need the one disk, vpath408, on which the logical volume is already located, plus the 8 other disks that you're creating the new copy on. Never ever create a copy on the same disk: if that single disk fails, both copies of your logical volume fail along with it. It is usually a good idea to set the upper bound of the logical volume a lot higher, for example to 32:
# chlv -u 32 prodlv
Next, you need to make sure that you actually have 8 disks with at least 16 free PPs each in the volume group. You can check this by running:
# lsvg -p prodvg | sort -nk4 | grep -v vpath408 | tail -8
vpath188 active 959 40 00..00..00..00..40
vpath163 active 959 42 00..00..00..00..42
vpath208 active 959 96 00..00..96..00..00
vpath205 active 959 192 102..00..00..90..00
vpath194 active 959 240 00..00..00..48..192
vpath24 active 959 243 00..00..00..51..192
vpath304 active 959 340 00..89..152..99..00
vpath161 active 959 413 14..00..82..125..192
Note how in the command above the original disk, vpath408, was excluded from the list.
Any of the disks listed by the command above should have at least 1/8th of the size of the logical volume free, before you can make a logical volume copy on it for prodlv.
Now create the logical volume copy. The magical option you need to use is "-e x" for the logical volume commands. That will spread the logical volume over all available disks. If you want to make sure that the logical volume is spread over only 8 available disks, and not all the available disks in a volume group, make sure you specify the 8 available disks:
# mklvcopy -e x prodlv 2 vpath188 vpath163 vpath208 \
vpath205 vpath194 vpath24 vpath304 vpath161
Now check with "lslv -m prodlv" if the new copy was created correctly. The following loop takes the disks that hold the second copy (column 5 of the lslv -m output) and lists how prodlv is spread over each of them:
# lslv -m prodlv | awk '{print $5}' | grep vpath | sort -dfu | \
while read pv ; do
result=`lspv -l $pv | grep prodlv`
echo "$pv $result"
done
The output should look similar to this:
vpath161 prodlv 16 16 00..00..16..00..00 N/A
vpath163 prodlv 16 16 00..00..00..00..16 N/A
vpath188 prodlv 16 16 00..00..00..00..16 N/A
vpath194 prodlv 16 16 00..00..00..16..00 N/A
vpath205 prodlv 16 16 16..00..00..00..00 N/A
vpath208 prodlv 16 16 00..00..16..00..00 N/A
vpath24 prodlv 16 16 00..00..00..16..00 N/A
vpath304 prodlv 16 16 00..16..00..00..00 N/A
Now synchronize the logical volume:
# syncvg -l prodlv
And remove the original logical volume copy:
# rmlvcopy prodlv 1 vpath408
Then check again:
# lslv -m prodlv
Now, what if you have to extend the logical volume prodlv later on with another 128 LPs, and you still want to maintain the spreading of the LPs over the 8 disks? Again, you can use the "-e x" option when running the logical volume commands:
# extendlv -e x prodlv 128 vpath188 vpath163 vpath208 \
vpath205 vpath194 vpath24 vpath304 vpath161
You can also use the "-e x" option with the mklv command to create a new logical volume from the start with the correct spreading over disks.
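For example, a sketch of creating a new, fully spread logical volume of 128 LPs right away (newlv is a hypothetical name):
# mklv -y newlv -e x prodvg 128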
If an AIX server is backed up by Veritas NetBackup, then this is how you can enable logging of the backups on your AIX client:
First, make sure the necessary folders exist in /usr/openv/netbackup/logs, and that their permissions are set to 777, by running:
# cd /usr/openv/netbackup/logs
# mkdir bp bparchive bpbackup bpbkar bpcd bpdbsbora
# mkdir bpfilter bphdb bpjava-msvc bpjava-usvc bpkeyutil
# mkdir bplist bpmount bpnbat bporaexp bporaexp64
# mkdir bporaimp bporaimp64 bprestore db_log dbclient
# mkdir symlogs tar user_ops
# chmod 777 *
Then, you have to change the default debug level in /usr/openv/netbackup/bp.conf, by adding:
VERBOSE = 2
By default, VERBOSE is set to one, which means there isn't any logging at all, so that is not helpful. You can go up to "VERBOSE = 5", but that may create very large log files, which may fill up the file system. In any case, check how much disk space is available in /usr before enabling logging for the Veritas NetBackup client.
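A quick way to check the free space, for example:
# df -g /usr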
Backups through Veritas NetBackup are initiated through inetd:
# egrep "bpcd" /etc/services
bpcd 13782/tcp # VERITAS NetBackup
bpcd 13782/udp # VERITAS NetBackup
# grep bpcd /etc/inetd.conf
bpcd stream tcp nowait root /usr/openv/netbackup/bin/bpcd bpcd
Now all you have to do is wait for the NetBackup server (the one listed in /usr/openv/netbackup/bp.conf) to start the backup on the AIX client. After the backup has run, you should at least find log files in the bpcd and bpbkar folders in /usr/openv/netbackup/logs.
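For example, to verify that log files are being written (the legacy client logs are typically one file per day in each folder):
# ls -l /usr/openv/netbackup/logs/bpcd
# ls -l /usr/openv/netbackup/logs/bpbkar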
Here is how to retrieve client and group information from EMC Networker using nsradmin:
First, start nsradmin as user root:
# /bin/nsradmin -s networkerserver
(Note: replace "networkerserver" with the actual host name of your EMC Networker server.)
To select information of a specific client, for example "testserver", type:
nsradmin> print type: nsr client; name: testserver
You can further limit the attributes that you're seeing by using the show sub-command. For example, if you only wish to see the save set, the group, and the name, type:
nsradmin> show save set
nsradmin> show group
nsradmin> show name
nsradmin> print type: nsr client; name: testserver
name: testserver.domain.com;
group: aixprod;
save set: /, /usr, /var, /tmp, /home, /opt, /roothome;
If you wish to retrieve information regarding a group, first reset the attribute display; a show command without any arguments makes nsradmin display all attributes again:
nsradmin> show
Then query the group:
nsradmin> print type: nsr group; name: aixprod
If you would like to know which types you can print information about, type:
nsradmin> types
This is how to stop EMC Networker:
# /bin/nsr_shutdown
And this is how you start it (taken from /etc/inittab):
# echo "sh /etc/rc.nsr" | at now
To perform recoveries from EMC (or Legato) Networker on the command line, you can use the recover command. The recover command runs in two modes:
Interactive mode: Interactive mode is the default mode for the recover command. This mode places you in a shell-like environment that allows you to use subcommands. These commands let you navigate the client file index to select and recover files and directories.
Non-interactive mode: In non-interactive mode, the files specified on the command line are recovered automatically without browsing. To activate non-interactive mode, use the -a option.
Using recover in Interactive Mode:
Log in to the server you need to recover the file for, and then type recover. This will place you in the recover shell environment. You can also type recover with a path name to set your initial working directory (for example: recover /etc); the default is the current working directory.
# recover /etc
Current working directory is /etc
recover>
Note: If you do not get a recover prompt when you type recover, add the -s servername option:
# recover -s hostname
The following commands let you navigate a client file index to select and recover files and directories:
- ls
Lists information about the given files and directories. When no name argument is provided, ls lists the contents of the current directory. When you specify a directory as name, the content of that directory is displayed.
- cd
Changes the current working directory. The default is the directory in which you executed recover.
- pwd
Prints the full pathname of the current working directory.
- add [name ...]
Adds the current directory or the named files or directories to the recover list. If a directory is specified, it is added to the recover list along with all of its subordinate files.
- delete [name ...]
Deletes the current directory or the named files or directories from the recover list. If a directory is specified, that directory and all of its subordinate files are deleted from the recover list.
- versions [name ...]
Lists all available versions of a file or directory. If no name is given, the current working directory is used.
- changetime
Changes the backup browse time, allowing you to recover files from before the last backup. You will be prompted for the new time. The time can be entered as December 15, 2009 or 12/15/2009.
- list
Displays the files on the recover list.
- recover
Recovers all files on the recover list from the Networker server. Upon completion, the recover list is empty.
- exit
Exits immediately from recover.
- quit
Exits immediately from recover; any files still on the recover list are not recovered.
Using recover in Non-interactive mode:
In non-interactive mode, the files specified on the command line are recovered automatically without browsing. To activate non-interactive mode, use the -a option. For example:
Recover the /etc/hosts file from the most recent backup:
# recover -a /etc/hosts
Using the recover Command in Directed Recoveries:
To relocate recovered files, use the -d destination option with the recover command:
# recover -a -d /restore /etc/hosts
Recovering 1 file from /etc/ into /restore
Requesting 1 file(s), this may take a while...
./hosts
Received 1 file(s) from NSR server `networker'
Recover completion time: Thu Nov 18 14:39:15 2009
Using the recover Command to recover a file from a specific date:
Enter the recover shell by typing recover. Locate the file you need to restore using the ls and cd commands. List the versions of the file using the versions command, and use the changetime command to change to the day the file was backed up. Then add the file to the recovery list using the add command, and finally run the recover command.
# recover
Current working directory is /
recover> versions /etc/hosts
Versions of `/etc/hosts':
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Mon Aug 9 20:02:53 EDT 2010
location: 004049
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Fri Aug 6 21:11:07 EDT 2010
location: DD0073 at DDVTL
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Mon Aug 2 20:06:48 EDT 2010
location: 004242 at rd=ntwrkrstgnd1:ATL
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Fri Jul 30 21:09:15 EDT 2010
location: DD0054 at DDVTL
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Mon Jul 26 20:10:20 EDT 2010
location: 004095
recover> changetime 8/1/2010
6497:recover: time changed to Sun Aug 1 23:59:59 EDT 2010
recover> add /etc/hosts
/etc
1 file(s) marked for recovery
recover> recover
Recovering 1 file into its original location
Volumes needed (all on-line):
DD0054 at \\.\Tape20
Total estimated disk space needed for recover is 4 KB.
Requesting 1 file(s), this may take a while...
./hosts
./hosts file exists, overwrite (n, y, N, Y) or rename (r, R) [n]? y
Overwriting ./hosts
Received 1 file(s) from NSR server `networker'
Recover completion time: Thu Aug 12 17:34:06 EDT 2010
Using the -f option, you can recover files from the command line without being asked whether you want to overwrite any existing files. For example, if you wish to recover the entire /etc file system into /tmp:
# recover -f -d /tmp/ -a /etc/
All the files will be recovered to /tmp/etc.
The -c option can be used to recover files from a different client. For example, if you wish to recover the entire /etc file system of server "otherclient" to /tmp:
# recover -f -c otherclient -d /tmp/ -a /etc/
The -t option can be used to do a point-in-time recovery of a file and/or file system. For example, to recover the /etc/hosts file as it was on 09/05/2010 at noon:
# recover -s networkerserver -t "09/05/2010 12:00" -a /etc/hosts
Recovering multiple files is also possible. For example, if you wish to recover 2 mksysb images:
# recover -f -c client -s server -a mksysb.image1 mksysb.image2
Here's a script you can use to run mksysb backups of your clients to an NFS server. It is generally a good idea to set up a NIM server and also use this NIM server as an NFS server. All your clients should then be configured to create their mksysb backups on the NIM/NFS server, using the script that you can download here: nimbck.ksh.
By doing this, the latest mksysb images are available on the NIM server. This way, you can configure a mksysb resource on the NIM server (use: smitty nim_mkres) pointing to the mksysb image of a server, for easy recovery.
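The linked nimbck.ksh script is the one to use; just as a rough idea of what such a script boils down to, a minimal sketch could look like this (nimserver and /export/mksysb are hypothetical names, and all error handling is left out):
# mount nimserver:/export/mksysb /mnt
# mksysb -i /mnt/$(hostname).mksysb
# umount /mnt
The -i flag makes mksysb generate a new /image.data file as part of the backup.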