Here is how to retrieve client and group information from EMC Networker using nsradmin:
First, start nsradmin as user root:
# /bin/nsradmin -s networkerserver
(Note: replace "networkerserver" with the actual host name of your EMC Networker server.)
To select information of a specific client, for example "testserver", type:
nsradmin> print type: nsr client; name: testserver
You can further limit the attributes that are displayed by using the show sub-command. For example, if you only wish to see the save set, the group, and the name, type:
nsradmin> show save set
nsradmin> show group
nsradmin> show name
nsradmin> print type: nsr client; name: testserver
name: testserver.domain.com;
group: aixprod;
save set: /, /usr, /var, /tmp, /home, /opt, /roothome;
If you wish to retrieve information regarding a group, type:
nsradmin> show
(Running show without any arguments will show all attributes again.)
nsradmin> print type: nsr group; name: aixprod
If you like to get more information about the types you can print information of, type:
nsradmin> types
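Once you know which query you need, nsradmin can also be driven non-interactively with an input file (the -i option). A minimal sketch, using the same placeholder server and client names as above:

```shell
# Write the nsradmin sub-commands (the same ones used interactively
# above) to a file, then feed that file to nsradmin with -i.
cat > /tmp/query.nsr <<'EOF'
show save set
show group
show name
print type: nsr client; name: testserver
EOF

# Only attempt the query where nsradmin is actually installed:
if command -v nsradmin >/dev/null 2>&1; then
    nsradmin -s networkerserver -i /tmp/query.nsr
fi
```

This is handy when the same query has to be repeated on a schedule, e.g. from cron.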
To get more information on what a specific process is doing, you can use the truss command. That may be very useful, for example, when a process appears to be hanging.
For example, if you want to know what the "recover" process is doing, first look up the PID of this process:
# ps -ef | grep -i recover | grep -v grep
root 348468 373010 0 17:30:25 pts/1 0:00 recover -f -a /etc
Then, run the truss command using that PID:
cscnimmaster# truss -p 348468
kreadv(0, 0x00000000, 0, 0x00000000) (sleeping...)
This way, you can see the process is actually sleeping.
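The PID lookup and the truss call can be combined in a small script. The extract_pid helper below is hypothetical (not part of AIX); it just pulls the PID column out of ps -ef output:

```shell
# extract_pid: read "ps -ef" output on stdin and print the PID (column 2)
# of the first line matching the given pattern, skipping our own pipeline.
extract_pid() {
    awk -v pat="$1" '$0 ~ pat && $0 !~ /awk|grep/ { print $2; exit }'
}

pid=$(ps -ef | extract_pid recover)
if [ -n "$pid" ]; then
    truss -p "$pid"
else
    echo "no recover process running"
fi
```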
This is how to stop EMC Networker:
# /bin/nsr_shutdown
And this is how you start it (taken from /etc/inittab):
# echo "sh /etc/rc.nsr" | at now
This is how to translate a hardware address to a physical location:
The command lscfg shows the hardware addresses of all hardware. For example, the following command will give you more detail on an individual device (e.g. ent1):
# lscfg -pvl ent1
ent1 U788C.001.AAC1535-P1-T2 2-Port 10/100/1000 Base-TX PCI-X Adapter
2-Port 10/100/1000 Base-TX PCI-X Adapter:
Network Address.............001125C5E831
ROM Level.(alterable).......DV0210
Hardware Location Code......U788C.001.AAC1535-P1-T2
PLATFORM SPECIFIC
Name: ethernet
Node: ethernet@1,1
Device Type: network
Physical Location: U788C.001.AAC1535-P1-T2
This ent1 device is an 'Internal Port'. If we check ent2 on the same box:
# lscfg -pvl ent2
ent2 U788C.001.AAC1535-P1-C13-T1 2-Port 10/100/1000 Base-TX PCI-X
2-Port 10/100/1000 Base-TX PCI-X Adapter:
Part Number.................03N5298
FRU Number..................03N5298
EC Level....................H138454
Brand.......................H0
Manufacture ID..............YL1021
Network Address.............001A64A8D516
ROM Level.(alterable).......DV0210
Hardware Location Code......U788C.001.AAC1535-P1-C13-T1
PLATFORM SPECIFIC
Name: ethernet
Node: ethernet@1
Device Type: network
Physical Location: U788C.001.AAC1535-P1-C13-T1
This is a device on a PCI I/O card.
For a physical address like U788C.001.AAC1535-P1-C13-T1:
- U788C.001.AAC1535 - This part identifies the 'system unit/drawer'. If your system is made up of several drawers, then look on the front and match the ID to this section of the address. Now go round the back of the server.
- P1 - This is the PCI bus number. You may only have one.
- C13 - Card Slot C13. They are numbered on the back of the server.
- T1 - This is port 1 of 2 that are on the card.
Your internal ports won't have the Card Slot numbers, just the T number, representing the port. This should be marked on the back of your server. E.g.: U788C.001.AAC1535-P1-T2 means unit U788C.001.AAC1535, PCI bus P1, port T2 and you should be able to see T2 printed on the back of the server.
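The breakdown above can be automated with a small helper. The decode_loc function is hypothetical; it simply splits a location code on the dashes:

```shell
# decode_loc: split an AIX physical location code into its unit,
# PCI bus (P...), card slot (C...) and port (T...) parts.
decode_loc() {
    echo "$1" | awk -F- '{
        unit = $1
        for (i = 2; i <= NF; i++) {
            if ($i ~ /^P/)      bus  = $i
            else if ($i ~ /^C/) slot = $i
            else if ($i ~ /^T/) port = $i
        }
        printf "unit=%s bus=%s slot=%s port=%s\n", unit, bus, slot, port
    }'
}

decode_loc U788C.001.AAC1535-P1-C13-T1
# unit=U788C.001.AAC1535 bus=P1 slot=C13 port=T1
decode_loc U788C.001.AAC1535-P1-T2
# unit=U788C.001.AAC1535 bus=P1 slot= port=T2
```

Note how the internal port (second call) has an empty slot field, matching the description above.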
In this section, we will configure the NIM master and create some basic installation resources:
- Ensure that Volume 1 of the AIX DVD is in the drive.
- Install the NIM master fileset:
# installp -agXd /dev/cd0 bos.sysmgt.nim
- Configure NIM master:
# smitty nim_config_env
Set fields as follows:
- "Primary Network Interface for the NIM Master": selected interface
- "Input device for installation images": "cd0"
- If you have already set up an /export file system, you may choose not to create new file systems for /export/lpp_source and /export/spot; it is up to you.
- Select to prepend the level to the LPP_SOURCE and SPOT names, so you can identify the level of AIX that was used to create the LPP_SOURCE and SPOT.
- "Remove all newly added NIM definitions if the operation fails": "yes"
- Press Enter.
- Exit when complete.
If you run into an issue here where it says that the SPOT cannot be created because the LPP_SOURCE is missing the simages (short for system images) attribute, due to the missing fileset bos.vendor.profile, then the LPP_SOURCE doesn't include all the required filesets to create the SPOT. This looks like a bug: fileset bos.vendor.profile can be found on the AIX media, but somehow it is not copied to the target LPP_SOURCE folder while the LPP_SOURCE is created. This has been seen on AIX 7.1 TL4. If you run into this, do the following:
- Check if bos.vendor.profile exists on the installation media. It should be in the installp/ppc folder.
- If so, rerun the steps above (starting with smitty nim_config_env), and while the LPP_SOURCE is being created, copy the bos.vendor.profile file yourself from the AIX installation media to the LPP_SOURCE target folder. For example, if your installation folder is /aix (assuming you have mounted the first AIX installation ISO image using loopmount on mount point /aix; and assuming you are using AIX 7.1 TL4), then run:
# cp /aix/installp/ppc/bos.vendor.profile /export/lpp_source/710-04lpp_source1/installp/ppc/bos.vendor.profile
- Initialize each NIM client:
# smitty nim_mkmac
Enter the host name of the appropriate LPAR. Set fields as follows:
- "Kernel to use for Network Boot": "mp"
- "Cable Type": "tp"
- Press Enter.
- Exit when complete.
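The bos.vendor.profile workaround can be wrapped in a small watcher script that you start before kicking off smitty nim_config_env, so the copy happens automatically as soon as the LPP_SOURCE target folder appears. The copy_when_ready function is a hypothetical helper; the paths in the example match the ones used above:

```shell
# copy_when_ready: poll until the destination directory exists (the
# LPP_SOURCE target is created during smitty nim_config_env), then copy
# the file into it. Gives up after the given number of 5-second polls.
copy_when_ready() {
    src=$1; dst=$2; tries=${3:-720}     # default: wait up to one hour
    while [ "$tries" -gt 0 ]; do
        if [ -d "$dst" ]; then
            cp "$src" "$dst/" && return 0
        fi
        tries=$((tries - 1))
        sleep 5
    done
    return 1
}

# Example with the paths used above (AIX 7.1 TL4, media mounted on /aix):
# copy_when_ready /aix/installp/ppc/bos.vendor.profile \
#     /export/lpp_source/710-04lpp_source1/installp/ppc
```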
A more extensive document about setting up NIM can be found here:
http://www-01.ibm.com/support/docview.wss?context=SWG10q1=setup+guide&uid=isg3T1010383
A useful command to update software on your AIX server is install_all_updates. It is similar to running smitty update_all, but it works from the command line. The only thing you need to provide is the directory name, for example:
# install_all_updates -d .
This installs all the software updates from the current directory. Of course, you will have to make sure the current directory actually contains software. Don't worry about generating a Table Of Contents (.toc) file in this directory, because install_all_updates generates one for you.
By default, install_all_updates will apply the filesets; use -c to commit the software instead. Also, by default, it will expand any file systems; use -x to prevent this behavior. It will install any requisites by default (use -n to prevent this). You can use -p to run a preview, and you can use -s to skip the recommended maintenance or technology level verification at the end of the install_all_updates output. You may have to use the -Y option to agree to all license agreements.
To install all available updates from the cdrom, and agree to all license agreements, and skip the recommended maintenance or technology level verification, run:
# install_all_updates -d /cdrom -Y -s
To perform recoveries from EMC (or Legato) Networker on the command line, you can use the recover command. The recover command runs in two modes:
Interactive mode: Interactive mode is the default mode for the recover command. This mode places you in a shell-like environment that allows you to use subcommands. These commands let you navigate the client file index to select and recover files and directories.
Non-interactive mode: In non-interactive mode, the files specified on the command line are recovered automatically without browsing. To activate non-interactive mode, use the -a option.
Using recover in Interactive Mode:
Log in to the server you need to recover the file for, and then type recover. This will place you in the recover shell environment. You can also type recover [pathname] to set your initial working directory (for example, recover /etc); the default is the current working directory.
# recover /etc
Current working directory is /etc
recover>
Note: If you do not get a recover prompt when you type recover, add the -s servername option:
# recover -s hostname
The following commands let you navigate a client file index to select and recover files and directories:
- ls
Lists information about the given files and directories. When no name argument is provided, ls lists the contents of the current directory. When you specify a directory as name, the contents of that directory are displayed.
- cd
Changes the current working directory. The default is the directory in which you executed recover.
- pwd
Prints the full pathname of the current working directory.
- add [name.. ]
Adds the current directory, or the named files or directories, to the recover list. If a directory is specified, it and all of its subordinate files are added to the recover list.
- delete [name..]
Deletes the current directory, or the named files or directories, from the recover list. If a directory is specified, that directory and all of its subordinate files are deleted from the recover list.
- versions [name..]
Lists all available versions of a file or directory. If no name is given, the current working directory is used.
- changetime
Changes the backup browse time, to recover files from before the last backup. You will be prompted for the new time. The time can be entered as, for example, December 15, 2009 or 12/15/2009.
- list
Displays the files on the recover list.
- recover
Recovers all files on the recover list from the Networker server. Upon completion, the recover list is empty.
- exit
Exits immediately from recover.
- quit
Exits immediately from recover. Files on the recover list are not recovered.
Using recover in Non-interactive mode:
In non-interactive mode, the files specified on the command line are recovered automatically without browsing. To activate non-interactive mode, use the -a option. For example:
Recover the /etc/hosts file from the most recent backup:
# recover -a /etc/hosts
Using the recover Command in Directed Recoveries:
To relocate recovered files use the -d destination option with the recover command:
# recover -a -d /restore /etc/hosts
Recovering 1 file from /etc/ into /restore
Requesting 1 file(s), this may take a while...
./hosts
Received 1 file(s) from NSR server `networker'
Recover completion time: Thu Nov 18 14:39:15 2009
Using the recover Command to recover a file from a specific date:
Enter the recover shell by typing recover. Locate the file you need to restore using the ls and cd commands. List the versions for the file using the versions command, and use the changetime command to change to the day the file was backed up. Add the file to the recovery list using the add command.
# recover
Current working directory is /
recover> versions /etc/hosts
Versions of `/etc/hosts':
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Mon Aug 9 20:02:53 EDT 2010
location: 004049
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Fri Aug 6 21:11:07 EDT 2010
location: DD0073 at DDVTL
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Mon Aug 2 20:06:48 EDT 2010
location: 004242 at rd=ntwrkrstgnd1:ATL
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Fri Jul 30 21:09:15 EDT 2010
location: DD0054 at DDVTL
4 -rw-rw-r-- root system 2006 Mar 31 16:32 hosts
save time: Mon Jul 26 20:10:20 EDT 2010
location: 004095
recover> changetime 8/1/2010
6497:recover: time changed to Sun Aug 1 23:59:59 EDT 2010
recover> add /etc/hosts
/etc
1 file(s) marked for recovery
recover> recover
Recovering 1 file into its original location
Volumes needed (all on-line):
DD0054 at \\.\Tape20
Total estimated disk space needed for recover is 4 KB.
Requesting 1 file(s), this may take a while...
./hosts
./hosts file exists, overwrite (n, y, N, Y) or rename (r, R) [n]? y
Overwriting ./hosts
Received 1 file(s) from NSR server `networker'
Recover completion time: Thu Aug 12 17:34:06 EDT 2010
Using the -f option, we can recover files from the command line without being asked whether we want to overwrite any existing files. For example, if you wish to recover the entire /etc file system into /tmp:
# recover -f -d /tmp/ -a /etc/
All the files will be recovered to /tmp/etc.
The -c option can be used to recover files from a different client. For example, if you wish to recover the entire /etc file system of server "otherclient" to /tmp:
# recover -f -c otherclient -d /tmp/ -a /etc/
The -t option can be used to do a point-in-time recovery of a file and/or file system. For example, to recover the /etc/hosts file as of 09/05/2010 at noon:
# recover -s networkerserver -t "09/05/2010 12:00" -a /etc/hosts
Recovering multiple files is also possible. For example, if you wish to recover 2 mksysb images:
# recover -f -c client -s server -a mksysb.image1 mksysb.image2
The EOM (end of marketing) date has been announced for AIX 5.3: 04/11, meaning that AIX 5.3 will no longer be marketed by IBM from April 2011, and that it is now time for customers to start thinking about upgrading to AIX 6.1. The EOS (end of service) date for AIX 5.3 is 04/12, meaning that AIX 5.3 will be serviced by IBM until April 2012; after that, IBM will only service AIX 5.3 for an additional fee. The EOL (end of life) date is 04/16, marking the end of life in April 2016. The final technology level for AIX 5.3 is technology level 12, although some service packs for TL12 will still be released.
IBM has also announced EOM and EOS dates for HACMP 5.4 and PowerHA 5.5, so if you're using any of these versions, you also need to upgrade to PowerHA 6.1:
- Sep 30, 2010: EOM HACMP 5.4, PowerHA 5.5
- Sep 30, 2011: EOS HACMP 5.4
- Sep 30, 2012: EOS HACMP 5.5
Use this procedure to quickly configure an HACMP cluster, consisting of 2 nodes and disk heartbeating.
Prerequisites:
Make sure you have the following in place:
- Have the IP addresses and host names of both nodes, and for a service IP label. Add these into the /etc/hosts files on both nodes of the new HACMP cluster.
- Make sure you have the HACMP software installed on both nodes. Just install all the filesets of the HACMP CD-ROM, and you should be good.
- Make sure you have this entry in /etc/inittab (as one of the last entries):
clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit
- In case you're using EMC SAN storage, make sure you configure your disks correctly as hdiskpower devices. Or, if you're using a mksysb image, you may want to follow the EMC ODM cleanup procedure.
Steps:
- Create the cluster and its nodes:
# smitty hacmp
Initialization and Standard Configuration
Configure an HACMP Cluster and Nodes
Enter a cluster name and select the nodes you're going to use. It is vital here to have the host names and IP addresses correctly entered in the /etc/hosts file of both nodes.
- Create an IP service label:
# smitty hacmp
Initialization and Standard Configuration
Configure Resources to Make Highly Available
Configure Service IP Labels/Addresses
Add a Service IP Label/Address
Enter an IP Label/Address (press F4 to select one), and enter a Network name (again, press F4 to select one).
- Set up a resource group:
# smitty hacmp
Initialization and Standard Configuration
Configure HACMP Resource Groups
Add a Resource Group
Enter the name of the resource group. It's a good habit to make sure that a resource group name ends with "rg", so you can recognize it as a resource group. Also, select the participating nodes. For the "Fallback Policy", it is a good idea to change it to "Never Fallback". This way, when the primary node in the cluster comes up, and the resource group is up-and-running on the secondary node, you won't see a failover occur from the secondary to the primary node.
Note: The order of the nodes is determined by the order you select the nodes here. If you put in "node01 node02" here, then "node01" is the primary node. If you want to have this any other way, now is a good time to correctly enter the order of node priority.
- Add the Service IP/Label to the resource group:
# smitty hacmp
Initialization and Standard Configuration
Configure HACMP Resource Groups
Change/Show Resources for a Resource Group (standard)
Select the resource group you've created earlier, and add the Service IP/Label.
- Run a verification/synchronization:
# smitty hacmp
Extended Configuration
Extended Verification and Synchronization
Just hit [ENTER] here. Resolve any issues that may come up from this synchronization attempt. Repeat this process until the verification/synchronization process returns "Ok". It's a good idea here to select to "Automatically correct errors".
- Start the HACMP cluster:
# smitty hacmp
System Management (C-SPOC)
Manage HACMP Services
Start Cluster Services
Select both nodes to start. Make sure to also start the Cluster Information Daemon.
- Check the status of the cluster:
# clstat -o
# cldump
Wait until the cluster is stable and both nodes are up.
Basically, the cluster is now up and running. However, during the Verification & Synchronization step, it will complain about not having a non-IP network. The next part sets up a disk heartbeat network, which allows the nodes of the HACMP cluster to exchange heartbeat packets over a SAN disk. We're assuming here that you're using EMC storage. The process on other types of SAN storage is more or less similar, apart from some differences; e.g., SAN disks are called "hdiskpower" devices on EMC storage and "vpath" devices on IBM SAN storage.
First, look at the available SAN disk devices on your nodes, and select a small disk that won't be used to store any data, but only for the purpose of disk heartbeating. It is a good habit to ask your SAN storage admin to zone a small LUN to both nodes of the HACMP cluster as a disk heartbeating device. Make a note of the PVID of this disk device; for example, if you choose to use device hdiskpower4:
# lspv | grep hdiskpower4
hdiskpower4 000a807f6b9cc8e5 None
So, we're going to set up the disk heartbeat network on device hdiskpower4, with PVID 000a807f6b9cc8e5:
- Create a concurrent volume group:
# smitty hacmp
System Management (C-SPOC)
HACMP Concurrent Logical Volume Management
Concurrent Volume Groups
Create a Concurrent Volume Group
Select both nodes to create the concurrent volume group on by pressing F7 for each node. Then select the correct PVID. Give the new volume group a name, for example "hbvg".
- Set up the disk heartbeat network:
# smitty hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Networks
Add a Network to the HACMP Cluster
Select "diskhb" and accept the default Network Name.
- Run a discovery:
# smitty hacmp
Extended Configuration
Discover HACMP-related Information from Configured Nodes
- Add the disk device:
# smitty hacmp
Extended Configuration
Extended Topology Configuration
Configure HACMP Communication Interfaces/Devices
Add Communication Interfaces/Devices
Add Discovered Communication Interface and Devices
Communication Devices
Select the same disk device on each node by pressing F7.
- Run a Verification & Synchronization again, as described earlier above. Then check with clstat and/or cldump again, to check if the disk heartbeat network comes online.
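To verify the result from the command line as well, you can grep the cluster manager state out of lssrc. The is_stable helper below is hypothetical; ST_STABLE is the state the HACMP cluster manager reports when the cluster is stable:

```shell
# is_stable: read "lssrc -ls clstrmgrES" output on stdin and succeed
# only when the cluster manager reports the ST_STABLE state.
is_stable() {
    grep -q 'ST_STABLE'
}

# Only meaningful on a cluster node where HACMP is installed:
if command -v lssrc >/dev/null 2>&1; then
    if lssrc -ls clstrmgrES 2>/dev/null | is_stable; then
        echo "cluster is stable"
    else
        echo "cluster is not (yet) stable"
    fi
fi
```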
ISOs may be loaded from an HTTP server using Virtual Media with the iLO command-line interface.
Example:
$ ssh -l ilo-admin 10.215.14.5
User:ilo-admin logged-in to ilo.(10.215.14.5)
iLO Advanced 1.82 pass1 at 15:53:34 Aug 22 2005
Server Name: ilo
Server Power: On
>hpiLO-> vm cdrom insert http://10.251.20.20/RHEL4.6-i386-ES.iso
Note: use IPs when specifying an HTTP server.
>hpiLO-> vm cdrom get
VM Applet = Disconnected
Boot Option = NO_BOOT
Write Protect = Yes
Image Inserted = Connected
Image URL = http://10.251.20.20/RHEL4.6-i386-ES.iso
Note: "NO_BOOT" means that the system will not boot off the "connected" image. The "Image URL" field is not shown with iLO version 1.
>hpiLO-> vm cdrom set boot_once
Note: The next boot will be from the connected image. You can also use "vm cdrom set connect" to permanently connect the ISO image. If you want to get rid of the ISO image, use "vm cdrom eject".
>hpiLO-> power reset
Now, the server will reboot and boot off the ISO image. If you ever run into a situation where it won't boot off the ISO image, but simply skips over booting from the CD-ROM, make sure to check whether any physical cables are connected to the server, for example KVM cables or USB keyboards. If so, the server will not boot off any virtual media. Unplug those cables and reboot again to make the server boot off the ISO image.
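The whole sequence can be scripted by writing the iLO CLI commands to a file and piping them over ssh. This is only a sketch using the placeholder addresses from the example above; whether your iLO firmware version accepts batched commands on stdin may vary:

```shell
# iLO address and ISO URL from the example above (placeholders):
ILO_IP=10.215.14.5
ISO_URL=http://10.251.20.20/RHEL4.6-i386-ES.iso

# Write the iLO CLI commands, in order, to a file:
printf '%s\n' \
    "vm cdrom insert $ISO_URL" \
    "vm cdrom set boot_once" \
    "power reset" > /tmp/ilo.cmds

# Feed them to the iLO over ssh (uncomment to actually run):
# ssh -l ilo-admin "$ILO_IP" < /tmp/ilo.cmds
```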