If you run:
# powermt display dev=all
and you notice "dead" paths, then these are the commands to run to set these paths back to "alive" again, of course, AFTER ensuring that any SAN-related issues have been resolved.
To have PowerPath scan all devices and mark any dead paths as alive again, if it finds that a path is in fact capable of performing I/O, run:
# powermt restore
To delete any dead paths, and to reconfigure them again:
# powermt reset
# powermt config
Or you could run:
# powermt check
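The dead-path check above is easy to script. Below is a minimal sketch; count_dead is a made-up helper name, and it simply counts the lines in the powermt output that report a dead path:

```shell
# count_dead reads `powermt display dev=all` output on stdin and
# prints the number of lines reporting a dead path.
count_dead() {
    grep -cw dead
}

# Example usage (on a host with PowerPath installed):
#   if [ "$(powermt display dev=all | count_dead)" -gt 0 ] ; then
#       powermt restore
#   fi
```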
From powerlink.emc.com:
- Before making any changes, collect host logs to document the current configuration. At a minimum, save the following:
inq, lsdev -Cc disk, lsdev -Cc adapter, lspv, and lsvg
- Shutdown the application(s), unmount the file system(s), and varyoff all volume groups except for rootvg. Do not export the volume groups.
# varyoffvg <vg_name>
Check with lsvg -o (confirm that only rootvg is varied on)
If PowerPath is not installed, skip all steps involving the powermt command and hdiskpower devices.
- For CLARiiON configuration, if Navisphere Agent is running, stop it:
# /etc/rc.agent stop
- Remove paths from the PowerPath configuration:
# powermt remove hba=all
- Delete all hdiskpower devices:
# lsdev -Cc disk -Fname | grep power | xargs -n1 rmdev -dl
- Remove the PowerPath driver instance:
# rmdev -dl powerpath0
- Delete all hdisk devices:
For Symmetrix devices, use this command:
# lsdev -CtSYMM* -Fname | xargs -n1 rmdev -dl
For CLARiiON devices, use this command:
# lsdev -CtCLAR* -Fname | xargs -n1 rmdev -dl
- Confirm with lsdev -Cc disk that there are no EMC hdisks or hdiskpowers.
- Remove all Fibre Channel driver instances:
# rmdev -Rdl fscsiX
(X being the driver instance number, e.g. 0, 1, 2, etc.)
- Verify through lsdev -Cc driver that there are no more Fibre Channel driver instances (fscsi).
- Put the adapter instances into the Defined state:
# rmdev -l fcsX
(X being the adapter instance number, e.g. 0, 1, 2, etc.)
- Create the hdisk entries for all EMC devices:
# emc_cfgmgr
or:
# cfgmgr -vl fcsX
(X being each adapter instance that was rebuilt). Skip the next step if PowerPath is not installed.
- Configure all EMC devices into PowerPath:
# powermt config
- Check the system to see if it now displays correctly:
# powermt display
# powermt display dev=all
# lsdev -Cc disk
# /etc/rc.agent start
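To make the final check easier to eyeball, you can filter the lsdev output down to the EMC devices and confirm they were all rebuilt. A small sketch; emc_disks is a made-up helper name:

```shell
# emc_disks reads `lsdev -Cc disk` output on stdin and prints only
# the EMC-related lines (hdiskpower, SYMM or CLAR device types).
emc_disks() {
    grep -Ei 'power|symm|clar'
}

# Example usage:
#   lsdev -Cc disk | emc_disks
```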
An easy way to see the status of your SAN devices is by using the following command:
# powermt display
Symmetrix logical device count=6
CLARiiON logical device count=0
Hitachi logical device count=0
Invista logical device count=0
HP xp logical device count=0
Ess logical device count=0
HP HSx logical device count=0
==============================================================================
----- Host Bus Adapters -------  ------ I/O Paths ------  ------ Stats ------
###  HW Path          Summary    Total      Dead  IO/Sec  Q-IOs  Errors
==============================================================================
  0  fscsi0           optimal        6         0       -      0       0
  1  fscsi1           optimal        6         0       -      0       0
To get more information on the disks, use:
# powermt display dev=all
It will sometimes occur that a file system reports storage to be in use, while you're unable to find which file exactly is using that storage. This may occur when a process has used disk storage, and is still holding on to it, without the file actually being there anymore for whatever reason.
A good way to resolve such an issue is to reboot the server. This way, you'll be sure the process is killed, and the disk storage space is released. However, if you don't want to use such drastic measures, here's a little script that may help you find the process that may be responsible for an inode without a filename. Make sure you have lsof installed on your server.
#!/usr/bin/ksh
# Scan a file system for open inodes that no longer have an
# associated filename (i.e. deleted files still held open by a
# process). Pass the file system to scan as the first argument.
FILESYSTEM=$1
LSOF=/usr/sbin/lsof
# Loop over all open inodes in the file system,
# as reported by lsof.
for i in `$LSOF -Fi $FILESYSTEM | grep '^i' | sed 's/^i//'` ; do
   # Use find to look up a filename for this inode
   # (staying within this file system with -xdev).
   if find $FILESYSTEM -xdev -inum $i | grep -q . ; then
      :   # A filename exists; nothing to report.
   else
      # No filename found; this inode is a suspect,
      # so show the lsof output for it.
      echo "Inode $i does not have an associated filename:"
      $LSOF $FILESYSTEM | grep -w -e "$i" -e COMMAND
   fi
done
Everybody is usually quite familiar with how to open an X11 GUI window on a Windows PC. It involves running an X server on the PC, for example Xming. Make sure you have PuTTY installed on your PC before installing Xming, and install Xming with all default settings. Then run XLaunch on your PC, set the display number to a higher value, for example "10", and check "No Access Control".
Next, log in to the UNIX host through PuTTY: before opening the session, go to "Connection" -> "SSH" -> "X11" in PuTTY and select "Enable X11 forwarding", then click "Open". Once logged in, set the DISPLAY variable to the IP address of your PC plus the display number, for example:
# export DISPLAY="154.18.20.31:10"
And then, to test, run xclock or xeyes:
# xeyes
The program xeyes should open on your Windows desktop.
Now, how do you open an X window if you have to go through a jumpserver first to get to the UNIX server where you would like to start an X-based program? That's not too difficult either. After logging in on the UNIX jumpserver, following the procedure described above, issue the following command:
# ssh -X -Y -C otherunixhost
Of course, replace "otherunixhost" with the hostname of the UNIX server you'd like to reach through your jump server. Then, again, run "xeyes" or "xclock" to test; it should open on your PC. You now have X11 forwarding from a UNIX server, through a jumpserver, to your PC: in fact, double X11 forwarding.
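If you do this regularly, the same forwarding options can be made permanent in your ~/.ssh/config on the jumpserver, so a plain ssh suffices. A minimal sketch, using the placeholder hostname "otherunixhost" from above:

```
Host otherunixhost
    ForwardX11 yes
    ForwardX11Trusted yes
    Compression yes
```

These three options correspond to the -X, -Y and -C command line flags respectively.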
With POWER6, the default addresses of the service processors have changed. This only applies to environments where the managed system was powered on before the HMC was configured to act as a DHCP server. The service processors may get their IP addresses through three different mechanisms:
- Addresses received from a DHCP Server.
- Fixed addresses given to the interfaces using the ASMI.
- Default addresses if neither of the possibilities above is used.
The default addresses are different between POWER5 and POWER6 servers. With POWER5 we have the following addresses:
Port HMC1: 192.168.2.147/24
Port HMC2: 192.168.3.147/24
The POWER6 systems use the following addresses:
First Service Processor:
Port HMC1: 169.254.2.147/24
Port HMC2: 169.254.3.147/24
Second Service Processor:
Port HMC1: 169.254.2.146/24
Port HMC2: 169.254.3.146/24
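As a memory aid, the address tables above can be folded into a tiny helper. This is just a sketch; sp_default_addr is a made-up name, taking the POWER generation (5 or 6), the HMC port (1 or 2) and, for POWER6, the service processor number (1 or 2):

```shell
# sp_default_addr prints the default service processor address for a
# given POWER generation, HMC port and (POWER6 only) SP number,
# following the address tables above.
sp_default_addr() {
    gen=$1 ; port=$2 ; sp=${3:-1}
    case $gen in
        5) echo "192.168.$((port + 1)).147" ;;
        6) echo "169.254.$((port + 1)).$((148 - sp))" ;;
    esac
}

# Example usage:
#   sp_default_addr 6 1 2
```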
Link:
System p Operations Guide for ASMI and for Nonpartitioned Systems.
If you decide to update to HMC release 7.3.5, Fix Central only supplies you with the ISO images.
This procedure describes how you can update your HMC using the network without having to sit physically in front of the console.
First, check whether this new HMC level is supported by the firmware levels of your managed systems using this link. If you're certain you can upgrade to V7.3.5, make sure to download all the required mandatory fixes from IBM Fix Central. Don't download the actual base level of HMC V7.3.5 (about 3 GB); we'll download that directly to the HMC later on.
Then, perform the "Save upgrade data" task using either the Web interface or the command line. Then get the required files from the IBM server using ftp protocol using the following command:
# getupgfiles -h ftp.software.ibm.com -u anonymous --passwd ftp \
  -d /software/server/hmc/network/v7350
Hint: If this procedure gets interrupted for any reason, reboot your HMC before restarting it. Otherwise, some files will remain in the local download directory, which will lead to incorrect checksums.
You can check the progress of the procedure using the command ls -l /hmcdump in a different terminal. Once it has finished, you will see a prompt without any additional messages and the directory will be empty (the files will be copied to a different location).
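That progress check can also be scripted as a small polling loop. A sketch, with wait_empty as a made-up helper; on the HMC you would run it against /hmcdump in that second terminal:

```shell
# wait_empty polls a directory until it no longer contains any files,
# then reports that the downloads have been moved away.
wait_empty() {
    dir=$1
    while ls "$dir" 2>/dev/null | grep -q . ; do
        sleep 30
    done
    echo "$dir is empty"
}

# Example usage on the HMC:
#   wait_empty /hmcdump
```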
Then tell the HMC to boot from an alternate media by issuing the following command:
# chhmc -c altdiskboot -s enable --mode upgrade
Finally reboot your HMC with the following command from the console:
# hmcshutdown -r -t now
The installation should start automatically with the reboot. Once it has finished, you should be able to log in remotely again. The whole procedure takes up to one hour. Afterwards, you should in any case apply the mandatory efixes for HMC V7.3.5, supplied as ISO images; you can install these fixes through the HMC. For more information, please visit this page.
If your HMC is located behind a firewall and your only access is through SSH, then you have to use SSH tunneling to get browser-based access to your HMC. The ports you need to use for setting up the SSH tunnel are: 22, 23, 8443, 9960, 9735, 657, 443, 2300, 2301, 2302 and 12443. This applies to version 7 and up of the HMC. For example, if you're using a jump server to get access to the HMC, you need to run:
# ssh -l user -g -L 12443:10.48.32.99:12443 -L 8443:10.48.32.99:8443 -L 9960:10.48.32.99:9960 -L 9735:10.48.32.99:9735 -L 2300:10.48.32.99:2300 -L 2301:10.48.32.99:2301 -L 443:10.48.32.99:443 -L 2302:10.48.32.99:2302 -L 657:10.48.32.99:657 -L 22:10.48.32.99:22 -L 23:10.48.32.99:23 jumpserver.domain.com -N
When you've run the command above (and have logged in to your jumpserver), then point the browser to https://jumpserver.domain.com.
You can do something similar within PuTTY on your desktop system. Basically create a new PuTTY session to your HMC, and then in the SSH tunnel section, enter an entry for each port to the HMC, e.g. add port 12443 to 10.48.32.99:12443. Repeat this for all ports mentioned above and then save your PuTTY session. After that, login to your session, and open a browser to https://localhost, which should then redirect you to your HMC's web GUI.
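Since the port list is long, the ssh command line can also be generated from it. A small sketch; the HMC address and jumpserver name are the example values from above, so adjust them to your environment:

```shell
# Build one -L forwarding option per HMC port, then print the
# resulting ssh command line (replace echo with ssh to run it).
HMC=10.48.32.99
PORTS="22 23 443 657 2300 2301 2302 8443 9735 9960 12443"
OPTS=""
for p in $PORTS ; do
    OPTS="$OPTS -L $p:$HMC:$p"
done
echo "ssh -l user -g -N$OPTS jumpserver.domain.com"
```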
Linux allows binding multiple network interfaces into a single channel/NIC using a special kernel module called bonding. According to the official bonding documentation, the Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
Setting up bonding is easy with RHEL v4.0. Red Hat Linux stores network configuration in the /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Append the following lines to it:
DEVICE=bond0
IPADDR=192.168.1.20
NETWORK=192.168.1.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
Replace the above IP address with your actual IP address. Save the file and exit to the shell prompt. Now open the configuration files for eth0 and eth1 in the same directory using the vi text editor, and make sure the file reads as follows for the eth0 interface:
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Repeat the same for the ifcfg-eth1 file; of course, set the DEVICE to eth1. Then make sure that the following two lines are added to either /etc/modprobe.conf or /etc/modules.conf (see this page or this page for more information):
alias bond0 bonding
options bond0 mode=1 miimon=100
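Incidentally, writing the two nearly identical slave files above by hand is error-prone, so you could generate them in a loop. A hedged sketch: make_slave_cfg is a made-up name, and the target directory is parameterized so it can be tried outside /etc first:

```shell
# make_slave_cfg writes an ifcfg-<dev> slave file for bond0 into the
# given directory, matching the layout shown above.
make_slave_cfg() {
    dir=$1 ; dev=$2
    cat > "$dir/ifcfg-$dev" <<EOF
DEVICE=$dev
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
EOF
}

# Example usage (the real directory on RHEL):
#   for dev in eth0 eth1 ; do
#       make_slave_cfg /etc/sysconfig/network-scripts $dev
#   done
```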
Then load the bonding module:
# modprobe bonding
Restart networking service in order to bring up bond0 interface:
# service network restart
Verify everything is working:
# less /proc/net/bonding/bond0
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:59
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:63
Check the relation between vpaths and hdisks:
# lsvpcfg
Check the status of the adapters according to SDD:
# datapath query adapter
Check on stale partitions:
# lsvg -o | lsvg -i | grep -i stale
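The stale-partition figure can also be extracted per volume group from the lsvg output. A sketch; stale_pps is a made-up helper that pulls the "STALE PPs" field from `lsvg <vg>` output read on stdin:

```shell
# stale_pps reads `lsvg <vg>` output on stdin and prints the number
# of stale physical partitions (the "STALE PPs:" field).
stale_pps() {
    sed -n 's/.*STALE PPs: *\([0-9][0-9]*\).*/\1/p'
}

# Example usage:
#   for vg in $(lsvg -o) ; do
#       echo "$vg: $(lsvg $vg | stale_pps) stale PP(s)"
#   done
```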