The vSphere web GUI is a nice visual tool, but if you need to retrieve vCenter information in bulk or perform mass operations across VMs, a command-line tool such as govc is invaluable. You can find the repo for govc, along with installation instructions, at https://github.com/vmware/govmomi/tree/master/govc. govc is written in Go, so it runs on Linux as well as most other platforms.
To perform a quick install on Linux, run this command:
$ curl -L -o - \
  "https://github.com/vmware/govmomi/releases/latest/download/govc_$(uname -s)_$(uname -m).tar.gz" \
  | sudo tar -C /usr/local/bin -xvzf - govc
Next, set up basic connectivity by exporting a set of environment variables, so the CLI knows how to connect to your vCenter:
# vCenter host
export GOVC_URL=myvcenter.name.com
# vCenter credentials
export GOVC_USERNAME=myuser
export GOVC_PASSWORD=MyP4ss
# disable cert validation
export GOVC_INSECURE=true
Next, you can try out a few basic commands:
$ govc about
Name: VMware ESXi
Vendor: VMware, Inc.
Version: 6.7.0
Build: 8169922
OS type: vmnix-x86
API type: HostAgent
API version: 6.7
Product ID: embeddedEsx
UUID
$ govc datacenter.info
Name: mydc
Path: /mydc
Hosts: 1
Clusters: 0
Virtual Machines: 3
Networks: 1
Datastores: 1
$ govc ls
/mydc/vm
/mydc/network
/mydc/host
/mydc/datastore
Next, set a variable, $dc, so that we can use it later:
$ dc=$(govc ls /)
Now you can request various information from the vCenter. For example:
Network:
$ govc ls -l=true $dc/network
ESXi Cluster:
# cluster name
govc ls $dc/host
# details on cluster, all members and their cpu/mem utilization
govc host.info [clusterPath]
# all members listed (type: HostSystem, ResourcePool)
govc ls -l=true [clusterPath]
# for each cluster member of type HostSystem, individual stats
govc host.info [memberPath]
Datastores:
# top level datastores (type: Datastore and StoragePod)
govc ls -l=true $dc/datastore
# for atomic Datastore type, get capacity
govc datastore.info [datastorePath]
# get StoragePod overall utilization
govc datastore.cluster.info [storagePodPath]
# get list of storage pod members
govc ls [storagePodPath]
# then get capacity of each member
govc datastore.info [storagePodMemberPath]
VM information:
# show basic info on any VM names that start with 'myvm'
govc vm.info myvm*
# show basic info on single VM
govc vm.info myvm-001
# use full path to get detailed VM metadata
vmpath=$(govc vm.info myvm-001 | grep "Path:" | awk '{print $2}')
govc ls -l -json $vmpath
Shut down a VM, power it back up:
# gracefully shutdown guest OS using tools
govc vm.power -s=true myvm-001
# force immediate powerdown
govc vm.power -off=true myvm-001
# power VM back on
govc vm.power -on=true myvm-001
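Because govc commands accept VM paths, mass operations are easy to script. Below is a minimal sketch (the datacenter path and the VM name pattern are assumptions for illustration) that gracefully shuts down every VM whose name starts with 'myvm':
# hypothetical: gracefully shut down all VMs under /mydc/vm whose name starts with myvm
for vm in $(govc ls /mydc/vm | grep '/myvm'); do
    echo "Shutting down $vm"
    govc vm.power -s=true "$vm"
done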
There is no API to rename a domain (or system) using virsh. The well-known graphical tool virt-manager ("Virtual Machine Manager") on Red Hat Enterprise Linux therefore does not offer the option to rename a domain either.
In order to do that, you have to stop the virtual machine and edit the XML file as follows:
# virsh dumpxml machine.example.com > machine.xml
# vi machine.xml
Edit the name between the <name> tags at the beginning of the XML file.
When completed, remove the domain and define it again:
# virsh undefine machine.example.com
# virsh define machine.xml
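If you need to do this more than once, the same steps can be wrapped in a small script. A minimal sketch, assuming the domain is already shut down (the old and new names below are hypothetical):
#!/bin/sh
# rename a libvirt domain by dumping, editing and redefining its XML
OLD=machine.example.com      # hypothetical current domain name
NEW=newmachine.example.com   # hypothetical new domain name
virsh dumpxml "$OLD" > /tmp/"$OLD".xml
# replace the name between the <name> tags
sed "s|<name>$OLD</name>|<name>$NEW</name>|" /tmp/"$OLD".xml > /tmp/"$NEW".xml
virsh undefine "$OLD"
virsh define /tmp/"$NEW".xml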
Virtual clients running on an IVM (Integrated Virtualization Manager) system have neither a directly attached serial console nor a virtual terminal window that can be opened via an HMC. So how do you access the console?
You can do this as the padmin user on the VIOS that is serving the client whose console you need. Log on to the VIOS and switch to user padmin:
# su - padmin
Then run the lssyscfg command to list the available LPARs and their IDs on this VIOS:
# lssyscfg -r lpar -F name,lpar_id
Alternatively you can log on to the IVM using a web browser and click on "View/Modify Partitions" which will also show LPAR names and their IDs.
Use the ID of the LPAR you wish to access:
# mkvt -id [lparid]
This should open a console to the LPAR. If you receive a message "Virtual terminal is already connected", then the session is already in use. If you are sure no one else is using it, you can use the rmvt command to force the session to close.
# rmvt -id [lparid]
After that you can try the mkvt command again.
When finished, log off and type "~." (tilde dot) to end the session. Sometimes this will also close the session to the VIOS itself, and you may need to log on to the VIOS again.
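If you already know the name of the LPAR, you can filter the lssyscfg output for it. A minimal sketch, using a hypothetical LPAR named myclient with ID 4:
# lssyscfg -r lpar -F name,lpar_id | grep myclient
myclient,4
# mkvt -id 4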
To create a system backup of a Virtual I/O Server (VIOS), run the following commands (as user root):
# /usr/ios/cli/ioscli viosbr -backup -file vios_config_bkup \
  -frequency daily -numfiles 10
# /usr/ios/cli/ioscli backupios -nomedialib -file /mksysb/$(hostname).mksysb -mksysb
The first command (viosbr) will create a backup of the configuration information to /home/padmin/cfgbackups. It will also schedule the command to run every day, and keep up to 10 files in /home/padmin/cfgbackups.
The second command, backupios, is the mksysb equivalent for a Virtual I/O Server. It creates the mksysb image in the /mksysb folder, and excludes any ISO repository in rootvg, as well as anything else listed in /etc/exclude.rootvg.
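As a hedged sketch of how you would work with these backups later (the backup file name below is hypothetical; check the viosbr documentation for your VIOS level), you can list the configuration backups, inspect one, and restore from it:
# /usr/ios/cli/ioscli viosbr -view -list
# /usr/ios/cli/ioscli viosbr -view -file vios_config_bkup.01.tar.gz
# /usr/ios/cli/ioscli viosbr -restore -file vios_config_bkup.01.tar.gz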
Once you've successfully set up live partition mobility on a couple of servers, you may want to script the live partition mobility migrations, and at that time, you'll need the commands to perform this task on the HMC.
In the example below, we assume you have multiple managed systems managed through one HMC; without that, there would be nowhere to move an LPAR to.
First of all, to keep an eye on the LPAR that is to be migrated, you may want to start the nworms program, a small tool that displays wriggling worms along with the serial number of the managed system the LPAR is running on. The worms change color as soon as the LPM migration has completed.
For example, to start nworms with 5 worms and an acceptable speed on a Power7 system, run:
# ./nworms 5 50000
Next, log on through ssh to your HMC, and see what managed systems are out there:
> lssyscfg -r sys -F name
Server1-8233-E8B-SN066001R
Server2-8233-E8B-SN066002R
Server3-8233-E8B-SN066003R
There are three managed systems in the example above.
Now list the status of the LPARs on the source system, assuming you want to migrate from Server1-8233-E8B-SN066001R, moving an LPAR to Server2-8233-E8B-SN066002R:
> lslparmigr -r lpar -m Server1-8233-E8B-SN066001R
name=vios1,lpar_id=3,migration_state=Not Migrating
name=vios2,lpar_id=2,migration_state=Not Migrating
name=lpar1,lpar_id=1,migration_state=Not Migrating
The example above shows there are 2 VIO servers and 1 LPAR on server Server1-8233-E8B-SN066001R.
Validate whether it is possible to move lpar1 to Server2-8233-E8B-SN066002R:
> migrlpar -o v -t Server2-8233-E8B-SN066002R \
  -m Server1-8233-E8B-SN066001R --id 1
> echo $?
0
The example above shows a validation (-o v) to the target server (-t) from the source server (-m) for the LPAR with ID 1, which we know from the lslparmigr command is our LPAR lpar1. If the command returns a zero, the validation has completed successfully.
Now perform the actual migration:
> migrlpar -o m -t Server2-8233-E8B-SN066002R \
  -m Server1-8233-E8B-SN066001R -p lpar1 &
This will take a couple of minutes; the larger the LPAR's memory, the longer the migration takes.
To check the state:
> lssyscfg -r lpar -m Server1-8233-E8B-SN066001R -F name,state
Or to see the number of bytes transmitted and remaining to be transmitted, run:
> lslparmigr -r lpar -m Server1-8233-E8B-SN066001R -F name,migration_state,bytes_transmitted,bytes_remaining
Or to see the reference codes (which you can also see on the HMC gui):
> lsrefcode -r lpar -m Server2-8233-E8B-SN066002R
lpar_name=lpar1,lpar_id=1,time_stamp=06/26/2012 15:21:24,
refcode=C20025FF,word2=00000000
lpar_name=vios1,lpar_id=2,time_stamp=06/26/2012 15:21:47,
refcode=,word2=03400000,fru_call_out_loc_codes=
lpar_name=vios2,lpar_id=3,time_stamp=06/26/2012 15:21:33,
refcode=,word2=03D00000,fru_call_out_loc_codes=
After a few minutes the lslparmigr command will indicate that the migration has been completed. And now that you know the commands, it's fairly easy to script the migration of multiple LPARs.
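A minimal sketch of such a script, run from a host with ssh access to the HMC (the HMC address and the list of LPAR names are hypothetical; the managed system names are the ones from the example above):
#!/bin/sh
# validate and migrate a list of LPARs from Server1 to Server2 via the HMC
HMC=hscroot@myhmc                  # hypothetical HMC user and host
SRC=Server1-8233-E8B-SN066001R
TGT=Server2-8233-E8B-SN066002R
for lpar in lpar1 lpar2 lpar3; do  # hypothetical LPAR names
    if ssh $HMC migrlpar -o v -t $TGT -m $SRC -p $lpar; then
        echo "Validation OK, migrating $lpar"
        ssh $HMC migrlpar -o m -t $TGT -m $SRC -p $lpar
        ssh $HMC lslparmigr -r lpar -m $SRC -F name,migration_state | grep $lpar
    else
        echo "Validation of $lpar failed, skipping"
    fi
done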
The default value of hcheck_interval for VSCSI hdisks is 0, meaning that health checking is disabled. The hcheck_interval attribute of an hdisk can only be changed online if the volume group to which the hdisk belongs is not active. If the volume group is active, the ODM value of hcheck_interval can still be altered in the CuAt class, as shown in the following example for hdisk0:
# chdev -l hdisk0 -a hcheck_interval=60 -P
The change will then be applied once the system is rebooted. However, it is possible to change the default value of the hcheck_interval attribute in the PdAt ODM class. As a result, you won't have to worry about its value anymore and newly discovered hdisks will automatically get the new default value, as illustrated in the example below:
# odmget -q 'attribute = hcheck_interval AND uniquetype = PCM/friend/vscsi' PdAt | \
  sed 's/deflt = "0"/deflt = "60"/' | \
  odmchange -o PdAt -q 'attribute = hcheck_interval AND uniquetype = PCM/friend/vscsi'
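To verify that the new default is in place and picked up, query the PdAt class again and check the attribute on a newly discovered disk (hdisk1 below is a hypothetical example):
# odmget -q 'attribute = hcheck_interval AND uniquetype = PCM/friend/vscsi' PdAt
# lsattr -El hdisk1 -a hcheck_interval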
Product                         | Version | End of Support
PowerVM VIOS Enterprise Edition | 2.2.x   | not announced
PowerVM VIOS Express Edition    | 2.2.x   | not announced
PowerVM VIOS Standard Edition   | 2.2.x   | not announced
PowerVM VIOS Enterprise Edition | 2.1.x   | Sep 30, 2012
PowerVM VIOS Express Edition    | 2.1.x   | Sep 30, 2012
PowerVM VIOS Standard Edition   | 2.1.x   | Oct 30, 2012
Virtual I/O Server              | 1.5.x   | Sep 30, 2011
Virtual I/O Server              | 1.4.x   | Sep 30, 2010
Virtual I/O Server              | 1.3.x   | Sep 30, 2009
Virtual I/O Server              | 1.2.x   | Sep 30, 2008
Virtual I/O Server              | 1.1.x   | Sep 30, 2008
Source: http://www-01.ibm.com/software/support/aix/lifecycle/index.html
The following is a description of how you can set up a private network between two VIO clients on one hardware frame.
Servers to set up connection: server1 and server2
Purpose: To be used for Oracle interconnect (for use by Oracle RAC/CRS)
IP Addresses assigned by network team:
192.168.254.141 (server1priv)
192.168.254.142 (server2priv)
Subnetmask: 255.255.255.0
VLAN to be set up: PVID 4. This number is basically randomly chosen; it could have been 23 or 67 or anything else, as long as it is not yet in use. Proper documentation of your VIO setup and the defined networks is therefore important.
Steps to set this up:
- Log in to HMC GUI as hscroot.
- Change the default profile of server1, and add a new virtual Ethernet adapter. Set the port virtual Ethernet to 4 (PVID 4). Select "This adapter is required for virtual server activation". Configuration -> Manage Profiles -> Select "Default" -> Actions -> Edit -> Select "Virtual Adapters" tab -> Actions -> Create Virtual Adapter -> Ethernet adapter -> Set "Port Virtual Ethernet" to 4 -> Select "This adapter is required for virtual server activation." -> Click Ok -> Click Ok -> Click Close.
- Do the same for server2.
- Now do the same for both VIO clients, but this time via "Dynamic Logical Partitioning". This way, we get the virtual adapter without having to restart the nodes (so far, we have only updated the default profiles of both servers).
- Run cfgmgr on both nodes, and see that you now have an extra Ethernet adapter, in my case ent1.
- Run "lscfg -vl ent1", and note the adapter ID (in my case C5) on both nodes. This should match the adapter IDs as seen on the HMC.
- Now configure the IP address on this interface on both nodes (see the sketch after this list).
- Add the entries for server1priv and server2priv in /etc/hosts on both nodes.
- Run a ping: ping server2priv (from server1) and vice versa.
- Done!
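As referenced above, a minimal sketch of the IP configuration step on server1 (using the addresses assigned by the network team; use mktcpip or smitty tcpip instead if you prefer):
# chdev -l en1 -a netaddr=192.168.254.141 -a netmask=255.255.255.0 -a state=up
# echo "192.168.254.141 server1priv" >> /etc/hosts
# echo "192.168.254.142 server2priv" >> /etc/hosts
# ping server2priv
On server2, do the same with 192.168.254.142 as the netaddr.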
Steps to throw it away:
- On each node: deconfigure the en1 interface:
# ifconfig en1 detach
- Rmdev the devices on each node:
# rmdev -dl en1
# rmdev -dl ent1
- Remove the virtual adapter with ID 5 from the default profile in the HMC GUI for server1 and server2.
- DLPAR the adapter with ID 5 out of server1 and server2.
- Run cfgmgr on both nodes to confirm the adapter does not re-appear. Check with:
# lsdev -Cc adapter
- Done!
The most popular innovation of IBM AIX Version 6.1 is clearly workload partitioning (WPARs). Once you get past the marketing hype, you'll need to determine the value that WPARs can provide in your environment. What can WPARs do that logical partitions (LPARs) could not? How and when should you use WPARs? Equally important, when should you not use workload partitioning? Finally, how do you create, configure, and administer workload partitions?
For a very good introduction to WPARs, please refer to the following article: https://www.ibm.com/developerworks/aix/library/au-wpar61aix/.
This article describes the differences between system and application WPARs, the various commands available, such as mkwpar, lswpar, startwpar and clogin. It also describes how to create and manage file systems and users, and it discusses the WPAR manager. It ends with an excellent list of references for further reading.
Prior to the introduction of POWER5 systems, it was only possible to create as many separate logical partitions (LPARs) on an IBM system as there were physical processors. Given that the largest IBM eServer pSeries POWER4 server, the p690, had 32 processors, 32 partitions were the most anyone could create. A customer could order a system with enough physical disks and network adapter cards, so that each LPAR would have enough disks to contain operating systems and enough network cards to allow users to communicate with each partition.
The Advanced POWER Virtualization feature of POWER5 platforms makes it possible to allocate fractions of a physical CPU to a POWER5 LPAR. Using virtual CPUs and virtual I/O, a user can create many more LPARs on a p5 system than there are CPUs or I/O slots. The Advanced POWER Virtualization feature accounts for this by allowing users to create shared network adapters and virtual SCSI disks. Customers can use these virtual resources to provide disk space and network adapters for each LPAR they create on their POWER5 system.

There are three components of the Advanced POWER Virtualization feature: Micro-Partitioning, shared Ethernet adapters, and virtual SCSI. In addition, AIX 5L Version 5.3 allows users to define virtual Ethernet adapters permitting inter-LPAR communication.
Micro-Partitioning
An element of the IBM POWER Virtualization feature called Micro-Partitioning can divide a single processor into many different processors. In POWER4 systems, each physical processor is dedicated to an LPAR. This concept of dedicated processors is still present in POWER5 systems, but so is the concept of shared processors. A POWER5 system administrator can use the Hardware Management Console (HMC) to place processors in a shared processor pool. Using the HMC, the administrator can assign fractions of a CPU to individual partitions. If one LPAR is defined to use processors in the shared processor pool, when those CPUs are idle, the POWER Hypervisor makes them available to other partitions. This ensures that these processing resources are not wasted. Also, the ability to assign fractions of a CPU to a partition means it is possible to partition POWER5 servers into many different partitions. Allocation of physical processor and memory resources on POWER5 systems is managed by a system firmware component called the POWER Hypervisor.
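For example, the shared processor pool and the processing units currently assigned to each partition can be inspected from the HMC command line; a sketch, using a hypothetical managed system name (exact output fields vary by HMC level):
> lshwres -r proc -m mysystem --level pool
> lshwres -r proc -m mysystem --level lpar -F lpar_name,curr_proc_mode,curr_proc_units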
Virtual Networking
Virtual networking on POWER5 hardware consists of two main capabilities. One capability is provided by a software IEEE 802.1q (VLAN) switch that is implemented in the Hypervisor on POWER5 hardware. Users can use the HMC to add Virtual Ethernet adapters to their partition definitions. Once these are added and the partitions booted, the new adapters can be configured just like real physical adapters, and the partitions can communicate with each other without having to connect cables between the LPARs. Users can separate traffic from different VLANs by assigning different VLAN IDs to each virtual Ethernet adapter. Each AIX 5.3 partition can support up to 256 Virtual Ethernet adapters.
In addition, a part of the Advanced POWER virtualization virtual networking feature allows users to share physical adapters between logical partitions. These shared adapters, called Shared Ethernet Adapters (SEAs), are managed by a Virtual I/O Server partition which maps physical adapters under its control to virtual adapters. It is possible to map many physical Ethernet adapters to a single virtual Ethernet adapter, thereby eliminating a single physical adapter as a point of failure in the architecture.
There are a few things users of virtual networking need to consider before implementing it. First, virtual networking ultimately uses more CPU cycles on the POWER5 machine than when physical adapters are assigned to a partition. Users should consider assigning a physical adapter directly to a partition when heavy network traffic is predicted over a certain adapter. Secondly, users may want to take advantage of larger MTU sizes that virtual Ethernet allows, if they know that their applications will benefit from the reduced fragmentation and better performance that larger MTU sizes offer. The MTU size limit for SEA is smaller than Virtual Ethernet adapters, so users will have to carefully choose an MTU size so that packets are sent to external networks with minimum fragmentation.
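On the Virtual I/O Server itself, a Shared Ethernet Adapter is typically created with the mkvdev command, bridging a physical adapter to a virtual trunk adapter. A sketch, in which the adapter names and the default PVID are assumptions for illustration:
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1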
Virtual SCSI
The Advanced POWER Virtualization feature called virtual SCSI allows access to physical disk devices which are assigned to the Virtual I/O Server (VIOS). The system administrator uses VIOS logical volume manager commands to assign disks to volume groups. The administrator creates logical volumes in the Virtual I/O Server volume groups. Either these logical volumes or the physical disks themselves may ultimately appear as physical disks (hdisks) to the Virtual I/O Server's client partitions, once they are associated with virtual SCSI host adapters. While the Virtual I/O Server software is packaged as an additional software bundle that a user purchases separately from the AIX 5.3 distribution, the virtual I/O client software is a part of the AIX 5.3 base installation media, so an administrator does not need to install any additional filesets on a Virtual SCSI client partition.
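As a sketch of the mapping described above, on the Virtual I/O Server a logical volume can be carved out of a volume group and exported to a client partition over a virtual SCSI server adapter (all device names below are assumptions for illustration):
$ mkvg -f -vg rootvg_clients hdisk2
$ mklv -lv client1_rvg rootvg_clients 20G
$ mkvdev -vdev client1_rvg -vadapter vhost0 -dev vtscsi0
The client partition then sees the exported device as an ordinary hdisk after running cfgmgr.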