Topics: Scripting, Virtualization

Using govc

The vSphere web GUI is a nice visual tool, but if you need to retrieve vCenter information in bulk or perform mass operations across VMs, a command line tool such as govc is invaluable. You can find the repo for govc, along with installation instructions, at https://github.com/vmware/govmomi/tree/master/govc. govc is written in Go, which means it is supported on Linux as well as most other platforms.

To perform a quick install on Linux, run this command:

$ curl -L -o - \
"https://github.com/vmware/govmomi/releases/latest/download/govc_$(uname -s)_$(uname -m).tar.gz" \
| sudo tar -C /usr/local/bin -xvzf - govc
Next, you'll want to set up basic connectivity to the vCenter. For this purpose, you can use a set of environment variables, so the CLI knows how to connect:
# vCenter host
export GOVC_URL=myvcenter.name.com
# vCenter credentials
export GOVC_USERNAME=myuser
export GOVC_PASSWORD=MyP4ss
# disable cert validation
export GOVC_INSECURE=true
Next, you can try out a few basic commands:
$ govc about
Name:         VMware ESXi
Vendor:       VMware, Inc.
Version:      6.7.0
Build:        8169922
OS type:      vmnix-x86
API type:     HostAgent
API version:  6.7
Product ID:   embeddedEsx
UUID

$ govc datacenter.info
Name:                mydc
  Path:              /mydc
  Hosts:             1
  Clusters:          0
  Virtual Machines:  3
  Networks:          1
  Datastores:        1

$ govc ls
/mydc/vm
/mydc/network
/mydc/host
/mydc/datastore
Next, set a variable, dc, so that we can use it later:
$ dc=$(govc ls /)
Now you can request various information from the vCenter. For example:

Network:
$ govc ls -l=true $dc/network

ESXi Cluster:
# cluster name
govc ls $dc/host
# details on cluster, all members and their cpu/mem utilization
govc host.info [clusterPath]

# all members listed (type: HostSystem, ResourcePool)
govc ls -l=true [clusterPath]
# for each cluster member of type HostSystem, individual stats
govc host.info [memberPath]

Datastores:
# top level datastores (type: Datastore and StoragePod)
govc ls -l=true $dc/datastore

# for atomic Datastore type, get capacity
govc datastore.info [datastorePath]

# get StoragePod overall utilization
govc datastore.cluster.info [storagePodPath]
# get list of storage pod members
govc ls [storagePodPath]
# then get capacity of each member
govc datastore.info [storagePodMemberPath]

VM information:
# show basic info on any VM names that start with 'myvm'
govc vm.info myvm*

# show basic info on single VM
govc vm.info myvm-001

# use full path to get detailed VM metadata
vmpath=$(govc vm.info myvm-001 | grep "Path:" | awk '{print $2}')
govc ls -l -json $vmpath

Shut down VM, power up VM:
# gracefully shutdown guest OS using tools
govc vm.power -s=true myvm-001
# force immediate powerdown
govc vm.power -off=true myvm-001 

# power VM back on
govc vm.power -on=true myvm-001
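Because govc prints plain text, bulk operations are easy to script. Below is a minimal sketch, assuming the example datacenter /mydc and the "myvm" naming prefix used above; adjust both to your own inventory:

```shell
# Gracefully shut down every VM whose inventory path matches '/myvm'
# (the path /mydc/vm and the 'myvm' prefix are example names)
for vm in $(govc ls /mydc/vm | grep '/myvm'); do
    echo "Shutting down $vm"
    govc vm.power -s=true "$vm"
done
```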

Topics: Networking, Red Hat / Linux

Wget: Resume a broken download

Wget is a utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.

When downloading files using wget, you may find that a download gets interrupted, e.g. because of network or power related issues. Especially with large files this can be problematic, because when you re-issue the wget command, the download will start from the beginning again.

However... there is a "--continue" option available for wget, to continue getting a partially downloaded file. This is useful when you want to finish a download started by a previous instance of wget. Make sure you run the wget command with the --continue option in the same directory where the first download started.

For example:

# wget --continue https://repo.almalinux.org/8/isos/x86_64/AlmaLinux-8.3-x86_64-dvd.iso
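If the network is flaky, you can wrap the command in a loop that keeps resuming until the download completes. A sketch, using the same ISO URL:

```shell
# Keep resuming until wget exits successfully (exit code 0)
url="https://repo.almalinux.org/8/isos/x86_64/AlmaLinux-8.3-x86_64-dvd.iso"
until wget --continue "$url"; do
    echo "Download interrupted, retrying in 10 seconds..."
    sleep 10
done
```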

Topics: Docker, Kubernetes, Red Hat / Linux, System Admin

Prune old Docker data

When using Docker, or when using Docker as the container runtime for Kubernetes, unused data may build up over time on the system that runs Docker, for example on the worker nodes of a Kubernetes cluster. This unused data includes images that were once downloaded locally but are no longer used, for example after a Kubernetes deployment was removed. It can grow to quite a lot of data, and file systems may fill up over time because of it.

There is a simple Docker command that will prune all the unused data, and this command is:

# docker system prune -a
If you don't want to worry about pruning any unused Docker data, then schedule a cron job on your system as user root, like this:
0 */12 * * * docker system prune -a -f
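Before pruning, you can check how much space is actually reclaimable. A quick sketch, assuming the Docker CLI is installed and the daemon is running:

```shell
# Show disk usage per resource type (images, containers, volumes, build cache),
# including how much of it is reclaimable
docker system df

# Then prune everything unused, skipping the confirmation prompt with -f
docker system prune -a -f
```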

Topics: Red Hat / Linux, System Admin

Using curl with a proxy

You can tunnel through an HTTP proxy using curl, with the -p (--proxytunnel) command line option. This can be very useful if your organization uses a proxy to connect to the Internet.

What you'll need to know first is the full host name / URL of the proxy, as well as the port it is available on, for example:

proxy.example.com:80
Next, run curl using the following options to access a site on the Internet. The example below assumes that the proxy is proxy.example.com:80 - please replace with the actual hostname and port combination of your proxy. Also, the command below gets the main page of Google - please replace it with the URL you are trying to connect to.
# curl -p -x http://proxy.example.com:80 https://www.google.com
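Alternatively, curl honors the standard proxy environment variables, which saves typing the -x option on every invocation. A sketch, again using the hypothetical proxy.example.com:80:

```shell
# curl reads these variables when no -x option is given
export http_proxy=http://proxy.example.com:80
export https_proxy=http://proxy.example.com:80

# Plain curl invocations now go through the proxy
curl https://www.google.com
```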

Topics: Red Hat / Linux, System Admin

TMUX

TMUX is short for Terminal Multiplexer. It is a way to run commands in multiple windows at the same time, or to split the terminal window into multiple panes. Especially if you need to configure multiple nodes the same way, and thus have to run the same commands on different hosts, this tool can come in handy.

First, ensure it is installed.

# yum -y install tmux
Next, just start it, by running:
# tmux
This, in itself will not do much, except for displaying a bar at the bottom of the screen.

The key combination "CTRL + b" is the default prefix in TMUX. If you want to send a command to TMUX, press "CTRL + b" first, and then use any of the following keys:

"split pane horizontally
%split pane vertically
arrow keyswitch between panes
ccreate a new window
nmove to the next window (which you can divide into panes again)
pmove to the previous window

To exit a window, simply type exit, or hit "CTRL + d".

To enable synchronization, e.g. after logging into 3 nodes in 3 panes within a window, run:
CTRL+b
:
set synchronize-panes on
To undo this, go through it again:
CTRL+b
:
set synchronize-panes off
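The manual steps above can also be scripted. The sketch below opens one window with three panes, one SSH session per node, and enables synchronization; node1, node2 and node3 are placeholder hostnames:

```shell
# Start a detached session with the first pane, then split twice
tmux new-session -d -s nodes 'ssh node1'
tmux split-window -t nodes 'ssh node2'
tmux split-window -t nodes 'ssh node3'
tmux select-layout -t nodes even-vertical

# Type once, send the keystrokes to all three panes
tmux set-window-option -t nodes synchronize-panes on

# Attach to the session
tmux attach -t nodes
```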

Topics: Red Hat / Linux, System Admin

Display the number of CPU

To display the number of CPUs available on the system, use the following command:

# nproc
You can also use the following command:
# grep processor /proc/cpuinfo | wc -l
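A common use for this number is sizing parallel workloads, for example a parallel make build. A small sketch:

```shell
# Run make with one job per available CPU
cpus=$(nproc)
echo "Building with $cpus parallel jobs"
make -j"$cpus"
```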

Topics: Hardware, Red Hat / Linux, System Admin

Reset iDRAC from OS

Sometimes, e.g. after network related changes, it may be necessary to reset the iDRAC. But if the iDRAC is no longer reachable, or not responding, resetting it through its own interface is no longer possible.

As an alternative, one can reset the iDRAC from the OS using the following command:

# racadm racreset

Topics: Red Hat / Linux, Security

Monitor SSH logins

To monitor SSH logins on a Linux server, run the following command:

# journalctl -S @0 -u sshd
If you wish to monitor logins continuously, add the -f option. This will "tail" the output:
# journalctl -S @0 -u sshd -f
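To narrow the output to interesting events, pipe it through grep. For example, sshd logs failed password attempts as "Failed password for USER from IP port ...", so you can count attempts per source IP like this:

```shell
# Show only failed password attempts
journalctl -S @0 -u sshd | grep "Failed password"

# Count failed attempts per source IP; counting fields from the end
# ($(NF-3)) picks out the IP regardless of the journalctl prefix
journalctl -S @0 -u sshd | grep "Failed password" \
    | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn
```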

Topics: Networking, Security

Testing for open ports

Something every system administrator will have to do from time to time is test whether a certain port is open and reachable over the network.

There are a few ways to go about doing that.

The most common way is to use the nc or ncat or netcat utility. It can easily be used to test a port on a system. For example, to test if port 22 (for ssh) is open on a remote system, run:

# nc -zv systemname 22
Replace "systemname" with the hostname or IP address of the system to be tested. You may also test it locally on a system, by using "localhost".

This will show something like this (if the port is open):
# nc -zv localhost 22
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to ::1:22.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.
The nc utility is part of the nmap-ncat RPM package, and can be installed as follows:
# yum -y install nmap-ncat
If installing this utility is not an option, you can also directly test the port, by running this:
# bash -c "</dev/tcp/systemname/22"
If that works, no output is shown, and the return-code will be 0 (zero). If it doesn't work, because the port is closed, you'll see an error message, and the return-code will be 1 (one). For example:
# bash -c "</dev/tcp/localhost/23"
bash: connect: Connection refused
bash: /dev/tcp/localhost/23: Connection refused
# echo $?
1
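The /dev/tcp trick also lends itself to a quick scan of several ports in one go. A sketch; localhost and the port list are examples:

```shell
# Report each port as open or closed based on the connect exit status
for port in 22 80 443; do
    if bash -c "</dev/tcp/localhost/$port" 2>/dev/null; then
        echo "port $port is open"
    else
        echo "port $port is closed"
    fi
done
```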
Finally, one other method of testing for open ports, is by using telnet, if it is installed on the system. This can be done by specifying the hostname and the port to connect to:
$ telnet localhost 22
Trying ::1...
Connected to localhost.
Escape character is '^]'.
SSH-2.0-OpenSSH_7.4

Topics: Security, System Admin

Entropy

If you run this command on Linux:

# cat /proc/sys/kernel/random/entropy_avail
it returns a number that indicates how much "entropy" is available to the kernel. But what exactly is "entropy", and what unit is this entropy measured in? What is it used for? You may have heard that a low number of entropy is not good. How low is "low" and what "bad" things will happen if it is? What's a good range for it to be at? How is it determined?

Entropy is similar to "randomness". A Linux system gathers "real" random numbers by keeping an eye on different events: network activity, hard drive rotation speeds, a hardware random number generator (if available), key-clicks, and so on. It feeds those to the kernel entropy pool, which is used by /dev/random. Applications which use crypto functions use /dev/random as their entropy source, or in other words, their randomness source.

If /dev/random runs out of available entropy, it's unable to serve out more randomness and the application waiting for the randomness may stall until more random bits are available. On Red Hat Enterprise Linux systems, you can see that RPM package rng-tools is installed, and that rngd - the random number generator daemon - is active. This daemon feeds semi-random numbers from /dev/urandom to /dev/random in case /dev/random runs out of "real" entropy. On Ubuntu based systems, there is no rngd. If more entropy is needed you can install haveged, which achieves the same. Note that haveged is not available for Red Hat Enterprise Linux based systems, because these systems already have rngd.

Some applications (such as applications using encryption) need random numbers. You can generate random numbers using an algorithm - but although these seem random in one sense they are totally predictable in another. For instance if I give you the digits 58209749445923078164062862089986280348253421170679, they look pretty random. But if you realize they are actually digits of PI, then you may know the next one is going to be 8.

For some applications this is okay, but for other applications (especially security related ones) people want genuine unpredictable randomness - which can't be generated by an algorithm (i.e. program), since that is by definition predictable. This is a problem in that your computer essentially is a program, so how can it possibly get genuine random numbers? The answer is by measuring genuinely random events from the outside world - for example gaps between your key strokes and using these to inject genuine randomness into the otherwise predictable random number generator. The "entropy pool" could be thought of as the store of this randomness which gets built up by the keystrokes (or whatever is being used) and drained by the generation of random numbers.

The value stored in /proc/sys/kernel/random/entropy_avail is the measure of bits currently available to be read from /dev/random. It takes time for the computer to read entropy from its environment. If you have 4096 bits of entropy available and you "cat /dev/random", you can expect to be able to read 512 bytes of entropy (4096 bits) before the file blocks while it waits for more entropy. At first you'll get 512 bytes of random garbage, but then it will stop as the pool shrinks to zero, and little by little you'll see more random data trickle through.

This is not how people should operate /dev/random though. Normally developers will read a small amount of data, like 128 bits, and use that to seed some kind of PRNG algorithm. It's polite to not read any more entropy from /dev/random than you need to, since it takes so long to build up entropy, and it is considered valuable. Thus if you drain it by carelessly catting the file like above, you'll cause other applications that need to read from /dev/random to block.

A good amount of available entropy is usually between 2500 and 4096 bits. Entropy is considered to be low when it is below 1000.
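You can observe this behavior yourself. The sketch below checks the available entropy, reads a modest 128-bit seed (the "polite" amount described above) rather than draining the pool, and checks again:

```shell
# Check available entropy (in bits)
cat /proc/sys/kernel/random/entropy_avail

# Read a 16-byte (128-bit) seed and print it as hex
head -c 16 /dev/random | od -An -tx1

# Check again; the count may have dropped
cat /proc/sys/kernel/random/entropy_avail
```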
