Topics: Networking, Red Hat

Setting up a bonded network interface on RHEL 7

The following procedure describes how to set up a bonded network interface on Red Hat Enterprise Linux. It assumes that you already have a working single network interface and now wish to move the system to a bonded network interface set-up for network redundancy, for example by connecting two separate network interfaces, preferably on two different network cards in the server, to two different network switches. This protects you both against the failure of a network card in the server and against the failure of a network switch.
First, log in as user root on the console of the server. We are going to change the current network configuration to a bonded network configuration, and while doing so the system will temporarily lose network connectivity, so it is best to work from the console.

In this procedure, we'll be using network interfaces em1 and p3p1, located on two different cards, to get card redundancy (in case one of the network cards fails).

Let's assume that the IP address of the system is currently configured on network interface em1. You can verify that by running:

# ip a s
Also, we'll need to verify, using the ethtool command, that there is indeed a good link status on both the em1 and p3p1 network interfaces:
# ethtool em1
# ethtool p3p1
Run the following command to list the bonding module info (the bonding module should be available by default, so this is just to verify):
# modinfo bonding
Create copies of the current network files, just for safe-keeping:
# cd /etc/sysconfig/network-scripts
# cp ifcfg-em1 /tmp
# cp ifcfg-p3p1 /tmp
Now, create a new file ifcfg-bond0 in /etc/sysconfig/network-scripts. We'll configure the IP address of the system (the one that was configured previously on network interface em1) on a new bonded network interface, called bond0. Make sure to update the file with the correct IP address, gateway and network mask for your environment:
# cat ifcfg-bond0
BONDING_OPTS="mode=5 miimon=100"
The next thing to do is to create two more files, one for each network interface that will be a slave of the bonded network interface. In our example, those are em1 and p3p1.

Create file /etc/sysconfig/network-scripts/ifcfg-em1 (be sure to update the file for your environment, for example by using the correct UUID; you can find that in the copies you've made of the previous network interface files). In this file, you'll also specify that the bond0 interface is now the master.
# cat ifcfg-em1
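As a sketch, a slave configuration for em1 could look like this; the UUID shown here is a placeholder, so use the value from the copy of the original ifcfg-em1 you saved earlier:
TYPE=Ethernet
NAME=em1
DEVICE=em1
UUID=<uuid-from-saved-copy>
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes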
Create file ifcfg-p3p1:
# cat ifcfg-p3p1
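Similarly, a sketch for the p3p1 slave file, again with a placeholder UUID taken from your saved copy:
TYPE=Ethernet
NAME=p3p1
DEVICE=p3p1
UUID=<uuid-from-saved-copy>
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes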
Now, we're ready to start using the new bonded network interface. Restart the network service:
# systemctl restart network.service
Run the ip command to check the current network config:
# ip a s
The IP address should now be configured on the bond0 interface.

Ping the default gateway, to test if your bonded network interface can reach the switch. In our example, the default gateway is set to
# ping
This should work. If not, re-trace the steps you've done so far, or work with your network team to identify the issue.

Check that both interfaces of the bonded interface are up, and what the current active network interface is. You can do this by looking at file /proc/net/bonding/bond0. In this file you can see what the currently active slave is, and if all slaves of the bonded network interface are up. For example:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: p3p1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: p3p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:ce:26:30
Slave queue ID: 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0
In the example above, the active network interface is p3p1. Let's bring it down, to see if it fails over to network interface em1. You can bring down a network interface using the ifdown command:
# ifdown p3p1
Device 'p3p1' successfully disconnected.
Again, look at the /proc/net/bonding/bond0 file. You can now see that the active network interface has changed to em1, and that network interface p3p1 is no longer listed in the file (because it is down):
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0
Now ping the default gateway again, and make sure it still works (now that we're using network interface em1 instead of network interface p3p1).

Then bring the p3p1 interface back up, using the ifup command:
# ifup p3p1
And check the bonding status again:
# cat /proc/net/bonding/bond0
It should show that the active network interface is still em1; the bond will not fail back to network interface p3p1 (after all, why would it?! Network interface em1 works just fine).

Now repeat the same test: bring down network interface em1, ping the default gateway again, check the bonding status, then bring em1 back up and check once more:
# ifdown em1
# cat /proc/net/bonding/bond0
# ping
# ifup em1
# cat /proc/net/bonding/bond0
# ping
If this all works fine, then you're all set.

Topics: AIX, Networking

Adding and deleting a static network route using the command line

There are two commands that can be used to add a route on an AIX system.

The first one is route, which can be used to temporarily add a route to an AIX system. Meaning: if the system is rebooted after the route has been added, the route will be lost again.

The second command is chdev -l inet0, which can be used to permanently add a route on an AIX system. When this command is used, the route will persist across reboots, as this command writes the route information into the ODM of AIX.

Let's say you need to add a route on a system to network 172.30.224.0, and that this network uses a netmask of 255.255.255.0 (or "24" in the short mask notation). Finally, the network is reachable through a gateway on your local subnet. Obviously, adjust these values to your own situation.

To temporarily add a route on a system for this network, use the following route command:

# route add -net -netmask
After running this command, you can use the netstat -nr command to confirm that the route indeed has been set up:
# netstat -nr | grep
172.30.224/24   UG   0   0   en1   -   -
To remove that route again, simply change the route command from "add" to "delete":
# route delete -net -netmask
Again, confirm with the netstat -nr command that the route indeed has been removed.
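As a sketch, using the network and netmask from above and an assumed placeholder gateway of 10.1.1.1 (replace it with the gateway on your own subnet), the temporary add and delete commands would look like this:
# route add -net 172.30.224.0 -netmask 255.255.255.0 10.1.1.1
# route delete -net 172.30.224.0 -netmask 255.255.255.0 10.1.1.1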

Now, as mentioned earlier, the route command will only temporarily (until the next reboot) add a route on the AIX system. To make things permanent, use the chdev command. This command takes the following form:

chdev -l inet0 -a route=net,-netmask,[your-netmask-goes-here],-static,[your-network-address-goes-here],[your-gateway-goes-here]

For example:
# chdev -l inet0 -a route=net,-netmask,,-static,,
inet0 changed
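For illustration, a filled-in version of this chdev command, using the same network, netmask and placeholder gateway as in the route sketch above, could look like this:
# chdev -l inet0 -a route=net,-netmask,255.255.255.0,-static,172.30.224.0,10.1.1.1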
This time, again, you can confirm with the netstat -nr command that the route has been set up. But now you can also confirm that the route has been added to the ODM, by using this command:
# lsattr -El inet0 -a route | grep
route net,-netmask,,-static,, Route True
At this point, you can reboot the system, and you'll notice that the route is still there, by repeating the netstat -nr and lsattr -El inet0 commands.

To remove this permanent route from the AIX system, simply change the chdev command above from "route" to "delroute":
# chdev -l inet0 -a delroute=net,-netmask,,-static,,
inet0 changed
Finally, again confirm using the netstat -nr and lsattr -El inet0 commands that the route indeed has been removed.

Topics: Networking, Red Hat, System Admin

RHEL: Delete multiple default gateways

A Red Hat Enterprise Linux system should have a single default gateway defined. However, it does sometimes occur that a system has multiple default gateways. Here's how to detect multiple default gateways and how to get rid of them:

First, check the number of default gateways defined by running the netstat command and looking for entries that start with 0.0.0.0:

# netstat -nr | grep ^0.0.0.0
0.0.0.0             UG        0 0        0 em1
0.0.0.0             UG        0 0        0 em2
In the example above, there are 2 default gateway entries, one via interface em1, and another one via interface em2.

Quite often, more than one default gateway will be defined on a RHEL system when there are multiple network interfaces present and a GATEWAY entry is defined in each of the network interface files in /etc/sysconfig/network-scripts/ifcfg-*:
# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-*
On a system with multiple network interfaces, it is best to define the default gateway in the file /etc/sysconfig/network instead. This is the global network configuration file. Put the following entries in this file, using your own default gateway address and the network interface to be used for the default gateway (em1 in our example); a sketch follows below.
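A minimal sketch of those /etc/sysconfig/network entries, assuming a placeholder default gateway of 192.0.2.1 and network interface em1 (replace both with your own values):
GATEWAY=192.0.2.1
GATEWAYDEV=em1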
Next, remove any GATEWAY entries in any of the ifcfg-* files in /etc/sysconfig/network-scripts.

Finally, restart the network service:
# service network restart
This should resolve the multiple default gateways, and the output of the netstat command should now show only a single default gateway entry.

Note: If the netstat command is not available on the system, you may also determine the number of default gateways, by running:
# ip route show | grep ^default

Topics: Networking, Red Hat, Storage, System Admin

Quick NFS configuration on Red Hat

This is a quick NFS configuration on RHEL, without too many concerns about security or any fine-tuning and access control. In our scenario, there are two hosts:

  • NFS Server, IP
  • NFS Client, IP
First, start with the NFS server:

On the NFS server, run the below commands to begin the NFS server installation:
[nfs-server] # yum install nfs-utils rpcbind
Next, for this procedure, we export an arbitrary directory called /opt/nfs. Create /opt/nfs directory:
[nfs-server] # mkdir -p /opt/nfs
Edit the /etc/exports file (which is the NFS exports file) and add a line that exports the folder /opt/nfs to the client; a sketch of such a line follows below.
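A minimal sketch of such an /etc/exports line, assuming a placeholder client IP address of 192.0.2.20 and common read-write options (adjust the address and options to your own client and security requirements):
/opt/nfs 192.0.2.20(rw,sync,no_root_squash)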
Next, make sure to open port 2049 on your firewall to allow client requests:
[nfs-server] # firewall-cmd --zone=public --add-port=2049/tcp --permanent
[nfs-server] # firewall-cmd --reload
Start the rpcbind and NFS server daemons in this order:
[nfs-server] # service rpcbind start; service nfs start
Check the NFS server status:
[nfs-server] # service nfs status 
Redirecting to /bin/systemctl status nfs.service
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; 
 vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
   Active: active (exited) since Tue 2017-11-14 09:06:21 CST; 1h 14min ago
 Main PID: 2883 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service
Next, export all the file systems configured in /etc/exports:
[nfs-server] # exportfs -rav
And check the currently exported file systems:
[nfs-server] # exportfs -v
Next, continue with the NFS client:

Install the required packages:
[nfs-client] # yum install nfs-utils rpcbind
[nfs-client]# service rpcbind start
Create a mount point directory on the client, for example /mnt/nfs:
[nfs-client] # mkdir -p /mnt/nfs
Discover the NFS exported file systems:
[nfs-client] # showmount -e
Export list for
Mount the previously NFS exported /opt/nfs directory:
[nfs-client] # mount /mnt/nfs
Test the correctness of the setup between the NFS server and the NFS client by creating a file in the NFS mounted directory on the client side:
[nfs-client] # cd /mnt/nfs/
[nfs-client] # touch testfile
[nfs-client] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
Move to the server side and check if the testfile file exists:
[nfs-server] # cd /opt/nfs/
[nfs-server] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
At this point it is working, but it is not set up to remain that way permanently (as in: it will be gone when either the NFS server or the NFS client is rebooted). To ensure it keeps working even after a reboot, perform the following steps:

On the NFS server side, to have the NFS server service enabled at system boot time, run:
[nfs-server] # systemctl enable nfs-server
On the NFS client side, add an entry to the /etc/fstab file that will ensure the NFS file system is mounted at boot time (a sketch of a complete entry follows the list of options below):  /mnt/nfs  nfs4  soft,intr,nosuid  0 0
The options for the NFS file systems are as follows:
  • soft = No hard mounting; avoids hanging file access commands on the NFS client if the NFS server is unavailable.
  • intr = Allow NFS requests to be interrupted if the NFS server goes down or can't be reached.
  • nosuid = This prevents remote users from gaining higher privileges by running a setuid program.
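A minimal sketch of a complete /etc/fstab entry, assuming a placeholder NFS server address of 192.0.2.10 exporting /opt/nfs (replace with your own server address and export path):
192.0.2.10:/opt/nfs  /mnt/nfs  nfs4  soft,intr,nosuid  0 0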
If you need to know on the NFS server side, which clients are using the NFS file system, you can use the netstat command, and search for both the NFS server IP address and port 2049:
[nfs-server] # netstat -an | grep
This will tell you the established connections for each of the clients, for example:
tcp  0  0  ESTABLISHED
In the example above you can see that IP address on port 757 (NFS client) is connected to port 2049 on IP address (NFS server).

Topics: Networking, System Admin

Ping tricks

A few tricks for the ping command to thoroughly test your network connectivity and check how much time a ping request takes:

Change the frequency of the ping requests from the default of 1 request per second to, for example, 10 requests per second by using the -i option (which sets the interval in seconds). As a test, to ping a host 10 times a second, run:

# ping -i .1
You can also go down to an interval of 1/100th of a second:
# ping -i .01
To increase the default packet size of 64 bytes, use the -s option. For example, to send 1 KB with every ping request, run:
# ping -s 1024
Or combine the -i and -s options:
# ping -s 1024 -i .01

Topics: Networking, System Admin

Measuring network throughput with Iperf

Iperf is a command-line tool that can be used to diagnose network speed related issues, or just simply determine the available network throughput.

Iperf measures the maximum network throughput a server can handle. It is particularly useful when experiencing network speed issues, as you can use Iperf to determine what the maximum throughput is for a server.

First, you'll need to install iperf.

For AIX:

Iperf is available for AIX as an RPM package. Download the RPM file, for example iperf-2.0.9-1.aix5.1.ppc.rpm, to your AIX system. Next, install it:

# rpm -ihv iperf-2.0.9-1.aix5.1.ppc.rpm
For Red Hat Enterprise Linux:

You'll first need to install EPEL, as Iperf is not available in the standard Red Hat repositories. For example for Red Hat 7 systems:
# yum -y install
Next, you'll have to install Iperf itself:
# yum -y install iperf
Now that you have Iperf installed, you can start testing the connection between two servers. So, you'll need to have at least two servers with Iperf installed.

On the server you wish to test, launch Iperf in server mode:
# iperf -s
That will put the server in listening mode, and besides that, nothing happens. The output will look something like this:
# iperf -s
Server listening on TCP port 5001
TCP window size: 16.0 KByte (default)
[  4] local port 5001 connected with port 59700
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  9.76 GBytes  8.38 Gbits/sec
On the other server, connect to the first server. For example, if your first server is at IP address, run:
# iperf -c
After about 10 seconds, you'll see output on your screen showing the amount of data transferred, and the available bandwidth. The output may look something like this:
#  iperf -c
Client connecting to, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local port 59700 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.76 GBytes  8.38 Gbits/sec
You can run multiple tests while the server Iperf process is listening on the first server. When you've completed your test, you can CTRL-C the running server Iperf command.
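If you want a longer or heavier test than the default 10-second single stream, the iperf client also accepts a test duration (-t) and a number of parallel streams (-P). A sketch, assuming a placeholder server address of 192.0.2.10:
# iperf -c 192.0.2.10 -t 30 -P 4
This runs a 30-second test with 4 parallel client streams against the listening Iperf server.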

For more information, see the official Iperf site at

Topics: AIX, Monitoring, Networking, Red Hat, Security, System Admin

Determining type of system remotely

If you run into a system that you can't access, but that is available on the network, and you have no idea what type of system it is, then there are a few tricks you can use to determine the type of system remotely.

The first one is by looking at the TTL (Time To Live) when doing a ping to the system's IP address. For example, a ping to an AIX system may look like this:

# ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=253 time=0.394 ms
TTL (Time To Live) is a value included in packets sent over networks that tells the recipient how long to hold or use the packet before discarding and expiring the data (packet). TTL values differ per operating system, so you can determine the OS based on the TTL value. Detailed lists of operating systems and their default TTL values can be found online. Basically, a UNIX/Linux system has a TTL of 64, Windows uses 128, and AIX/Solaris uses 254.

Now, in the example above, you can see "ttl=253". It's still an AIX system, but there's most likely a router in between, decreasing the TTL by one.

Another good method is by using nmap. The nmap utility has a -O option that allows for OS detection:
# nmap -O -v | grep OS
Initiating OS detection (try #1) against (
OS details: IBM AIX 5.3
OS detection performed.
Okay, so it isn't a perfect method either. We ran the nmap command above against an AIX 7.1 system, and it came back as AIX 5.3 instead. And sometimes, you'll have to run nmap a couple of times, before it successfully discovers the OS type. But still, we now know it's an AIX system behind that IP.

Another option you may use is to query SNMP information. If the device is SNMP enabled (it is running an SNMP daemon and it allows you to query SNMP information), then you may be able to run a command like this:
# snmpinfo -h -m get -v sysDescr.0
sysDescr.0 = "IBM PowerPC CHRP Computer
Machine Type: 0x0800004c Processor id: 0000962CG400
Base Operating System Runtime AIX version: 06.01.0008.0015
TCP/IP Client Support  version: 06.01.0008.0015"
By the way, the SNMP example above is exactly why AIX Health Check generally recommends disabling SNMP, or at least disallowing such system information from being provided through SNMP by updating the /etc/snmpdv3.conf file appropriately, because this information can be really useful to hackers. On the other hand, your organization may use monitoring that relies on SNMP, in which case it needs to be enabled. But then you still have the option of changing the SNMP community name to something else (the default is "public"), which also limits the remote information gathering possibilities.

Topics: AIX, Networking, System Admin

Using tcpdump to discover network information

As an AIX admin, you may not always know what switches a certain server is connected to. If you have Cisco switches, here's an interesting method to identify the switch your server is connected to.

First, run ifconfig to look up the interfaces that are in use:

# ifconfig -a | grep en | grep UP | cut -f1 -d:
Okay, so on this system, you have interfaces en0, en4 and en8 active. So, if you want to determine the switch en4 is connected to, run this command:
#  tcpdump -nn -v -i en4 -s 1500 -c 1 'ether[20:2] == 0x2000'
tcpdump: listening on en4, link-type 1, capture size 1500 bytes
After a while, it will display the following information:
11:40:14.176810 CDP v2, ttl: 180s, checksum: 692 (unverified)
   Device-ID (0x01), length: 22 bytes: ''
   Version String (0x05), length: 263 bytes:
   Cisco IOS Software, Catalyst 4500 L3 Switch Software 
      (cat4500e-IPBASEK9-M), Version 12.2(52)XO, RELEASE SOFTWARE
   Technical Support:
   Copyright (c) 1986-2009 by Cisco Systems, Inc.
   Compiled Sun 17-May-09 18:51 by prod_rel_team
   Platform (0x06), length: 16 bytes: 'cisco WS-C4506-E'
   Address (0x02), length: 13 bytes: IPv4 (1)
   Port-ID (0x03), length: 18 bytes: 'GigabitEthernet2/7'
   Capability (0x04), length: 4 bytes: (0x00000029): 
      Router, L2 Switch, IGMP snooping
   VTP Management Domain (0x09), length: 2 bytes: ''''
   Native VLAN ID (0x0a), length: 2 bytes: 970
   Duplex (0x0b), length: 1 byte: full
   Management Addresses (0x16), length: 13 bytes: IPv4 (1)
   unknown field type (0x1a), length: 12 bytes:
      0x0000:  0000 0001 0000 0000 ffff ffff
47 packets received by filter
0 packets dropped by kernel
This will help you determine that en4 is connected to a network switch called '', with IP address '', and that it is connected to port 'GigabitEthernet2/7' (most likely port 7 on blade 2 of this switch).

If you're running the same command on an Etherchannelled interface, keep in mind that it will only display the information of the active interface in the Etherchannel configuration. You may have to fail over the Etherchannel to a backup adapter, to determine the switch information for the backup adapter in the Etherchannel configuration.

If your LPAR has virtual Ethernet adapters, this will not work (the command will just hang). Instead, run the command on the VIOS.

Also note that you may need to run the command a couple of times, for tcpdump to discover the necessary information.

Another interesting way to use tcpdump is to discover what VLAN a network interface is connected to. For example, you may have two interfaces on an AIX system that you want to configure in an Etherchannel, or you may want to use one of them as a production interface and the other as a standby interface. In that case, it is important to know that both interfaces are on the same VLAN. Obviously, you can ask your network team to validate this, but it is also good to be able to validate it on the host side. You could also just configure an IP address on the interface and see if it works, but on production systems that may not always be possible.

The trick basically is, to run tcpdump on an interface, and check what network traffic can be discovered. For example, if you have 2 network interfaces, like these:
# netstat -ni | grep en[0,1]
en0 1500 link#2    0.21.5e.c0.d0.12 1426632806  0 86513680  0  0
en0 1500 10.27.18      1426632806  0 86513680  0  0
en1 1500 link#3    0.21.5e.c0.d0.13   20198022  0  7426576  0  0
en1 1500 10.27.130       20198022  0  7426576  0  0
In this case, interface en0 has an IP address within the 10.27.18.x subnet, and interface en1 has an IP address within the 10.27.130.x subnet (assuming both interfaces use a 24-bit subnet mask).

Now, if en0 is a production interface and you would like to confirm that en1, the standby interface, can be used to fail the production interface over to, then you need to know that both interfaces are on the same VLAN. To determine that for en1, run tcpdump and check whether any network traffic in the 10.27.18 subnet (used by en0) can be seen (press CTRL-C after seeing any such traffic, to cancel the tcpdump command):
# tcpdump -i en1 -qn net 10.27.18
tcpdump: verbose output suppressed, 
use -v or -vv for full protocol decode
listening on en1, link-type 1, capture size 96 bytes
07:27:25.842887 ARP, Request who-has
   (ff:ff:ff:ff:ff:ff) tell, length 46
07:27:25.846134 ARP, Request who-has 
   (ff:ff:ff:ff:ff:ff) tell, length 46
07:27:25.917068 IP > UDP, length 20
07:27:25.931376 IP > UDP, length 20
24 packets received by filter
0 packets dropped by kernel
After seeing this, you know for sure that on interface en1, even though it has an IP address in subnet 10.27.130.x, network traffic for the 10.27.18.x subnet can be seen, and thus that failing over the production IP address from en0 to en1 should work just fine.

Topics: AIX, Networking, System Admin

Using iptrace

The iptrace command can be very useful to find out what network traffic flows to and from an AIX system.

You can use any combination of these options, but you do not need to use them all:

  • -a   Do NOT print out ARP packets.
  • -s [source IP]   Limit trace to source/client IP address, if known.
  • -d [destination IP]   Limit trace to destination IP, if known.
  • -b   Capture bidirectional network traffic (send and receive packets).
  • -p [port]   Specify the port to be traced.
  • -i [interface]   Only trace for network traffic on a specific interface.

Run iptrace on AIX interface en1 to capture port 80 traffic to file trace.out from a single client IP to a server IP:
# iptrace -a -i en1 -s clientip -b -d serverip -p 80 trace.out
This trace will capture both directions of the port 80 traffic on interface en1 between the clientip and serverip, and writes this to the raw trace file trace.out.

To stop the trace:
# ps -ef|grep iptrace
# kill 
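As a sketch, you can combine those two steps into a single command; this assumes the bracketed grep pattern matches only the iptrace process you started:
# kill $(ps -ef | grep '[i]ptrace' | awk '{print $2}')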
The ipreport command can be used to transform the trace file generated by iptrace into a human-readable format:
# ipreport trace.out >

Topics: AIX, Networking, System Admin

IP alias

To configure IP aliases on AIX:

Use the ifconfig command to create an IP alias. To have the alias created when the system starts, add the ifconfig command to the /etc/ script.

The following example creates an alias on the en1 network interface. The alias must be defined on the same subnet as the network interface.

# ifconfig en1 alias netmask up
The following example deletes the alias:
# ifconfig en1 delete
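As a sketch with placeholder addresses (replace 10.1.1.20 and the netmask with an address and mask from the same subnet as en1), the add and delete commands would look like this:
# ifconfig en1 alias 10.1.1.20 netmask 255.255.255.0 up
# ifconfig en1 delete 10.1.1.20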
