Topics: Networking, Red Hat, System Admin

RHEL: Delete multiple default gateways

A Red Hat Enterprise Linux system should have a single default gateway defined. However, it sometimes occurs that a system ends up with multiple default gateways. Here's how to detect multiple default gateways and how to get rid of them:

First, check the number of default gateways defined by running the netstat command and looking for entries that start with 0.0.0.0:

# netstat -nr | grep ^0.0.0.0
0.0.0.0     192.168.0.1     0.0.0.0    UG        0 0        0 em1
0.0.0.0     192.168.1.1     0.0.0.0    UG        0 0        0 em2
In the example above, there are 2 default gateway entries: one via 192.168.0.1 and another via 192.168.1.1.

Quite often, more than one default gateway will be defined on a RHEL system if there are multiple network interfaces present and a GATEWAY entry is defined in each of the network interface files in /etc/sysconfig/network-scripts/ifcfg-*:
# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-*
ifcfg-em1:GATEWAY=192.168.0.1
ifcfg-em2:GATEWAY=192.168.1.1
On a system with multiple network interfaces, it is best to define the default gateway in the file /etc/sysconfig/network instead. This is the global network configuration file. Put the following entries in this file, assuming your default gateway is 192.168.0.1 and the network interface to be used for the default gateway is em1:
GATEWAY=192.168.0.1
GATEWAYDEV=em1
Next, remove any GATEWAY entries in any of the ifcfg-* files in /etc/sysconfig/network-scripts.
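For example, one way to remove all GATEWAY lines from the interface files in one go (assuming they live in the usual /etc/sysconfig/network-scripts directory) is with sed:
# sed -i '/^GATEWAY=/d' /etc/sysconfig/network-scripts/ifcfg-*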

Finally, restart the network service:
# service network restart
This should resolve the multiple default gateways, and the output of the netstat command should now show only a single entry starting with 0.0.0.0.

Note: If the netstat command is not available on the system, you may also determine the number of default gateways by running:
# ip route show | grep ^default

Topics: Networking, Red Hat, Storage, System Admin

Quick NFS configuration on Red Hat

This is a quick (and dirty) NFS configuration using RHEL, without too much concern about security, fine tuning or access control. In our scenario there are two hosts:

  • NFS Server, IP 10.1.1.100
  • NFS Client, IP 10.1.1.101
First, start with the NFS server:

On the NFS server, run the commands below to begin the NFS server installation:
[nfs-server] # yum install nfs-utils rpcbind
Next, for this procedure, we export an arbitrary directory called /opt/nfs. Create the /opt/nfs directory:
[nfs-server] # mkdir -p /opt/nfs
Edit the /etc/exports file (the NFS exports file) and add the line below to export directory /opt/nfs to client 10.1.1.101:
/opt/nfs 10.1.1.101(no_root_squash,rw)
Next, make sure to open port 2049 on your firewall to allow client requests:
[nfs-server] # firewall-cmd --zone=public --add-port=2049/tcp --permanent
[nfs-server] # firewall-cmd --reload
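Note: if you also want to query the export list from the client using showmount (as shown further below), the rpcbind and mountd services need to be reachable as well. Assuming firewalld with its predefined service definitions, this can be done with:
[nfs-server] # firewall-cmd --zone=public --add-service=rpc-bind --permanent
[nfs-server] # firewall-cmd --zone=public --add-service=mountd --permanent
[nfs-server] # firewall-cmd --reload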
Start the rpcbind and NFS server daemons in this order:
[nfs-server] # service rpcbind start; service nfs start
Check the NFS server status:
[nfs-server] # service nfs status 
Redirecting to /bin/systemctl status nfs.service
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; 
 vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           order-with-mounts.conf
   Active: active (exited) since Tue 2017-11-14 09:06:21 CST; 1h 14min ago
 Main PID: 2883 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service
Next, export all the file systems configured in /etc/exports:
[nfs-server] # exportfs -rav
And check the currently exported file systems:
[nfs-server] # exportfs -v
Next, continue with the NFS client:

Install the required packages:
[nfs-client] # yum install nfs-utils rpcbind
[nfs-client]# service rpcbind start
Create a mount point directory on the client, for example /mnt/nfs:
[nfs-client] # mkdir -p /mnt/nfs
Discover the NFS exported file systems:
[nfs-client] # showmount -e 10.1.1.100
Export list for 10.1.1.100:
/opt/nfs 10.1.1.101
Mount the previously NFS exported /opt/nfs directory:
[nfs-client] # mount 10.1.1.100:/opt/nfs /mnt/nfs
Test the correctness of the setup between the NFS server and the NFS client by creating a file in the NFS mounted directory on the client side:
[nfs-client] # cd /mnt/nfs/
[nfs-client] # touch testfile
[nfs-client] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
Move to the server side and check if the testfile file exists:
[nfs-server] # cd /opt/nfs/
[nfs-server] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
At this point it is working, but it is not set up permanently (as in: it will be gone when either the NFS server or the NFS client is rebooted). To ensure it keeps working even after a reboot, perform the following steps:

On the NFS server side, to have the NFS server service enabled at system boot time, run:
[nfs-server] # systemctl enable nfs-server
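Likewise, on the NFS client side, you may want to make sure the rpcbind service is started at boot time (assuming a systemd-based release such as RHEL 7):
[nfs-client] # systemctl enable rpcbind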
On the NFS client side, add an entry to the /etc/fstab file that will ensure the NFS file system is mounted at boot time:
10.1.1.100:/opt/nfs  /mnt/nfs  nfs4  soft,intr,nosuid  0 0
The options for the NFS file systems are as follows:
  • soft = No hard mounting; avoids hanging file access commands on the NFS client if the NFS server is unavailable.
  • intr = Allow NFS requests to be interrupted if the NFS server goes down or can't be reached.
  • nosuid = This prevents remote users from gaining higher privileges by running a setuid program.
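To verify the /etc/fstab entry without rebooting, you can unmount the NFS file system and remount everything listed in /etc/fstab:
[nfs-client] # umount /mnt/nfs
[nfs-client] # mount -a
[nfs-client] # df -h /mnt/nfs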
If you need to know, on the NFS server side, which clients are using the NFS file system, you can use the netstat command and search for the NFS server IP address and port 2049:
[nfs-server] # netstat -an | grep 10.1.1.100:2049
This will tell you the established connections for each of the clients, for example:
tcp  0  0  10.1.1.100:2049  10.1.1.101:757  ESTABLISHED
In the example above you can see that IP address 10.1.1.101 on port 757 (NFS client) is connected to port 2049 on IP address 10.1.1.100 (NFS server).
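If the netstat command is not available on the NFS server, the ss command will show similar information:
[nfs-server] # ss -tn | grep :2049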

Topics: Networking, System Admin

Ping tricks

A few tricks for the ping command to thoroughly test your network connectivity and check how much time a ping request takes:

Decrease the interval between ping requests from the default of 1 second to, for example, 10 ping requests per second by using the -i option. As a test, to ping 192.168.0.1 10 times per second, run:

# ping -i .1 192.168.0.1
You can also go down to 1/100th of a second:
# ping -i .01 192.168.0.1
To increase the default packet size of 64 bytes, use the -s option. For example, to send 1 KB with every ping request, run:
# ping -s 1024 192.168.0.1
Or combine the -i and -s options:
# ping -s 1024 -i .01 192.168.0.1
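To get just the round-trip statistics for a fixed number of requests, you can also add the -c (count) and -q (quiet) options; note that on Linux, intervals shorter than 0.2 seconds typically require root privileges:
# ping -c 100 -i .01 -s 1024 -q 192.168.0.1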

Topics: Networking, System Admin

Measuring network throughput with Iperf

Iperf is a command-line tool that can be used to diagnose network speed related issues, or just simply determine the available network throughput.

Iperf measures the maximum network throughput a server can handle. It is particularly useful when experiencing network speed issues, as you can use Iperf to determine what the maximum throughput is for a server.

First, you'll need to install iperf.

For AIX:

Iperf is available from http://www.perzl.org/aix/index.php?n=Main.iperf. Download the RPM file, for example iperf-2.0.9-1.aix5.1.ppc.rpm, to your AIX system. Next, install it:

# rpm -ihv iperf-2.0.9-1.aix5.1.ppc.rpm
For Red Hat Enterprise Linux:

You'll first need to install EPEL, as Iperf is not available in the standard Red Hat repositories. For example for Red Hat 7 systems:
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Next, you'll have to install Iperf itself:
# yum -y install iperf
Now that you have Iperf installed, you can start testing the connection between two servers. So, you'll need to have at least two servers with Iperf installed.

On the server you wish to test, launch Iperf in server mode:
# iperf -s
That will put the server in listening mode; besides that, nothing happens yet. The output will look something like this:
# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  4] local 198.51.100.5 port 5001 connected with 198.51.100.6 port 59700
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  9.76 GBytes  8.38 Gbits/sec
On the other server, connect to the first server. For example, if your first server is at IP address 198.51.100.5, run:
# iperf -c 198.51.100.5
After about 10 seconds, you'll see output on your screen showing the amount of data transferred, and the available bandwidth. The output may look something like this:
#  iperf -c 198.51.100.5
------------------------------------------------------------
Client connecting to 198.51.100.5, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 198.51.100.6 port 59700 connected with 198.51.100.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.76 GBytes  8.38 Gbits/sec
You can run multiple tests while the server Iperf process is listening on the first server. When you've completed your test, you can CTRL-C the running server Iperf command.
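The Iperf client also supports some useful options, such as -P to run multiple parallel streams and -t to change the test duration in seconds (the default is 10), for example:
# iperf -c 198.51.100.5 -P 4 -t 30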

For more information, see the official Iperf site at iperf.fr.

Topics: AIX, Monitoring, Networking, Red Hat, Security, System Admin

Determining type of system remotely

If you run into a system that you can't access but that is available on the network, and you have no idea what type of system it is, then there are a few tricks you can use to determine the type of system remotely.

The first one is by looking at the TTL (Time To Live) when doing a ping to the system's IP address. For example, a ping to an AIX system may look like this:

# ping 10.11.12.82
PING 10.11.12.82 (10.11.12.82) 56(84) bytes of data.
64 bytes from 10.11.12.82 (10.11.12.82): icmp_seq=1 ttl=253 time=0.394 ms
...
TTL (Time To Live) is a value included in packets sent over networks that tells each recipient how much longer the packet may live before it is discarded; in practice it is decremented by one at every router hop. Default TTL values differ per operating system, so you can often determine the OS from the TTL value. Detailed lists of operating systems and their default TTL values can be found online. Basically, a UNIX/Linux system has a TTL of 64, Windows uses 128, and AIX/Solaris uses 254.

Now, in the example above, you can see "ttl=253". It's still an AIX system, but there's most likely a router in between, decreasing the TTL by one.

Another good method is by using nmap. The nmap utility has a -O option that allows for OS detection:
# nmap -O -v 10.11.12.82 | grep OS
Initiating OS detection (try #1) against 10.11.12.82 (10.11.12.82)
OS details: IBM AIX 5.3
OS detection performed.
Okay, so it isn't a perfect method either. We ran the nmap command above against an AIX 7.1 system, and it came back as AIX 5.3 instead. And sometimes you'll have to run nmap a couple of times before it successfully discovers the OS type. But still, we now know it's an AIX system behind that IP.

Another option you may use is to query SNMP information. If the device is SNMP enabled (it is running an SNMP daemon and it allows you to query SNMP information), then you may be able to run a command like this:
# snmpinfo -h 10.11.12.82 -m get -v sysDescr.0
sysDescr.0 = "IBM PowerPC CHRP Computer
Machine Type: 0x0800004c Processor id: 0000962CG400
Base Operating System Runtime AIX version: 06.01.0008.0015
TCP/IP Client Support  version: 06.01.0008.0015"
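If the AIX-specific snmpinfo command is not available on the system you are querying from, the net-snmp tools provide similar functionality; for example, assuming SNMP version 2c and the default community name "public":
# snmpget -v 2c -c public 10.11.12.82 sysDescr.0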
By the way, the SNMP example above is exactly why AIX Health Check generally recommends disabling SNMP, or at least disallowing such system information from being provided through SNMP by updating the /etc/snmpdv3.conf file appropriately, because this information can be really useful to hackers. On the other hand, your organization may use monitoring that relies on SNMP, in which case it needs to be enabled. But then you still have the opportunity of changing the SNMP community name to something else (the default is "public"), which also limits the remote information gathering possibilities.

Topics: AIX, Networking, System Admin

Using tcpdump to discover network information

As an AIX admin, you may not always know what switches a certain server is connected to. If you have Cisco switches, here's an interesting method to identify the switch your server is connected to.

First, run ifconfig to look up the interfaces that are in use:

# ifconfig -a | grep en | grep UP | cut -f1 -d:
en0
en4
en8
Okay, so on this system, interfaces en0, en4 and en8 are active. If you want to determine the switch en4 is connected to, run this command:
#  tcpdump -nn -v -i en4 -s 1500 -c 1 'ether[20:2] == 0x2000'
tcpdump: listening on en4, link-type 1, capture size 1500 bytes
After a while, it will display the following information:
11:40:14.176810 CDP v2, ttl: 180s, checksum: 692 (unverified)
   Device-ID (0x01), length: 22 bytes: 'switch1.host.com'
   Version String (0x05), length: 263 bytes:
   Cisco IOS Software, Catalyst 4500 L3 Switch Software 
      (cat4500e-IPBASEK9-M), Version 12.2(52)XO, RELEASE SOFTWARE
   Technical Support: http://www.cisco.com/techsupport
   Copyright (c) 1986-2009 by Cisco Systems, Inc.
   Compiled Sun 17-May-09 18:51 by prod_rel_team
   Platform (0x06), length: 16 bytes: 'cisco WS-C4506-E'
   Address (0x02), length: 13 bytes: IPv4 (1) 111.22.33.44
   Port-ID (0x03), length: 18 bytes: 'GigabitEthernet2/7'
   Capability (0x04), length: 4 bytes: (0x00000029): 
      Router, L2 Switch, IGMP snooping
   VTP Management Domain (0x09), length: 2 bytes: ''''
   Native VLAN ID (0x0a), length: 2 bytes: 970
   Duplex (0x0b), length: 1 byte: full
   Management Addresses (0x16), length: 13 bytes: IPv4 (1) 
      111.22.33.44
   unknown field type (0x1a), length: 12 bytes:
      0x0000:  0000 0001 0000 0000 ffff ffff
47 packets received by filter
0 packets dropped by kernel
This will help you determine that en4 is connected to a network switch called 'switch1.host.com', with IP address '111.22.33.44', and that it is connected to port 'GigabitEthernet2/7' (most likely port 7 on blade 2 of this switch).

If you're running the same command on an Etherchannelled interface, keep in mind that it will only display the information of the active interface in the Etherchannel configuration. You may have to fail over the Etherchannel to a backup adapter to determine the switch information for the backup adapter in the Etherchannel configuration, as sketched below.
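As a sketch, assuming the EtherChannel device is ent4, a manual failover to the backup adapter can usually be forced with ethchan_config (run it again to fail back):
# /usr/lib/methods/ethchan_config -f ent4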

If your LPAR has virtual Ethernet adapters, this will not work (the command will just hang). Run the command on the VIOS instead.

Also note that you may need to run the command a couple of times, for tcpdump to discover the necessary information.

Another interesting way to use tcpdump is to discover what VLAN a network interface is connected to. For example, if you have two interfaces on an AIX system, you may want to configure them in an Etherchannel, or use one of them as a production interface and the other as a standby interface. In that case, it is important to know that both interfaces are within the same VLAN. Obviously, you can ask your network team to validate this, but it is also good to be able to validate it on the host side. You could also just configure an IP address on the interface and see if it works, but for production systems that may not always be possible.

The trick basically is, to run tcpdump on an interface, and check what network traffic can be discovered. For example, if you have 2 network interfaces, like these:
# netstat -ni | grep en[0,1]
en0 1500 link#2    0.21.5e.c0.d0.12 1426632806  0 86513680  0  0
en0 1500 10.27.18  10.27.18.64      1426632806  0 86513680  0  0
en1 1500 link#3    0.21.5e.c0.d0.13   20198022  0  7426576  0  0
en1 1500 10.27.130 10.27.130.10       20198022  0  7426576  0  0
In this case, interface en0 uses IP address 10.27.18.64, and is within the 10.27.18.x subnet. Interface en1 uses IP address 10.27.130.10, and is within the 10.27.130.x subnet (assuming both interfaces use a subnet mask of 255.255.255.0).

Now, if en0 is a production interface, and you would like to confirm that en1, the standby interface, can be used to fail over the production interface to, then you need to know that both of the interfaces are within the same VLAN. To determine that, for en1, run tcpdump, and check if any network traffic in the 10.27.18 subnet (used by en0) can be seen (press CTRL-C after seeing any such network traffic, to cancel the tcpdump command):
# tcpdump -i en1 -qn net 10.27.18
tcpdump: verbose output suppressed, 
use -v or -vv for full protocol decode
listening on en1, link-type 1, capture size 96 bytes
07:27:25.842887 ARP, Request who-has 10.27.18.136
   (ff:ff:ff:ff:ff:ff) tell 10.27.18.2, length 46
07:27:25.846134 ARP, Request who-has 10.27.18.135 
   (ff:ff:ff:ff:ff:ff) tell 10.27.18.2, length 46
07:27:25.917068 IP 10.27.18.2.1985 > 224.0.0.2.1985: UDP, length 20
07:27:25.931376 IP 10.27.18.3.1985 > 224.0.0.2.1985: UDP, length 20
^C
24 packets received by filter
0 packets dropped by kernel
After seeing this, you know for sure that on interface en1, even though it has an IP address in subnet 10.27.130.x, network traffic for 10.27.18.x subnet can be seen, and thus that failing over the production interface IP address from en0 to en1 should work just fine.

Topics: AIX, Networking, System Admin

Using iptrace

The iptrace command can be very useful to find out what network traffic flows to and from an AIX system.

You can use any combination of these options, but you do not need to use them all:

  • -a   Do NOT print out ARP packets.
  • -s [source IP]   Limit trace to source/client IP address, if known.
  • -d [destination IP]   Limit trace to destination IP, if known.
  • -b   Capture bidirectional network traffic (send and receive packets).
  • -p [port]   Specify the port to be traced.
  • -i [interface]   Only trace for network traffic on a specific interface.
Example:

Run iptrace on AIX interface en1 to capture port 80 traffic to file trace.out from a single client IP to a server IP:
# iptrace -a -i en1 -s clientip -b -d serverip -p 80 trace.out
This trace will capture both directions of the port 80 traffic on interface en1 between the clientip and serverip, and write this to the raw trace file trace.out.

To stop the trace:
# ps -ef|grep iptrace
# kill <PID of the iptrace process>
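Alternatively, iptrace can be run under control of the System Resource Controller, which avoids having to look up the PID; for example, assuming the same trace options as above and /tmp/trace.out as the output file:
# startsrc -s iptrace -a "-a -i en1 -s clientip -b -d serverip -p 80 /tmp/trace.out"
# stopsrc -s iptrace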
The ipreport command can be used to transform the trace file generated by iptrace to human readable format:
# ipreport trace.out > trace.report

Topics: AIX, Networking, System Admin

IP alias

To configure IP aliases on AIX:

Use the ifconfig command to create an IP alias. To have the alias created when the system starts, add the ifconfig command to the /etc/rc.net script.

The following example creates an alias on the en1 network interface. The alias must be defined on the same subnet as the network interface.

# ifconfig en1 alias 9.37.207.29 netmask 255.255.255.0 up
The following example deletes the alias:
# ifconfig en1 delete 9.37.207.29
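To make the alias persistent across reboots without editing /etc/rc.net, the alias can also be stored in the ODM using chdev, for example:
# chdev -l en1 -a alias4=9.37.207.29,255.255.255.0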

Topics: Networking, Red Hat

Howto: configuring the network on Red Hat Enterprise Linux 5

Red Hat Linux provides the following tools to make changes to the network configuration, such as adding a new card, assigning an IP address, changing the DNS server, etcetera:

  • GUI tool (X windows required) - system-config-network
  • Command-line, text-based tool (no X Windows required) - system-config-network-tui
  • Edit configuration files directly, stored in /etc/sysconfig/network-scripts directory
The following instructions are compatible with CentOS, Fedora Core and Red Hat Enterprise Linux 3, 4 and 5.

Editing the configuration files stored in /etc/sysconfig/network-scripts:

First change directory to /etc/sysconfig/network-scripts/:
# cd /etc/sysconfig/network-scripts/
You need to edit / create files as follows:
  • /etc/sysconfig/network-scripts/ifcfg-eth0 : First Ethernet card configuration file
  • /etc/sysconfig/network-scripts/ifcfg-eth1 : Second Ethernet card configuration file
To edit/create the first NIC file, type the following command:
# vi ifcfg-eth0
Append/modify as follows:
# Intel Corporation 82573E Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=static
DHCPCLASS=
HWADDR=00:30:48:56:A6:2E
IPADDR=10.251.17.204
NETMASK=255.255.255.0
ONBOOT=yes
Save and close the file. Define the default gateway (router IP) and hostname in /etc/sysconfig/network file:
# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=host.domain.com
GATEWAY=10.251.17.1
Save and close the file. Restart networking:
# /etc/init.d/network restart
Make sure you have the correct DNS servers defined in the /etc/resolv.conf file. Try to ping the gateway and other hosts on your network. Also check that you can resolve host names:
# nslookup host.domain.com
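An example /etc/resolv.conf, assuming your DNS servers are at 10.251.17.2 and 10.251.17.3 (adjust for your environment):
search domain.com
nameserver 10.251.17.2
nameserver 10.251.17.3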
Also verify that the NTP servers defined in /etc/ntp.conf are correct and that you can connect to a time server, by running the ntpdate command against one of the NTP servers:
# ntpdate 10.20.30.40
This should synchronize system time with time server 10.20.30.40.

Topics: AIX, Networking, System Admin

Map a socket to a process

Let's say you want to know what process is tying up port 25000:

# netstat -aAn | grep 25000
f100060020cf1398  tcp4  0  0  *.25000  *.*  LISTEN
f10006000d490c08  stream  0  0  f1df487f8  0  0  0  /tmp/.sapicm25000
So, now let's see what the process is:
# rmsock f100060020cf1398 tcpcb
The socket 0x20cf1008 is being held by proccess 1806748 (icman).
If you have lsof installed, you can get the same result with the lsof command:
# lsof -i :[PORT]
Example:
# lsof -i :5710
COMMAND     PID   USER   FD   TYPE     DEVICE  SIZE/OFF NODE NAME
oracle  2638066 oracle   18u  IPv4 0xf1b3f398 0t1716253  TCP host:5710
