Topics: Security, System Admin

Automatically accept new SSH keys

When you have to connect through SSH to a lot of different servers, you may create a command for it like this:

# for h in $SERVER_LIST; do ssh $h "uptime"; done
You may run into a prompt that stops your command, especially when a new server is added to $SERVER_LIST, like this:
The authenticity of host 'myserver (1.2.3.4)' can't be established.
RSA key fingerprint is .....
Are you sure you want to continue connecting (yes/no)?
And you'll have to type "yes" every time this prompt is encountered.

So, how do you automate this, and not have to type "yes" with every new host?

The answer is to disable strict host key checking with the ssh command like this:
ssh -oStrictHostKeyChecking=no $h uptime
Please note that you should only do this for hosts that you're familiar with, and/or that are on trusted networks, as it bypasses the host key verification.
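For example, the loop from above then becomes:
# for h in $SERVER_LIST; do ssh -oStrictHostKeyChecking=no $h "uptime"; done
On newer OpenSSH versions (7.6 and later), the value "accept-new" is a safer alternative: it automatically accepts keys of unknown hosts, but still refuses to connect when a known host's key has changed:
# ssh -oStrictHostKeyChecking=accept-new $h "uptime"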

Topics: Red Hat, System Admin

Configuring NTP on CentOS 6

Configuring NTP on CentOS 6 (and similar versions) involves a number of steps - especially if you want to have it configured correctly and securely. Here's a quick guide on how to do it:

First of all you have to determine the IP addresses of the NTP servers you are going to use. You may have to contact your network administrator to find out. Ensure that you get at least two time server IP addresses to use.

Then, install and verify the NTP packages:

# yum -y install ntp ntpdate
# rpm -q ntp ntpdate
Edit file /etc/ntp.conf and ensure that option "broadcastclient" is commented out (which it is by default with a new installation).

Enable ntp and ntpdate at system boot time:
# chkconfig ntpd on
# chkconfig ntpdate on
Ensure that file /etc/ntp/step-tickers is empty. This makes sure that, if ntpdate is run, it will use one of the time servers configured in /etc/ntp.conf.
# cp /dev/null /etc/ntp/step-tickers
Add two time servers to /etc/ntp.conf, or use any of the pre-configured time servers in this file. Comment out the pre-configured servers, if you are using your own time servers.
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 1.2.3.4
server 5.6.7.8
Do not copy the example above. Use the IP addresses for each time server that you've received from your network administrator instead.

Enable NTP slewing (gradually adjusting the clock if the time on the server is off, instead of making sudden, big time jumps) by adding "-x" to OPTIONS in /etc/sysconfig/ntpd. Also add "SYNC_HWCLOCK=yes" to /etc/sysconfig/ntpdate to synchronize the hardware clock with any time changes.
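The relevant lines in the two files could then look like this (the exact default OPTIONS line may differ per installation, so append "-x" to whatever options are already there):
# grep OPTIONS /etc/sysconfig/ntpd
OPTIONS="-g -x"
# grep SYNC_HWCLOCK /etc/sysconfig/ntpdate
SYNC_HWCLOCK=yes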

Stop the NTP service, if it is running:
# service ntpd stop
Start the ntpdate service (this will synchronize the system clock and the hardware clock):
# service ntpdate start
Now, start the time service:
# service ntpd start
Wait for the server to synchronize its time with the time servers. This may take anywhere from a few minutes up to 15 minutes. Then check the status of the time synchronization:
# ntpq -p
# ntpstat
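Illustrative "ntpq -p" output, using the example time servers from above (actual values will differ):
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*1.2.3.4         .GPS.            1 u   61   64  377    0.412    0.023   0.011
 5.6.7.8         10.0.0.1         2 u   59   64  377    0.615   -0.101   0.032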
The asterisk in front of the time server name in the "ntpq -p" output indicates that the client has reached time synchronization with that particular time server.

Done!

Topics: Red Hat, Security, System Admin

Disabling SELinux

Security-Enhanced Linux, or SELinux for short, is enabled by default on Red Hat Enterprise Linux (and similar) systems.

To determine the status of SELinux, simply run:

# sestatus
There will be times when it may be necessary to disable SELinux; for example, when a Linux system is not Internet-facing, you may not need to have SELinux enabled.

From the command line, you can edit the /etc/sysconfig/selinux file. This file is a symbolic link to file /etc/selinux/config.

By default, option SELINUX will be set to enforcing in this file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
By changing it to "permissive", SELinux will stop enforcing its policy and only log warnings instead (to disable SELinux completely, set it to "disabled"). Either change requires a reboot to take effect:
SELINUX=permissive
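To switch to permissive mode right away, without a reboot, you can also use the setenforce command (this runtime change does not survive a reboot, so combine it with the file change above):
# setenforce 0
# getenforce
Permissive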

Topics: Red Hat, System Admin

Setting the hostname in RHEL 7

Red Hat Enterprise Linux 7 and similar Linux distributions have a new command to set the hostname of the system easily. The command is hostnamectl. For example, to set the hostname of a RHEL 7 system to "flores", run:

# hostnamectl set-hostname flores
The hostnamectl command provides some other interesting features.

For example, it can be used to set the deployment type of the system, such as "development" or "production", or anything else you'd like to give it (as long as it's a single word). You can do so, for example by setting it to "production", by running:
# hostnamectl set-deployment production
Another option is to set the location of the system (and here you can use multiple words):
# hostnamectl set-location "third floor rack A12 U24"
To retrieve all this information, use hostnamectl as well to query the status:
# hostnamectl status
   Static hostname: flores
         Icon name: computer-laptop
           Chassis: laptop
        Deployment: production
          Location: third floor rack A12 U24
        Machine ID: 4d8158f54d5166ff374bb372599351c4
           Boot ID: ae8e7dccf14a492984fb5462c4da2aa2
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-693.2.2.el7.x86_64
      Architecture: x86-64

Topics: Networking, Red Hat, System Admin

RHEL: Delete multiple default gateways

A Red Hat Enterprise Linux system should have a single default gateway defined. However, it sometimes occurs that a system has multiple default gateways. Here's how to detect multiple default gateways, and how to get rid of them:

First, check the number of default gateways defined, by running the netstat command and looking for entries that start with 0.0.0.0:

# netstat -nr | grep ^0.0.0.0
0.0.0.0     192.168.0.1     0.0.0.0    UG        0 0        0 em1
0.0.0.0     192.168.1.1     0.0.0.0    UG        0 0        0 em2
In the example above, there are 2 default gateway entries, one to 192.168.0.1, and another one to 192.168.1.1.

Quite often, more than one default gateway will be defined on a RHEL system if there are multiple network interfaces present, and a GATEWAY entry is defined in each of the network interface files in /etc/sysconfig/network-scripts/ifcfg-*:
# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-*
ifcfg-em1:GATEWAY=192.168.0.1
ifcfg-em2:GATEWAY=192.168.1.1
On a system with multiple network interfaces, it is best to define the default gateway in file /etc/sysconfig/network instead. This file is the global network file. Put the following entries in this file, assuming your default gateway is 192.168.0.1 and the network interface to be used for the default gateway is em1:
GATEWAY=192.168.0.1
GATEWAYDEV=em1
Next, remove any GATEWAY entries in any of the ifcfg-* files in /etc/sysconfig/network-scripts.
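For example, to remove the GATEWAY entry from the ifcfg-em2 file shown above (a sketch; verify the file contents afterwards):
# sed -i '/^GATEWAY=/d' /etc/sysconfig/network-scripts/ifcfg-em2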

Finally, restart the network service:
# service network restart
This should resolve the multiple default gateways, and the output of the netstat command should now show only a single entry starting with 0.0.0.0.

Note: If the netstat command is not available on the system, you may also determine the number of default gateways, by running:
# ip route show | grep ^default
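The output will show one line per default gateway, for example:
default via 192.168.0.1 dev em1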

Topics: Networking, Red Hat, Storage, System Admin

Quick NFS configuration on Red Hat

This is a quick (and dirty) NFS configuration on RHEL, without too much concern about security, fine-tuning, or access control. In our scenario there are two hosts:

  • NFS Server, IP 10.1.1.100
  • NFS Client, IP 10.1.1.101
First, start with the NFS server:

On the NFS server, run the below command to begin the NFS server installation:
[nfs-server] # yum install nfs-utils rpcbind
Next, for this procedure, we export an arbitrary directory called /opt/nfs. Create the /opt/nfs directory:
[nfs-server] # mkdir -p /opt/nfs
Edit the /etc/exports file (which is the NFS exports file) to add the below line to export folder /opt/nfs to client 10.1.1.101:
/opt/nfs 10.1.1.101(no_root_squash,rw)
Next, make sure to open port 2049 on your firewall to allow client requests:
[nfs-server] # firewall-cmd --zone=public --add-port=2049/tcp --permanent
[nfs-server] # firewall-cmd --reload
Start the rpcbind and NFS server daemons in this order:
[nfs-server] # service rpcbind start; service nfs start
Check the NFS server status:
[nfs-server] # service nfs status 
Redirecting to /bin/systemctl status nfs.service
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; 
 vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           order-with-mounts.conf
   Active: active (exited) since Tue 2017-11-14 09:06:21 CST; 1h 14min ago
 Main PID: 2883 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service
Next, export all the file systems configured in /etc/exports:
[nfs-server] # exportfs -rav
And check the currently exported file systems:
[nfs-server] # exportfs -v
Next, continue with the NFS client:

Install the required packages, and start the rpcbind service:
[nfs-client] # yum install nfs-utils rpcbind
[nfs-client] # service rpcbind start
Create a mount point directory on the client, for example /mnt/nfs:
[nfs-client] # mkdir -p /mnt/nfs
Discover the NFS exported file systems:
[nfs-client] # showmount -e 10.1.1.100
Export list for 10.1.1.100:
/opt/nfs 10.1.1.101
Mount the previously NFS exported /opt/nfs directory:
[nfs-client] # mount 10.1.1.100:/opt/nfs /mnt/nfs
Test the correctness of the setup between the NFS server and the NFS client by creating a file in the NFS mounted directory on the client side:
[nfs-client] # cd /mnt/nfs/
[nfs-client] # touch testfile
[nfs-client] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
Move to the server side and check if the testfile file exists:
[nfs-server] # cd /opt/nfs/
[nfs-server] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
At this point it is working, but it is not set up to remain there permanently (as in: it will be gone when either the NFS server or NFS client is rebooted). To ensure it remains working even after a reboot, perform the following steps:

On the NFS server side, to have the NFS server service enabled at system boot time, run:
[nfs-server] # systemctl enable nfs-server
On the NFS client side, add an entry to the /etc/fstab file that will ensure the NFS file system is mounted at boot time:
10.1.1.100:/opt/nfs  /mnt/nfs  nfs4  soft,intr,nosuid  0 0
The options for the NFS file systems are as follows:
  • soft = No hard mounting, avoids hanging file access commands on the NFS client, if the NFS server is unavailable.
  • intr = Allow NFS requests to be interrupted if the NFS server goes down or can't be reached.
  • nosuid = This prevents remote users from gaining higher privileges by running a setuid program.
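To test the new fstab entry without rebooting, unmount the file system and mount it again using just the mount point, which makes the mount command look up the options in /etc/fstab:
[nfs-client] # umount /mnt/nfs
[nfs-client] # mount /mnt/nfs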
If you need to know on the NFS server side, which clients are using the NFS file system, you can use the netstat command, and search for both the NFS server IP address and port 2049:
[nfs-server] # netstat -an | grep 10.1.1.100:2049
This will tell you the established connections for each of the clients, for example:
tcp  0  0  10.1.1.100:2049  10.1.1.101:757  ESTABLISHED
In the example above you can see that IP address 10.1.1.101 on port 757 (NFS client) is connected to port 2049 on IP address 10.1.1.100 (NFS server).

Topics: Red Hat, System Admin

Incrond

Incron is an interesting piece of software for Linux that can monitor for file changes in a specific folder, and act upon those changes. For example, it's possible to wait for files to be written to a folder, and have a command run to process these files.

Incron is not installed by default and is part of the EPEL repository. For Red Hat and CentOS 7, it's also possible to just download the RPM package from https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/i/incron-0.5.10-8.el7.x86_64.rpm, for example using wget.

To install incron, run:

# yum -y install /path/to/incron*rpm
There are 4 important files for incron:
  • /etc/incron.conf - The main configuration file for incron; this file can be left at its default settings.
  • /usr/sbin/incrond - This is the incron daemon that has to run for incron to work. You can simply start it by executing /usr/sbin/incrond, and it will automatically run in the background. When it's no longer needed, you can simply kill the /usr/sbin/incrond process. However, it's better to enable the service at system boot time and start the service:
    # systemctl enable incrond.service
    # service incrond start
    
  • /var/log/cron - This is the default location where the incron daemon will log its activities (through rsyslog). The file is also used by the cron daemon, so you may see other messages in this file. By using the tail command on this file, you can monitor what the incron daemon is doing. For example:
    # tail -f /var/log/cron
    
  • The incrontab file - You can edit this file by running:
    # incrontab -e
    
    This command will automatically load the incrontab file in an editor like VI, and you can add/modify/remove entries this way. Once you save the file, its contents will be automatically activated by the incron daemon. To list the entries in the incrontab file, run:
    # incrontab -l
    
There's a specific format to the entries in the incrontab file mentioned above, and the format looks like this:

[path] [mask] [command]

Where:
  • [path] is the folder that the incron daemon will be monitoring for any new files (only in the folder itself, not in any sub-folders).
  • [mask] is the activity that the incron daemon should respond to. There are several different available activities to choose from. For a list of options, see https://linux.die.net/man/5/incrontab. One option that can be used is "IN_CLOSE_WRITE", which means, act if a file is closed for writing, meaning, writing to a file in the folder has been completed.
  • [command] is the command to be run by the incron daemon when a file activity takes place in the monitored path. For this command you can use available wildcards, such as:
    • $@ : watched filesystem path
    • $# : event-related file name
An example of the incrontab file can be:
/path/to/my/folder IN_CLOSE_WRITE /path/to/script.bash $@ $#
You can have multiple entries in the incrontab file, each on a separate line. In the example above, the incron daemon will start script /path/to/script.bash with two parameters (the path of the monitored folder, and the name of the file that was written to the folder), for each file that has been closed for writing in folder /path/to/my/folder.
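As an illustration, a minimal handler for the entry above might look like this (a sketch; /path/to/script.bash and the log file location are hypothetical):
#!/bin/bash
# $1 = the watched folder (passed as $@), $2 = the file name (passed as $#)
dir="$1"
file="$2"
echo "$(date): processing ${dir}/${file}" >> /var/log/incron-handler.log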

To monitor the status of the incron daemon, run:
# service incrond status
To restart the incron daemon, run:
# service incrond stop
# service incrond start
Or shorter:
# service incrond restart
There is a downside to using incron, which is that there is no way to limit the number of processes that can be started by the incron daemon. If a thousand files are written to the folder monitored by the incron daemon, it will kick off the process defined in the incrontab file for that folder a thousand times. This may place some serious CPU load on a system (or even hang the system), especially if the command being run is CPU and/or memory intensive.

Topics: Networking, System Admin

Ping tricks

A few tricks for the ping command to thoroughly test your network connectivity and check how much time a ping request takes:

Decrease the interval between ping requests from the default of 1 second, so that, for example, 10 ping requests are sent every second, by using the -i option. As a test, to ping 192.168.0.1 10 times a second, run:

# ping -i .1 192.168.0.1
You can also go down to 1/100th of a second (intervals shorter than 0.2 seconds typically require root privileges):
# ping -i .01 192.168.0.1
To increase the default payload size of 56 bytes (64 bytes including the ICMP header), use the -s option. For example, to send 1 KB with every ping request, run:
# ping -s 1024 192.168.0.1
Or combine the -i and -s options:
# ping -s 1024 -i .01 192.168.0.1

Topics: AIX, PowerHA / HACMP, System Admin

Mountguard

IBM has implemented a new feature for JFS2 filesystems to prevent simultaneous mounting within PowerHA clusters.

While PowerHA can give concurrent access to volume groups from multiple systems, mounting a JFS2 filesystem on multiple nodes simultaneously will cause filesystem corruption. These simultaneous mount events can also cause a system crash, when the system detects a conflict between data or metadata in the filesystem and the in-memory state of the filesystem. The only exception to this is mounting the filesystem read-only, in which case files or directories can't be changed.

In AIX 7100-01 and 6100-07 a new feature called "Mount Guard" has been added to prevent simultaneous or concurrent mounts. If a filesystem appears to be mounted on another server, and the feature is enabled, AIX will prevent mounting on any other server. Mount Guard is not enabled by default, but is configurable by the system administrator. The option is not allowed to be set on base OS filesystems such as /, /usr, /var etc.

To turn on Mount Guard on a filesystem you can permanently enable it via /usr/sbin/chfs:

# chfs -a mountguard=yes /mountpoint
/mountpoint is now guarded against concurrent mounts.
The same option is used with crfs when creating a filesystem.
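For example, a sketch of creating a new, guarded filesystem with crfs (the volume group name and size used here are assumptions):
# crfs -v jfs2 -g datavg -m /mountpoint -a size=1G -a mountguard=yes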

To turn off mount guard:
# chfs -a mountguard=no /mountpoint
/mountpoint is no longer guarded against concurrent mounts.
To determine the mount guard state of a filesystem:
# lsfs -q /mountpoint
Name      Nodename Mount Pt    VFS  Size    Options Auto Accounting
/dev/fslv --       /mountpoint jfs2 4194304 rw      no   no
  (lv size: 4194304, fs size: 4194304, block size: 4096, sparse files: yes,
  inline log: no, inline log size: 0, EAformat: v1, Quota: no, DMAPI: 
  no, VIX: yes, EFS: no, ISNAPSHOT: no, MAXEXT: 0, MountGuard: yes)
The /usr/sbin/mount command will not show the mount guard state.

When a filesystem is protected against concurrent mounting, and a second mount attempt is made, you will see this error:
# mount /mountpoint
mount: /dev/fslv on /mountpoint:
Cannot mount guarded filesystem.
The filesystem is potentially mounted on another node
After a system crash the filesystem may still have mount flags enabled and refuse to be mounted. In this case the guard state can be temporarily overridden by the "noguard" option to the mount command:
# mount -o noguard /mountpoint
mount: /dev/fslv on /mountpoint:
Mount guard override for filesystem.
The filesystem is potentially mounted on another node.
Reference: http://www-01.ibm.com/support/docview.wss?uid=isg3T1018853

Topics: Red Hat, System Admin

Watch

On Linux, you can use the watch command to run a specific command repeatedly, and monitor the output.

Watch is a command-line tool, part of the Linux procps and procps-ng packages, that runs the specified command repeatedly and displays the results on standard output, so you can watch it change over time. You may need to enclose the command in quotes for it to run correctly.

For example, you can run:

# watch "ps -ef | grep bash"
The "-d" argument can be used to highlight the differences between each iteration, for example to highlight the time changes in the ntptime command:
# watch -d ntptime
By default, the command is run every two seconds, although this is adjustable with the "-n" argument. For example, to run the uptime command every second:
# watch -n 1 uptime
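The options can be combined as well. For example, to run "df -h" every second and highlight any changes in disk usage:
# watch -n 1 -d "df -h"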
