Topics: Red Hat, System Admin

Configuring NTP on CentOS 6

Configuring NTP on CentOS 6 (and similar versions) involves a number of steps, especially if you want to have it configured correctly and securely. Here's a quick guide on how to do it:

First of all you have to determine the IP addresses of the NTP servers you are going to use. You may have to contact your network administrator to find out. Ensure that you get at least two time server IP addresses to use.

Then, install and verify the NTP packages:

# yum -y install ntp ntpdate
# rpm -q ntp ntpdate
Edit file /etc/ntp.conf and ensure that option "broadcastclient" is commented out (which it is by default with a new installation).

Enable ntp and ntpdate at system boot time:
# chkconfig ntpd on
# chkconfig ntpdate on
Ensure that file /etc/ntp/step-tickers is empty. This will make sure that, if ntpdate is run, it will use one of the time servers configured in /etc/ntp.conf.
# cp /dev/null /etc/ntp/step-tickers
Add two time servers to /etc/ntp.conf, or use any of the pre-configured time servers in this file. Comment out the pre-configured servers, if you are using your own time servers.
#server iburst
#server iburst
#server iburst
#server iburst
Do not copy the example above. Use the IP addresses for each time server that you've received from your network administrator instead.
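The server lines in /etc/ntp.conf take this general form. The addresses below are placeholders from the documentation address range, not real time servers; substitute the addresses you received from your network administrator:

```
server 192.0.2.11 iburst
server 192.0.2.12 iburst
```

The "iburst" option speeds up the initial synchronization after the ntpd service starts.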

Enable NTP slewing (for slow time stepping if the time on the server is off, instead of suddenly making big time jump changes), by adding "-x" to OPTIONS in /etc/sysconfig/ntpd. Also add "SYNC_HWCLOCK=yes" in /etc/sysconfig/ntpdate to synchronize the hardware clock with any time changes.
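After these edits, the two files could look roughly like this (a sketch; the other values in OPTIONS may differ per installation, so only add "-x" to whatever is already there):

```
# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"

# /etc/sysconfig/ntpdate
SYNC_HWCLOCK=yes
```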

Stop the NTP service, if it is running:
# service ntpd stop
Start the ntpdate service (this will synchronize the system clock and the hardware clock):
# service ntpdate start
Now, start the time service:
# service ntpd start
Wait a few minutes for the server to synchronize its time with the time servers. This may take anywhere from a few minutes up to 15 minutes. Then check the status of the time synchronization:
# ntpq -p
# ntpstat
The asterisk in front of the time server name in the "ntpq -p" output indicates that the client has reached time synchronization with that particular time server.
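If you want to script this check, the selected peer (the line marked with the asterisk) can be pulled out of the ntpq output with a little awk. A sketch, reading the output on stdin:

```shell
# Print the peer that ntpq marks with '*' (the currently selected
# synchronization source). Reads `ntpq -pn` style output on stdin.
selected_peer() {
    awk '/^\*/ { print substr($1, 2) }'
}

# Typical usage on a live system:
#   ntpq -pn | selected_peer
```

If the function prints nothing, the client has not yet selected a synchronization source.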


Topics: Red Hat, Security, System Admin

Disabling SELinux

Security Enhanced Linux, or SELinux for short, is enabled by default on Red Hat Enterprise Linux (and similar) systems.

To determine the status of SELinux, simply run:

# sestatus
There will be times when it may be necessary to disable SELinux. For example, when a Linux system is not Internet facing, you may not need to have SELinux enabled.

From the command line, you can edit the /etc/sysconfig/selinux file. This file is a symbolic link to file /etc/selinux/config.

By default, option SELINUX will be set to enforcing in this file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
By changing it to "permissive", SELinux will only log warnings instead of enforcing its policy; by changing it to "disabled", SELinux will be turned off entirely. Either way, a reboot is required for the change to take full effect.
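For scripted changes, the edit can be made with sed. A sketch, with the file path as a parameter so it can be tried on a copy first (for the real change, pass /etc/selinux/config and reboot afterwards):

```shell
# Set the SELINUX= option in an SELinux config file.
# Arguments: mode (enforcing|permissive|disabled) and the file to edit.
set_selinux_mode() {
    local mode="$1" file="$2"
    sed -i "s/^SELINUX=.*/SELINUX=${mode}/" "$file"
}

# Example (on the real config file, as root):
#   set_selinux_mode permissive /etc/selinux/config
```

To switch to permissive mode immediately, without a reboot, `setenforce 0` can be used in addition; the config file change then makes it persistent.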

Topics: Red Hat, System Admin

Setting the hostname in RHEL 7

Red Hat Enterprise Linux 7 and similar Linux distributions have a new command to set the hostname of the system easily. The command is hostnamectl. For example, to set the hostname of a RHEL 7 system to "flores", run:

# hostnamectl set-hostname flores
The hostnamectl command provides some other interesting features.

For example, it can be used to set the deployment type of the system, such as "development" or "production", or anything else you would like to give it (as long as it's a single word). For example, to set it to "production", run:
# hostnamectl set-deployment production
Another option is to set the location of the system (and here you can use multiple words):
# hostnamectl set-location "third floor rack A12 U24"
To retrieve all this information, use hostnamectl as well to query the status:
# hostnamectl status
   Static hostname: flores
         Icon name: computer-laptop
           Chassis: laptop
        Deployment: production
          Location: third floor rack A12 U24
        Machine ID: 4d8158f54d5166ff374bb372599351c4
           Boot ID: ae8e7dccf14a492984fb5462c4da2aa2
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-693.2.2.el7.x86_64
      Architecture: x86-64
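hostnamectl also distinguishes between the static, transient, and pretty hostname; unlike the static hostname, the pretty hostname may contain spaces and capital letters. For example (the name below is hypothetical):

```
# hostnamectl --pretty set-hostname "Flores Lab Laptop"
```

Without one of the --static, --transient, or --pretty options, set-hostname sets all three at once.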

Topics: Networking, Red Hat, System Admin

RHEL: Delete multiple default gateways

A Red Hat Enterprise Linux system should have a single default gateway defined. However, sometimes it does occur that a system has multiple default gateways. Here's how to detect multiple default gateways, and how to get rid of them:

First, check the number of default gateways defined, by running the netstat command and looking for entries that start with destination 0.0.0.0:

# netstat -nr | grep ^0.0.0.0
Each line returned (flagged "UG") is a default gateway entry. In the example system described here, there were two default gateway entries: one via interface em1, and another via em2.

Quite often, more than one default gateway will be defined on a RHEL system if there are multiple network interfaces present, and a GATEWAY entry is defined in each of the network interface files in /etc/sysconfig/network-scripts/ifcfg-*:
# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-*
On a system with multiple network interfaces, it is best to define the default gateway in file /etc/sysconfig/network instead. This is the global network configuration file. Add a GATEWAY entry with your default gateway's IP address in this file and, if needed, a GATEWAYDEV entry naming the network interface to be used for the default gateway (em1 in this example).
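The entries in /etc/sysconfig/network take this general form (the gateway address below is a placeholder from the documentation range; substitute your own):

```
GATEWAY=192.0.2.1
GATEWAYDEV=em1
```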
Next, remove any GATEWAY entries in any of the ifcfg-* files in /etc/sysconfig/network-scripts.

Finally, restart the network service:
# service network restart
This should resolve the multiple default gateways, and the output of the netstat command should now show only a single default gateway entry.

Note: If the netstat command is not available on the system, you may also determine the number of default gateways, by running:
# ip route show | grep ^default
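A stray default route can also be removed on the fly, without a full network restart. For example, to delete the default route that points out of interface em2 (the interface name here follows the example above):

```
# ip route del default dev em2
```

Note that this change is not persistent; the ifcfg-* or /etc/sysconfig/network cleanup described above is still needed to keep it from coming back.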

Topics: Networking, Red Hat, Storage, System Admin

Quick NFS configuration on Redhat

This is a quick (and dirty) NFS configuration using RHEL, without too many concerns about security or any fine tuning and access control. In our scenario there are two hosts:

  • NFS server
  • NFS client
First, start with the NFS server:

On the NFS server, run the below commands to begin the NFS server installation:
[nfs-server] # yum install nfs-utils rpcbind
Next, for this procedure, we export an arbitrary directory called /opt/nfs. Create the /opt/nfs directory:
[nfs-server] # mkdir -p /opt/nfs
Edit the /etc/exports file (which is the NFS exports file), and add a line to export folder /opt/nfs to the client:
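The export line takes this general form (192.0.2.20 below is a placeholder for the NFS client's address, and the options shown are just a common example):

```
/opt/nfs 192.0.2.20(rw,sync,no_root_squash)
```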
Next, make sure to open port 2049 on your firewall to allow client requests:
[nfs-server] # firewall-cmd --zone=public --add-port=2049/tcp --permanent
[nfs-server] # firewall-cmd --reload
Start the rpcbind and NFS server daemons in this order:
[nfs-server] # service rpcbind start; service nfs start
Check the NFS server status:
[nfs-server] # service nfs status 
Redirecting to /bin/systemctl status nfs.service
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; 
 vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
   Active: active (exited) since Tue 2017-11-14 09:06:21 CST; 1h 14min ago
 Main PID: 2883 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service
Next, export all the file systems configured in /etc/exports:
[nfs-server] # exportfs -rav
And check the currently exported file systems:
[nfs-server] # exportfs -v
Next, continue with the NFS client:

Install the required packages:
[nfs-client] # yum install nfs-utils rpcbind
[nfs-client] # service rpcbind start
Create a mount point directory on the client, for example /mnt/nfs:
[nfs-client] # mkdir -p /mnt/nfs
Discover the NFS exported file systems:
[nfs-client] # showmount -e
Export list for
Mount the previously NFS exported /opt/nfs directory:
[nfs-client] # mount /mnt/nfs
Test the correctness of the setup between the NFS server and the NFS client by creating a file in the NFS mounted directory on the client side:
[nfs-client] # cd /mnt/nfs/
[nfs-client] # touch testfile
[nfs-client] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
Move to the server side and check if the testfile file exists:
[nfs-server] # cd /opt/nfs/
[nfs-server] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
At this point it is working, but it is not set up to remain there permanently (as in: it will be gone when either the NFS server or the NFS client is rebooted). To ensure it remains working even after a reboot, perform the following steps:

On the NFS server side, to have the NFS server service enabled at system boot time, run:
[nfs-server] # systemctl enable nfs-server
On the NFS client side, add an entry to the /etc/fstab file, that will ensure the NFS file system is mounted at boot time:
 /mnt/nfs  nfs4  soft,intr,nosuid  0 0
The options for the NFS file systems are as follows:
  • soft = No hard mounting; avoids hanging file access commands on the NFS client, if the NFS server is unavailable.
  • intr = Allow NFS requests to be interrupted if the NFS server goes down or can't be reached.
  • nosuid = This prevents remote users from gaining higher privileges by running a setuid program.
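With a server address filled in (192.0.2.10 below is a placeholder), the complete fstab entry would look like this:

```
192.0.2.10:/opt/nfs  /mnt/nfs  nfs4  soft,intr,nosuid  0 0
```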
If you need to know, on the NFS server side, which clients are using the NFS file system, you can use the netstat command, and search for both the NFS server IP address and port 2049:
[nfs-server] # netstat -an | grep
This will tell you the established connections for each of the clients, for example:
tcp  0  0  ESTABLISHED
In the example above, you can see that the NFS client, on port 757, is connected to port 2049 on the NFS server.

Topics: Red Hat, System Admin


Using incron to act on file changes

Incron is an interesting piece of software for Linux, that can monitor for file changes in a specific folder, and can act upon those file changes. For example, it's possible to wait for files to be written in a folder, and have a command run to process these files.

Incron is not installed by default and is part of the EPEL repository. For Red Hat and CentOS 7, it's also possible to just download the RPM package directly, for example using wget.

To install incron, run:

# yum -y install /path/to/incron*rpm
There are 4 files important for incron:
  • /etc/incron.conf - The main configuration file for incron, but this file can be left configured as default.
  • /usr/sbin/incrond - This is the incron daemon that will have to run for incron to work. You can simply start it by executing this command, and it will automatically run in the background. When it's no longer needed, you can simply kill the /usr/sbin/incrond process. However, it's better to enable the service at system boot time and start the service:
    # systemctl enable incrond.service
    # service incrond start
  • /var/log/cron - This is the default location where the incron daemon will log its activities (through rsyslog). The file is also used by the cron daemon, so you may see other messages in this file. By using the tail command on this file, you can monitor what the incron daemon is doing. For example:
    # tail -f /var/log/cron
  • The incrontab file - You can edit this file by running:
    # incrontab -e
    This command will automatically load the incrontab file in an editor like VI, and you can add/modify/remove entries this way. Once you save the file, its contents will be automatically activated by the incron daemon. To list the entries in the incrontab file, run:
    # incrontab -l
There's a specific format to the entries in the incrontab file mentioned above, and the format looks like this:

[path] [mask] [command]

  • [path] is the folder that the incron daemon will be monitoring for any new files (only in the folder itself, not in any sub-folders).
  • [mask] is the activity that the incron daemon should respond to. There are several different available activities to choose from; for a list of options, see the incrontab(5) man page. One option that can be used is "IN_CLOSE_WRITE", which means: act if a file opened for writing is closed, meaning, writing to a file in the folder has been completed.
  • [command] is the command to be run by the incron daemon when a file activity takes place in the monitored path. For this command you can use available wildcards, such as:
    • $@ : watched filesystem path
    • $# : event-related file name
An example of the incrontab file can be:
/path/to/my/folder IN_CLOSE_WRITE /path/to/script.bash $@ $#
You can have multiple entries in the incrontab file, each on a separate line. In the example above, the incron daemon will start script /path/to/script.bash with two parameters (the path of the monitored folder, and the name of the file that was written to the folder), for each file that has been closed for writing in folder /path/to/my/folder.
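As an illustration, /path/to/script.bash could be a small handler like the sketch below. The function name and the exact processing done are hypothetical; incron supplies the two arguments via the $@ and $# wildcards:

```shell
#!/bin/bash
# Hypothetical incron handler. incron invokes it as:
#   script.bash <watched-path> <file-name>
# matching the "$@ $#" wildcards in the incrontab entry.
process_file() {
    local dir="$1" name="$2"
    # Placeholder action: report the file that was completed.
    printf 'processed %s/%s\n' "$dir" "$name"
}

# Only act when incron actually passed the two arguments.
if [ $# -ge 2 ]; then
    process_file "$@"
fi
```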

To monitor the status of the incron daemon, run:
# service incrond status
To restart the incron daemon, run:
# service incrond stop
# service incrond start
Or shorter:
# service incrond restart
There is a downside to using incron, which is that there is no way to limit the number of processes that can be started by the incron daemon. If a thousand files are written to the folder monitored by the incron daemon, then it will kick off the defined process in the incrontab file for that folder a thousand times. This may place some serious CPU load on a system (or even hang the system), especially if the command being run is CPU and/or memory intensive.

Topics: Red Hat, System Admin


The watch command

On Linux, you can use the watch command to run a specific command repeatedly, and monitor the output.

Watch is a command-line tool, part of the Linux procps and procps-ng packages, that runs the specified command repeatedly and displays the results on standard output, so you can watch the output change over time. You may need to enclose the command in quotes for it to run correctly.

For example, you can run:

# watch "ps -ef | grep bash"
The "-d" argument can be used to highlight the differences between each iteration, for example to highlight the time changes in the ntptime command:
# watch -d ntptime
By default, the command is run every two seconds, although this is adjustable with the "-n" argument. For example, to run the uptime command every second:
# watch -n 1 uptime

Topics: LVM, Red Hat, Storage

Logical volume snapshot on Linux

Creating a snapshot of a logical volume, is an easy way to create a point-in-time backup of a file system, while still allowing changes to occur to the file system. Basically, by creating a snapshot, you will get a frozen (snapshot) file system that can be backed up without having to worry about any changes to the file system.

Many applications these days allow for options to "freeze" and "thaw" the application (as in, telling the application to not make any changes to the file system while frozen, and also telling it to continue normal operations when thawed). This functionality of an application can be really useful for creating snapshot backups. One can freeze the application, create a snapshot file system (literally in just seconds), and thaw the application again, allowing the application to continue. Then, the snapshot can be backed up, and once the backup has been completed, the snapshot can be removed.

Let's give this a try.

In the following process, we'll create a file system /original, using a logical volume called originallv, in volume group "extern". We'll keep it relatively small (just 1 Gigabyte - or 1G), as it is just a test:

# lvcreate -L 1G -n originallv extern
  Logical volume "originallv" created.
Next, we'll create a file system of type XFS on it, and we'll mount it.
# mkfs.xfs /dev/mapper/extern-originallv
# mkdir /original
# mount /dev/mapper/extern-originallv /original
# df -h | grep original
/dev/mapper/extern-originallv 1014M   33M  982M   4% /original
At this point, we have a file system /original available, and we can start creating a snapshot of it. For the purpose of testing, first, create a couple of files in the /original file system:
# touch /original/file1 /original/file2 /original/file3
# ls /original
file1  file2  file3
Creating a snapshot of a logical volume is done using the "-s" option of lvcreate:
# lvcreate -s -L 1G -n originalsnapshotlv /dev/mapper/extern-originallv
In the command example above, a size of 1 GB is specified (-L 1G). The snapshot logical volume doesn't have to be the same size as the original logical volume. The snapshot logical volume only needs to hold any changes to the original logical volume while the snapshot logical volume exists. So, if there are very few changes to the original logical volume, the snapshot logical volume can be quite small. It's not uncommon for the snapshot logical volume to be just 10% of the size of the original logical volume. If there are a lot of changes to the original logical volume while the snapshot logical volume exists, you may need to specify a larger logical volume size. Please note that large databases, in which lots of changes are being made, are generally not good candidates for snapshot-style backups. You'll probably have to test in your environment whether it will work for your application, and to determine what a good size for the snapshot logical volume will be.

The name of the snapshot logical volume in the command example above is set to originalsnapshotlv, using the -n option. And "/dev/mapper/extern-originallv" is specified to indicate what the device name is of the original logical volume.

We can now mount the snapshot:
# mkdir /snapshot
# mount -o nouuid /dev/mapper/extern-originalsnapshotlv /snapshot
# df -h | grep snapshot
/dev/mapper/extern-originalsnapshotlv 1014M   33M  982M   4% /snapshot
And at this point, we can see the same files in the /snapshot folder, as in the /original folder:
# ls /snapshot
file1  file2  file3
To prove that the /snapshot file system remains untouched, even when the /original file system is being changed, let's create a file in the /original file system:
# touch /original/file4
# ls /original
file1  file2  file3  file4
# ls /snapshot
file1  file2  file3
As you can see, the /original file system now holds 4 files, while the /snapshot file system only holds the original 3 files. The snapshot file system remains untouched.

To remove the snapshot, a simple umount and lvremove will do:
# umount /snapshot
# lvremove -y /dev/mapper/extern-originalsnapshotlv
So, if you want to run backups of your file systems, while ensuring no changes are being made, here's the logical order of steps that can be scripted:
  • Freeze the application
  • Create the snapshot (lvcreate -s ...)
  • Thaw the application
  • Mount the snapshot (mkdir ... ; mount ...)
  • Run the backup of the snapshot file system
  • Remove the snapshot (umount ... ; lvremove ... ; rmdir ...)
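The steps above can be sketched as a script. The freeze/thaw commands ("app-freeze"/"app-thaw") and the backup target are placeholders for whatever your application and environment use; with DRYRUN=1 (the default here, since this is only a sketch) the commands are printed instead of executed:

```shell
#!/bin/bash
# Sketch of the scripted snapshot backup. "app-freeze"/"app-thaw" and
# /backup/original.tar.gz are hypothetical; substitute your own.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }

snapshot_backup() {
    run app-freeze                      # freeze the application
    run lvcreate -s -L 1G -n originalsnapshotlv /dev/mapper/extern-originallv
    run app-thaw                        # thaw the application
    run mkdir -p /snapshot
    run mount -o nouuid /dev/mapper/extern-originalsnapshotlv /snapshot
    run tar -czf /backup/original.tar.gz -C /snapshot .   # back up the snapshot
    run umount /snapshot                # remove the snapshot again
    run lvremove -y /dev/mapper/extern-originalsnapshotlv
    run rmdir /snapshot
}
snapshot_backup
```

Keeping the freeze window limited to just the lvcreate call is the point of the ordering: the application is only paused for the seconds it takes to create the snapshot, not for the duration of the backup.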

Topics: Red Hat, Virtualization

Renaming a virtual machine domain with virsh

There is no API to accomplish renaming a domain (or system) using virsh. The well-known graphical tool "virt-manager" (or "Virtual Machine Manager") on Red Hat Enterprise Linux therefore also does not offer the possibility to rename a domain.

In order to do that, you have to stop the virtual machine and edit the XML file as follows:

# virsh dumpxml > machine.xml
# vi machine.xml
Edit the name between the name tags at the beginning of the XML file.

When completed, remove the domain and define it again:
# virsh undefine
# virsh define machine.xml
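The name lives in the <name> element near the top of the XML, so the edit can also be scripted with sed instead of done in vi. A sketch; "olddomain" and "newdomain" below are placeholder names, and the surrounding dumpxml/undefine/define steps are the same as above:

```shell
# Rewrite the <name> element in a dumped libvirt domain XML file.
# Usage: rename_in_xml <file> <oldname> <newname>
rename_in_xml() {
    local file="$1" old="$2" new="$3"
    sed -i "s|<name>${old}</name>|<name>${new}</name>|" "$file"
}

# Hypothetical full sequence (domain "olddomain" renamed to "newdomain"):
#   virsh dumpxml olddomain > machine.xml
#   rename_in_xml machine.xml olddomain newdomain
#   virsh undefine olddomain
#   virsh define machine.xml
```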

Topics: Red Hat

Red Hat Enterprise Linux links

Official Red Hat sites:

Other Red Hat related sites:
