Topics: Red Hat / Linux

Convert Red Hat Enterprise Linux to Oracle Linux

Why would you convert an existing Red Hat or CentOS system to Oracle Linux?

Well, there aren't huge advantages, but there are a few:

  • If you would like to use Oracle Linux technical support. Oracle Linux licensing is supposedly cheaper than that of Red Hat, but please do verify first if that's the case for your organization as well.
  • Oracle Linux updates are released more frequently than CentOS updates, although less quickly than Red Hat's.
If you want the Oracle sales pitch, check here.

Oracle Linux is binary compatible with RHEL and with CentOS, so using your organization's existing applications should not be a problem on Oracle Linux.

When you've decided it's time to convert, then here's how to do it:

First, create a backup of your system and make sure the backup is successful. Don't skip this step.

Configure the Oracle Linux Yum repository (see: http://public-yum.oracle.com/getting-started.html), for example for Red Hat version 7:
# cd /etc/yum.repos.d
# wget https://yum.oracle.com/public-yum-ol7.repo
Configure the Oracle Linux GPG Key (see: http://public-yum.oracle.com/faq.html#a10), for example for Red Hat version 7:
# wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 \
-O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
# gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Configure the Oracle pre-install package:
# yum install oracle-rdbms-server-12cR1-preinstall -y
Run a full update:
# yum update -y
Reboot:
# reboot
If you wish to convert a CentOS system to Oracle Linux, that can be done too, as follows:
# curl -O https://linux.oracle.com/switch/centos2ol.sh
# sh centos2ol.sh
Make sure all of the packages are synced up with the Oracle Linux repository:
# yum distro-sync
A reboot isn't strictly required afterwards, but it is recommended, to make sure the system comes back up normally.
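The repo file fetched in the steps above depends on the major release of the system being converted. As an illustration, a small helper (a hypothetical function, shown here against a mocked os-release file rather than the real /etc/os-release) can derive the right URL:

```shell
#!/bin/bash
# Hypothetical helper: derive the Oracle Linux yum repo URL from the
# major version recorded in an os-release style file.
repo_for_release() {
        # $1: path to an os-release file
        local major
        major=$(. "$1"; echo "${VERSION_ID%%.*}")
        echo "https://yum.oracle.com/public-yum-ol${major}.repo"
}

# Demonstrate with a mocked os-release file:
printf 'NAME="CentOS Linux"\nVERSION_ID="7.6"\n' > /tmp/os-release.demo
repo_for_release /tmp/os-release.demo
```

On a RHEL/CentOS 7 system this prints the public-yum-ol7.repo URL used in the wget step above.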

Topics: Red Hat / Linux, Storage

Using tmp.mount

If you've ever looked at the /tmp file system on a RHEL system, you may have noticed that it is, by default, simply a folder in the root directory.

For example:

# df -h /tmp
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  100G  4.6G   96G   5% /
The risk of this setup is that anyone can fill up the root file system by writing temporary data to the /tmp folder, which endangers system stability.

Red Hat Enterprise Linux 7 offers the ability to use /tmp as a mount point for a temporary file storage system (tmpfs), but unfortunately, it is not enabled by default.

When enabled, this temporary storage appears as a mounted file system, but stores its content in volatile memory instead of on a persistent storage device. And when using this, no files in /tmp are stored on the hard drive except when memory is low, in which case swap space is used. This also means that the contents of /tmp are not persisted across a reboot.

To enable this feature, execute the following commands:
# systemctl enable tmp.mount
# systemctl start tmp.mount
RHEL uses a default size of half the system's memory for the in-memory /tmp file system. For example, on a system with 16 GB of memory, an 8 GB /tmp file system is set up after enabling the tmp.mount feature:
# df -h /tmp
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  100G   53G   48G  53% /
# systemctl enable tmp.mount
# systemctl start tmp.mount
# df -h /tmp
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.8G     0  7.8G   0% /tmp
With this in place, it's no longer possible to fill up the root file system by writing files and/or data to the /tmp file system. The downside, however, is that /tmp now uses memory, and when memory fills up, swap space may be used. As such, having a dedicated on-disk file system for the /tmp folder is still the better solution.
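If half of RAM is more than you want to dedicate to /tmp, the tmpfs size can be capped with a systemd drop-in override (a sketch; the 2G value is an arbitrary example):

```ini
# Override created via: systemctl edit tmp.mount
[Mount]
Options=mode=1777,strictatime,size=2G
```

After saving the override, run "systemctl daemon-reload" and restart tmp.mount (or reboot) for the new size to take effect.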

Topics: Red Hat / Linux, Scripting

Bash scripting: SSH breaks out of while-loop

If you use a bash shell script that runs an ssh command within a while-loop, you may find that the ssh command breaks out of the while-loop, and that the script doesn't complete all the intended ssh commands. An example of such a script is below:

# cat hostsfile
server1
server2
# cat script
cat hostsfile | while read server ; do
        echo $server
        ssh $server uptime
done
# ./script
server1
 16:19:22 up 11 days, 22:30,  0 users,  load average: 0.00, 0.01, 0.05
As you can see in the example above, the script should run an ssh command for each server listed in the file "hostsfile". Instead, it stops after the first one.

This can be very easily resolved, by adding the "-n" option for the ssh command, as follows:
# cat script
cat hostsfile | while read server ; do
        echo $server
        ssh -n $server uptime
done
# ./script
server1
 16:19:22 up 11 days, 22:30,  0 users,  load average: 0.00, 0.01, 0.05
server2
 15:20:56 up 11 days, 22:32,  0 users,  load average: 0.00, 0.00, 0.00
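The underlying cause is that ssh reads from standard input, which inside the loop is the hostsfile itself, so the first ssh call swallows the remaining lines; "-n" redirects ssh's stdin from /dev/null. The effect can be reproduced locally without ssh (in this sketch, cat stands in for ssh, since it also reads stdin to EOF):

```shell
#!/bin/bash
printf 'server1\nserver2\nserver3\n' > /tmp/hostsfile.demo

# Broken variant: the inner command inherits and drains the loop's stdin
broken=0
while read server ; do
        broken=$((broken + 1))
        cat > /dev/null               # stand-in for plain "ssh"
done < /tmp/hostsfile.demo

# Fixed variant: the inner command's stdin is redirected, like "ssh -n"
fixed=0
while read server ; do
        fixed=$((fixed + 1))
        cat < /dev/null > /dev/null   # stand-in for "ssh -n"
done < /tmp/hostsfile.demo

echo "broken: $broken iteration(s), fixed: $fixed iterations"
```

The broken variant runs only one iteration; the fixed variant processes all three hosts.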

Topics: Red Hat / Linux, Storage

Setting up a Volume Group and File Systems on RHEL

This procedure describes how to set up a new volume group and file systems on a Red Hat Enterprise Linux system.

First, we'll need to make sure that there is storage available on the system that can be allocated to a new volume group. For this purpose, run the lsblk command:

# lsblk | grep disk
In the output, for example, you may see:
# lsblk | grep disk
fd0       2:0      1    4K  0 disk
sda       8:0      0   60G  0 disk
sdb       8:16     0    5T  0 disk
In the example above, the system has two SCSI devices (that start with "sd"), called sda and sdb. Device sda is 60 GB, and device sdb is 5 TB.

Next, run this command:
# lsblk -a
It will provide you with a tree-like output showing all the disks available on the system, and any partitions (listed as "part") and logical volumes (listed as "lvm") configured on those disks. For the sake of this example, we'll assume that device sdb has no partitions or logical volumes configured, and is thus available.

Also, for the sake of this example, we'll assume that we'll want to set up a few file systems for an Oracle environment, called /u01, /u02, /u03, /u04 and /u05, and that we'll want to have these file systems configured within a volume group called "oracle".

List the volume groups already configured on the system:
# vgs
Make sure there isn't already a volume group present that is called oracle.

Now, let's create a new volume group called oracle, using device sdb:
# vgcreate oracle /dev/sdb
  Physical volume "/dev/sdb" successfully created.
  Volume group "oracle" successfully created
We can now use the "vgs" and "pvs" commands to list the volume groups and the physical volumes on the system. Note in the output that you now can see that a volume group called "oracle" is present, and that disk /dev/sdb is configured in volume group "oracle".

Now create the logical volumes. A logical volume is required for each file system we'll create later on. We'll be creating the following logical volumes:
  • u01lv of 100 GB for the use of the /u01 file system
  • u02lv of 1.5 TB for the use of the /u02 file system
  • u03lv of 1.5 TB for the use of the /u03 file system
  • u04lv of 1.5 TB for the use of the /u04 file system
  • u05lv of 300 GB for the use of the /u05 file system
Run the following commands to create the logical volumes. You may run the "lvs" command before, in between and after each command to see your progress.
# lvcreate -n u01lv -L 100G oracle
# lvcreate -n u02lv -L 1.5T oracle
# lvcreate -n u03lv -L 1.5T oracle
# lvcreate -n u04lv -L 1.5T oracle
# lvcreate -n u05lv -L 300G oracle
# lvs | grep oracle
  u01lv oracle -wi-a----- 100.00g
  u02lv oracle -wi-a-----   1.50t
  u03lv oracle -wi-a-----   1.50t
  u04lv oracle -wi-a-----   1.50t
  u05lv oracle -wi-a----- 300.00g
Now it's time to create the file systems. We'll be using the standard XFS type of file system:
# mkfs.xfs /dev/oracle/u01lv
# mkfs.xfs /dev/oracle/u02lv
# mkfs.xfs /dev/oracle/u03lv
# mkfs.xfs /dev/oracle/u04lv
# mkfs.xfs /dev/oracle/u05lv
And now that the file systems have been created on top of the logical volumes, we can mount the file systems. To ensure that file systems are mounted at the time that the system boots up, it's best to add the new file systems to file /etc/fstab. Add the following lines to that file:
/dev/oracle/u01lv      /u01     xfs      defaults,noatime 0 0
/dev/oracle/u02lv      /u02     xfs      defaults,noatime 0 0
/dev/oracle/u03lv      /u03     xfs      defaults,noatime 0 0
/dev/oracle/u04lv      /u04     xfs      defaults,noatime 0 0
/dev/oracle/u05lv      /u05     xfs      defaults,noatime 0 0
Make sure the folders of the mount points exist by creating them:
# mkdir /u01
# mkdir /u02
# mkdir /u03
# mkdir /u04
# mkdir /u05
Now mount all the file systems at once:
# mount -a
And then verify that the file systems are indeed present:
# df -h | grep u0
/dev/mapper/oracle-u01lv  100G   33M  100G   1% /u01
/dev/mapper/oracle-u02lv  1.5T   33M  1.5T   1% /u02
/dev/mapper/oracle-u03lv  1.5T   33M  1.5T   1% /u03
/dev/mapper/oracle-u04lv  1.5T   33M  1.5T   1% /u04
/dev/mapper/oracle-u05lv  300G   33M  300G   1% /u05
And that's it. The file systems have been created, and these file systems will persist during a system reboot.
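The per-file-system mkfs and mkdir steps above follow an obvious pattern, so a small loop can generate them for review first (a sketch assuming the "oracle" volume group and u01..u05 names used above; the commands are echoed rather than executed):

```shell
#!/bin/bash
# Generate (not run) the repetitive mkfs/mkdir commands for review
gen_cmds() {
        for n in 01 02 03 04 05 ; do
                echo "mkfs.xfs /dev/oracle/u${n}lv"
                echo "mkdir /u${n}"
        done
}

gen_cmds
```

Review the output, and only pipe it to a shell (gen_cmds | sh) once you're satisfied it matches your intended layout.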

Topics: Monitoring, Red Hat / Linux

Monitoring a log file through Systemd

The following procedure describes how you can continuously monitor a log file through the use of SystemD on Red Hat Enterprise Linux (or similar operating systems).

Let's say you want to receive an email when a certain string occurs in a log file. For example, if the string "error" occurs in file /var/log/messages.

First, create a script that does a tail of /var/log/messages and searches for that string:

# cat /usr/local/bin/monitor.bash
#!/bin/bash

tail -fn0 /var/log/messages | while read line ; do
   echo "${line}" | grep -i "error" > /dev/null
   if [ $? = 0 ] ; then
      echo "${line}" | mailx -s "error in messages file" your@emailaddress.com
   fi
done
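The string-matching part of this loop can be exercised in isolation. In the sketch below, echo stands in for the mailx call (actually sending mail requires a configured MTA), and the grep test is condensed to "grep -qi":

```shell
#!/bin/bash
# Sketch: isolate the "does this line warrant an email?" check
notify_if_error() {
        # $1: one log line; echo stands in for the real mailx call
        if echo "$1" | grep -qi "error" ; then
                echo "would mail: $1"
        fi
}

notify_if_error "kernel: I/O error on sda"    # matches, prints a line
notify_if_error "session opened for user"     # no match, prints nothing
```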
You can run that script, and be done with it. But if that script somehow gets cancelled or killed, for example when the system is rebooted, then the monitoring of the log file stops as well. That's where the use of Systemd comes in.

Create a file in folder /etc/systemd/system, such as "monitor.service", and add the following:
[Unit]
Description=My monitor script
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash /usr/local/bin/monitor.bash
TimeoutStartSec=0
Restart=always
StartLimitInterval=0

[Install]
WantedBy=default.target
This file describes a service managed by systemd, and it basically tells systemd to run the script that we created earlier, and to restart it in case it fails (that's what "Restart=always" is for).

Next, you'll have to tell Systemd that you made some changes:
# systemctl daemon-reload
Now you can start the newly defined service:
# systemctl start monitor.service
And after starting it, you can query the status:
# systemctl status monitor.service
monitor.service - My monitor script
   Loaded: loaded (/etc/systemd/system/monitor.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-03 12:12:28 CDT; 2s ago
 Main PID: 1832 (bash)
   CGroup: /system.slice/monitor.service
           ├─1832 /bin/bash /usr/local/bin/monitor.bash
           ├─1833 tail -fn0 /var/log/messages
           └─1834 /bin/bash /usr/local/bin/monitor.bash

Oct 03 12:12:28 enemigo systemd[1]: Started My monitor script.
Oct 03 12:12:28 enemigo systemd[1]: Starting My monitor script...
As you can see in the output above, both the monitor.bash script and the tail command are running. To test if the service is actually restarted, you can try killing the tail process or the monitor.bash script, and then check for the status again. You'll see it is restarted.

You may also want to test that the new monitor script indeed is working. Considering that /var/log/messages is a file written to through rsyslog, you can log an entry with the string "error" to that file as follows:
# logger error
Next, you should receive an email saying that an "error" occurrence was found in the messages file.

Finally, you'll want to make sure this new monitor service is restarted also when the system boots:
# systemctl enable monitor.service

Topics: Hardware, Red Hat / Linux

Common items to install under Red Hat on Dell hardware

There are a few items that can be very useful to install within Red Hat when running on Dell hardware: the OpenManage Server Administrator tools, which provide you with more information on the Dell hardware; Dell System Update, or DSU, which can be used to update firmware and BIOS on Dell hardware; and the Dell iDRAC Service Module, which allows the iDRAC to exchange information with the operating system.

First, set up the Dell Linux repository:

# curl -s http://linux.dell.com/repo/hardware/dsu/bootstrap.cgi | bash
Next, install OpenManage Server Administrator, or OMSA, and make sure to start it and enable it at boot time:
# yum -y install srvadmin-all
# /opt/dell/srvadmin/sbin/srvadmin-services.sh start
# /opt/dell/srvadmin/sbin/srvadmin-services.sh enable
With OMSA installed, you can, for example, now retrieve information about the physical disks in the system, and also the virtual disks (or RAID arrays) configured on these physical disks. Here's how you can use the command line interface tools to look up this information:

List the controllers available in the system:
# /opt/dell/srvadmin/bin/omreport storage controller
Now, list the physical disks (or pdisks) for each controller, for example for the controller with ID 0:
# /opt/dell/srvadmin/bin/omreport storage pdisk controller=0
And you can list the virtual disks (or vdisks) for each controller, for example for the controller with ID 0:
# /opt/dell/srvadmin/bin/omreport storage vdisk controller=0
A lot more is possible with OMSA, but that's outside the scope of this article. Instead, let's move on with the items to install.

Install Dell System Update (or DSU):
# yum -y install dell-system-update
To update firmware, you can now run:
# dsu
Usually it's just fine to select all firmware items to update (by pressing "a") and have it updated (by pressing "c"). This may take a while, and may require a reboot of the system. Upon reboot, the system may also take a while to complete the firmware and/or BIOS updates.

Finally, the Dell iDRAC service module. The latest version (at time of writing this article) can be found here: https://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=GH8R3. Copy the download link on this page of the GNU ZIP file.

The Dell Service Module requires the usbutils package to be installed:
# yum -y install usbutils
Now you can download and install the Dell iDRAC Service Module:
# mkdir /tmp/dsm
# cd /tmp/dsm
# wget https://downloads.dell.com/FOLDER05038177M/1/OM-iSM-Dell-Web-LX-320-1234_A00.tar.gz
# gzip -d *z
# tar xf *tar
# ./setup.sh
Here it's best to select all features and hit "i" to install. Keep everything at default settings and answer "yes" to any other questions. After installation is completed, you can log in to the iDRAC of the system, and view Operating System information there. This information has been communicated from the OS to the iDRAC by the Dell iDRAC Service Module.

Topics: Networking, Red Hat / Linux

How to add a new static route on RHEL 7

Any network we are trying to reach is accessed via the default gateway, unless it is explicitly overridden by another static route definition. Let's have a look at a routing table on a Red Hat 7 system:

# ip route show
default via 192.168.0.1 dev em1 proto static metric 100
192.168.0.0/24 dev em1 proto kernel scope link src 192.168.0.204 metric 100
192.168.1.0/24 dev em2 proto kernel scope link src 192.168.1.32
From the above we can see that any packets destined for network 192.168.0.0/24 (meaning anything on the network 192.168.0.X with a subnet mask of 255.255.255.0) should travel via interface em1 with IP address 192.168.0.204, and any other destination network not explicitly defined should use default gateway 192.168.0.1.

Sometimes you'll require a static route. Static routes are for traffic that must not, or should not, go through the default gateway. Routing is often handled by devices on the network dedicated to routing (although any device can be configured to perform routing). Therefore, it is often not necessary to configure static routes on RHEL servers or clients. Exceptions include traffic that must pass through an encrypted VPN tunnel, or traffic that should take a specific route for reasons of cost or security or bandwidth. The default gateway is for any and all traffic which is not destined for the local network and for which no preferred route is specified in the routing table. The default gateway is traditionally a dedicated network router.

To add a new static route means to define yet another destination network, as well as to specify the IP address and interface through which packets should travel in order to reach that destination. Usually this comes in handy when you have a second interface on the system that can be used to reach other networks (other than the networks that can be reached through the default gateway). In the example above, that's interface em2.

For example, let's add a static route to destination network 192.168.2.0/24 via the 192.168.1.32 IP address and em2 network interface.

There are two ways of accomplishing this: by using the "ip route add" command, which defines the route but loses it upon reboot; or by creating a route configuration file in the /etc/sysconfig/network-scripts/ directory.

First, the "ip route add" command:

To add a static route to a network, in other words, representing a range of IP addresses, issue this command as root:
# ip route add 192.168.2.0/24 via 192.168.1.32 dev em2
Where 192.168.2.0 is the IP address of the destination network in dotted decimal notation and /24 is the network prefix, which is equal to a subnet mask of 255.255.255.0. The network prefix is the number of enabled bits in the subnet mask. If you now rerun the "ip route show" command, you'll see that the route has been added.
192.168.2.0/24 via 192.168.1.32 dev em2 proto static metric 1
If you ever need to delete the route, you can use the same command, but just replace "add" with "delete".
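The prefix/netmask relationship mentioned above is just bit arithmetic. This small function (a hypothetical helper, pure shell arithmetic with no network calls) converts a prefix length into its dotted-decimal netmask:

```shell
#!/bin/bash
# Sketch: convert a prefix length (0-32) to a dotted-decimal netmask
prefix_to_netmask() {
        local mask=$(( 0xffffffff ^ ((1 << (32 - $1)) - 1) ))
        echo "$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
}

prefix_to_netmask 24    # 255.255.255.0
```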

Static route configuration can be stored per-interface in a /etc/sysconfig/network-scripts/route-interface file. For example, static routes for the em2 interface would be stored in the /etc/sysconfig/network-scripts/route-em2 file:
# cat /etc/sysconfig/network-scripts/route-em2
192.168.2.0/24 via 192.168.1.32 dev em2
Once done, restart your network service (or restart the server):
# systemctl restart network

Topics: Networking, Red Hat / Linux

Using tcpdump to discover network information

For most switches, it is impossible to see which switch and which switch port you are connected to when you are on an 'access' port.

Using the Cisco Discovery Protocol or CDP (Cisco) and Link Layer Discovery Protocol or LLDP (Juniper or Dell) you can find out quite a bit of information about the switch that a host is connected to.

Enabling CDP/LLDP on an access port is arguably a security risk (information exposure), so it might not be enabled on your network. You can use the tcpdump command to disassemble CDP/LLDP packets which will usually show information like the name of the switch, its IP address, the switch port connected to, and sometimes the VLAN in use.

For Cisco CDP, assuming the network interface you wish to check is called "eth0":

# tcpdump -nn -v -i eth0 -s 1500 -c 1 'ether[20:2] == 0x2000' 
For Juniper LLDP:
# tcpdump -nn -v -i eth0 -s 1500 -c 1 '(ether[12:2]=0x88cc or ether[20:2]=0x2000)'

Topics: Red Hat / Linux

Install Python version 3 on Red Hat / CentOS

Red Hat / CentOS versions come, by default, with an older version of Python installed:

# python --version
Python 2.7.5
Python version 3 is available; however, it is not included with the Red Hat and/or CentOS distributions (at the time of writing this article).

If you do wish to use Python version 3, you can download this version from iuscommunity.org, which is an online community dedicated to delivering high quality packages of newer versions of popular software.

Here's how to do that:

First, install the ius-release.rpm:
# yum install https://centos7.iuscommunity.org/ius-release.rpm
Next, install Python 3.6:
# yum install python36u python36u-libs python36u-devel python36u-pip
Now you can write a python script. Be sure to replace the shebang at the beginning of the script from "python" to "python3.6", like this:
#!/usr/bin/env python3.6
print('Hello')
If you were to use just "python" in the shebang, you'd get version 2 of Python. By specifying "python3.6" in the shebang, you get version 3.6:
# python3.6 --version
Python 3.6.5

Topics: Red Hat / Linux, System Admin

Linux Screen

The screen utility on Linux allows you to:

  • Use multiple shell windows from a single SSH session
  • Keep a shell active even through network disruptions
  • Disconnect and re-connect to a shell session from multiple locations
  • Run a long running process without maintaining an active shell session
First, let's install screen on a CentOS system:
# yum -y install screen
Once it's installed, screen can be easily started:
# screen
You are now inside of a window within screen. This functions just like a normal shell except for a special control command: "Ctrl-a".

Screen uses the command "Ctrl-a" (that's the control key and a lowercase "a") as a signal to send commands to screen instead of the shell.

For example, type "Ctrl-a", let go, and then type "?". You should now see the screen help page, showing you all the available key bindings. Key bindings are the commands that screen accepts after you hit "Ctrl-a". You can reconfigure these keys to your liking using a .screenrc file.
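As an illustration, a minimal ~/.screenrc could contain the following (example settings only, adjust to taste):

```
# use Ctrl-b instead of the default Ctrl-a as the command key
escape ^Bb
# skip the copyright splash screen at startup
startup_message off
# keep a larger scrollback buffer per window
defscrollback 10000
```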

To create a new window, you can use "Ctrl-a" and "c". This will create a new window for you with your default prompt. Your old window is still active.

For example, you can be running top and then open a new window to do other things. Top will remain running in the first window.

Screen allows you to switch between windows by using "Ctrl-a" and "n". This command switches you to the next window. If you open more windows in screen, repeating "Ctrl-a" and "n" lets you cycle through all of them. The windows work like a carousel and will loop back around to your first window. You can create several windows and toggle through them with "Ctrl-a" and "n" for the next window, or "Ctrl-a" and "p" for the previous window. Each process in a window will keep running until you exit that window by typing "exit".

Another feature of screen is that you can detach from a screen session by typing "Ctrl-a" and "d", and reattach later. If your network connection fails, screen will automatically detach your session! When you detach from screen, you drop back into your shell, but all screen windows are still there and you can re-attach to them later.

If your connection drops or you have detached from a screen, you can re-attach by just running:
# screen -r
This will re-attach to your screen.

Screen will also allow you to create a log of the session, by typing "Ctrl-a" and "H". When you do that, you'll see in the title bar of your session (in PuTTY, for example) the name of the log file being created, usually in the form of "screenlog.0". Screen will keep appending data to the file through multiple sessions. Using the log function is very useful for capturing what you have done, especially if you are making a lot of changes. If something goes awry, you can look back through your logs.

Locking your screen session:

If you need to step away from your computer for a minute, you can lock your screen session using "Ctrl-a" and "x". This will require a password to access the session again.

When you are done with your work, you can stop screen by typing "exit" from your shell. This will close that screen window; you have to close all screen windows to terminate the session. You should get a message about screen being terminated once you close all windows. Alternatively, you can use "Ctrl-a" and "k", and you'll be asked to confirm that you want to kill the window.
