Topics: Red Hat, System Admin

XRDP

XRDP is an open source Remote Desktop Protocol (RDP) server, similar to the one used on Windows Server systems, but meant for Linux. Once installed, you can set up an RDP (Remote Desktop Connection) session from a Windows system directly to a Linux system.

Here's how you install and configure it on RHEL or CentOS 7:

First of all, we need to install the EPEL repository and XRDP server:

# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# yum -y install xrdp
Next, we need to start and enable the service:
# systemctl start xrdp.service 
# systemctl enable xrdp.service
To check if it's running, run:
# netstat -an | grep 3389 
tcp   0    0 0.0.0.0:3389   0.0.0.0:*   LISTEN
That's all. Now you can connect to your server from any Windows machine using RDP.
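Note that if firewalld is running on the Linux system, TCP port 3389 may first need to be opened before a connection will succeed; a minimal example, assuming the default firewalld configuration:
# firewall-cmd --permanent --add-port=3389/tcp
# firewall-cmd --reload
On the Windows side, you can use the standard Remote Desktop client, for example from a command prompt (the hostname here is hypothetical):
C:\> mstsc /v:yourlinuxserver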

Topics: Red Hat

Preventing Gnome's initial setup

The first time a user logs in to the default desktop (Gnome) on Red Hat version 7 based systems, they're prompted to set a language, asked to add online accounts, and then dropped into a help menu. While this might be nice for brand new users, it's certainly not ideal for everyone.

There is a very simple way to prevent this annoyance: simply remove the gnome-initial-setup package:

# yum -y erase gnome-initial-setup
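Alternatively, if you'd rather keep the package installed, GDM can be told to skip the initial setup. A sketch, assuming the default GDM display manager is used, is to add the following to /etc/gdm/custom.conf:
[daemon]
InitialSetupEnable=False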

Topics: Red Hat, Scripting

Bash scripting: SSH breaks out of while-loop

If you use a bash shell script that runs an ssh command within a while-loop, you may find that the ssh command breaks out of the while-loop, so that the script doesn't complete all the intended ssh commands. An example of such a script is below:

# cat hostsfile
server1
server2
# cat script
cat hostsfile | while read server ; do
        echo $server
        ssh $server uptime
done
# ./script
server1
 16:19:22 up 11 days, 22:30,  0 users,  load average: 0.00, 0.01, 0.05
As you can see in the example above, the script should run an ssh command for every server listed in the file "hostsfile". Instead, it stops after the first one.

This happens because ssh reads from standard input, and thereby consumes the remaining lines that the while-loop was reading. It can very easily be resolved by adding the "-n" option to the ssh command, which redirects ssh's standard input from /dev/null:
# cat script
cat hostsfile | while read server ; do
        echo $server
        ssh -n $server uptime
done
# ./script
server1
 16:19:22 up 11 days, 22:30,  0 users,  load average: 0.00, 0.01, 0.05
server2
 15:20:56 up 11 days, 22:32,  0 users,  load average: 0.00, 0.00, 0.00
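Alternatively, the same effect can be achieved with an explicit redirect instead of the "-n" option:
        ssh $server uptime < /dev/null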

Topics: Red Hat, Storage

Using tmp.mount

If you've ever looked at the /tmp file system on a RHEL system, you may have noticed that it is, by default, simply a directory on the root file system.

For example:

# df -h /tmp
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  100G  4.6G   96G   5% /
The risk of this setup is that anyone can fill up the root file system by writing temporary data to the /tmp folder, which endangers system stability.

Red Hat Enterprise Linux 7 offers the ability to use /tmp as a mount point for a temporary file storage system (tmpfs), but unfortunately, it is not enabled by default.

When enabled, this temporary storage appears as a mounted file system, but stores its content in volatile memory instead of on a persistent storage device. And when using this, no files in /tmp are stored on the hard drive except when memory is low, in which case swap space is used. This also means that the contents of /tmp are not persisted across a reboot.

To enable this feature, execute the following commands:
# systemctl enable tmp.mount
# systemctl start tmp.mount
RHEL uses a default size of half the amount of memory for the in-memory /tmp file system. For example, on a system with 16 GB of memory, an 8 GB /tmp file system is set up after enabling the tmp.mount feature:
# df -h /tmp
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  100G   53G   48G  53% /
# systemctl enable tmp.mount
# systemctl start tmp.mount
# df -h /tmp
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.8G     0  7.8G   0% /tmp
With this in place, it's no longer possible to fill up the root file system by writing data to the /tmp file system. The downside, however, is that /tmp now consumes memory, and once memory fills up, swap space may be used. As such, a dedicated on-disk file system for the /tmp folder is still the better solution.
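If the default size of half the RAM doesn't suit the system, it can be overridden with a systemd drop-in file. A minimal sketch, assuming a 2 GB cap is wanted (the "size" option is a standard tmpfs mount option; the other options shown are assumed to mirror the defaults):
# mkdir -p /etc/systemd/system/tmp.mount.d
# cat /etc/systemd/system/tmp.mount.d/override.conf
[Mount]
Options=mode=1777,strictatime,size=2G
# systemctl daemon-reload
# systemctl restart tmp.mount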

Topics: Red Hat

Convert Red Hat Enterprise Linux to Oracle Linux

Why would you convert an existing Red Hat or CentOS system to Oracle Linux?

Well, there aren't huge advantages, but a few:

  • If you would like to use Oracle Linux technical support: Oracle Linux support subscriptions are supposedly cheaper than those of Red Hat, but please do verify first if that's the case for your organization as well.
  • Oracle Linux updates are released more quickly than CentOS updates, though still later than Red Hat's.

Oracle Linux is binary compatible with RHEL and with CentOS, so using your organization's existing applications should not be a problem on Oracle Linux.

When you've decided it's time to convert, then here's how to do it:

First, create a backup of your system and make sure the backup is successful. Don't skip this step.

Configure the Oracle Linux Yum repository (see: http://public-yum.oracle.com/getting-started.html), for example for Red Hat version 7:
# cd /etc/yum.repos.d
# wget https://yum.oracle.com/public-yum-ol7.repo
Configure the Oracle Linux GPG Key (see: http://public-yum.oracle.com/faq.html#a10), for example for Red Hat version 7:
# wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 \
-O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
# gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Install the Oracle pre-install package:
# yum install oracle-rdbms-server-12cR1-preinstall -y
Run a full update:
# yum update -y
Reboot:
# reboot
If you wish to convert a CentOS system to Oracle Linux, that can be done too, as follows:
# curl -O https://linux.oracle.com/switch/centos2ol.sh
# sh centos2ol.sh
Make sure all of the packages are synced up with the Oracle Linux repository:
# yum distro-sync
A reboot isn't strictly required afterwards; however, it is recommended, to make sure the system comes back up normally.
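To verify that the conversion was successful, check the release file that Oracle Linux provides, which should report an Oracle Linux Server release:
# cat /etc/oracle-release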

Topics: Red Hat

Disable NUMA on RHEL version 7

This article is based on: https://access.redhat.com/solutions/23216 and describes how to disable NUMA on a Red Hat version 7 based system.

Edit file /etc/default/grub, and add "numa=off" to the GRUB_CMDLINE_LINUX, for example:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet transparent_hugepage=never numa=off"
Rebuild the grub config:
# grub2-mkconfig -o /etc/grub2.cfg
Then reboot:
# reboot
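After the reboot, you can verify that the setting took effect by checking the active kernel command line:
# cat /proc/cmdline
If "numa=off" shows up in the output, the kernel was booted with NUMA disabled. Additionally, if the numactl package happens to be installed, "numactl --hardware" should report just a single node.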

Topics: AIX, Red Hat, Security

Generate a random password

The following command generates 9 random bytes and base64-encodes them, resulting in a random 12-character password:

# openssl rand -base64 9
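Note that the number at the end is the number of random bytes, not characters: base64 encodes every 3 bytes as 4 characters, so 9 bytes yield 12 characters. To generate a longer password, simply increase the byte count; for example, for a 16-character password:
# openssl rand -base64 12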

Topics: Red Hat, Storage

Setting up a Volume Group and File Systems on RHEL

This procedure describes how to set up a new volume group and file systems on a Red Hat Enterprise Linux system.

First, we'll need to make sure that there is storage available on the system that can be allocated to a new volume group. For this purpose, run the lsblk command:

# lsblk | grep disk
In the output, for example, you may see:
# lsblk | grep disk
fd0       2:0      1    4K  0 disk
sda       8:0      0   60G  0 disk
sdb       8:16     0    5T  0 disk
In the example above, the system has two SCSI devices (that start with "sd"), called sda and sdb. Device sda is 60 GB, and device sdb is 5 TB.

Next, run this command:
# lsblk -a
It will provide you with a tree-like output showing all the disks available on the system, and any partitions (listed as "part") and logical volumes (listed as "lvm") configured on those disks. For the sake of this example, we'll assume that device sdb has no partitions or logical volumes configured on it, and is thus available.

Also, for the sake of this example, we'll assume that we'll want to set up a few file systems for an Oracle environment, called /u01, /u02, /u03, /u04 and /u05, and that we'll want to have these file systems configured within a volume group called "oracle".

List the volume groups already configured on the system:
# vgs
Make sure there isn't already a volume group present that is called oracle.

Now, let's create a new volume group called oracle, using device sdb:
# vgcreate oracle /dev/sdb
  Physical volume "/dev/sdb" successfully created.
  Volume group "oracle" successfully created
We can now use the "vgs" and "pvs" commands to list the volume groups and the physical volumes on the system. Note in the output that you now can see that a volume group called "oracle" is present, and that disk /dev/sdb is configured in volume group "oracle".

Now create the logical volumes; these are what the file systems will be created on later. We'll be creating the following logical volumes:
  • u01lv of 100 GB for the use of the /u01 file system
  • u02lv of 1.5 TB for the use of the /u02 file system
  • u03lv of 1.5 TB for the use of the /u03 file system
  • u04lv of 1.5 TB for the use of the /u04 file system
  • u05lv of 300 GB for the use of the /u05 file system
Run the following commands to create the logical volumes. You may run the "lvs" command before, between, and after these commands to see your progress.
# lvcreate -n u01lv -L 100G oracle
# lvcreate -n u02lv -L 1.5T oracle
# lvcreate -n u03lv -L 1.5T oracle
# lvcreate -n u04lv -L 1.5T oracle
# lvcreate -n u05lv -L 300G oracle
# lvs | grep oracle
  u01lv oracle -wi-a----- 100.00g
  u02lv oracle -wi-a-----   1.50t
  u03lv oracle -wi-a-----   1.50t
  u04lv oracle -wi-a-----   1.50t
  u05lv oracle -wi-a----- 300.00g
Now it's time to create the file systems. We'll be using the standard XFS type of file system:
# mkfs.xfs /dev/oracle/u01lv
# mkfs.xfs /dev/oracle/u02lv
# mkfs.xfs /dev/oracle/u03lv
# mkfs.xfs /dev/oracle/u04lv
# mkfs.xfs /dev/oracle/u05lv
And now that the file systems have been created on top of the logical volumes, we can mount the file systems. To ensure that file systems are mounted at the time that the system boots up, it's best to add the new file systems to file /etc/fstab. Add the following lines to that file:
/dev/oracle/u01lv      /u01     xfs      defaults,noatime 0 0
/dev/oracle/u02lv      /u02     xfs      defaults,noatime 0 0
/dev/oracle/u03lv      /u03     xfs      defaults,noatime 0 0
/dev/oracle/u04lv      /u04     xfs      defaults,noatime 0 0
/dev/oracle/u05lv      /u05     xfs      defaults,noatime 0 0
Make sure the mount point directories exist by creating them:
# mkdir /u01
# mkdir /u02
# mkdir /u03
# mkdir /u04
# mkdir /u05
Now mount all the file systems at once:
# mount -a
And then verify that the file systems are indeed present:
# df -h | grep u0
/dev/mapper/oracle-u01lv  100G   33M  100G   1% /u01
/dev/mapper/oracle-u02lv  1.5T   33M  1.5T   1% /u02
/dev/mapper/oracle-u03lv  1.5T   33M  1.5T   1% /u03
/dev/mapper/oracle-u04lv  1.5T   33M  1.5T   1% /u04
/dev/mapper/oracle-u05lv  300G   33M  300G   1% /u05
And that's it. The file systems have been created, and these file systems will persist during a system reboot.
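Should one of these file systems need more space later on, it can be grown online, by first extending the logical volume and then growing the XFS file system into it (XFS file systems can be grown, but not shrunk). For example, to add 50 GB to /u01:
# lvextend -L +50G /dev/oracle/u01lv
# xfs_growfs /u01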

Topics: Monitoring, Red Hat

Monitoring a log file through systemd

The following procedure describes how you can continuously monitor a log file through the use of systemd on Red Hat Enterprise Linux (or similar operating systems).

Let's say you want to receive an email when a certain string occurs in a log file. For example, if the string "error" occurs in file /var/log/messages.

First, create a script that does a tail of /var/log/messages and searches for that string:

# cat /usr/local/bin/monitor.bash
#!/bin/bash

# Follow the log file, starting at its end (-n0), and check every new line.
tail -fn0 /var/log/messages | while read -r line ; do
   # grep -q: quiet, just set the exit status; -i: case-insensitive match
   if echo "${line}" | grep -qi "error" ; then
      echo "${line}" | mailx -s "error in messages file" your@emailaddress.com
   fi
done
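Note that this script assumes that the mailx command is available (provided by the mailx package on RHEL 7) and that the system is set up to deliver mail.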
You can run that script, and be done with it. But if that script somehow gets cancelled or killed, for example when the system is rebooted, then the monitoring of the log file stops as well. That's where systemd comes in.

Create a file in folder /etc/systemd/system, such as "monitor.service", and add the following:
[Unit]
Description=My monitor script
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash /usr/local/bin/monitor.bash
TimeoutStartSec=0
Restart=always
StartLimitInterval=0

[Install]
WantedBy=default.target
This file describes a service managed by systemd, and it basically tells systemd to run the script that we created earlier, and to restart it in case it fails (that's what "Restart=always" is for).

Next, you'll have to tell systemd that you made some changes:
# systemctl daemon-reload
Now you can start the newly defined service:
# systemctl start monitor.service
And after starting it, you can query the status:
# systemctl status monitor.service
monitor.service - My monitor script
   Loaded: loaded (/etc/systemd/system/monitor.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-03 12:12:28 CDT; 2s ago
 Main PID: 1832 (bash)
   CGroup: /system.slice/monitor.service
           ├─1832 /bin/bash /usr/local/bin/monitor.bash
           ├─1833 tail -fn0 /var/log/messages
           └─1834 /bin/bash /usr/local/bin/monitor.bash

Oct 03 12:12:28 enemigo systemd[1]: Started My monitor script.
Oct 03 12:12:28 enemigo systemd[1]: Starting My monitor script...
As you can see in the output above, both the monitor.bash script and the tail command are running. To test if the service is actually restarted, you can try killing the tail process or the monitor.bash script, and then check the status again. You'll see it has been restarted.
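For example, to kill the tail process by name (the PIDs shown will differ after the restart):
# pkill -f "tail -fn0 /var/log/messages"
# systemctl status monitor.service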

You may also want to test that the new monitor script is indeed working. Since /var/log/messages is written to through rsyslog, you can log an entry containing the string "error" as follows:
# logger error
Next, you should receive an email saying that an "error" occurrence was found in the messages file.

Finally, you'll want to make sure this new monitor service is also started when the system boots:
# systemctl enable monitor.service

Topics: Hardware, Red Hat

Common items to install under Red Hat on Dell hardware

There are a few items that can be very useful to install within Red Hat, if it runs on Dell hardware. These are: OpenManage Server Administrator (OMSA), which provides more information about the Dell hardware; Dell System Update (DSU), which can be used to update firmware and BIOS on Dell hardware; and the Dell iDRAC Service Module, which allows the iDRAC to exchange information with the operating system.

First, set up the Dell Linux repository:

# curl -s http://linux.dell.com/repo/hardware/dsu/bootstrap.cgi | bash
Next, install OpenManage Server Administrator (OMSA), make sure to start it, and enable it at boot time:
# yum -y install srvadmin-all
# /opt/dell/srvadmin/sbin/srvadmin-services.sh start
# /opt/dell/srvadmin/sbin/srvadmin-services.sh enable
With OMSA installed, you can, for example, now retrieve information about the physical disks in the system, and also the virtual disks (or RAID arrays) configured on these physical disks. Here's how you can use the command line interface tools to look up this information:

List the controllers available in the system:
# /opt/dell/srvadmin/bin/omreport storage controller
Now, list the physical disks (or pdisks) for each controller, for example for the controller with ID 0:
# /opt/dell/srvadmin/bin/omreport storage pdisk controller=0
And you can list the virtual disks (or vdisks) for each controller, for example for the controller with ID 0:
# /opt/dell/srvadmin/bin/omreport storage vdisk controller=0
A lot more is possible with OMSA, but that's outside the scope of this article. Instead, let's move on with the items to install.

Install Dell System Update (or DSU):
# yum -y install dell-system-update
To update firmware, you can now run:
# dsu
Usually it's just fine to select all firmware items to update (by pressing "a") and have it updated (by pressing "c"). This may take a while, and may require a reboot of the system. Upon reboot, the system may also take a while to complete the firmware and/or BIOS updates.

Finally, the Dell iDRAC Service Module. The latest version (at the time of writing this article) can be found here: https://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=GH8R3. Copy the download link for the GNU ZIP file on that page.

The Dell iDRAC Service Module requires the usbutils package to be installed:
# yum -y install usbutils
Now you can download and install the Dell iDRAC Service Module:
# mkdir /tmp/dsm
# cd /tmp/dsm
# wget https://downloads.dell.com/FOLDER05038177M/1/OM-iSM-Dell-Web-LX-320-1234_A00.tar.gz
# gzip -d *z
# tar xf *tar
# ./setup.sh
Here it's best to select all features and hit "i" to install. Keep everything at default settings and answer "yes" to any other questions. After installation is completed, you can log in to the iDRAC of the system, and view Operating System information there. This information has been communicated from the OS to the iDRAC by the Dell iDRAC Service Module.
