Topics: Networking, Red Hat, Storage

How to install and configure Samba on CentOS 7 for file sharing with Windows

Here's how to set up a secure Samba share on a CentOS 7 (or RHEL 7) system and access it from a Windows client.

First, install Samba:

# yum install samba samba-client samba-common
Add an exception to the firewall, if the firewall is active:
# firewall-cmd --permanent --zone=public --add-service=samba
# firewall-cmd --reload
Next, you'll need to know the workgroup the Windows system is configured in. By far the easiest way to find out is to open a command prompt on the Windows system and run:
net config workstation
For the sake of this tutorial, we'll assume the workgroup is called WORKGROUP.

Make a copy of the Samba config file:
# cp /etc/samba/smb.conf /etc/samba/smb.conf.orig
Set up a secure file share. In the example below, the share will be located in /media/windows/share on the CentOS 7 system. Be sure to set the permissions in such a way that the user account used for the share (see below) indeed has access to this folder.
# mkdir -p /media/windows/share
# chmod -R 0755 /media/windows/share
# chown -R user:group /media/windows/share
Edit file /etc/samba/smb.conf and add:
[global]
        workgroup = WORKGROUP
        netbios name = centos

[Share]
        comment = Shared Folder
        path = /media/windows/share
        valid users = user
        browsable = yes
        writable = yes
        guest ok = no
        read only = no
Set the SMB password for the user (this will be the username and password used to access the share from Windows):
# smbpasswd -a user
New SMB password:
Retype new SMB password:
Make sure everything is okay:
# testparm
Now enable and start Samba:
# systemctl enable smb.service
# systemctl enable nmb.service
# systemctl start smb.service
# systemctl start nmb.service
On the Windows host, in File Explorer, type the IP address of the CentOS system, for example:
\\192.168.0.206
You will be asked for the username and password used when you ran the smbpasswd command.
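Optionally, you can also map the share to a drive letter from a Windows Command Prompt. A minimal example, assuming the example IP address, the [Share] share name and the "user" account used above, and that drive letter Z: is free:
net use Z: \\192.168.0.206\Share /user:user /persistent:yes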

And that should do it; you should now have a secure Samba share available to the Windows system.

Windows may cache the credentials used for the Samba share(s). While configuring the Samba share(s), you may need to have Windows "forget" these credentials. This can easily be done by running the following from a Command Prompt:
net use * /del

Topics: Monitoring, Networking, Red Hat

Securely enabling SNMP on Red Hat

Monitoring tools often use SNMP to query another system's information and status. For that to work on a Red Hat Enterprise Linux system, that system will have to have SNMP configured. And to allow a remote (monitoring) system to query SNMP information of a Red Hat Enterprise Linux system, one has to complete the following 3 items:

  • Set up SNMP.
  • Configure SNMP to use a non-public community name.
  • Allow access through the firewall, if configured.
For the configuration of SNMP, you'll need to install the following 2 packages:
# yum -y install net-snmp net-snmp-utils
Next, start and enable (at boot time) the SNMP daemon to run on the system:
# systemctl enable snmpd
# systemctl start snmpd
Now you can test whether you can query SNMP information locally on the system, by using the snmpwalk command:
# snmpwalk -v2c -c public localhost | head -5
The community string used above ("public") is the well-known default SNMP community string, and it can be (and probably will be) used by hackers or other unfriendly people to obtain information about the system remotely. As such, it's best practice to change the public community name into something different, preferably something that can't easily be guessed. For the sake of this tutorial, we'll change it to "kermit".

Basically, you'll have to update this line in /etc/snmp/snmpd.conf from "public" to "kermit":

Before:
com2sec notConfigUser  default       public
After:
com2sec notConfigUser  default       kermit
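Optionally, you can also restrict which source addresses are allowed to use the community name, by replacing "default" with a network. A sketch, assuming your monitoring server lives in the 10.1.1.0/24 network:
com2sec notConfigUser  10.1.1.0/24   kermit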
Then, restart the SNMP daemon, so it picks up the changes to configuration file /etc/snmp/snmpd.conf:
# systemctl restart snmpd
Now test again with the snmpwalk command, but this time using the "kermit" community name:
# snmpwalk -v2c -c kermit localhost
That should give you quite a bit of output. If it doesn't, you've made a mistake, and you'll have to re-trace your steps.

The final step is to allow remote access. That will be needed if a remote system is being used to monitor the server, for example by a tool like Solarwinds. By default, remote access will be blocked by the firewall daemon on the system. To allow remote access, open up UDP port 161 on the system being monitored:
# firewall-cmd --zone=public --add-port=161/udp --permanent
# firewall-cmd --reload
Now log in to a remote system and run a similar snmpwalk command, but this time, specify the hostname of the server that you're querying (instead of "localhost"). For example, if the name of the host is "myserver", run:
# snmpwalk -v2c -c kermit myserver
And that's it. You can now remotely monitor a Linux server using SNMP, and you've secured it by changing the community name.

Topics: Networking, Red Hat

Running tcpdump

From time to time, there may be a need to run a tcpdump, to analyze the TCP traffic on a Red Hat system.

Now, there's a perfectly good description of how to do that on the Red Hat website at https://access.redhat.com/solutions/8787, so we won't be repeating that on this blog.

Just a few simple commands to get the tcpdump command going:

To start a tcpdump, for example on network interface em1, and dump the output to a file called /tmp/tcpdump.out, run:

# tcpdump -s 0 -i em1 -w /tmp/tcpdump.out -v
The "-v" option used in the example above, shows the number of packets that it captured, while the tcpdump command is running, and thus is very useful. Once you think you have gathered enough information, hit CTRL-C to stop the tcpudmp. Be careful, running tcpdump can create quite a bit of output, especially if there's a lot of network traffic going on. This may fill up the the file system where the tcpdump output file is located in, pretty quickly, so don't leave the tcpdump running for prolonged periods of time.

To review the contents of the tcpdump output, use the "-r" option:
# tcpdump -r /tmp/tcpdump.out
The "tcpdump -r" command will show you detailed information about the captured network packets.

Topics: Networking, Red Hat

Setting up a bonded network interface on RHEL 7

The following procedure describes how to set up a bonded network interface on Red Hat Enterprise Linux. It assumes that you already have a working single network interface, and wish to move the system to a bonded network interface set-up, to allow for network redundancy, for example by connecting two separate network interfaces, preferably on two different network cards in the server, to two different network switches. This provides redundancy both if a network card in the server fails and if a network switch fails.
First, log in as user root on the console of the server. We are going to change the current network configuration to a bonded network configuration, and while doing so, the system will temporarily lose network connectivity, so it is best to work from the console.

In this procedure, we'll be using network interfaces em1 and p3p1, on two different cards, to get card redundancy (just in case one of the network cards fails).

Let's assume that IP address 172.29.126.213 is currently configured on network interface em1. You can verify that, by running:

# ip a s
Also, we'll need to verify, using the ethtool command, that there is indeed a good link status on both the em1 and p3p1 network interfaces:
# ethtool em1
# ethtool p3p1
Run the following command to list the bonding module info (the module should be available by default, so this is just to verify):
# modinfo bonding
Create copies of the current network files, just for safe-keeping:
# cd /etc/sysconfig/network-scripts
# cp ifcfg-em1 /tmp
# cp ifcfg-p3p1 /tmp
Now, create a new file ifcfg-bond0 in /etc/sysconfig/network-scripts. We'll configure the IP address of the system (the one that was configured previously on network interface em1) on a new bonded network interface, called bond0. Make sure to update the file with the correct IP address, gateway and network mask for your environment:
# cat ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.29.126.213
NETMASK=255.255.255.0
GATEWAY=172.29.126.1
BONDING_OPTS="mode=5 miimon=100"
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
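Note that mode=5 (balance-tlb, transmit load balancing) is used here, which matches the "Bonding Mode: transmit load balancing" output shown further below. If you only need simple failover, active-backup mode is a common alternative; a sketch of the only line that would change in ifcfg-bond0:
BONDING_OPTS="mode=1 miimon=100"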
The next thing to do is to create two more files, one for each of the network interfaces that will be slaves of the bonded network interface. In our example, those will be em1 and p3p1.

Create file /etc/sysconfig/network-scripts/ifcfg-em1 (be sure to adjust the file to your environment, for example by using the correct UUID; you may find it in the copies you've made of the previous network interface files). In this file, you'll also specify that the bond0 interface is now the master.
# cat ifcfg-em1
TYPE=Ethernet
BOOTPROTO=none
NAME=em1
UUID=cab24cdf-793e-4aa7-a093-50bf013910db
DEVICE=em1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Create file ifcfg-p3p1:
# cat ifcfg-p3p1
TYPE=Ethernet
BOOTPROTO=none
NAME=p3p1
UUID=5017c829-2a57-4626-8c0b-65e807326dc0
DEVICE=p3p1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Now, we're ready to start using the new bonded network interface. Restart the network service:
# systemctl restart network.service
Run the ip command to check the current network config:
# ip a s
The IP address should now be configured on the bond0 interface.

Ping the default gateway, to test if your bonded network interface can reach the switch. In our example, the default gateway is set to 172.29.126.1:
# ping 172.29.126.1
This should work. If not, re-trace the steps you've done so far, or work with your network team to identify the issue.

Check that both slave interfaces of the bonded interface are up, and which one is currently active. You can do this by looking at file /proc/net/bonding/bond0, which shows the currently active slave and whether all slaves of the bonded network interface are up. For example:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: p3p1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: p3p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:ce:26:30
Slave queue ID: 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0
In the example above, the active network interface is p3p1. Let's bring it down, to see if it fails over to network interface em1. You can bring down a network interface using the ifdown command:
# ifdown p3p1
Device 'p3p1' successfully disconnected.
Again, look at the /proc/net/bonding/bond0 file. You can now see that the active network interface has changed to em1, and that network interface p3p1 is no longer listed in the file (because it is down):
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0
Now ping the default gateway again, and make sure it still works (now that we're using network interface em1 instead of network interface p3p1).

Then bring the p3p1 interface back up, using the ifup command:
# ifup p3p1
And check the bonding status again:
# cat /proc/net/bonding/bond0
It should show that the active network interface is still em1; it will not fail back to network interface p3p1 (after all, why would it? Network interface em1 works just fine).

Now repeat the same test: bring down network interface em1, check the bonding status, ping the default gateway again, and then bring em1 back up and check once more:
# ifdown em1
# cat /proc/net/bonding/bond0
# ping 172.29.126.1
# ifup em1
# cat /proc/net/bonding/bond0
# ping 172.29.126.1
If this all works fine, then you're all set.

Topics: AIX, Networking

Adding and deleting a static network route using the command line

There are two commands that can be used to add a route on an AIX system.

The first one is route, which can be used to temporarily add a route to an AIX system. Meaning: if the system is rebooted after the route has been added, the route will be lost again.

The second command is chdev -l inet0, which can be used to permanently add a route on an AIX system. When this command is used, the route will persist across reboots, as this command writes the route information into the ODM of AIX.

Let's say you have a need to add a route on a system to network 10.0.0.0. And that network uses a netmask of 255.255.255.0 (or "24" for the short mask notation). Finally, the gateway that can be used to access this network is 192.168.0.1. Obviously, please adjust this to your own situation.

To temporarily add a route on a system for this network, use the following route command:

# route add -net 10.0.0.0 -netmask 255.255.255.0 192.168.0.1
After running this command, you can use the netstat -nr command to confirm that the route indeed has been set up:
# netstat -nr | grep 192.168.0.1
10.0.0/24       192.168.0.1   UG   0   0   en1   -   -
To remove that route again, simply change the route command from "add" to "delete":
# route delete -net 10.0.0.0 -netmask 255.255.255.0 192.168.0.1
Again, confirm with the netstat -nr command that the route indeed has been removed.

Now, as mentioned earlier, the route command will only temporarily (until the next reboot) add a route on the AIX system. To make things permanent, use the chdev command. This command takes the following form:

chdev -l inet0 -a route=net,-netmask,[your-netmask-goes-here],-static,[your-network-address-goes-here],[your-gateway-goes-here]

For example:
# chdev -l inet0 -a route=net,-netmask,255.255.255.0,-static,10.0.0.0,192.168.0.1
inet0 changed
This time, again, you can confirm with the netstat -nr command that the route has been set up. But now you can also confirm that the route has been added to the ODM, by using this command:
# lsattr -El inet0 -a route | grep 192.168.0.1
route net,-netmask,255.255.255.0,-static,10.0.0.0,192.168.0.1 Route True
At this point, you can reboot the system, and you'll notice that the route is still there, by repeating the netstat -nr and lsattr -El inet0 commands.

To remove this permanent route from the AIX system, simply change the chdev command above from "route" to "delroute":
# chdev -l inet0 -a delroute=net,-netmask,255.255.255.0,-static,10.0.0.0,192.168.0.1
inet0 changed
Finally, again confirm using the netstat -nr and lsattr -El inet0 commands that the route indeed has been removed.

Topics: Networking, Red Hat, System Admin

RHEL: Delete multiple default gateways

A Red Hat Enterprise Linux system should have a single default gateway defined. However, it does sometimes occur that a system has multiple default gateways. Here's how to detect multiple default gateways and how to get rid of them:

First, check the number of default gateways defined, by running the netstat command and looking for entries that start with 0.0.0.0:

# netstat -nr | grep ^0.0.0.0
0.0.0.0     192.168.0.1     0.0.0.0    UG        0 0        0 em1
0.0.0.0     192.168.1.1     0.0.0.0    UG        0 0        0 em2
In the example above, there are 2 default gateway entries, one to 192.168.0.1, and another one to 192.168.1.1.

Quite often, more than one default gateway will be defined on a RHEL system if there are multiple network interfaces present and a GATEWAY entry is defined in each of the network interface files in /etc/sysconfig/network-scripts/ifcfg-*:
# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-*
ifcfg-em1:GATEWAY=192.168.0.1
ifcfg-em2:GATEWAY=192.168.1.1
On a system with multiple network interfaces, it is best to define the default gateway in file /etc/sysconfig/network instead. This file is the global network configuration file. Put the following entries in it, assuming your default gateway is 192.168.0.1 and the network interface to be used for the default gateway is em1:
GATEWAY=192.168.0.1
GATEWAYDEV=em1
Next, remove any GATEWAY entries in any of the ifcfg-* files in /etc/sysconfig/network-scripts.

Finally, restart the network service:
# service network restart
This should resolve the multiple default gateways, and the output of the netstat command should now show only a single entry starting with 0.0.0.0.

Note: If the netstat command is not available on the system, you may also determine the number of default gateways, by running:
# ip route show | grep ^default
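With the example configuration above (GATEWAY=192.168.0.1 on em1), you would expect a single line similar to:
default via 192.168.0.1 dev em1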

Topics: Networking, Red Hat, Storage, System Admin

Quick NFS configuration on Red Hat

This is a quick NFS configuration on RHEL, without too much concern about security, fine-tuning or access control. In our scenario, there are two hosts:

  • NFS Server, IP 10.1.1.100
  • NFS Client, IP 10.1.1.101
First, start with the NFS server:

On the NFS server, run the commands below to begin the NFS server installation:
[nfs-server] # yum install nfs-utils rpcbind
Next, for this procedure, we'll export an arbitrary directory called /opt/nfs. Create the /opt/nfs directory:
[nfs-server] # mkdir -p /opt/nfs
Edit the /etc/exports file (the NFS exports file) and add the line below, to export directory /opt/nfs to client 10.1.1.101:
/opt/nfs 10.1.1.101(no_root_squash,rw)
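If you'd rather allow an entire subnet instead of a single client, the exports syntax also accepts a network. A sketch, assuming the clients live in 10.1.1.0/24:
/opt/nfs 10.1.1.0/24(no_root_squash,rw)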
Next, make sure to open port 2049 on your firewall to allow client requests:
[nfs-server] # firewall-cmd --zone=public --add-port=2049/tcp --permanent
[nfs-server] # firewall-cmd --reload
Start the rpcbind and NFS server daemons in this order:
[nfs-server] # service rpcbind start; service nfs start
Check the NFS server status:
[nfs-server] # service nfs status 
Redirecting to /bin/systemctl status nfs.service
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; 
 vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           order-with-mounts.conf
   Active: active (exited) since Tue 2017-11-14 09:06:21 CST; 1h 14min ago
 Main PID: 2883 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service
Next, export all the file systems configured in /etc/exports:
[nfs-server] # exportfs -rav
And check the currently exported file systems:
[nfs-server] # exportfs -v
Next, continue with the NFS client:

Install the required packages:
[nfs-client] # yum install nfs-utils rpcbind
[nfs-client] # service rpcbind start
Create a mount point directory on the client, for example /mnt/nfs:
[nfs-client] # mkdir -p /mnt/nfs
Discover the NFS exported file systems:
[nfs-client] # showmount -e 10.1.1.100
Export list for 10.1.1.100:
/opt/nfs 10.1.1.101
Mount the previously NFS exported /opt/nfs directory:
[nfs-client] # mount 10.1.1.100:/opt/nfs /mnt/nfs
Test the correctness of the setup between the NFS server and the NFS client by creating a file in the NFS mounted directory on the client side:
[nfs-client] # cd /mnt/nfs/
[nfs-client] # touch testfile
[nfs-client] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
Move to the server side and check if the testfile file exists:
[nfs-server] # cd /opt/nfs/
[nfs-server] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
At this point it is working, but it is not set up to remain permanently (as in: it will be gone when either the NFS server or the NFS client is rebooted). To ensure it keeps working even after a reboot, perform the following steps:

On the NFS server side, to have the NFS server service enabled at system boot time, run:
[nfs-server] # systemctl enable nfs-server
On the NFS client side, add an entry to the /etc/fstab file that will ensure the NFS file system is mounted at boot time (a quick way to verify the entry is shown after the option list below):
10.1.1.100:/opt/nfs  /mnt/nfs  nfs4  soft,intr,nosuid  0 0
The options for the NFS file systems are as follows:
  • soft = No hard mounting; avoids hanging file access commands on the NFS client if the NFS server is unavailable.
  • intr = Allow NFS requests to be interrupted if the NFS server goes down or can't be reached.
  • nosuid = This prevents remote users from gaining higher privileges by running a setuid program.
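To verify the fstab entry without rebooting, you can unmount the share and let mount read it back from /etc/fstab, then check that the NFS mount is back:
[nfs-client] # umount /mnt/nfs
[nfs-client] # mount -a
[nfs-client] # df -h /mnt/nfs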
If you need to know, on the NFS server side, which clients are using the NFS file system, you can use the netstat command and search for the NFS server IP address and port 2049:
[nfs-server] # netstat -an | grep 10.1.1.100:2049
This will tell you the established connections for each of the clients, for example:
tcp  0  0  10.1.1.100:2049  10.1.1.101:757  ESTABLISHED
In the example above you can see that IP address 10.1.1.101 on port 757 (NFS client) is connected to port 2049 on IP address 10.1.1.100 (NFS server).

Topics: Networking, System Admin

Ping tricks

A few tricks for the ping command to thoroughly test your network connectivity and check how much time a ping request takes:

Increase the frequency of the ping requests from the default of one request per second to, for example, 10 ping requests per second by using the -i option. As a test, to ping 192.168.0.1 ten times a second, run:

# ping -i .1 192.168.0.1
You can also go down to an interval of 1/100th of a second (note that intervals below 0.2 seconds typically require root privileges):
# ping -i .01 192.168.0.1
To increase the default packet size of 56 data bytes (64 bytes including the ICMP header), use the -s option; for example, to send 1 KB of data with every ping request, run:
# ping -s 1024 192.168.0.1
Or combine the -i and -s options:
# ping -s 1024 -i .01 192.168.0.1
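It can also be useful to combine these options with -c, to send a fixed number of requests and get the packet loss and round-trip time statistics in the summary at the end. For example, 100 requests of 1 KB each, 10 per second:
# ping -c 100 -s 1024 -i .1 192.168.0.1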

Topics: Networking, System Admin

Measuring network throughput with Iperf

Iperf is a command-line tool that can be used to diagnose network speed related issues, or just simply determine the available network throughput.

Iperf measures the maximum network throughput a server can handle. It is particularly useful when experiencing network speed issues, as you can use Iperf to determine what the maximum throughput is for a server.

First, you'll need to install iperf.

For AIX:

Iperf is available from http://www.perzl.org/aix/index.php?n=Main.iperf. Download the RPM file, for example iperf-2.0.9-1.aix5.1.ppc.rpm to your AIX system. Next install it:

# rpm -ihv iperf-2.0.9-1.aix5.1.ppc.rpm
For Red Hat Enterprise Linux:

You'll first need to install EPEL, as Iperf is not available in the standard Red Hat repositories. For example for Red Hat 7 systems:
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Next, you'll have to install Iperf itself:
# yum -y install iperf
Now that you have Iperf installed, you can start testing the connection between two servers. So, you'll need to have at least two servers with Iperf installed.

On the server you wish to test, launch Iperf in server mode:
# iperf -s
That will put the server in listening mode; besides that, nothing happens until a client connects. Once a client has connected and run a test, the output will look something like this:
# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  4] local 198.51.100.5 port 5001 connected with 198.51.100.6 port 59700
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  9.76 GBytes  8.38 Gbits/sec
On the other server, connect to the first server. For example, if your first server is at IP address 198.51.100.5, run:
# iperf -c 198.51.100.5
After about 10 seconds, you'll see output on your screen showing the amount of data transferred, and the available bandwidth. The output may look something like this:
#  iperf -c 198.51.100.5
------------------------------------------------------------
Client connecting to 198.51.100.5, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 198.51.100.6 port 59700 connected with 198.51.100.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.76 GBytes  8.38 Gbits/sec
You can run multiple tests while the server Iperf process is listening on the first server. When you've completed your test, you can CTRL-C the running server Iperf command.
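Two client options that can be useful (both available in Iperf 2): -t to run the test longer than the default 10 seconds, and -P to use several parallel streams, which may give a better picture of the total available bandwidth. For example, a 30-second test with 4 parallel streams:
# iperf -c 198.51.100.5 -t 30 -P 4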

For more information, see the official Iperf site at iperf.fr.

Topics: AIX, Monitoring, Networking, Red Hat, Security, System Admin

Determining type of system remotely

If you run into a system that you can't access, but that is available on the network, and you have no idea what type of system it is, then there are a few tricks you can use to determine the type of system remotely.

The first one is to look at the TTL (Time To Live) when pinging the system's IP address. For example, a ping to an AIX system may look like this:

# ping 10.11.12.82
PING 10.11.12.82 (10.11.12.82) 56(84) bytes of data.
64 bytes from 10.11.12.82 (10.11.12.82): icmp_seq=1 ttl=253 time=0.394 ms
...
TTL (Time To Live) is a value in the IP header of packets sent over networks; every router that forwards a packet decreases the TTL by one, and once it reaches zero the packet is discarded. Operating systems use different default TTL values, so you can often determine the OS from the TTL value you see. A detailed list of operating systems and their default TTL values can be found online. Basically, a UNIX/Linux system has a TTL of 64, Windows uses 128, and AIX/Solaris uses 254.

Now, in the example above, you can see "ttl=253". It's still an AIX system, but there's most likely a router in between, decreasing the TTL by one.

Another good method is by using nmap. The nmap utility has a -O option that allows for OS detection:
# nmap -O -v 10.11.12.82 | grep OS
Initiating OS detection (try #1) against 10.11.12.82 (10.11.12.82)
OS details: IBM AIX 5.3
OS detection performed.
Okay, so it isn't a perfect method either. We ran the nmap command above against an AIX 7.1 system, and it came back as AIX 5.3 instead. And sometimes, you'll have to run nmap a couple of times, before it successfully discovers the OS type. But still, we now know it's an AIX system behind that IP.

Another option you may use is to query SNMP information. If the device is SNMP-enabled (it is running an SNMP daemon and allows you to query SNMP information), then you may be able to run a command like this:
# snmpinfo -h 10.11.12.82 -m get -v sysDescr.0
sysDescr.0 = "IBM PowerPC CHRP Computer
Machine Type: 0x0800004c Processor id: 0000962CG400
Base Operating System Runtime AIX version: 06.01.0008.0015
TCP/IP Client Support  version: 06.01.0008.0015"
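The snmpinfo command used above is AIX-specific. On a Linux system with the net-snmp-utils package installed, a roughly equivalent query (assuming the device still uses the default "public" community name) would be:
# snmpget -v2c -c public 10.11.12.82 sysDescr.0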
By the way, the SNMP example above is exactly why UNIX Health Check generally recommends disabling SNMP, or at least disallowing the exposure of such system information through SNMP by updating the /etc/snmpdv3.conf file appropriately, because this information can be very useful to hackers. On the other hand, your organization may use monitoring that relies on SNMP, in which case it needs to be enabled. But even then you still have the option of changing the SNMP community name to something else (the default is "public"), which also limits the remote information gathering possibilities.
