On AIX, the environment variable EXTENDED_HISTORY adds timestamps to your shell history. In ksh, you set it as follows:
# export EXTENDED_HISTORY=ON
A good practice is to set this variable in /etc/environment.
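Since /etc/environment contains plain NAME=value assignments (no export keyword), the entry there would simply look like this:

```
EXTENDED_HISTORY=ON
```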
To view your history:
# history
888 ? :: cd aix_auth/
889 ? :: vi server
890 ? :: ldapsearch
891 ? :: fc -lt
892 ? :: fc -l
NOTE: commands that were executed before this environment variable was set will show a question mark in the timestamp field, as in the output above.
If you use the fc command, you will have to add the "-t" option when listing to see the timestamps:
# fc -lt
Red Hat Linux provides the following tools for making changes to the network configuration, such as adding a new card, assigning an IP address, or changing the DNS server:
- GUI tool (X Windows required) - system-config-network
- Command-line, text-based tool (no X Windows required) - system-config-network-tui
- Editing the configuration files directly, stored in the /etc/sysconfig/network-scripts directory
The following instructions are compatible with CentOS, Fedora Core and Red Hat Enterprise Linux 3, 4 and 5.
Editing the configuration files stored in /etc/sysconfig/network-scripts:
First change directory to /etc/sysconfig/network-scripts/:
# cd /etc/sysconfig/network-scripts/
You need to edit or create files as follows:
- /etc/sysconfig/network-scripts/ifcfg-eth0 : First Ethernet card configuration file
- /etc/sysconfig/network-scripts/ifcfg-eth1 : Second Ethernet card configuration file
To edit/create the first NIC file, type the following command:
# vi ifcfg-eth0
Append/modify as follows:
# Intel Corporation 82573E Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=static
DHCPCLASS=
HWADDR=00:30:48:56:A6:2E
IPADDR=10.251.17.204
NETMASK=255.255.255.0
ONBOOT=yes
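For comparison, if the interface should obtain its address via DHCP instead of a static assignment, a minimal variant of the same file might look like this (the device name is just the example used above):

```
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
```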
Save and close the file. Define the default gateway (router IP) and hostname in the /etc/sysconfig/network file:
# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=host.domain.com
GATEWAY=10.251.17.1
Save and close the file. Restart networking:
# /etc/init.d/network restart
Make sure you have the correct DNS servers defined in the /etc/resolv.conf file. Try to ping the gateway and other hosts on your network. Also check whether you can resolve host names:
# nslookup host.domain.com
Also verify that the NTP servers are correct in /etc/ntp.conf, and that you can connect to the time server, by running the ntpdate command against one of the NTP servers:
# ntpdate 10.20.30.40
This should synchronize system time with time server 10.20.30.40.
Should you ever need hardware information from your Linux server, a very useful command is dmidecode. It is a tool for dumping a computer's DMI table contents in a human-readable format. This table contains a description of the system's hardware components, as well as other useful pieces of information, such as serial numbers and the BIOS revision.
For example:
# dmidecode | awk 'BEGIN {RS = "\n\n"} /System Information/'
Handle 0x0100, DMI type 1, 27 bytes
System Information
Manufacturer: HP
Product Name: ProLiant DL360 G5
Version: Not Specified
Serial Number: MX8Q835AYV
UUID: 34353379-3232-4D85-5183-333843155695
Wake-up Type: Power Switch
SKU Number: 457922-001
Family: ProLiant
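The awk invocation above works because setting the record separator RS to a blank line makes each DMI handle one awk record, so the pattern prints only the matching block. A minimal sketch with fabricated sample output (not from a real machine) demonstrates the trick:

```shell
# Two fake DMI records separated by a blank line; only the record
# matching /System Information/ is printed.
sample='Handle 0x0000, DMI type 0, 24 bytes
BIOS Information
	Vendor: HP

Handle 0x0100, DMI type 1, 27 bytes
System Information
	Manufacturer: HP'

printf '%s\n' "$sample" | awk 'BEGIN {RS = "\n\n"} /System Information/'
```

Note that multi-character RS values are treated as a regular expression by gawk and mawk; very old awk implementations may only honor the first character.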
An issue may occur when trying to bring up a resource group; for example, the hacmp.out log file contains the following:
cl_disk_available[187] cl_fscsilunreset fscsi0 hdiskpower1 false
cl_fscsilunreset[124]: openx(/dev/hdiskpower1, O_RDWR, 0, SC_NO_RESERVE): Device busy
cl_fscsilunreset[400]: ioctl SCIOLSTART id=0X11000 lun=0X1000000000000 : Invalid argument
To resolve this, you will have to make sure that the SCSI reset disk method is configured in HACMP. For example, when using EMC storage:
Make sure the emcpowerreset utility is present as /usr/lpp/EMC/Symmetrix/bin/emcpowerreset.
Then add a new custom disk method:
- Enter into the SMIT fastpath for HACMP "smitty hacmp".
- Select Extended Configuration.
- Select Extended Resource Configuration.
- Select HACMP Extended Resources Configuration.
- Select Configure Custom Disk Methods.
- Select Add Custom Disk Methods.
Change/Show Custom Disk Methods
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Disk Type (PdDvLn field from CuDv) disk/pseudo/power
* New Disk Type [disk/pseudo/power]
* Method to identify ghost disks [SCSI3]
* Method to determine if a reserve is held [SCSI_TUR]
* Method to break reserve [/usr/lpp/EMC/Symmetrix/bin/emcpowerreset]
Break reserves in parallel true
* Method to make the disk available [MKDEV]
There are a couple of options for running background jobs:
Option one:
Start the job as normal, then press CTRL-Z. The shell will report that the job is stopped; type "bg" and it will continue in the background. Type "fg" if you want it to run in the foreground again. You can alternate between CTRL-Z, bg, and fg as often as you like. The process will be killed once you log out; you can avoid this by starting the job with: nohup command.
Option two:
Use the at command, and run the command as follows:
# echo "command" | at now
This will start it in the background and it will keep on running even if you log out.
Option three:
Run it with an ampersand:
command &
This will run it in the background, but the process will be killed if you log out. You can avoid the process being killed by running:
nohup command &
Option four:
Schedule it as a one-time job in the crontab.
With all options, make sure you redirect any output and errors to a file, like:
# command > command.out 2>&1
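A minimal sketch of option three combined with output redirection (the command and the log file name are arbitrary examples):

```shell
# Run a job in the background, immune to hangup, with stdout and
# stderr both captured in a log file.
nohup sh -c 'echo "background job done"' > command.out 2>&1 &

wait                # wait here for the background job to finish
cat command.out     # inspect the captured output
```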
UNIX doesn't store a file creation timestamp in the inode information. The timestamps recorded are the last access timestamp, the last modified timestamp and the last changed timestamp (which is the last change to the inode information). When a file is brand new, the last modified timestamp will be the creation timestamp of the file, but that piece of information is lost as soon as the file is modified in any way.
To view these timestamps on AIX, use the istat command; for example, for the /etc/rc.tcpip file:
# ls -li /etc/rc.tcpip
8247 -rwxrwxr-- 1 root system 6607 Jan 06 06:25 /etc/rc.tcpip
Now you know the inode number: 8247.
# istat /etc/rc.tcpip
Inode 8247 on device 10/4 File
Protection: rwxrwxr--
Owner: 0(root) Group: 0(system)
Link count: 1 Length 6607 bytes
Last updated: Wed Jan 6 06:25:49 PST 2010
Last modified: Wed Jan 6 06:25:49 PST 2010
Last accessed: Tue May 4 14:00:37 PDT 2010
The same type of information can be found using the fsdb command. Start the fsdb command with the file system where the file is located; in the example below, the root file system. Then type the number of the inode, followed by "i":
# fsdb /
File System: /
File System Size: 2097152 (512 byte blocks)
Disk Map Size: 20 (4K blocks)
Inode Map Size: 38 (4K blocks)
Fragment Size: 4096 (bytes)
Allocation Group Size: 2048 (fragments)
Inodes per Allocation Group: 4096
Total Inodes: 524288
Total Fragments: 262144
8247i
i#: 8247 md: f---rwxrwxr-- ln: 1 uid: 0 gid: 0
szh: 0 szl: 6607 (actual size: 6607)
a0: 0x1203 a1: 0x1204 a2: 0x00 a3: 0x00
a4: 0x00 a5: 0x00 a6: 0x00 a7: 0x00
at: Tue May 04 14:00:37 2010
mt: Wed Jan 06 06:25:49 2010
ct: Wed Jan 06 06:25:49 2010
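On Linux, where istat and fsdb are not available, the same three timestamps can be read with GNU stat; a minimal sketch using a scratch file (the file name is arbitrary):

```shell
# Create a scratch file and show its access, modify, and inode-change
# times; note there is still no creation timestamp among them.
touch demo_file
stat -c 'Access: %x' demo_file
stat -c 'Modify: %y' demo_file
stat -c 'Change: %z' demo_file
rm -f demo_file
```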
The most popular innovation of IBM AIX Version 6.1 is clearly workload partitioning (WPARs). Once you get past the marketing hype, you'll need to determine the value that WPARs can provide in your environment. What can WPARs do that logical partitions (LPARs) could not? How and when should you use WPARs? Equally important, when should you not use workload partitioning? Finally, how do you create, configure, and administer workload partitions?
For a very good introduction to WPARs, please refer to the following article: https://www.ibm.com/developerworks/aix/library/au-wpar61aix/.
This article describes the differences between system and application WPARs, the various commands available, such as mkwpar, lswpar, startwpar and clogin. It also describes how to create and manage file systems and users, and it discusses the WPAR manager. It ends with an excellent list of references for further reading.
To list the supported page sizes on a system:
# pagesize -a
4096
65536
16777216
17179869184
# pagesize -af
4K
64K
16M
16G
To learn more about the multiple page size support in AIX, please read IBM's related whitepaper.
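The pagesize command is AIX-specific; on most other UNIX-like systems, the base page size (though not the full list of supported sizes) can be queried portably with getconf:

```shell
# Print the system's base page size in bytes; 4096 is typical on x86,
# but the value is platform-dependent.
getconf PAGESIZE
```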
Prior to the introduction of POWER5 systems, it was only possible to create as many separate logical partitions (LPARs) on an IBM system as there were physical processors. Given that the largest IBM eServer pSeries POWER4 server, the p690, had 32 processors, 32 partitions were the most anyone could create. A customer could order a system with enough physical disks and network adapter cards, so that each LPAR would have enough disks to contain the operating system and enough network cards to allow users to communicate with each partition.
The Advanced POWER Virtualization feature of POWER5 platforms makes it possible to allocate fractions of a physical CPU to a POWER5 LPAR. Using virtual CPUs and virtual I/O, a user can create many more LPARs on a p5 system than there are CPUs or I/O slots. The Advanced POWER Virtualization feature accounts for this by allowing users to create shared network adapters and virtual SCSI disks. Customers can use these virtual resources to provide disk space and network adapters for each LPAR they create on their POWER5 system.

There are three components of the Advanced POWER Virtualization feature: Micro-Partitioning, shared Ethernet adapters, and virtual SCSI. In addition, AIX 5L Version 5.3 allows users to define virtual Ethernet adapters permitting inter-LPAR communication.
Micro-Partitioning
An element of the IBM POWER Virtualization feature called Micro-Partitioning can divide a single physical processor among many different partitions. In POWER4 systems, each physical processor is dedicated to an LPAR. This concept of dedicated processors is still present in POWER5 systems, but so is the concept of shared processors. A POWER5 system administrator can use the Hardware Management Console (HMC) to place processors in a shared processor pool. Using the HMC, the administrator can assign fractions of a CPU to individual partitions. If an LPAR is defined to use processors in the shared processor pool, the POWER Hypervisor makes those CPUs available to other partitions when they are idle. This ensures that these processing resources are not wasted. Also, the ability to assign fractions of a CPU to a partition means it is possible to divide POWER5 servers into many different partitions. Allocation of physical processor and memory resources on POWER5 systems is managed by a system firmware component called the POWER Hypervisor.
Virtual Networking
Virtual networking on POWER5 hardware consists of two main capabilities. One capability is provided by a software IEEE 802.1q (VLAN) switch that is implemented in the Hypervisor on POWER5 hardware. Users can use the HMC to add Virtual Ethernet adapters to their partition definitions. Once these are added and the partitions booted, the new adapters can be configured just like real physical adapters, and the partitions can communicate with each other without having to connect cables between the LPARs. Users can separate traffic from different VLANs by assigning different VLAN IDs to each virtual Ethernet adapter. Each AIX 5.3 partition can support up to 256 Virtual Ethernet adapters.
In addition, a part of the Advanced POWER virtualization virtual networking feature allows users to share physical adapters between logical partitions. These shared adapters, called Shared Ethernet Adapters (SEAs), are managed by a Virtual I/O Server partition which maps physical adapters under its control to virtual adapters. It is possible to map many physical Ethernet adapters to a single virtual Ethernet adapter, thereby eliminating a single physical adapter as a point of failure in the architecture.
There are a few things users of virtual networking need to consider before implementing it. First, virtual networking ultimately uses more CPU cycles on the POWER5 machine than when physical adapters are assigned to a partition. Users should consider assigning a physical adapter directly to a partition when heavy network traffic is predicted over a certain adapter. Secondly, users may want to take advantage of the larger MTU sizes that virtual Ethernet allows, if they know that their applications will benefit from the reduced fragmentation and better performance that larger MTU sizes offer. The MTU size limit for SEAs is smaller than that of virtual Ethernet adapters, so users will have to choose an MTU size carefully so that packets are sent to external networks with minimal fragmentation.
Virtual SCSI
The Advanced POWER Virtualization feature called virtual SCSI allows access to physical disk devices which are assigned to the Virtual I/O Server (VIOS). The system administrator uses VIOS logical volume manager commands to assign disks to volume groups. The administrator creates logical volumes in the Virtual I/O Server volume groups. Either these logical volumes or the physical disks themselves may ultimately appear as physical disks (hdisks) to the Virtual I/O Server's client partitions, once they are associated with virtual SCSI host adapters. While the Virtual I/O Server software is packaged as an additional software bundle that a user purchases separately from the AIX 5.3 distribution, the virtual I/O client software is a part of the AIX 5.3 base installation media, so an administrator does not need to install any additional filesets on a Virtual SCSI client partition.
An "unknown" entry appears when somebody tried to log in with a user ID that is not known to the system. It would be possible to show the user ID they attempted to use, but this is not done, because a common mistake is to enter the password instead of the user ID; if this were recorded, it would be a security risk.