Topics: AIX, Performance, Storage, System Admin

Creating a RAM disk on AIX

The AIX mkramdisk command allows system administrators to create memory-resident file systems. The performance benefits of using a RAM disk can be astonishing: the unload of a large TSM database was reduced from 40 hours on SAN disks to just 10 minutes on a RAM disk.

The configuration of a RAM disk file system is very simple and takes just a few minutes. Once the file system is mounted, it can be used like any other file system. There are three steps involved: creating the RAM disk, making the file system and then mounting the file system.

First, we create the RAM disk, specifying the size we want. Let's create a RAM disk of 4 GB:

# mkramdisk 4G
The system will assign the next available RAM disk. Since this is our first one, it will be assigned the name ramdisk0:
# ls -l /dev/ram*
brw-------    1 root system  46, 0 Sep 22 08:01 /dev/ramdisk0
If there isn't sufficient memory available to create the RAM disk you have requested, the mkramdisk command will alert you. Free up some memory or create a smaller RAM disk. You can also use Dynamic LPAR on the HMC or IVM to assign more memory to your partition.

We could use the RAM disk /dev/ramdisk0 as a raw logical volume, but here we’re going to create and mount a JFS2 file system. Here's how to create the file system using the RAM disk as its logical volume:
# mkfs -V jfs2 /dev/ramdisk0
Now create the mount point:
# mkdir -p /ramdisk0
And mount the file system:
# mount -V jfs2 -o log=NULL /dev/ramdisk0 /ramdisk0
Note: mounting a JFS2 file system with logging disabled (log=NULL) only works on AIX 6.1 and later. On AIX 5.3, use a JFS file system with the nointegrity mount option instead:
# mkramdisk 4G
# mkfs -V jfs /dev/ramdisk0
# mkdir /ramdisk0
# mount -V jfs -o nointegrity /dev/ramdisk0 /ramdisk0
You should now be able to see the new file system using df, and you can write to it as you would any other file system. When you're finished, unmount the file system and then remove the RAM disk using the rmramdisk command:
# rmramdisk ramdisk0
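The steps above can be collected into a small helper. This is only a sketch: it prints the commands instead of running them (pipe the output to sh on AIX to execute), it assumes AIX 6.1 or later for the log=NULL mount, and it assumes mkramdisk will assign the device name you pass in (the command reports the actual name it assigns).

```shell
# Sketch: print the commands to create and mount a RAM disk file system.
# Assumes AIX 6.1+ (for the log=NULL mount) and that mkramdisk will
# assign the device name given here (mkramdisk reports the real name).
ramdisk_fs_cmds() {
    size="$1"; dev="$2"; mnt="$3"
    echo "mkramdisk $size"
    echo "mkfs -V jfs2 /dev/$dev"
    echo "mkdir -p $mnt"
    echo "mount -V jfs2 -o log=NULL /dev/$dev $mnt"
}
ramdisk_fs_cmds 4G ramdisk0 /ramdisk0
```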

Topics: AIX, EMC, SAN, Storage, System Admin

Unable to remove hdiskpower devices due to a method error

If you get a method error when trying to remove your hdiskpower devices with rmdev -dl, follow this procedure.

Removing the hdiskpower devices with rmdev fails with the error "method error (/etc/methods/ucfgpowerdisk):".
The fix is to uninstall and reinstall PowerPath, but you won't be able to do so until you remove the hdiskpower devices with this procedure:
  1. # odmdelete -q name=hdiskpowerX -o CuDv
    (for every hdiskpower device)
  2. # odmdelete -q name=hdiskpowerX -o CuAt
    (for every hdiskpower device)
  3. # odmdelete -q name=powerpath0 -o CuDv
  4. # odmdelete -q name=powerpath0 -o CuAt
  5. # rm /dev/powerpath0
  6. Remove the files installed by PowerPath and then reboot the server. After the reboot you will be able to uninstall PowerPath with the "installp -u EMCpower" command. The files to be removed are as follows:

    (Do not be concerned if some of the removals fail, as PowerPath may not have been fully configured.)
    # rm ./etc/PowerPathExtensions
    # rm ./etc/emcp_registration
    # rm ./usr/lib/boot/protoext/disk.proto.ext.scsi.pseudo.power
    # rm ./usr/lib/drivers/pnext
    # rm ./usr/lib/drivers/powerdd
    # rm ./usr/lib/drivers/powerdiskdd
    # rm ./usr/lib/libpn.a
    # rm ./usr/lib/methods/cfgpower
    # rm ./usr/lib/methods/cfgpowerdisk
    # rm ./usr/lib/methods/chgpowerdisk
    # rm ./usr/lib/methods/power.cat
    # rm ./usr/lib/methods/ucfgpower
    # rm ./usr/lib/methods/ucfgpowerdisk
    # rm ./usr/lib/nls/msg/en_US/power.cat
    # rm ./usr/sbin/powercf
    # rm ./usr/sbin/powerprotect
    # rm ./usr/sbin/pprootdev
    # rm ./usr/lib/drivers/cgext
    # rm ./usr/lib/drivers/mpcext
    # rm ./usr/lib/libcg.so
    # rm ./usr/lib/libcong.so
    # rm ./usr/lib/libemcp_mp_rtl.so
    # rm ./usr/lib/drivers/mpext
    # rm ./usr/lib/libmp.a
    # rm ./usr/sbin/emcpreg
    # rm ./usr/sbin/powermt
    # rm ./usr/share/man/man1/emcpreg.1
    # rm ./usr/share/man/man1/powermt.1
    # rm ./usr/share/man/man1/powerprotect.1
    
  7. Re-install Powerpath.
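Steps 1 and 2 must be repeated for every hdiskpower device. A loop can generate the odmdelete commands; this sketch only echoes them (pipe the output to sh to execute), and the hardcoded device names are placeholders:

```shell
# Sketch: emit odmdelete commands for each hdiskpower device.
# Device names are placeholders; on AIX derive the real list with:
#   lsdev -Cc disk | awk '/hdiskpower/ {print $1}'
for dev in hdiskpower0 hdiskpower1 hdiskpower2; do
    echo "odmdelete -q name=$dev -o CuDv"
    echo "odmdelete -q name=$dev -o CuAt"
done
```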

Topics: LVM, Red Hat / Linux, Storage

Howto extend an ext3 filesystem in RHEL5

You can grow an ext3 filesystem while it is online: the functionality is included in resize2fs. To resize a logical volume, start by extending the volume:

# lvextend -L +2G /dev/systemvg/homelv
And then resize the filesystem:
# resize2fs /dev/systemvg/homelv
By omitting the size argument, resize2fs defaults to using all available space in the partition or logical volume.
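The two commands combine naturally into a wrapper. This is a sketch that only prints the commands (pipe the output to sh as root to execute); the LV path and size are the ones from the example above:

```shell
# Sketch: print the commands to grow an LV and its ext3 filesystem online.
grow_ext3() {
    lv="$1"; extra="$2"
    echo "lvextend -L +$extra $lv"
    echo "resize2fs $lv"    # no size argument: grow to fill the LV
}
grow_ext3 /dev/systemvg/homelv 2G
```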

Topics: AIX, Storage, System Admin

Using NFS

The Network File System (NFS) is one of a category of filesystems known as distributed filesystems. It allows users to access files resident on remote systems without even knowing that a network is involved, and thus allows filesystems to be shared among computers. These remote systems can be located in the same room or miles away.

In order to access such files, two things must happen. First, the remote system must make the files available to other systems on the network. Second, these files must be mounted on the local system to be able to access them. The mounting process makes the remote files appear as if they are resident on the local system. The system that makes its files available to others on the network is called a server, and the system that uses a remote file is called a client.

NFS Server

NFS consists of a number of components, including a mounting protocol, a file locking protocol, an export file and daemons (mountd, nfsd, biod, rpc.lockd, rpc.statd) that coordinate basic file services.

Systems using NFS make the files available to other systems on the network by "exporting" their directories to the network. An NFS server exports its directories by putting the names of these directories in the /etc/exports file and executing the exportfs command. In its simplest form, /etc/exports consists of lines of the form:

pathname -option, option ...
Where pathname is the name of the file or directory to which network access is to be allowed; if pathname is a directory, then all of the files and directories below it within the same filesystem are also exported, but not any filesystems mounted within it. The next fields in the entry consist of various options that specify the type of access to be given and to whom. For example, a typical /etc/exports file may look like this:
/cyclop/users    -access=homer:bart, root=homer
/usr/share/man   -access=marge:maggie:lisa
/usr/mail
This export file permits the filesystem /cyclop/users to be mounted by homer and bart, and allows root access to it from homer. In addition, it allows /usr/share/man to be mounted by marge, maggie and lisa. The filesystem /usr/mail can be mounted by any system on the network. Filesystems listed in the export file without a specific set of hosts are mountable by all machines, which can be a sizable security hole.

When used with the -a option, the exportfs command reads the /etc/exports file and exports all the directories listed to the network. This is usually done at system startup time.
# exportfs -va
If the contents of /etc/exports change, you must tell mountd to reread it. This can be done by re-executing the exportfs command after the export file is changed.

The exact attributes that can be specified in the /etc/exports file vary from system to system. The most common attributes are:
  • -access=list : Colon-separated list of hostnames and netgroups that can mount the filesystem.
  • -ro : Export read-only; no clients may write on the filesystem.
  • -rw=list : List enumerates the hosts allowed to mount for writing; all others must mount read-only.
  • -root=list : Lists hosts permitted to access the filesystem as root. Without this option, root access from a client is equivalent to access by the user nobody (usually UID -1).
  • -anon : Specifies UID that should be used for requests coming from an unknown user. Defaults to nobody.
  • -hostname : Allow hostname to mount the filesystem.
For example:
/cyclop/users    -rw=moe,anon=-1
/usr/inorganic   -ro
This allows moe to mount /cyclop/users for reading and writing, and maps anonymous users (users from other hosts that do not exist on the local system and the root user from any remote system) to the UID -1. This corresponds to the nobody account, and it tells NFS not to allow such users access to anything.

NFS Clients

After files, directories and/or filesystems have been exported, an NFS client must explicitly mount them before it can use them. The mount request is handled by the mountd daemon (sometimes called rpc.mountd) on the server, which examines the request to be sure the client has proper authorization.

The following syntax is used for the mount command. Note that the name of the server is followed by a colon and the directory to be mounted:
# mount server1:/usr/src /src
Here, the directory structure /usr/src resident on the remote system server1 is mounted on the /src directory on the local system.

When the remote filesystem is no longer needed, it is unmounted with the umount command:
# umount server1:/usr/src
The mount command can be used to establish temporary network mounts, but mounts that are part of a system's permanent configuration should be either listed in /etc/filesystems (for AIX) or handled by an automatic mounting service such as automount or amd.
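On AIX, such a permanent mount is a stanza in /etc/filesystems. The following is a sketch for the server1:/usr/src example above; the mount options shown (bg,hard,intr) are common choices, not requirements:

```
/src:
        dev             = /usr/src
        vfs             = nfs
        nodename        = server1
        mount           = true
        options         = bg,hard,intr
        account         = false
```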

NFS Commands
  • lsnfsexp : Displays the characteristics of directories that are exported via NFS.
    # lsnfsexp
    /software -ro
    
  • mknfsexp -d path -t ro : Exports a read-only directory to NFS clients and adds it to /etc/exports.
    # mknfsexp -d /software -t ro
    /software ro
    Exported /software
    # lsnfsexp
    /software -ro
    
  • rmnfsexp -d path : Unexports a directory from NFS clients and removes it from /etc/exports.
    # rmnfsexp -d /software
    
  • lsnfsmnt : Displays the characteristics of NFS mountable file systems.
  • showmount -e : List exported filesystems.
    # showmount -e
    export list for server:
    /software (everyone)
    
  • showmount -a : List hosts that have remotely mounted local filesystems.
    # showmount  -a
    server2:/sourcefiles
    server3:/datafiles
    
Start/Stop/Status NFS daemons

In the following discussion, reference to daemon implies any one of the SRC-controlled daemons (such as nfsd or biod).

The NFS daemons can be automatically started at system (re)start by including the /etc/rc.nfs script in the /etc/inittab file.

They can also be started manually by executing the following command:
# startsrc -s Daemon or startsrc -g nfs
Where the -s option will start the individual daemons and -g will start all of them.

These daemons can be stopped one at a time or all at once by executing the following command:
# stopsrc -s Daemon or stopsrc -g nfs
You can get the current status of these daemons by executing the following commands:
# lssrc -s [Daemon]
# lssrc -a
If the /etc/exports file does not exist, the nfsd and the rpc.mountd daemons will not start. You can get around this by creating an empty /etc/exports file. This will allow the nfsd and the rpc.mountd daemons to start, although no filesystems will be exported.

Topics: AIX, Storage, System Admin

Working with disks

As time passes, some devices are added to a system and some are removed from it. AIX learns about hardware changes when the root user executes the cfgmgr command. Without any attributes, it scans all buses for attached devices. Information acquired by cfgmgr is stored in the ODM (Object Data Manager). Cfgmgr only discovers new devices; removing devices is achieved with rmdev or odmdelete. Cfgmgr can be executed in quiet (cfgmgr) or verbose (cfgmgr -v) mode, and it can be directed to scan all or selected buses.

The basic command to learn about disks is lspv. Executed without any parameters, it will generate a listing of all disks recorded in the ODM, for example:

# lspv
hdisk0     00c609e0a5ec1460         rootvg     active
hdisk1     00c609e037478aad         rootvg     active
hdisk4     00c03c8a14fa936b         abc_vg     active
hdisk2     00c03b1a32e50767         None
hdisk3     00c03b1a32ee4222         None
hdisk5     00c03b1a35cdcdf0         None
Each row describes one disk. The first column shows its name, followed by the PVID and the volume group it belongs to. "None" in the last column indicates that the disk does not belong to any volume group. "active" in the last column indicates that the volume group is varied on. The existence of a PVID indicates that the disk may contain data; such a disk may belong to a volume group that is varied off.
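Disks that show a PVID but "None" for the volume group are the ones worth inspecting before reuse. The lspv output is easy to filter with awk; the listing from above is embedded here as sample text so the filter can be demonstrated standalone (the hdisk6 line, a disk without a PVID, is an invented example):

```shell
# Filter lspv-style output for disks that have a PVID but belong to
# no volume group. On AIX you would pipe the real command instead:
#   lspv | awk '$2 != "none" && $3 == "None" {print $1}'
lspv_out='hdisk0     00c609e0a5ec1460         rootvg     active
hdisk2     00c03b1a32e50767         None
hdisk5     00c03b1a35cdcdf0         None
hdisk6     none                     None'
echo "$lspv_out" | awk '$2 != "none" && $3 == "None" {print $1}'
```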

Executing lspv with a disk name generates information only about this device:
# lspv hdisk4
PHYSICAL VOLUME:   hdisk4                 VOLUME GROUP:    abc_vg
PV IDENTIFIER:     00c03c8a14fa936b       VG IDENTIFIER:   00c03b1a000
PV STATE:          active
STALE PARTITIONS:  0                      ALLOCATABLE:     yes
PP SIZE:           16 megabyte(s)        LOGICAL VOLUMES: 2
TOTAL PPs:         639 (10224 megabytes)  VG DESCRIPTORS:  2
FREE PPs:          599 (9584 megabytes)   HOT SPARE:       no
USED PPs:          40 (640 megabytes)     MAX REQUEST:     256 kb
FREE DISTRIBUTION: 128..88..127..128..128
USED DISTRIBUTION: 00..40..00..00..00
From this output we can determine the disk's size, the number of logical volumes on it (two), the number of physical partitions in need of synchronization (STALE PARTITIONS) and the number of VGDAs (VG DESCRIPTORS). Executing lspv against a disk that does not belong to a volume group does nothing useful:
# lspv hdisk2
0516-304: Unable to find device id hdisk2 in the Device configuration database
How do you establish the capacity of a disk that does not belong to a volume group? The next command provides this in megabytes:
# bootinfo -s hdisk2
10240
The same (and much more) information can be retrieved by executing lsattr -El hdisk#:
# lsattr -El hdisk0
PCM             PCM/scsiscsd      Path Control Module   False
algorithm       fail_over         Algorithm             True
dist_err_pcnt   0                 Distributed Error %   True
dist_tw_width   50                Sample Time           True
hcheck_interval 0                 Health Check Interval True
hcheck_mode     nonactive         Health Check Mode     True
max_transfer    0x40000           Maximum TRANSFER Size True
pvid            00c609e0a5ec1460  Volume identifier     False
queue_depth     3                 Queue DEPTH           False
reserve_policy  single_path       Reserve Policy        True
size_in_mb      73400             Size in Megabytes     False
unique_id       26080084C1AF0FHU  Unique identifier     False
The last command can be limited to show only the size if executed as shown:
# lsattr -El hdisk0 -a size_in_mb
size_in_mb 73400 Size in Megabytes False
A disk can get a PVID in one of two ways: by virtue of membership in a volume group (when running the extendvg or mkvg commands), or as the result of executing the chdev command. The lqueryvg command helps establish whether there is data on the disk.
# lqueryvg -Atp hdisk2
0516-320 lqueryvg: hdisk2 is not assigned to a volume group.
Max LVs:        256
PP Size:        26
Free PPs:       1117
LV count:       0
PV count:       3
Total VGDAs:    3
Conc Allowed:   0
MAX PPs per PV  1016
MAX PVs:        32
Quorum (disk):  1
Quorum (dd):    1
Auto Varyon ?:  1
Conc Autovaryo  0
Varied on Conc  0
Physical:       00c03b1a32e50767   1   0
                00c03b1a32ee4222   1   0
                00c03b1a9db2f183   1   0
Total PPs:      1117
LTG size:       128
HOT SPARE:      0
AUTO SYNC:      0
VG PERMISSION:  0
SNAPSHOT VG:    0
IS_PRIMARY VG:  0
PSNFSTPP:       4352
VARYON MODE:    ???????
VG Type:        0
Max PPs:        32512
This disk belongs to a volume group that had three disks:
PV count: 3
Their PVIDs are:
Physical:       00c03b1a32e50767   1   0
                00c03b1a32ee4222   1   0
                00c03b1a9db2f183   1   0
At this time, it does not have any logical volumes:
LV count: 0
From such output it is easy to see that a disk belongs to a volume group; logical volume names are the best proof of this. To display the data stored on a disk you can use the lquerypv command.

A PVID can be assigned to or removed from a disk that does not belong to a volume group by executing the chdev command.
# chdev -l hdisk2 -a pv=clear
hdisk2 changed
# lspv | grep hdisk2
hdisk2          none         None
Now, let's give the disk a new PVID:
# chdev -l hdisk2 -a pv=yes
hdisk2 changed
# lspv | grep hdisk2
hdisk2          00c03b1af578bfea    None
At times it is required to restrict access to a disk or to its capacity. You can use the chpv command for this purpose. To prevent I/O access to a disk:
# chpv -v r hdisk2
To allow I/O:
# chpv -v a hdisk2
To prevent allocation of the disk's free PPs:
# chpv -a n hdisk2
To allow allocation of free PPs again:
# chpv -a y hdisk2
AIX was created years ago, when disks were very expensive. I/O optimization, that is, deciding which data would be read and written faster than other data, was determined by the data's position on the disk. Between I/Os, the disk heads are parked in the middle, so the fastest I/O takes place in the middle of the disk. With this in mind, a disk is divided into five bands: outer edge, outer-middle, center, inner-middle and inner edge. This method of assigning physical partitions (logical volumes) as a function of the band on a disk is called the intra-physical volume allocation policy. This policy, together with the policy defining the spread of a logical volume across disks (the inter-physical volume allocation policy), gains importance when creating logical volumes.

Disk topology, i.e. the range of physical partitions in each band, is visualized with the commands lsvg -p vg_name and lspv hdisk#. Note the last two lines of the lspv output:
FREE DISTRIBUTION:  128..88..127..128..128
USED DISTRIBUTION:  00..40..00..00..00
The row labeled FREE DISTRIBUTION shows the number of free PPs in each band; the row labeled USED DISTRIBUTION shows the number of used PPs in each band. As you can see, some bands of this disk hold no data. Nowadays this policy has lost its meaning, as even the slowest disks are much faster than their predecessors. In the case of RAID or SAN disks, it has no meaning at all. For those who still use individual SCSI or SSA disks, it is good to remember that data closer to the outer edge is read and written the slowest.
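The five dot-separated numbers in each distribution line add up to the FREE PPs and USED PPs totals reported by lspv, which makes for a quick consistency check with awk (values taken from the example above):

```shell
# Sum the per-band PP counts from an lspv distribution line.
# 128+88+127+128+128 = 599, matching FREE PPs in the example output.
sum_bands() {
    echo "$1" | tr '.' ' ' | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; print s }'
}
sum_bands "128..88..127..128..128"   # FREE DISTRIBUTION
sum_bands "00..40..00..00..00"       # USED DISTRIBUTION
```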

To learn which logical volumes are located on a given disk, execute lspv -l hdisk#. The reverse relation is established by executing lslv -M lv_name.

It is always a good idea to know which adapter and which bus a disk is attached to. Otherwise, if one of the disks breaks, how will you know which disk to remove and replace? AIX has many commands that can help. It is customary to start with the adapter, identifying all adapters known to the kernel:
# lsdev -Cc adapter | grep -i scsi
scsi0   Available 1S-08    Wide/Ultra-3 SCSI I/O Controller
scsi1   Available 1S-09    Wide/Ultra-3 SCSI I/O Controller
scsi2   Available 1c-08    Wide/Fast-20 SCSI I/O Controller
The last command produced information about the SCSI adapters present during the last execution of cfgmgr; this output also allows you to establish in which drawer an adapter is located. The listing tells us that there are three SCSI adapters. The second column shows the device state (Available: ready to be used; Defined: device needs further configuration). The next column shows the location (drawer/bus). The last column contains a short description. Executing a similar command against a disk from rootvg produces:
# lsdev -Cc disk -l hdisk0
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
From both outputs we can determine which SCSI adapter controls this disk: scsi0. We also see that the disk has SCSI ID 8,0. How do you determine the type, model, capacity, part number, and so on?
# lscfg -vl hdisk0
  hdisk0  U0.1-P2/Z1-A8  16 Bit LVD SCSI Disk Drive (36400 MB)

        Manufacturer................IBM
        Machine Type and Model......IC35L036UCDY10-0
        FRU Number..................00P3831
        ROS Level and ID............53323847
        Serial Number...............E3WP58EC
        EC Level....................H32224
        Part Number.................08K0293
        Device Specific.(Z0)........000003029F00013A
        Device Specific.(Z1)........07N4972
        Device Specific.(Z2)........0068
        Device Specific.(Z3)........04050
        Device Specific.(Z4)........0001
        Device Specific.(Z5)........22
        Device Specific.(Z6)........
You can get more details by executing command: lsattr -El hdisk0.

Topics: AIX, EMC, PowerHA / HACMP, SAN, Storage, System Admin

Missing disk method in HACMP configuration

This issue appears when trying to bring up a resource group; the hacmp.out log file contains, for example:

cl_disk_available[187] cl_fscsilunreset fscsi0 hdiskpower1 false
cl_fscsilunreset[124]: openx(/dev/hdiskpower1, O_RDWR, 0, SC_NO_RESERVE): Device busy
cl_fscsilunreset[400]: ioctl SCIOLSTART id=0X11000 lun=0X1000000000000 : Invalid argument
To resolve this, you will have to make sure that the SCSI reset disk method is configured in HACMP. For example, when using EMC storage:

Make sure emcpowerreset is present as /usr/lpp/EMC/Symmetrix/bin/emcpowerreset.

Then add new custom disk method:
  • Enter into the SMIT fastpath for HACMP "smitty hacmp".
  • Select Extended Configuration.
  • Select Extended Resource Configuration.
  • Select HACMP Extended Resources Configuration.
  • Select Configure Custom Disk Methods.
  • Select Add Custom Disk Methods.
      Change/Show Custom Disk Methods

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                 [Entry Fields]
* Disk Type (PdDvLn field from CuDv)             disk/pseudo/power
* New Disk Type                                  [disk/pseudo/power]
* Method to identify ghost disks                 [SCSI3]
* Method to determine if a reserve is held       [SCSI_TUR]
* Method to break reserve [/usr/lpp/EMC/Symmetrix/bin/emcpowerreset]
  Break reserves in parallel                     true
* Method to make the disk available              [MKDEV]

Topics: AIX, Storage, System Admin

Mounting a Windows share on an AIX system

There is a way to mount a share from a Windows system on an AIX system, much like an NFS filesystem, using CIFS:

  1. Install the CIFS software on the AIX server (this is part of AIX itself: bos.cifs_fs).
  2. Create a folder on the Windows machine, e.g. D:\share.
  3. Create a local user, e.g. "share" (user IDs from Active Directory cannot be used): Settings -> Control Panel -> User Accounts -> Advanced tab -> Advanced button -> Select Users -> Right-click in the right window and select "New User" -> Enter the user name, the password twice, deselect "User must change password at next logon", then click Create, Close and OK.
  4. Make sure the folder on the D: drive (in this case "share") is shared, give the share a name (we'll use "share" again in this example), and give "full control" permissions to "Everyone".
  5. Create a mountpoint on the AIX machine to mount the Windows share on, e.g. /mnt/share.
  6. Type on the AIX server as user root (the -n flag takes hostname/username/password; here the Windows user is named "share"):
    # mount -v cifs -n hostname/share/password -o uid=201,fmode=750 /share /mnt/share
  7. You're done!

Topics: AIX, Backup & restore, Storage, System Admin

JFS2 snapshots

JFS2 filesystems allow you to create file system snapshots. Creating a snapshot actually creates a new file system with a copy of the metadata of the original (snapped) file system. The snapshot (like a photograph) remains unchanged, so it is possible to back up the snapshot while the original data is used (and changed!) by applications. When data on the original file system changes while a snapshot exists, the original data is copied to the snapshot to keep the snapshot in a consistent state. For these changes you need temporary space, so you create a snapshot of a specific size to accommodate updates while the snapshot exists; usually 10% is enough. Database file systems are usually not a good subject for snapshots, because all database files change constantly while the database is active, causing a lot of copying of data from the original to the snapshot file system.

In order to have a snapshot you have to:

  • Create and mount a JFS2 file system (source FS). You can find it in SMIT as "enhanced" file system.
  • Create a snapshot of a size big enough to hold the changes of the source FS by issuing smitty crsnap. Once you have created this snapshot as a logical device or logical volume, there's a read-only copy of the data in source FS. You have to mount this device in order to work with this data.
  • Mount your snapshot device by issuing smitty mntsnap. You have to provide a directory name over which AIX will mount the snapshot. Once mounted, this device will be read-only.
Creating a snapshot of a JFS2 file system:
# snapshot -o snapfrom=$FILESYSTEM -o size=${SNAPSIZE}M
Where $FILESYSTEM is the mount point of your file system and $SNAPSIZE is the amount of megabytes to reserve for the snapshot.
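The 10% rule of thumb mentioned above translates to simple arithmetic. A sketch, assuming a 20 GB (20480 MB) source file system mounted on the hypothetical /data:

```shell
# Pick a snapshot size of 10% of the source file system (sizes in MB).
# The 20480 MB size and the /data mount point are assumed examples.
FSSIZE=20480
SNAPSIZE=$((FSSIZE / 10))
echo "snapshot -o snapfrom=/data -o size=${SNAPSIZE}M"
```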

Check if a file system holds a snapshot:
# snapshot -q $FILESYSTEM
When the snapshot runs full, it is automatically deleted. Therefore, create it large enough to hold all changed data of the source FS.

Mounting the snapshot:

Create a directory:
# mkdir -p /snapshot$FILESYSTEM
Find the logical device of the snapshot:
# SNAPDEVICE=`snapshot -q $FILESYSTEM | grep -v ^Snapshots | grep -v ^Current | awk '{print $2}'`
Mount the snapshot:
# mount -v jfs2 -o snapshot $SNAPDEVICE /snapshot$FILESYSTEM
Now you can backup your data from the mountpoint you've just mounted.
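The SNAPDEVICE pipeline above can be sanity-checked against sample snapshot -q output. The header lines and the /dev/fslv01 device name below are assumptions that mimic the command's layout; only the filtering logic is being demonstrated:

```shell
# Demonstrate extracting the snapshot logical device from `snapshot -q`
# output; the sample text below is an assumed approximation of the layout.
sample='Snapshots for /data
Current  Location          512-blocks        Free Time
*        /dev/fslv01          4194304     4190000 Mon Sep 22 08:01:00'
SNAPDEVICE=$(echo "$sample" | grep -v '^Snapshots' | grep -v '^Current' | awk '{print $2}')
echo "$SNAPDEVICE"
```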

When you're finished with the snapshot:

Unmount the snapshot filesystem:
# unmount /snapshot$FILESYSTEM
Remove the snapshot:
# snapshot -d $SNAPDEVICE
Remove the mount point:
# rm -rf /snapshot$FILESYSTEM
When you restore data from a snapshot, be aware that the backup of the snapshot is actually a different file system in your backup system, so you have to specify a restore destination to restore the data to.

Topics: AIX, SAN, SDD, Storage

PVID trouble

To add a PVID to a disk, enter:

# chdev -l vpathxx -a pv=yes
To clear all reservations from a previously used SAN disk:
# chpv -C vpathxx

Topics: AIX, Storage, System Admin

Burning AIX ISO files on CD

If you wish to put AIX files on a CD, you *COULD* use Windows. But Windows imposes certain restrictions on file name length and permissions. Also, Windows can't handle files that begin with a dot, like ".toc", which is a very important file if you wish to burn installable filesets on a CD.

How do you solve this problem?

  • Put all files you wish to store on a CD in a separate directory, like: /tmp/cd
  • Create an ISO file of this directory. You'll need mkisofs to accomplish this. This is part of the AIX Toolbox for Linux. You can find it in /opt/freeware/bin.
    # mkisofs -o /path/to/file.iso -r /tmp/cd
  • This will create a file called file.iso. Make sure you have enough storage space.
  • Transfer this file to a PC with a CD-writer in it.
  • Burn this ISO file to CD using Easy CD Creator or Nero.
  • The CD will be usable in any AIX CD-ROM drive.
