Cloning and Restoring SGI Root Disks

(Partially taken from Disk and File System Administration)

I'll assume you have a spare slot in an existing SGI to perform this manoeuvre. The internal SCSI bus on an SGI is bus 0 on the Indigo2 and Octane, and appears to be bus 1 on the Power Challenge. The drives on the standard desktop SGI sleds are set up so that SCSI IDs are assigned automatically, apparently by slot location. On the Power Challenge, by contrast, the SCSI ID jumpered on the drive itself is honoured. Check all of this with the hardware inventory command hinv, which will show you what is going on.

Shut down the system - do not insert or remove SCSI devices with the system powered up! Doing so can destroy the SCSI bus. Do "shutdown -g0" after you've verified that no one is using the system.

Insert your new system drive into a spare slot. Reboot the system and use hinv to figure out what ID this new drive has (usually it has ID #2, with the system drive as ID #1).
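
To see which IDs are in use, you can restrict hinv to disks (I believe the -c flag takes a device class):
hinv -c disk
which should print a "Disk drive: unit N on SCSI controller M" line per drive; the new drive shows up as the extra unit.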

Make a mount point with:

mkdir /0
Use fx to repartition the disk. If the new drive is ID #2 on the internal SCSI bus, then the appropriate responses for fx are dksc (device name), 0 (controller #) and 2 (drive ID); just accept the default logical unit number. Select repartition and then rootdrive, and use xfs as the filesystem type. I cannot think of a good reason to make a userroot drive with / and /usr in separate partitions (some old systems have them, but there is nothing stopping you from having / and /usr in the same partition). While you are in fx, check the drive label (usually it does not need modifying).
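
For concreteness, the fx dialogue goes roughly like this for drive ID #2 on controller #0 (prompts from memory - check against what your fx version actually asks; defaults are in parentheses and <return> accepts them):
fx
fx: "device-name" = (dksc)     <return>
fx: ctlr# = (0)                <return>
fx: drive# = (1)               2
fx: lun# = (0)                 <return>
fx> repartition
fx/repartition> rootdrive
fx/repartition/rootdrive: type of data partition = (xfs)    <return>
(then inspect the drive label from the label menu if you want, and exit fx)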

With the drive repartitioned, make the filesystem:

mkfs_xfs /dev/dsk/dks0d2s0
which assumes your drive is on controller 0 (dks0), is ID #2 (d2), and that you are making the filesystem on partition 0 (the root partition). Mount this filesystem on the mount point /0:
mount /dev/dsk/dks0d2s0 /0
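
A quick sanity check that the new filesystem is mounted and roughly the right size:
df -k /0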

Cloning root disks

If you want to clone an existing root disk on a functioning system, do:
cd /
umount /proc
umount -t nfs
tar cvBpf - . | (cd /0; tar xBpf -)
/etc/mntproc
cd /0
/bin/rm -rf 0
This unmounts the dummy /proc filesystem to avoid confusing tar, unmounts all NFS mounts, clones the disk, remounts /proc, and removes the unnecessary duplicate of the /0 mount point inside the new root disk (not that leaving it would hurt).

Finally, the volume header information from the root disk must be copied onto the target disk. Enter the following to do this:

cd /stand
dvhtool -v get sash sash /dev/rdsk/dks0d1vh
dvhtool -v get ide ide /dev/rdsk/dks0d1vh
dvhtool -v creat sash sash /dev/rdsk/dks0d2vh
dvhtool -v creat ide ide /dev/rdsk/dks0d2vh
(this assumes your ROOT disk is ID #1 on controller #0 and the target disk is ID #2 on controller #0.) In the case of xray3, you might do:
cd /stand
dvhtool -v get sash sash
dvhtool -v get ide ide
dvhtool -v creat sash sash /dev/rdsk/dks1d2vh
dvhtool -v creat ide ide /dev/rdsk/dks1d2vh
i.e. it's the same except the controller is 1 and we don't go out of our way to specify the location of the root disk.
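
To confirm that sash and ide actually landed on the target disk, dvhtool can list the volume directory; if I have the syntax right, for the dks0d2 case above that is:
dvhtool -v list /dev/rdsk/dks0d2vh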

Restoring root disks

This is basically the same as cloning root disks, except you restore the disk contents from backup. Assuming you are using the 8mm drive on ximpact1, do the following:
cd /0
bru -xvjf ximpact1:/dev/rmt/tps1d4nr.8500c
The flags are -x (extract), -v (verbose), -j (map absolute pathnames to relative ones) and -f (read from the named archive device). You can then update /etc/passwd, /etc/hosts, /etc/fstab etc. if they are not from recent backups.
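
If you want to see what is on the tape before extracting, bru's -t flag lists the archive contents:
bru -tvf ximpact1:/dev/rmt/tps1d4nr.8500c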

Finally, the volume header information from the root disk must be copied onto the target disk. Enter the following to do this:

cd /stand
dvhtool -v get sash sash /dev/rdsk/dks0d1vh
dvhtool -v get ide ide /dev/rdsk/dks0d1vh
dvhtool -v creat sash sash /dev/rdsk/dks0d2vh
dvhtool -v creat ide ide /dev/rdsk/dks0d2vh
(this assumes your ROOT disk is ID #1 on controller #0 and the target disk is ID #2 on controller #0.) In the case of xray3, you might do:
cd /stand
dvhtool -v get sash sash
dvhtool -v get ide ide
dvhtool -v creat sash sash /dev/rdsk/dks1d2vh
dvhtool -v creat ide ide /dev/rdsk/dks1d2vh
i.e. it's the same except the controller is 1 and we don't go out of our way to specify the location of the root disk.

Note that since we have R4400, R10000 and R12000 machines, I think it is necessary to clone from the same class of machine, since the stand-alone shell (sash) and the stand-alone diagnostics (ide) probably use different microcode.

As of Feb 2003 I have copied sash and ide from every root disk volume header into /stand so that these files get backed up in the normal backup passes. The syntax (e.g.) "dvhtool -v get sash sash /dev/rdsk/dks0d1vh" works for everything except xray3, but the syntax "dvhtool -v get sash sash" is probably more general since the device defaults to the root disk. (The creat commands cannot use this shorthand in the procedures above, because there the target is not the root disk and you do not want to WRITE to the root disk.)

Cloning/Restoring User/Option Disks

This goes much the same way, except you usually restore option disks in situ rather than on a different system, because a system will happily boot with an unformatted option disk present. External disks are on SCSI controller #1, internal disks on #0. Use fx to partition and/or label the disk as an option disk (though there is no harm in formatting it as a root disk if you want to keep it as an impromptu spare), mount it on the appropriate mount point, and clone via tar or restore via bru. Note that an option disk's data partition is partition 7 (/dev/dsk/dks0d2s7 or /dev/dsk/dks1d3s7 etc.) and that you do not need the dvhtool commands since these are not boot disks. Then just restore your data from tape.
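
As a concrete example (the device name and mount point are illustrative - substitute your own), for an external disk at ID #3 on controller #1:
mkfs_xfs /dev/dsk/dks1d3s7
mkdir /usr2
mount /dev/dsk/dks1d3s7 /usr2
cd /usr2
bru -xvjf ximpact1:/dev/rmt/tps1d4nr.8500c
Add the disk to /etc/fstab if it is staying in the machine.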

Cloning root disks over the network

If you want to clone an existing root disk on a functioning system via NFS onto xray0, the procedure is marginally more complex. First open NFSManager on xray0 and see what the "share" (NFS export) directory is. It is usually /Volumes/Backup2/SGI_root_backups. Then you will want to unmount all the NFS mounts and /proc on the SGI. But you need to mount the xray0 disk onto the SGI via NFS, and to avoid the recursive hell of backing up the backup directory you have to be a little more explicit in making the backup. Instead of making a pure disk clone, I make a gzipped tar archive of the backup. You can then, as necessary, gunzip and untar it on xray0 (or any other machine) prior to restoring the disk. Note that first we put sash and ide into /stand.

It often pays to reboot the machine before doing this backup, because sometimes it conspicuously refuses to umount /proc or certain NFS directories. Logging into the terminal directly, rather than via rsh, also tends to help. If all else fails, the umount -k option attempts to kill processes that are clinging to the mount points; this tends to be a little vicious, so use it with care. In extreme cases you may need to comment the local disks out of /etc/exports and reboot. The sequence is:
cd /stand
dvhtool -v get sash sash /dev/rdsk/dks0d1vh
dvhtool -v get ide ide /dev/rdsk/dks0d1vh
cd /
umount /proc
umount -t nfs
umount /usr1 /usr2   # etc. as necessary - not really needed if you use the explicit Irix 6.2 tar syntax below
mount /xray0
That should do it as a starter - this obviously assumes you have an entry for the xray0 directory in /etc/fstab to mount onto /xray0. Then go ahead and do the tar. Notice that the -E flag allegedly omits directories not on the local filesystem (i.e. the contents of /xray0). The -B and -p flags are probably not necessary but do no harm. This works for Irix 6.5:
cd /
tar cvBEpf - / | gzip > /xray0/ximpact1_root.tgz
This works for Irix 6.2 (tar does not have the -E flag in 6.2):
tar cvBpf - .[a-z]* /CDROM /bin /debug /dev /dumpster /etc /lib /lib32 \
                    /lib64 /nsmail /opt /proc /sbin /stand /temp /test \
                    /tmp /unix /usr /var /xi* /xr[1-9]* /xt* | gzip > /xray0/ximpact1_root.tgz
which is a terribly elaborate way of working around the /xray0 mount. You might be able to rsh from xray0 onto the SGI, do "tar cvBpf - /", grab the feed, gzip it and stuff it on the local disk; I haven't tested this possibility yet (a sketch is given after the commands below). However, it is still not entirely automatable, since the umounts don't always "just work". Then when that is all done:
/etc/mntproc
exportfs -a
mount -a
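As noted above, the rsh pull is untested, but run from xray0 it would presumably look something like this (assuming rsh access from xray0 to the SGI and the share directory mentioned earlier):
rsh ximpact1 "tar cvBpf - /" | gzip > /Volumes/Backup2/SGI_root_backups/ximpact1_root.tgz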
If you don't have an /xray0 mount point in /etc/fstab you might consider adding the following:
xray0:/Volumes/Backup2/SGI_root_backups /xray0     nfs rw,bg,soft 0 0
Finally, verify that the archive can be read on the remote machine with tar -tzvf (the SGIs are unique amongst our Unix boxes in that their tar does not have the -z flag to g[un]zip on the fly).
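
For example, on xray0:
tar -tzvf /Volumes/Backup2/SGI_root_backups/ximpact1_root.tgz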