I'll assume you have a spare slot in an existing SGI to perform this manoeuvre. The internal SCSI bus is bus 0 on the Indigo2 and Octane, and seems to be bus 1 on the Power Challenge. The drives in the default desktop SGI sleds are set so that SCSI IDs are assigned automatically, apparently by slot location; the Power Challenge, by contrast, honours the SCSI ID set on the drive itself. Check all this with the hardware inventory command hinv, which will show you what is going on.
Shut down the system - do not insert or remove SCSI devices with the system powered up! This can destroy the SCSI bus. Do "shutdown -g0" after you've verified that no one is using the system.
Insert your new system drive into a spare slot. Reboot the system and use hinv to figure out what ID this new drive has (usually it has ID #2, with the system drive as ID #1).
Make a mount point with:
    mkdir /0

Use fx to repartition the disk. If the new drive is ID #2 on the internal SCSI bus, the appropriate responses to fx are dksc (device type), 0 (controller number) and 2 (drive ID); just use the default for the logical unit number. Select repartition, then rootdrive, and use xfs as the filesystem type. I cannot think of a good reason to make a usrroot drive with / and /usr in separate partitions (some old systems have them, but there's nothing stopping you having / and /usr in the same partition). While you are in fx, check the drive label (usually this does not need modifying).
With the drive repartitioned, make the filesystem:
    mkfs_xfs /dev/dsk/dks0d2s0

This assumes your drive is on controller 0 (dks0), at ID #2 (d2), with the root partition being partition 0 (s0). Mount this filesystem on the mount point /0:
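The dks<controller>d<id>s<partition> convention used in /dev/dsk and /dev/rdsk can be spelt out mechanically. A hypothetical helper (purely illustrative - not an IRIX tool) that decodes such a name:

```shell
# Illustrative only: decode the dks<controller>d<id>s<partition>
# device names used under /dev/dsk and /dev/rdsk. Not part of IRIX.
decode_dks() {
    echo "$1" | sed -n 's/^dks\([0-9][0-9]*\)d\([0-9][0-9]*\)s\([0-9][0-9]*\)$/controller \1, SCSI ID \2, partition \3/p'
}
decode_dks dks0d2s0   # -> controller 0, SCSI ID 2, partition 0
```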
    mount /dev/dsk/dks0d2s0 /0
    cd /
    umount /proc
    umount -t nfs
    tar cvBpf - . | (cd /0; tar xBpf -)
    /etc/mntproc
    cd /0
    /bin/rm -rf 0

This unmounts the dummy /proc filesystem (to avoid confusing tar), unmounts all NFS mounts, clones the disk, remounts /proc, and removes the unnecessary duplicate of the mount point 0 inside the new root disk (not that leaving it would hurt).
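The clone itself is the standard tar-pipe idiom: the read side writes an archive to stdout, the write side extracts it in the destination. A minimal sketch of the same idiom on scratch directories (the mktemp paths are illustrative, not the real / and /0):

```shell
set -e
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/etc"
echo "root:x:0:0:root:/:/bin/sh" > "$src/etc/passwd"
ln -s etc/passwd "$src/passwd.link"
# -p preserves modes; on IRIX, -B reblocks the data across the pipe.
( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )
# Symlinks are archived as symlinks, not as copies of their targets.
diff -r "$src" "$dst"
```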
Finally, the volume header information from the root disk must be copied onto the target disk. Enter the following to do this:
    cd /stand
    dvhtool -v get sash sash /dev/rdsk/dks0d1vh
    dvhtool -v get ide ide /dev/rdsk/dks0d1vh
    dvhtool -v creat sash sash /dev/rdsk/dks0d2vh
    dvhtool -v creat ide ide /dev/rdsk/dks0d2vh

(This assumes your root disk is ID #1 on controller #0 and the target disk is ID #2 on controller #0.) In the case of xray3, you might do:
    cd /stand
    dvhtool -v get sash sash
    dvhtool -v get ide ide
    dvhtool -v creat sash sash /dev/rdsk/dks1d2vh
    dvhtool -v creat ide ide /dev/rdsk/dks1d2vh

i.e. the same, except the controller is 1 and we don't go out of our way to specify the location of the root disk.
To restore from a tape backup instead, extract with bru onto the new disk:

    cd /0
    bru -xvjf ximpact1:/dev/rmt/tps1d4nr.8500c

The flags are -x (extract), -v (verbose) and -j (map absolute paths to relative ones); -f names the (here remote) tape device. You can then update /etc/passwd, /etc/hosts, /etc/fstab etc. if they are not from recent backups.
Finally, the volume header information from the root disk must be copied onto the target disk, exactly as in the dvhtool step above.
Note that since we have R4400, R10000 and R12000 machines, I think it is necessary to clone from the same class of machine, since the stand-alone shell (sash) and the IDE diagnostics probably use different microcode.
As of Feb 2003 I have copied sash and ide from every root disk volume header into /stand, so that these files get backed up in the normal backup passes. The syntax (e.g.) "dvhtool -v get sash sash /dev/rdsk/dks0d1vh" works for everything except xray3, but the syntax "dvhtool -v get sash sash" is probably more general, since the device defaults to the root disk. (The creat steps cannot use this default in the above procedures, because you do not want to WRITE to the root disk.)
    cd /stand
    dvhtool -v get sash sash /dev/rdsk/dks0d1vh
    dvhtool -v get ide ide /dev/rdsk/dks0d1vh
    cd /
    umount /proc
    umount -t nfs
    umount /usr1 /usr2   # etc. as necessary - not really necessary with the Irix 6.2 tar syntax
    mount /xray0

That should do it as a starter - this obviously assumes you have an entry in /etc/fstab to mount the xray0 directory onto /xray0. Then go ahead and do the tar. Notice that the -E flag allegedly omits directories not on the local filesystem (i.e. the contents of /xray0). The -B and -p flags are probably not necessary but do no harm. This works for Irix 6.5:
    cd /
    tar cvBEpf - / | gzip > /xray0/ximpact1_root.tgz

This works for Irix 6.2 (tar does not have the -E flag in 6.2):
    tar cvBpf - .[a-z]* /CDROM /bin /debug /dev /dumpster /etc /lib /lib32 \
        /lib64 /nsmail /opt /proc /sbin /stand /temp /test \
        /tmp /unix /usr /var /xi* /xr[1-9]* /xt* | gzip > /xray0/ximpact1_root.tgz

which is a terribly elaborate way of getting around the /xray0 mount. You might be able to rsh from xray0 onto the SGI, do "tar cvBpf - /", grab the feed, gzip it and stuff it on the local disk; I haven't tested this possibility yet. However, it's still not entirely automatable, since the umounts don't always "just work". Then, when that is all done:
    /etc/mntproc
    exportfs -a
    mount -a

If you don't have an /xray0 mount point in /etc/fstab, you might consider adding the following:
    xray0:/Volumes/Backup2/SGI_root_backups /xray0 nfs rw,bg,soft 0 0

Finally, verify that the archive can be read on the remote machine with "tar -tzvf" (the SGIs are unique amongst our Unix boxes in that their tar doesn't have the -z flag to g[un]zip on the fly).
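The create-then-verify round trip above can be sketched on scratch data (all paths illustrative). Piping through gzip stands in for a tar with no built-in -z, and "gunzip -c | tar tf -" is the verification form that works even on such a tar:

```shell
set -e
work=$(mktemp -d)
mkdir -p "$work/root/etc"
echo "127.0.0.1 localhost" > "$work/root/etc/hosts"
# Create: pipe tar through gzip explicitly, as on the SGIs.
( cd "$work/root" && tar cpf - . ) | gzip > "$work/root.tgz"
# Verify without relying on -z: decompress explicitly, then list.
gunzip -c "$work/root.tgz" | tar tf -
```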