Increasing Storage Capacity

If you chose the XFS file system for any of your video storage volumes (a wise choice) and you wish to expand these volumes onto a bigger disk (e.g. you wish to replace a 1TB disk with a 2TB disk), you should follow the steps outlined immediately below. On the other hand, if you chose the ext3 file system for any of your video storage volumes (you had your reasons, I'm sure), you should follow the steps shown at the end of this section. Otherwise, if the system files and video storage directory are all on one disk (a single-disk system), or you wish to expand the system disk of a system with more than one disk, expanding such a system disk is covered in the next section.

Note that the duration of the copy operation that is used to expand a volume depends on the amount of data on it (duh) so you may wish to clean up all of the old stuff that you were thinking of deleting, before you proceed. On the other hand, if you could do that, you probably wouldn't need bigger disks in the first place....
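If you'd like a rough feel for how long the copy will take before you commit to it, divide the amount of data to be moved by your drives' sustained throughput. The numbers below are assumptions (benchmark your own disks, e.g. with "hdparm -t", for a better figure):

```shell
# Back-of-the-envelope copy-time estimate. Both numbers here are
# assumptions; plug in your own data size and measured throughput.
data_gb=1500                               # about 1.5TB of recordings
rate_mb_s=100                              # assumed sustained disk throughput
seconds=$(( data_gb * 1024 / rate_mb_s ))  # GB -> MB, then divide by MB/s
echo "roughly $(( seconds / 60 )) minutes" # roughly 256 minutes
```

As you can see, moving a nearly-full 2TB drive is an afternoon's work, not a coffee break, so plan the downtime accordingly.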

Begin by getting yourself a copy of Knoppix (the CD works perfectly fine):

     http://knopper.net/knoppix/index-en.html

And burning the ISO image onto a CD (or DVD, if you took that route).

Next, configure the machine whose disk is to be expanded with the old disk and the new disk attached and a CD/DVD drive added, if necessary. In this example, we're presuming the original drive is on SATA0 and the new drive is on SATA1, which should map to /dev/sda and /dev/sdb, respectively, when Knoppix is booted.

Put the Knoppix CD into the CD/DVD drive and boot it. Once Knoppix comes up (you're booting from a CD, remember, so be patient), open up a console window and become super user. To do this, simply type the "su" command. There's no password (how refreshing).

Before proceeding, you should check that the original and new drives are installed where you expect them to be. You can check this with:

     su
     /sbin/fdisk -lu /dev/sda
     /sbin/fdisk -lu /dev/sdb

or

     su
     /sbin/fdisk -lu

or, if you have disks that are larger than 2TB, you might want to use:

     su
     /sbin/parted -l

The first drive should show a valid partition table with one Linux partition, usually type "83" or, for GPT-partitioned disks, type "ee", while the second drive should have no partition table. Also, note each drive's size to ensure that the old and new drives are cabled where you expect them to be. Since we're going to be wiping out all data on the new drive in a minute, it pays to make sure you've got the right target in your sights.

If you have a regular format hard drive (512-byte physical sectors) that is less than or equal to 2TB in size, you should now partition the new drive using fdisk:

     su
     /sbin/fdisk -u /dev/sdb
       n                         (To create a new partition)
       p                         (As a primary partition)
       1                         (Partition 1)
       <cr>                      (To accept the first sector as the start)
       <cr>                      (To accept the last sector as the end)
       w                         (Write the partition table to the disk)

You can check your work with:

     su
     /sbin/fdisk -lu /dev/sdb

You should see something like this:

     Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
     81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
     Units = sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disk identifier: 0x59da471f
     Device Boot      Start         End      Blocks   Id  System
     /dev/sdb1            2048  3907029167  1953513560   83  Linux

If you have one of the newer 1.5 or 2TB advanced format hard drives (e.g. from Western Digital or Samsung), you need to pay particular attention to how you create the partitions on the drive so that they are always aligned on a 4K boundary. If you don't do this, performance will suffer badly.

Older versions of fdisk default to creating the partition starting at sector 63 (they keep the first 63 sectors, from 0-62, for the partition table and boot information). In the case of advanced format hard drives, this is a very unfortunate choice, since sector 63 does not fall on a 4K boundary. Luckily for us, later versions of fdisk default to creating the partition starting at sector 2048 (they keep the first 2048 sectors, from 0-2047, for the partition table and expanded boot information), which is 4K-aligned, so you can just let fdisk do its thing. There is no need to override the starting sector number; just take the default.
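If you want to double-check a starting sector before you write the partition table, the arithmetic is simple: with 512-byte sectors, a sector number is 4K-aligned when it is divisible by 8 (8 x 512 = 4096). A quick sketch:

```shell
# A sector number is 4K-aligned when it is a multiple of 8 (8 * 512 = 4096).
is_4k_aligned() {
    [ $(( $1 % 8 )) -eq 0 ]
}

is_4k_aligned 2048 && echo "2048 is aligned"    # the modern fdisk default
is_4k_aligned 63   || echo "63 is NOT aligned"  # the old fdisk default
```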

However, if that's not to your liking and you wish to override the default choice (we don't advise it), you can partition the disk something like this:

     su
     /sbin/fdisk -u /dev/sdb
       n                         (To create a new partition)
       p                         (As a primary partition)
       1                         (Partition 1)
       64                        (Start the partition on a 4K boundary
                                   [i.e. 0 mod 8])
       <cr>                      (To accept the last sector as the end)
       w                         (Write the partition table to the disk)

You can check your work with:

     su
     /sbin/fdisk -lu /dev/sdb

You should see something like this:

     Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
     255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
     Units = sectors of 1 * 512 = 512 bytes
     Device Boot      Start         End      Blocks   Id  System
     /dev/sdb1              64  3907029167  1953514552   83  Linux

or maybe something like this:

     Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
     81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
     Units = sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disk identifier: 0x83a7c849
     Device Boot      Start         End      Blocks   Id  System
     /dev/sdb1            2048  3907029167  1953513560   83  Linux

If the new disk is bigger than 2TB, you must use parted to partition the new disk, because the MBR partition table that fdisk writes cannot describe a partition that extends past 2TB; parted can write a GPT partition table, which can. Additionally, if you have a disk that is larger than 2TB, it is almost guaranteed to be an advanced format disk, so you must make sure to choose a starting sector for the partition that observes the 4K block boundary, to avoid severe performance hits. Also, to ensure compatibility with boot loaders that require more than 62 sectors following the partition table, it is smart to always start the first partition at sector 2048. Since parted has a parameter ("-a optimal") that automatically figures out the optimal alignment for a partition, based on disk geometry, we can just pick 2048 by default and it will tell us if that number won't work. So, run it like this:

     su
     /sbin/parted -a optimal /dev/sdb
       mklabel gpt
       mkpart primary xfs 2048s 100%  (use "ext2" for ext2/ext3)
       print
       quit

The print command should show something like this:

     Model: ATA WDC WD40EZRZ-00G (scsi)
     Disk /dev/sdb: 4001GB
     Sector size (logical/physical): 512B/4096B
     Partition Table: gpt
     Number  Start   End     Size    File system  Name     Flags
      1      1049kB  4001GB  4001GB               primary

We used the "s" after "2048" to tell parted that the first number was a starting sector number, not a size in megabytes. We used the "%" after "100" to tell parted that the ending number was 100% of the disk, in other words the last physical sector.
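Incidentally, that is also why the print output above shows the start as "1049kB": parted reports positions in powers-of-ten units, so sector 2048 works out like this:

```shell
# Sector 2048 * 512 bytes/sector = 1048576 bytes, which parted rounds
# and displays as 1049kB (it reports positions in powers-of-ten units).
start_bytes=$(( 2048 * 512 ))
echo "${start_bytes} bytes"                # 1048576 bytes
echo "$(( (start_bytes + 500) / 1000 ))kB" # 1049kB, as shown by print
```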

After creating a new partition table, regardless of which type you choose, you should ask the kernel to reread the partition table:

     su
     partprobe

If the original drive contains an ext2/ext3 file system, you should proceed to copy its data as outlined farther along in this section. Otherwise, verify that it does indeed contain an XFS file system. Since the partition isn't mounted yet under Knoppix, df won't report on it; use blkid instead and look for TYPE="xfs" in the output:

     blkid /dev/sda1

Once you've verified that the file system type is "xfs", the data on it should be copied to the new drive, as shown:

     su
     xfs_copy -d /dev/sda1 /dev/sdb1

Note that, if you have one of the advanced format hard drives (e.g. the newer drives from Western Digital or Samsung), you should first check that the block size of the original XFS file system is a multiple of 4096. Usually, it will be, because XFS chooses the page size of the OS as the block size and, under Linux, this is 4096. To check, you can dump the XFS superblock on the original drive like this:

     xfs_db -c "sb 0" -c print -r /dev/sda1 | grep blocksize

If the block size is not a multiple of 4096, there does not appear to be any way to alter it while doing xfs_copy, so the only option is to start from scratch with a brand new XFS file system on the target volume and copy the files with a regular copy command such as "cp" or "lftp mirror".
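If you'd like a quick way to eyeball that check, here's a sketch that pulls the number out of the xfs_db output line and tests it; the sample line is hard-coded so you can try the logic without a live XFS volume:

```shell
# The sample line stands in for live xfs_db output; in practice capture it
# with:  xfs_db -c "sb 0" -c print -r /dev/sda1 | grep blocksize
line="blocksize = 4096"
bs=${line##*= }                    # strip everything up to "= "
if [ $(( bs % 4096 )) -eq 0 ]; then
    echo "block size $bs is 4K-safe"
else
    echo "block size $bs: rebuild with mkfs.xfs and copy the files instead"
fi
```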

The copy will take quite a while, depending on the size of the original partition and how full it is (if you were smart and deleted all of the junk shows off the original partition, it will be faster). Incidentally, it will format the new partition with XFS, so there is no need to do this ahead of time.

Once the copy completes, the new disk will contain a copy of the original disk, but the new XFS file system will be the same size as the original (bummer).

The next step requires that you mount the new file system. Knoppix thoughtfully provides predefined mount points in the "/media" directory for each of the attached devices that it finds. So, do:

     su
     mount /dev/sdb1 /media/sdb1

You can check that the mounted file system looks like the original with:

     ls -l /media/sdb1

If you're happy with the new file system (we will take care of any problems with file ownership discrepancies below so you can ignore them for the time being), you can increase its size to fill the entire partition (nice) with:

     su
     xfs_growfs /media/sdb1

Once that's done, unmount the new file system:

     su
     umount /media/sdb1

Not replacing an XFS drive? For ext3 file systems, you can use whatever copy method you usually use to copy a smaller disk to a larger disk. The brute force approach would be to use Knoppix, as described above, but when you come to the part about using xfs_copy, do the following instead. Make the file system that you wish to use on the partition with mkfs or, more precisely, mkfs.ext3:

     su
     /sbin/mkfs.ext3 -b 4096 -j -L / -T news /dev/sdb1

Since the file system will only hold static data, there's no real reason to have fsck check it at boot time. This is especially relevant because for the large file systems that are found on 1 or 2TB disks, checking the file system could take a very, very long time (i.e. hours and hours). If you have several large disks on your video server and all of the file systems happen to get coincidentally checked, booting your system could easily take half a day. You will most certainly want to turn off this dubious feature with:

     su
     /sbin/tune2fs -c 0 -i 0 /dev/sdb1

Note that, in this example, we labeled the file system "/". You may wish to use some other label, chosen to match what's on the original drive. Be especially careful about this if your system boots with the "LABEL=" parameter in the GRUB configuration and/or mounts partitions using the "LABEL=" parameter in /etc/fstab.

The next step requires that you've mounted the old and new file systems. We have been assuming all along that you connected the old disk on /dev/sda and the new disk on /dev/sdb. If you are lucky, you can read the newly-created partition table without rebooting:

     su
     partprobe

Otherwise, you will need to reboot. Lately, partprobe has always worked for us. Before, not so much. Your mileage may vary.

Knoppix thoughtfully provides predefined mount points in the "/media" directory for each of the attached devices that it finds. So, do:

     su
     mount /dev/sda1 /media/sda1
     mount /dev/sdb1 /media/sdb1

Now, copy the original drive to the new drive:

     su
     cp --preserve=all -R /media/sda1/* /media/sdb1

Once you are done copying the files, unmount the old and new file systems:

     su
     umount /media/sda1
     umount /media/sdb1

Regardless of the type of file system that you choose, if you experience problems with device names changing whenever your system reboots, as we mentioned in the "File System Table" section, you can use UUIDs to mount your disk partitions.

When you set up fstab to use UUIDs, you must make doubly sure that you change the UUIDs therein whenever you make any disk changes. Even if you only reformat a partition, it will get a new UUID and you must update fstab. This is a big change from before, when you just recabled a new drive and were off to the races. Failure to change the UUIDs in fstab, before you try to boot the newly-configured system, will likely render it unbootable. You will then be faced with booting Knoppix from a CD/DVD and correcting fstab, using leafpad, from there.

Since you've just added a new disk, pay attention to how your fstab is set up. If it does use UUIDs, go back to the "File System Table" section and read the part about discovering the UUIDs for your disk partitions and make a note of the new UUIDs.

The correct procedure to follow, when a UUID changes, is to edit fstab with the planned new UUID before you shut down the running system. That way, you can use your regular text editor on the system. Once the system is shut down, make the hardware change (e.g. install the new disk) and then reboot the system. As we said above, failure to remove the old UUID from fstab will likely result in your system failing to boot.
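If you'd rather script that edit than fiddle with it in an editor, sed can do the swap; here's a sketch that works on a scratch copy so you can verify the result before touching the real /etc/fstab. Both UUIDs are made up, so substitute the values reported by blkid:

```shell
# Both UUIDs are hypothetical; get the real ones from blkid. A sample
# fstab line stands in for your real /etc/fstab, so this is safe to try.
old_uuid="11111111-2222-3333-4444-555555555555"
new_uuid="66666666-7777-8888-9999-aaaaaaaaaaaa"

printf 'UUID=%s /video xfs defaults 0 0\n' "$old_uuid" > /tmp/fstab.new
sed -i "s/UUID=${old_uuid}/UUID=${new_uuid}/" /tmp/fstab.new
grep "UUID=${new_uuid}" /tmp/fstab.new && echo "edit took"
```

Once you're satisfied with the scratch copy, apply the same sed command to the real fstab (or just copy the edited file into place).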

Mind you, if anything goes wrong with the hardware swap, etc. you could be faced with this same scenario so an alternative method is to just remove the line that mounts the old UUID from fstab (a comment works). This is probably the best method, if the disk in question isn't critical (i.e. the system disk). When you reboot the system, it will come up without the disk mounted but at least it will come up.

Once the system is booted, you can edit fstab with the correct UUID and then try auto-mounting everything (this way, if fstab is broken, your system will still be up and running and you'll have a chance to fix things):

     su
     mount -a -v

All done partitioning, copying and editing fstab? Shut down Knoppix and recable the new disk to replace the original. Boot the machine with the production OS, with the new disk installed, and see how it works.

Since copying file systems with cp, as shown above, can be pretty slow, you may opt to copy your ext3 file system using a smart copy program like Acronis. Many of these programs know how to analyze the file system directory structure and copy the files much more quickly than cp (and for sure, ftp) does. A significant improvement in copy time may result.

There's only one wrinkle, though. They may not know how to set up all of the flags in the file system directory properly. So, although all of the data is copied properly, your file system will fail to pass fsck's checks when you try to boot with the new disk (this is presuming that fsck is even run against the new file system). If this happens, you can usually heal the new file system with:

     su
     fsck /dev/sda1 -- -a

You may be doing this in single user mode, if the system fails to boot properly because the failing file system cannot be mounted.

An alternative to connecting the new disk to your running system and copying the files directly works well for expanding data-only drives. In this case, one may not wish to shut down a production system for the whole time that the new drive is being set up. Instead, one can mount the new drive on a separate test or work system and copy the files using a network file copy program such as rsync or lftp, to set up the new drive ahead of time, while the production system is still running.

Cable the new drive up to the test/work system (we're assuming that it ends up on /dev/sdb, for the purposes of these notes), boot the OS of your choice (Knoppix anyone?) and proceed with building the new partition table as described above. After creating a new partition table, regardless of which type you choose, you should ask the kernel to reread the partition table:

     su
     partprobe

If you are creating an ext2/ext3 partition, the following command can be used to format and label it:

     su
     /sbin/mkfs.ext3 -b 4096 -j -L / -T news /dev/sdb1

However, if the drive is a data-only drive, you'll probably be setting it up as an XFS drive. In this case, you'll need to establish the file system directly instead of depending on xfs_copy to do it for you (as we did above). The XFS format command looks like this:

     su
     /sbin/mkfs.xfs -b size=4096 -L / /dev/sdb1

For an advanced format drive that uses 4096 byte physical sectors, you'll probably want to use something like this command:

     su
     /sbin/mkfs.xfs -b size=4096 -s size=4096 -L / /dev/sdb1

If there isn't already a mount point that you can use for the new file system on your test/work system, you should create one something like this:

     su
     mkdir /mnt/sdb1mr

Then, you can mount the new file system and change to the root directory:

     su
     mount /dev/sdb1 /mnt/sdb1mr
     cd /mnt/sdb1mr

We like to use lftp to mirror the data-only partition from the production system to the new data-only file system on the test/work system. This might go something like this:

     su
     lftp -u joeblow sftp://el-producto
     <super-secret-pw>
     mirror /mnt/sdb1 /mnt/sdb1mr

The mirror will run for a while (depending on how many gig o'bytes are to be copied).

If you built the new storage drive on a separate Knoppix system and would like to copy the production system's content over to it before installing the drive into production, you have a small problem on your hands. Knoppix comes with ncftp instead of lftp. What can we say? Ncftp ain't lftp. Good luck mirroring directories from your production system to your build system with ncftp. We tried it....

There is, however, a solution. You can start up ssh on Knoppix and then use lftp on the other system (i.e. the production system itself) to mirror existing video directories to the build system. On the Knoppix system, do:

     su
     /etc/init.d/ssh start
     passwd
       (enter the new, super-secret root password)
       (enter the new, super-secret root password, again)
     ifconfig  (you'll need the IP address below)

On the system that has the original video files, do:

     lftp -u root sftp://192.168.1.123   (get the DHCP address for the system
                                          running Knoppix, from ifconfig, as
                                          shown above)
     mirror -R /local/source/dir /remote/dest/dir

In this case, your remote destination directory will probably be something like /media/sdb1 on the Knoppix system. You will need the super-secret password that you created for root, in the previous step.

Once the mirroring is done, you can swap the new drive into the production system during a lull in the action.

However, if you wait too long and the production system keeps running, adding new recordings and deleting old ones, there is the possibility that a number of files on the original partition won't be mirrored, either because they were locked during the original mirror from the production system, or they were added/deleted after the mirror was done. It ain't the end of the world. Once the new drive is in production, you can just mount up the old drive on the test/work system and reverse mirror any missing files. First, on the test/work system, where the original drive is now installed on /dev/sdb:

     su
     mkdir /mnt/sdb1or
     mount /dev/sdb1 /mnt/sdb1or

From the el-producto system, where the mirrored drive is now installed on /dev/sdb and mounted on /mnt/sdb1, do:

     su
     lftp -u joeblow sftp://test-oh-mundo
     <super-secret-pw>
     mirror --only-missing /mnt/sdb1or /mnt/sdb1

A slicker alternative to using lftp, if the old and the new drives can both be cabled up to your test/work system, is to use rsync, since it will avoid the need to run the "cleanup" script (as noted below) to clean up any deleted recording files -- that is, any files that were deleted since the initial copy was made.

Assuming that the old drive is mounted on /media/sda1 and that the new drive is mounted on /media/sdb1, do the following:

     su
     rsync -av --delete -n /media/sda1/ /media/sdb1

This will produce a dry-run, which will show you all of the files that are about to be deleted and copied. If you are happy with the list, you can omit the "-n" from the command line:

     su
     rsync -av --delete /media/sda1/ /media/sdb1

If you run into any trouble with permissions on the copied files, you can set the permissions on the new file system and its contents more or less as follows (take a look at the original file system for anything special, but this should work):

     su
     find /mnt/sdb1 -mindepth 1 -exec chown mythtv:mythtv \{\} \;
     find /mnt/sdb1 -mindepth 1 -type d -exec chmod u=rwx,g=rwxs,o=rx \{\} \;
     find /mnt/sdb1 -mindepth 1 -type f -exec chmod ug=rw,o=r \{\} \;
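If you'd rather rehearse the permission scheme before unleashing find on a terabyte of recordings, you can try the chmod half of it on a scratch tree. The chown line is left out here so the sketch can run without the mythtv user existing; add it back for real use:

```shell
# Rehearse the chmod scheme on a throwaway tree. The chown step is
# omitted, since it needs the mythtv user; restore it on the real volume.
root=/tmp/permtest
mkdir -p "$root/shows"
touch "$root/shows/rec1.mpg"

find "$root" -mindepth 1 -type d -exec chmod u=rwx,g=rwxs,o=rx {} \;
find "$root" -mindepth 1 -type f -exec chmod ug=rw,o=r {} \;

stat -c '%a %n' "$root/shows" "$root/shows/rec1.mpg"
# directories come out 2775 (with the setgid bit), files come out 664
```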

It might also be a good idea to run the "cleanup" script, which can be found in the "Freeing Up Storage" section above, to perform an orphan check for any files that got deleted by Myth after you mirrored the storage directory but before the system was shut down and restarted with the new data-only file system installed. This script should be run something like this:

     .../scripts/cleanup /mnt/sdb1 0 localhost root its-a-secret >cleanup.lst

Look in the cleanup.lst file for any files marked "Orphaned" and manually delete them.

Your migration to the new data-only storage disk should be complete.