Mastering Ubuntu Server

Adding additional storage volumes

At some point or another, you'll reach a situation where you need to add more storage to your server. On physical servers, we can install additional hard disks, and on virtual or cloud servers, we can attach additional virtual disks. Either way, in order to take advantage of the extra storage, we'll need to determine the name of the device, format it, and mount it. In the case of LVM (which we'll discuss later in this chapter), we'll have the opportunity to expand an existing volume, often without a server reboot being necessary.

When a new disk is attached to our server, it will be detected by the system and given a name. In most cases, the naming convention of /dev/sda, /dev/sdb, and so on will be used. In other cases (such as virtual disks), the names will differ, such as /dev/vda, /dev/xvda, and possibly others. The naming scheme usually ends with a letter, incrementing to the next letter with each additional disk. The fdisk command is normally used for creating and deleting partitions, but it will also allow us to determine which device name our new disk received. The fdisk -l command will give you this information, but you'll need to run it as root or with sudo:

# sudo fdisk -l

Output of the fdisk -l command, showing a device of /dev/vda1

I always recommend running this command before and after attaching a new device. That way, it will be obvious which device name is the new one. Once you have the name of the device, you'll be able to interact with it and set it up. There's an overall process to follow when adding a new device, though. When adding additional storage to your system, you should ask yourself the following questions:
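That before-and-after comparison can also be scripted. The following is a minimal sketch (the temporary file paths and the sdb output shown are just examples):

```shell
# Snapshot the disk names before attaching the new drive (no root needed)
lsblk -dn -o NAME | sort > /tmp/disks-before

# ... attach the new disk here (hot-plug it, or add a virtual disk) ...

# Snapshot again, then print only the names that are new
lsblk -dn -o NAME | sort > /tmp/disks-after
comm -13 /tmp/disks-before /tmp/disks-after
```

The comm -13 invocation suppresses lines unique to the first file and lines common to both, leaving only the newly appeared device names.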

  • How much storage do you need? If you're adding a virtual disk, you can usually make it any size you want, as long as you have enough space remaining in the pool of your hypervisor.
  • After you attach it, what device name did it receive? As I mentioned, run the fdisk -l command as root to find out. Another trick is to use the following variation of the tail command against the kernel log, with which the output will update automatically as you add the disk. Just start the command, attach the disk, and watch the output. When done, press Ctrl + C on your keyboard:
    # tail -f /var/log/kern.log
    
  • How do you want it formatted? At the time of writing, the Ext4 filesystem is the most common. However, for different workloads, you may consider other options (such as XFS). When in doubt, use Ext4, but definitely read up on the other options to see if they may benefit your use case. ZFS is another option that's new in version 16.04 of Ubuntu, which you may also consider for additional volumes. We'll discuss formatting later in this chapter.

    Note

    It may be common knowledge to you by now, but the word filesystem is a term that can have multiple meanings on a Linux system depending on its context, and may confuse newcomers. We use it primarily to refer to the entire file and directory structure (the Linux filesystem), but it's also used to refer to how a disk is formatted for use with the distribution (for example, the Ext4 filesystem).

  • Where do you want it mounted? The new disk needs to be accessible to the system and possibly users, so you'll want to mount (attach) it to a directory on your filesystem where your users or your application will be able to use it. In the case of LVM, which we also discuss in this chapter, you're probably going to want to add it to an existing volume group. You can come up with your own directory for use with the new volume, but later on in this chapter, I'll discuss a few common locations.
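Putting those questions together, the overall process looks something like the following sketch. It assumes a hypothetical new disk at /dev/sdb, an Ext4 filesystem, and a mount point of /mnt/extra; substitute your own device name and directory:

```shell
# Partition the new disk (for example, one partition spanning the device)
sudo fdisk /dev/sdb        # interactively: n (new partition), then w (write)

# Format the new partition with Ext4
sudo mkfs.ext4 /dev/sdb1

# Create a mount point and mount the volume
sudo mkdir -p /mnt/extra
sudo mount /dev/sdb1 /mnt/extra

# Verify the new space is available
df -h /mnt/extra
```

Each of these steps (partitioning, formatting, and mounting) is covered in more detail later in this chapter.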

With regard to how much space you should add, you'll want to research the needs of your application or organization and find a reasonable amount. In the case of physical disks, you don't really get a choice beyond deciding which disk to purchase. In the case of LVM, you're able to be more frugal, as you can add a small disk to meet your needs (you can always add more later). The main benefit of LVM is being able to grow a filesystem without a server reboot. For example, you can start with a 30 GB volume and then expand it in increments of 10 GB by adding additional 10 GB virtual disks. This method is certainly better than adding a 200 GB volume all at once when you're not completely sure all that space will ever be used. LVM can be used on physical servers as well, but growing a volume there would most likely require a reboot anyway, since you'd have to open the case and physically attach a hard drive.

The device name, as we discussed, is found with the fdisk -l command. You can also find the device name of your new disk with the lsblk command. One benefit of lsblk is that you don't need root privileges and the information it returns is simplified. Either works fine:

The lsblk command in action, showing two disks with one partition each

On a typical server, the first disk (basically, the one that you installed Ubuntu Server on) will be given a device name of /dev/sda. Additional disks will be given the next available name, such as /dev/sdb, /dev/sdc, and so on. You'll also need to know the partition number. Device names for disks will also have numbers at the end, representing individual partitions. For example, the first partition of /dev/sda will be named /dev/sda1, while the second partition of /dev/sdc will be named /dev/sdc2. These numbers increment and are often easy to predict. As I mentioned before, your device naming convention may vary from server to server, especially if you're using a RAID controller or a virtualization host such as VMware or XenServer. If you haven't created a partition on your new disk yet, you won't see any partition numbers.
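To see the disk-to-partition relationship at a glance, you can ask lsblk to print each device's type alongside its name:

```shell
# List devices with their type: "disk" rows are whole drives,
# "part" rows are the numbered partitions beneath them
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
```

The output indents each partition under its parent disk, which makes it easy to spot a new disk that has no partitions yet.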

Next, you need to consider which filesystem to use. Ext4 is the most common filesystem type, but many others exist, and new ones are constantly being created. At the time of writing, up-and-coming filesystems such as the B-tree filesystem (Btrfs) are being developed, while ZFS is not actually new but is new to Ubuntu (it's very common in the BSD community and has been around for a long time). Neither Btrfs nor ZFS is considered ready for stable use on Linux yet, as Btrfs is relatively new and ZFS is newly implemented in Ubuntu. At this point, it's believed that Ext4 won't be the default filesystem forever, as several filesystems are vying for the crown of becoming its successor. As I write this, though, I don't believe that Ext4 will be going away anytime soon.

An in-depth look at all the filesystem types is beyond the scope of this book due to the sheer number of them. One of the most common alternatives to Ext4 is XFS, which is a great filesystem if you plan on dealing with very large individual files or massive multi-terabyte volumes. XFS volumes are also a good fit for database servers due to their performance characteristics. In general, stick with Ext4 unless you have a very specific use case that gives you a reason to explore alternatives.

Finally, the process of adding a new volume entails determining where you want to mount it and adding the device to the /etc/fstab file, which allows it to be mounted automatically each time the server boots (convenient, but optional). We'll discuss the /etc/fstab file and mounting volumes in a later section. Basically, Linux volumes are typically mounted inside an existing directory on the filesystem, and from that point on, you'll see free space for that volume listed when you execute the df -h command we worked through earlier.
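As a preview, an /etc/fstab entry for an additional volume typically looks something like the following (the device name and mount point here are hypothetical; we'll cover the field meanings later in this chapter):

```
# <device>    <mount point>  <type>  <options>  <dump>  <pass>
/dev/sdb1     /mnt/extra     ext4    defaults   0       2
```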

It's very typical to use the /mnt directory as a top-level place to mount additional drives. If you don't have an application that requires a disk to be mounted in a specific place, a subdirectory of /mnt is a reasonable choice. I've also seen administrators make their own top-level directory, such as /store, to house their drives. When a new volume is added, a new directory would be created underneath the top-level directory. For example, you could have a backup disk attached to /mnt/backup or a file archive mounted at /store/archive. The directory scheme you choose to use is entirely up to you.
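Whichever scheme you pick, a mount point is just a directory, so setting one up is a single command. The directory names below are only examples:

```shell
# Create mount points under /mnt for each new volume
sudo mkdir -p /mnt/backup /mnt/archive

# Or, if you prefer a custom top-level directory instead
sudo mkdir -p /store/archive
```

The -p flag creates any missing parent directories, so this works even if the top-level directory doesn't exist yet.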

I just mentioned mounting a backup disk at /mnt/backup, which those of you with more experience may be thinking is a terrible idea. As a rule of thumb, another internal disk is not technically considered a valid backup location. If the entire server were to catastrophically fail, all of its internal disks may fail along with it. However, you're certainly not limited to adding internal disks. In a typical Linux network, you can also mount external disks (such as USB 3.0 external hard disks or network attached storage devices). The method of adding disks is mostly the same regardless of which type of disk you're adding (local, external, network, and so on). Just as before, external devices will need to be formatted and given a location on which to be mounted. Mounting volumes will be discussed later on in this chapter.