I replaced a 7-year-old drive in my Linux server. The drive had been complaining about bad blocks in SMART forever (which still mystifies me; drives should be able to just remap those transparently). But I was seeing other signs the drive might be failing, so better safe than sorry.
The thing that made this complicated is I wanted to be sure I understood how device names worked so I didn’t screw up which drive was mounted where. Old school names like /dev/sdb1 seem fine, but those names can change if you remove drives, add new ones, etc. So I read up and learned the new hotness is to mount drives by naming their UUID, a stable unique identifier that mkfs generates and stores in the filesystem’s superblock. /etc/fstab will take a UUID in the first column instead of a device name and happily mount it for you. That’s it, pretty simple. In particular udev is not involved and there are no symlinks required anywhere. I have no idea how mount finds the device named by UUID but it works, so I’m happy to remain ignorant.
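For anyone who does want to peek behind the curtain: mount links against libblkid, which scans the block devices’ filesystem superblocks for a matching UUID, so no symlinks or udev magic are needed. You can do the same lookup by hand with findfs from util-linux (the UUID below is a made-up placeholder):

```shell
# Resolve a filesystem UUID to its device node, the same lookup mount does.
# Replace the placeholder UUID with a real one from blkid.
findfs UUID=1b3c5d7e-9f01-2345-6789-abcdef012345
# prints the matching device, e.g. /dev/sdb1, if that filesystem exists
```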
I replaced the old 7200 RPM WD Blue with a 5400 RPM WD Blue. That’s kind of a cheap drive for a Linux server but I’m only using it as a backup volume. I keep being tempted to get an SSD for the main system volume.
Here are the steps I followed for the new drive, mostly following this guide.
lshw -C disk: find the new hard drive. Easiest way is to match it by serial number, or other characteristics like size and model name. My new disk got named /dev/sdb, which, awkwardly, was also the name of the disk I’d just taken out.
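A sketch of the identification step, with lsblk added as a cross-check (both tools need the drive attached; column names are util-linux’s):

```shell
# lshw prints one record per disk; these are the fields that identify it.
sudo lshw -C disk | grep -E 'logical name|product|serial|size'
# lsblk gives a quick second opinion on which name maps to which drive.
lsblk -o NAME,SIZE,MODEL,SERIAL
```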
smartctl --smart=on /dev/sdb: turn on SMART for the disk. Honestly I don’t exactly know what this does but it seems like a good idea.
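For what it’s worth, most modern drives ship with SMART already enabled, so this is often a no-op; the health report and attribute dump are the genuinely useful parts. A sketch of the checks worth running (smartmontools assumed installed; can’t be run without a real drive):

```shell
# Enable SMART; on most modern drives this is already on.
sudo smartctl --smart=on /dev/sdb
# Overall pass/fail health verdict from the drive itself.
sudo smartctl -H /dev/sdb
# Full attribute table; Reallocated_Sector_Ct and Current_Pending_Sector
# are the ones that hint at bad blocks the drive couldn't remap.
sudo smartctl -A /dev/sdb
```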
fdisk /dev/sdb: partition the disk. fdisk traditionally writes old school MBR partition tables, which top out at 2TiB (with 512-byte sectors). My disk is 2TB so that’s OK. Newer systems use GPT (an EFI thing), via parted or recent versions of fdisk. I just made one large partition for the whole disk.
mkfs -t ext4 /dev/sdb1: make the filesystem. There are some options here you could consider setting to get a bit more disk space or add checksumming to metadata, but I stuck with the defaults. Fun fact: I was taught in 1990 to print out the list of superblock backups because if the disk failed, that printout was the only way you were going to find those backup block IDs. I assume recovery tools have improved in the last 28 years. (Or more realistically, that the disk will be a lost cause.)
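The mkfs step, again sketched on a scratch image (-F is needed to let mkfs.ext4 write to a regular file; on the real disk it’s just mkfs -t ext4 /dev/sdb1). The options alluded to above are -m for the reserved-blocks percentage and -O metadata_csum for metadata checksums:

```shell
truncate -s 64M disk.img
# Make the ext4 filesystem; this is where the superblock backup
# locations get printed.
mkfs.ext4 -F disk.img
# Variations to consider (not run here):
#   mkfs.ext4 -m 0 ...             # reclaim the 5% root-reserved space
#   mkfs.ext4 -O metadata_csum ... # checksummed metadata (default on newer e2fsprogs)
```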
blkid | grep sdb1: find the UUID for the new partition.
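blkid can also print just the UUID, which saves eyeballing the grep output. Sketched on an image file so it runs without root; substitute /dev/sdb1 on the real system:

```shell
truncate -s 64M disk.img
mkfs.ext4 -F -q disk.img          # mkfs is what generates and stores the UUID
blkid -s UUID -o value disk.img   # prints only the 36-character UUID
```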
fstab: edit /etc/fstab to mount the new disk by UUID.
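The resulting line looks something like this (UUID and mount point are both made up for illustration):

```
# /etc/fstab
# <file system>                             <mount point>  <type>  <options>  <dump>  <pass>
UUID=1b3c5d7e-9f01-2345-6789-abcdef012345   /backup        ext4    defaults   0       2
```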
All very easy really.