LVM to ZFS Migration

In a previous post, I described the hardware changes that I made to facilitate additional drive slots on my NAS Media Server.

We now need to migrate from an LVM system consisting of 40TB of redundant mirrored storage using mdadm to a ZFS system consisting of a single pool and a dataset. Below is a diagram depicting the logical layout of the old and the intended new system.

[Diagram: Old Storage System — an LV group of 4 physical volumes (PV), each PV an mdadm mirror pair (10TB, 8TB, 10TB, 12TB), carrying a Logical Volume (LV) with an EXT4 file system for media storage; 40TB of redundant storage. New Storage System — a single ZFS pool of six mirror VDEVs (12TB, 10TB, 8TB, 10TB, 6TB, 4TB) plus a 512GB L2ARC cache, carrying one ZFS dataset; 50TB of redundant storage.]

Before the migration, we must back up all the data from the LVM system. I cobbled together a collection of old hard drives and then created another LVM volume to serve as temporary storage for the content. This temporary volume has no redundancy, so if any one of the old hard drives fails, out goes all the content. The original LVM system is mounted on /mnt/airvideo and the temporary LVM volume is mounted on /mnt/av2.

I used the command below to proceed with the backup.

sudo rsync --delete -aAXv /mnt/airvideo /mnt/av2 > ~/nohup.avs.rsync.out 2>&1 &

I can then monitor the progress of the backup with:

tail -f ~/nohup.avs.rsync.out

The backup took a little more than 7 days to copy around 32 TB of data from our NAS server. During this entire process, all of the NAS services continued to run, so that downtime was almost non-existent.

Once the backup was completed, I wanted to move all the services over to the backup before starting to dismantle the old LVM volume. The following steps were taken:

  • Stop all services on other machines that were using the NAS;
  • Stop all services on the NAS that were using the /mnt/airvideo LVM volume;
    • sudo systemctl stop apache2 smbd nmbd plexmediaserver
  • Unmount the /mnt/airvideo volume, and create a soft-link of the same name to the backup volume at /mnt/av2;
    • sudo umount /mnt/airvideo
    • sudo ln -s /mnt/av2 /mnt/airvideo
  • Restart all services on the NAS and the other machines;
    • sudo systemctl start apache2 smbd nmbd plexmediaserver
  • Once again, the downtime here was minimal;
  • Remove or comment out the entry in the /etc/fstab file that automatically mounts the old LVM volume on boot. The entry is no longer needed because ZFS mounts its datasets automatically from the pool configuration (see the illustration below).
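
For illustration only (the original entry was not recorded in this post), the disabled fstab line would look something like the following, assuming the usual LVM device-mapper path and an ext4 mount:

# /dev/mapper/airvideovg2-airvideo  /mnt/airvideo  ext4  defaults  0  2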

Now that the services are all up and running again, we can start destroying the old LVM logical volume (airvideovg2/airvideo) and volume group (airvideovg2). First, we obtain a list of all the physical volumes that make up the volume group:

sudo pvdisplay -C --separator ' | ' -o pv_name,vg_name

  PV | VG
  /dev/md1 | airvideovg2
  /dev/md2 | airvideovg2
  /dev/md3 | airvideovg2
  /dev/md4 | airvideovg2
  /dev/nvme0n1p1 | airvideovg2

The /dev/mdX devices are the mdadm mirror devices, each consisting of a pair of hard drives.

sudo lvremove airvideovg2/airvideo
Do you really want to remove and DISCARD active logical volume airvideovg2/airvideo? [y/n]: y
  Flushing 0 blocks for cache airvideovg2/airvideo.
Do you really want to remove and DISCARD logical volume airvideovg2/lv_cache_cpool? [y/n]: y
  Logical volume "lv_cache_cpool" successfully removed
  Logical volume "airvideo" successfully removed

sudo vgremove airvideovg2
  Volume group "airvideovg2" successfully removed

At this point, both the logical volume and the volume group are removed. We say a little prayer that nothing happens to our temporary volume (/mnt/av2), which is currently in operation.

We now have to disassociate the mdadm devices from LVM.

sudo pvremove /dev/md1
Labels on physical volume "/dev/md1" successfully wiped.
sudo pvremove /dev/md2
Labels on physical volume "/dev/md2" successfully wiped.
sudo pvremove /dev/md3
Labels on physical volume "/dev/md3" successfully wiped.
sudo pvremove /dev/md4
Labels on physical volume "/dev/md4" successfully wiped.
sudo pvremove /dev/nvme0n1p1
Labels on physical volume "/dev/nvme0n1p1" successfully wiped.

You can find the physical hard drives associated with each mdadm device using the following:

sudo mdadm --detail /dev/md1
#or
sudo cat /proc/mdstat

We then have to stop all the mdadm devices and zero their superblocks so that we can reuse the hard drives to set up our ZFS pool.

sudo mdadm --stop /dev/md1
mdadm: stopped /dev/md1
sudo mdadm --stop /dev/md2
mdadm: stopped /dev/md2
sudo mdadm --stop /dev/md3
mdadm: stopped /dev/md3
sudo mdadm --stop /dev/md4
mdadm: stopped /dev/md4

# Normally you also need to do a --remove after the --stop,
# but it looks like the 6.5 kernel did the remove automatically.
#
# For all partitions used in the md device

for i in sdb1 sdc1 sdp1 sda1 sdo1 sdd1 sdg1 sdn1
do
	sudo mdadm --zero-superblock /dev/${i}
done
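
As a quick sanity check before reusing the drives, it is worth confirming that no md arrays remain and that the member partitions now show up as plain block devices. These two checks are my suggestion rather than part of the original transcript:

cat /proc/mdstat
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT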

Now, with all of the old hard drives freed up, we can repurpose them to create our ZFS pool. Instead of using the /dev/sdX references to the physical devices, it is recommended to use the /dev/disk/by-id paths, which embed the manufacturer's model and serial number, so that the ZFS pool can be moved to another machine in the future. We also used the -f switch to let ZFS know that it is okay to erase the existing content on those devices. The command to create the pool, which we named vault, is:

zpool create -f vault \
  mirror /dev/disk/by-id/ata-ST10000VN0008-2JJ101_ZHZ1KMA0-part1 /dev/disk/by-id/ata-WDC_WD101EFAX-68LDBN0_VCG6VRWN-part1 \
  mirror /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1E8GW4-part1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1E8S0V-part1 \
  mirror /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA2C69FN-part1 /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA2964KD-part1 \
  mirror /dev/disk/by-id/ata-ST12000VN0008-2YS101_ZRT008SC-part1 /dev/disk/by-id/ata-ST12000VN0008-2YS101_ZV701XQV-part1

# The above created the pool with the old drives from the old LVM volume group
# We then added 4 more drives, 2 x 6TB, and 2 x 4TB drives to the pool

# Adding another 6TB mirror:

sudo zpool add -f vault mirror /dev/disk/by-id/ata-WDC_WD60EFRX-68L0BN1_WD-WX31D87HDU09-part1 /dev/disk/by-id/ata-WDC_WD60EZRZ-00GZ5B1_WD-WX11D374490J-part1

# Adding another 4TB mirror:

sudo zpool add -f vault mirror /dev/disk/by-id/ata-ST4000DM004-2CV104_ZFN0GTAK-part1 /dev/disk/by-id/ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0354579-part1
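
If it is not obvious which /dev/disk/by-id path corresponds to which physical drive, the by-id entries are simply symlinks back to the /dev/sdX devices, so listing them reveals the mapping. For example:

ls -l /dev/disk/by-id/ata-*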

We also want to add the old NVMe drive as a ZFS L2ARC cache.

ls -lh /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_500GB_S4P2NF0M419555D

lrwxrwxrwx 1 root root 13 Mar  2 16:02 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_500GB_S4P2NF0M419555D -> ../../nvme0n1

sudo zpool add vault cache /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_500GB_S4P2NF0M419555D 

We can see the pool using this command:

sudo zpool list -v vault

NAME                                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
vault                                                  45.4T  31.0T  14.4T        -         -     0%    68%  1.00x    ONLINE  -
  mirror-0                                             9.09T  8.05T  1.04T        -         -     0%  88.5%      -    ONLINE
    ata-ST10000VN0008-2JJ101_ZHZ1KMA0-part1                -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD101EFAX-68LDBN0_VCG6VRWN-part1               -      -      -        -         -      -      -      -    ONLINE
  mirror-1                                             7.27T  6.49T   796G        -         -     0%  89.3%      -    ONLINE
    ata-ST8000VN0022-2EL112_ZA1E8GW4-part1                 -      -      -        -         -      -      -      -    ONLINE
    ata-ST8000VN0022-2EL112_ZA1E8S0V-part1                 -      -      -        -         -      -      -      -    ONLINE
  mirror-2                                             9.09T  7.54T  1.55T        -         -     0%  82.9%      -    ONLINE
    ata-ST10000VN0004-1ZD101_ZA2C69FN-part1                -      -      -        -         -      -      -      -    ONLINE
    ata-ST10000VN0004-1ZD101_ZA2964KD-part1                -      -      -        -         -      -      -      -    ONLINE
  mirror-3                                             10.9T  8.91T  2.00T        -         -     0%  81.7%      -    ONLINE
    ata-ST12000VN0008-2YS101_ZRT008SC-part1                -      -      -        -         -      -      -      -    ONLINE
    ata-ST12000VN0008-2YS101_ZV701XQV-part1                -      -      -        -         -      -      -      -    ONLINE
  mirror-4                                             5.45T  23.5G  5.43T        -         -     0%  0.42%      -    ONLINE
    ata-WDC_WD60EFRX-68L0BN1_WD-WX31D87HDU09-part1         -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD60EZRZ-00GZ5B1_WD-WX11D374490J-part1         -      -      -        -         -      -      -      -    ONLINE
  mirror-5                                             3.62T  17.2G  3.61T        -         -     0%  0.46%      -    ONLINE
    ata-ST4000DM004-2CV104_ZFN0GTAK-part1                  -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0354579-part1         -      -      -        -         -      -      -      -    ONLINE
cache                                                      -      -      -        -         -      -      -      -  -
  nvme-Samsung_SSD_970_EVO_Plus_500GB_S4P2NF0M419555D   466G  3.58G   462G        -         -     0%  0.76%      -    ONLINE

Once the pool was created, we wanted to set a pool property so that, in the future, when we replace these drives with bigger ones, the pool will automatically expand.

zpool set autoexpand=on vault
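
As a quick check that the property was applied, it can be read back with the standard zpool get syntax:

zpool get autoexpand vault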

With the pool created, we can then create our dataset (filesystem) and its associated mount point. We also want to ensure that the filesystem supports POSIX ACLs (posixacl).

zfs create vault/airvideo
zfs set mountpoint=/mnt/av vault/airvideo
zfs set acltype=posixacl vault
zfs set acltype=posixacl vault/airvideo
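
Before copying data back, it does not hurt to verify that the properties landed on the dataset; zfs get handles that (a check I would add, not part of the original transcript):

zfs get acltype,mountpoint vault/airvideo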

We mount the new ZFS filesystem on /mnt/av because /mnt/airvideo is currently a soft-link to the temporary /mnt/av2 volume, which is still in operation. We first have to copy all our content back from the temporary volume to the new ZFS filesystem.

sudo rsync --delete -aAXv /mnt/av2/ /mnt/av > ~/nohup.avs.rsync.out 2>&1 &

This took around 4 days to complete. We can all breathe easier, because the data now has redundancy again! We can now bring the new ZFS filesystem live.

sudo systemctl stop apache2.service smbd nmbd plexmediaserver.service
sudo rm /mnt/airvideo
sudo zfs set mountpoint=/mnt/airvideo vault/airvideo
sudo systemctl start apache2.service smbd nmbd plexmediaserver.service

zfs list

NAME             USED  AVAIL     REFER  MOUNTPOINT
vault           31.0T  14.2T       96K  /vault
vault/airvideo  31.0T  14.2T     31.0T  /mnt/airvideo

The above did not take long, and the migration is complete!

df -h /mnt/airvideo

Filesystem      Size  Used Avail Use% Mounted on
vault/airvideo   46T   32T   15T  69% /mnt/airvideo

Checking the capacity of our new ZFS filesystem shows that we now have 46TB to work with! This should last for at least a couple of years, I hope.

I also did a quick reboot of the system to ensure it comes back up with the ZFS filesystem intact and without issues. It has now been running for 2 days. I have not collected any performance statistics, but the services all feel faster.
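
For anyone wanting a slightly more rigorous check than "the services all feel faster", a few standard ZFS commands give a quick picture of pool health and cache activity after a reboot. The arc_summary tool ships with the ZFS utilities on Ubuntu, though packaging may differ elsewhere:

sudo zpool status vault        # confirm every mirror and the cache device are ONLINE
sudo zpool iostat -v vault 5   # per-vdev throughput, sampled every 5 seconds
arc_summary | head -n 40       # ARC/L2ARC statistics, if the tool is installed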

Media Server Storage Hardware Reconfiguration

Our media server has reached 89% utilization and needs a storage expansion. The storage on the server is built with the Logical Volume Manager (LVM) on top of mdadm software RAID. I could expand the storage by swapping out the lowest-capacity hard drives for larger ones, as I have done previously.

I thought I would try something different this time around. I would like to switch from LVM to ZFS, an LVM alternative that is very popular with modern mass-storage systems, especially TrueNAS.

All filled with 8 drives

Before I can attempt the conversion, I will first need to back up all of the content from the media server. The second issue is that I need more physical space in the server to house additional hard drives. The existing housings are all filled except for a single slot, which is not going to be enough.

A related issue is that I no longer have any free SATA ports available for the new hard drives, so I purchased a GLOTRENDS SA3116J PCIe SATA Adapter Card with 16 SATA ports. With this installed, I have more than enough SATA ports for additional storage.

PCIe SATA Card Installed

One downside of the SATA card is that it is limited to a PCIe 3.0 x1 link, which caps data transfer at a theoretical maximum of roughly 1GB/s shared across all of its ports. Given that the physical hard drives top out at around 200MB/s, a single drive will never saturate the link, so I don't think we need to be too concerned about this bottleneck. We will see how it holds up in practical usage.

Luckily, I have extra SATA power cables and extension cables lying around, and my existing 850W power supply has ample capacity for the additional hard drives.

How do we house the additional hard drives when the cabinet is full? I went to Amazon again and purchased a hard drive cage, the Jaquiain 3.5 Inch HDD Hard Drive Cage 8X3.5 Inch HDD Cage. I did not have to buy any new hard drives yet, because I had plenty of old ones lying around. After I put the cage together with 8 really old, used hard drives, it looked something like this:

With this additional storage, I am now able to back up the media content from my media server. Before I do that, however, there is one last thing I need to do: experiment to find an optimal ZFS pool configuration that works with my content and usage. I will perform this experimentation on the additional storage before reconfiguring the old storage with ZFS. Please stay tuned for my findings.

After booting the system with 16 hard drives, I measured the power usage, and it hovered around 180W. That is not too bad: less than two traditional incandescent light bulbs.


Addendum:

During the setup, I had to spend hours deciphering an issue: my system did not recognize some of my old hard drives. After many trials, I finally narrowed it down to the GLOTRENDS card not being compatible with an old 2TB Western Digital enterprise drive. This is the first time I have come across a SATA incompatibility.

There is another possibility: these drives may have been damaged by the use of an incorrect modular power cable. I found that these drives do not work with my USB 3.0 external HDD dock either, which lends additional credence to the idea that the drives have been physically damaged.

The troubled drive with the SATA card

All my other drives worked fine with the card.

Another discovery is that not all modular power cables will work with my ASUS ROG STRIX 850W power supply. Initially, I thought I would use an 8-pin PCIe to 6-pin adapter along with a 6-pin to SATA power cable designed for Corsair power supplies.

PCIe Power Adapter from Amazon
OwlTree PCI-e 6 Pin Male to 4 SATA 1 to 4 SATA Female Power Supply Splitter Supply Cable for Corsair Modular RM650X RM750X RM850X RM1000X

Using the above cables caused the power supply not to start at all. I had to hunt for the original cables that came with the STRIX power supply.

I learned a lot from rejigging this media server. My reward was seeing the server boot up with all 16 hard drives and 2 NVMe SSDs recognized. I have never built a system with this many drives and this much storage before.

Resizing LVM Volume with Cache

I had to increase the size of my media LVM logical volume again. I provided the instructions in a previous post, and I have done this many times. However, this time around, I ran into a snag.

Apparently, this is the first time I have tried to increase the logical volume since I implemented LVM caching, which I wrote about in this post.

The steps in the “Linux LVM Super Simple to Expand” post are the same right up to and including the resizing of the physical volume (sketched below for context). Afterwards, in order to resize the logical volume, we first have to temporarily split off the cache.
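
For context, the physical-volume portion of that earlier procedure boils down to growing the underlying md device and then running pvresize against it; the device name below is illustrative rather than the one I actually grew this time:

sudo pvresize /dev/md2    # illustrative; run against whichever physical volume was grown

Splitting the cache off the logical volume is done with: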

sudo lvconvert --splitcache /dev/airvideovg2/airvideo

Once the logical volume is no longer cached, we can proceed with the resizing.

sudo lvresize -l +100%FREE /dev/airvideovg2/airvideo

Once the logical volume has been resized, we can stop the dependent services, unmount the volume, and perform the required check and resize of the filesystem.

sudo systemctl stop smbd.service

sudo systemctl stop mpd.service

sudo systemctl stop apache2.service

sudo umount /mnt/airvideo

sudo e2fsck -y -f /dev/airvideovg2/airvideo

sudo resize2fs -p /dev/airvideovg2/airvideo

Note that e2fsck and resize2fs will take some time, between thirty minutes and an hour each. Once the filesystem is resized, we can reattach the cache.

sudo lvconvert --type cache --cachepool airvideovg2/lv_cache airvideovg2/airvideo

Usually it is a good idea to reboot the server after this just to make sure it mounts properly.
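
After the reboot (or remount), I would suggest confirming that the cache is attached again and that the extra space is visible. Something along these lines works:

sudo lvs -a airvideovg2    # the airvideo LV should list its cache pool again
df -h /mnt/airvideo        # the filesystem should report the new size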

This was a small snag, and LVM is still super simple to expand.

NVMe SSD with LVM Cache

I have been a huge fan of Apple's Fusion Drives. They are an excellent compromise: affordable mass storage that can still give you SSD performance. The concept is simple: pair a fast but small SSD with a large but slow, and much more affordable, mechanical HDD. You get good performance and lots of storage without breaking the bank.

I had falsely assumed that this capability existed only in Apple's macOS operating system. This week, I was pleasantly surprised to discover that LVM cache can do more or less the same thing on Linux. This newfound knowledge, along with an excellent deal on a 500GB Samsung 970 Evo Plus NVMe M.2 drive, gave me the itch to experiment with my NAS media server this weekend.

The hardware was easy enough to install, but I had to move one of the existing SATA connections because the M.2 slot on the motherboard shares a PCIe bus with a pair of SATA ports. Luckily, I bothered to check the motherboard manual; otherwise, I would have been scratching my head while the server failed to boot.

The software configuration was a bit more involved. Before I purchased the NVMe drive, I did some experimentation with two external USB drives, one SSD and one HDD. I found this article to be super helpful in configuring LVM cache with my test drives. However, the configuration was not fully restored after a reboot. After many hours of research on the Internet, I found this article indicating that my Ubuntu Linux distribution was missing the thin-provisioning-tools package. I also experimented with the two available cache modes, writethrough and writeback, and found that writeback mode was a bit buggy and did not sync the cache with the storage drive. Yet another article came to the rescue.

lvchange --cachesettings migration_threshold=16384 vg/cacheLV

I preferred writeback mode due to its better write performance characteristics. Apparently, to fix the issue, I had to increase the migration threshold to something larger than the default of 2048, because the cache chunk size was too large for that default.
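
For reference, the missing package mentioned above installs in one line on Ubuntu (assuming the stock apt tooling):

sudo apt install thin-provisioning-tools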

Here are the steps I took to configure my existing logical volume (airvideovg2/airvideo) to be cached by the NVMe drive I had just purchased. I first had to partition the NVMe drive; the resulting layout from parted is shown below.

Model: Samsung SSD 970 EVO Plus 500GB (nvme)
 Disk /dev/nvme0n1: 500GB
 Sector size (logical/physical): 512B/512B
 Partition Table: gpt
 Disk Flags: 
 

 Number  Start   End    Size   File system  Name     Flags
  1      1049kB  500GB  500GB               primary
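
The partitioning commands themselves are not shown above; a minimal parted sequence that produces a single GPT partition like the one listed would be along these lines:

sudo parted /dev/nvme0n1 mklabel gpt
sudo parted -a optimal /dev/nvme0n1 mkpart primary 0% 100%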

Create an LVM physical volume on the previously created NVMe partition (/dev/nvme0n1p1) and add it to the existing airvideovg2 volume group.

sudo pvcreate /dev/nvme0n1p1


sudo vgextend airvideovg2 /dev/nvme0n1p1

Create a cache-pool logical volume, set its cache mode to writeback, and establish the migration threshold setting.

sudo lvcreate --type cache-pool -l 100%FREE -n lv_cache airvideovg2 /dev/nvme0n1p1



sudo lvchange --cachesettings migration_threshold=16384 airvideovg2/lv_cache

sudo lvchange --cachemode writeback airvideovg2/lv_cache

Finally, link the cache-pool logical volume to our original logical volume.

sudo lvconvert --type cache --cachepool airvideovg2/lv_cache airvideovg2/airvideo

Now my original logical volume is cached, and I have economically gained SSD performance on my 20TB RAID setup for less than $200. Below is my final volume listing.

$ sudo lvs -a
   LV               VG          Attr       LSize   Pool       Origin           Data%  Meta%  Move Log Cpy%Sync Convert
   airvideo         airvideovg2 Cwi-aoC---  20.01t [lv_cache] [airvideo_corig] 0.01   11.78           0.00            
   [airvideo_corig] airvideovg2 owi-aoC---  20.01t                                                                    
   [lv_cache]       airvideovg2 Cwi---C--- 465.62g                             0.01   11.78           0.00            
   [lv_cache_cdata] airvideovg2 Cwi-ao---- 465.62g                                                                    
   [lv_cache_cmeta] airvideovg2 ewi-ao----  64.00m                                                                    
   [lvol0_pmspare]  airvideovg2 ewi-------  64.00m      

We can also use the command below to get a more detailed listing.

sudo lvs -a -o+name,cache_mode,cache_policy,cache_settings,chunk_size,cache_used_blocks,cache_dirty_blocks

Upgrade completed. We’ll see how stable it is in the future.