Resizing LVM Volume with Cache

I had to increase the size of my media LVM logical volume again. I provided the instructions in a previous post and have done this many times, but this time around I ran into a snag.

Apparently this was the first time I tried to increase the logical volume since implementing LVM caching, which I wrote about in this post.

The steps in the “Linux LVM Super Simple to Expand” post are the same right up to and including the resizing of the physical volume. Afterwards, in order to resize the logical volume, we first have to temporarily disable the cache.

sudo lvconvert --splitcache /dev/airvideovg2/airvideo

Once the logical volume is no longer cached, we can proceed with the resizing.

sudo lvresize -l +100%FREE /dev/airvideovg2/airvideo

Once the resize is completed, we can unmount the volume and perform the required resizing of the filesystem.

sudo systemctl stop smbd.service

sudo systemctl stop mpd.service

sudo systemctl stop apache2.service

sudo umount /mnt/airvideo

sudo e2fsck -y -f /dev/airvideovg2/airvideo

sudo resize2fs -p /dev/airvideovg2/airvideo

Note that e2fsck and resize2fs will take some time, between thirty minutes and an hour each. Once the file system is resized, we can reattach the cache.

sudo lvconvert --type cache --cachepool airvideovg2/lv_cache airvideovg2/airvideo

Usually it is a good idea to reboot the server after this just to make sure it mounts properly.
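If you would rather not reboot, remounting and restarting everything by hand looks roughly like this (a sketch; it assumes /mnt/airvideo has an fstab entry, and your lvs attribute flags will vary):

sudo lvs -a airvideovg2
sudo mount /mnt/airvideo
sudo systemctl start smbd.service mpd.service apache2.service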

This is a small snag and LVM is still super simple to expand.

Ubuntu Server Missing Network Interface

I had an opportunity recently to install Ubuntu Server on a very old server, a Dell R710 that had 4 native network interfaces and 4 add-on network interfaces, resulting in a total of 8 network interfaces.

During the installation process, the installer did recognize all the physical network interfaces on the machine, but because it did not successfully acquire DHCP addresses, I was forced to install Ubuntu without networking.

After the installation, only the loopback (lo) interface existed; all the other physical interfaces were missing. I had to use netplan to create the interfaces. This article was of tremendous help, and I pretty well just followed its instructions.

I first created the 99-disable-network-config.cfg file with the contents as instructed by the article.

sudo su -

echo "network: {config: disabled}"  >>  /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

I then edited the 50-cloud-init.yaml file to contain the following:

vim /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: true

Once netplan was configured, I executed the following commands:

netplan generate
netplan apply

After I rebooted the computer, the eno1 network interface existed with an IP provisioned by my local DHCP server.
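To double-check the result, something like the following should show eno1 up with its DHCP address (eno1 is specific to my hardware):

ip addr show eno1
networkctl status eno1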

Much of this post is taken verbatim from the article, but I am recording it here so that it is easier for me to find if I ever need it again.

MacBook with Windows on External SSD

All of this started with one of my neighbours, whose laptop broke down. The laptop stopped recognizing its internal SATA connection, so it would not boot. My neighbour ended up booting Windows from an external SSD using a Windows To Go solution so he could continue to use his laptop.

MacBook Air (13-inch, Mid 2013) running Windows 10 Home

This got me thinking about whether it was possible to boot Windows from an external SSD on a Mac. I knew Bootcamp allows you to create a dual-boot scenario on the Mac, but the default procedure requires you to repartition your internal drive to do so.

With external SSDs coming down in price (you can now get a 500GB Samsung T5 for less than $130 CAD), having Windows on the side with your MacBook seems a pretty sweet deal.

After doing some research, it seems others have had similar ideas. I am not going to detail all the steps, since YouTube videos and other forums have already done the deed. Instead, the high-level process goes something like this:

  1. Use the Bootcamp Assistant App on the Mac to collect all the drivers on a USB stick or a local folder on your Mac. Do not use the wizard. You will need to use the Action menu. See Figure 1 below.
  2. Download a Windows ISO and use a virtual machine (e.g. Parallels, VirtualBox, etc.) to install the Windows ISO onto an external SSD drive. I first tried VirtualBox but ran into Catalina permission issues that I could not circumvent. I ended up doing it with Parallels, which I will describe in detail later.
  3. Copy the drivers from the USB stick created in step 1 onto the desktop of the newly installed Windows on the SSD drive.
  4. Reboot your Mac, hold the Option key down before the Apple logo shows, and boot into the EFI entry that contains Windows.
  5. Make sure you have an external keyboard and mouse handy because the default Windows install may not recognize the native hardware yet. On my MacBook Air, I had no issues.
  6. Once Windows comes up, log in and run the Bootcamp setup from the desktop, which was originally copied from the USB stick.
  7. Once this is all done, you can dual boot into Windows on the Mac as long as you have that SSD drive handy.
Figure 1: Remember to use the Action menu

So far everything works, and the machine is happily installing Visual Studio 2019. I even tried Cortana, and the mic and speakers work well. A quick Skype test call confirmed that the webcam works too.

I do want to document the steps that I performed with Parallels when installing Windows 10 onto the SSD. Those steps were not intuitive.

Step 1: Choose the Install Windows or another OS
Step 2: Choose Manually
Step 3: Don’t choose anything; just check the “Continue without a source” box at the bottom left-hand corner

After this, stop the virtual machine and make the following custom configurations:

Step 4a: Select Hardware and configure the Hard Disk
Step 4b: Make sure your external media is plugged in, and select it in the Source. For example, Physical disk: Kingston DataTraveler 3.0 Media (disk2)
Step 5a: Change the boot order so that you can boot from CD, and connect the CD to your Windows ISO (not shown)

Start the virtual machine and it will go through the first part of the Windows installation. Once that part is completed, it will reboot. Instead of booting from the external media, it will boot from the CD ISO image again. Simply shut down the VM and change the boot order once more.

Step 5b: Change the boot order again to Hard Disk first, and restart the VM to complete its second part of the install

Once Windows 10 completes its installation, it will go through a user account setup process. If you are connected to the Internet during this stage, Windows 10 will force you to either use an existing Microsoft account or create one. This is unfortunate, but go ahead and create a temporary one. Remember to create a local administrator account and remove the temporary Microsoft account as the final step of the Windows setup.

Remember to copy the Bootcamp drivers from the USB stick to the Windows desktop before completing and shutting down the virtual machine.

Now you are ready to restart the Mac and dual boot into the external drive by holding the Option key while the machine restarts. The final step is to run the Bootcamp Setup.exe program, which should be located inside the Bootcamp folder that you previously copied to the desktop. This is the last step of the Windows configuration on the SSD drive, and you can restart your Mac and dual boot into Windows one final time.

You are now running Windows natively on the Mac’s metal, without any emulators or virtual machines. This process is great for revitalizing old MacBooks lying around, especially for students who need a Windows computer for their curriculum but still want to keep their macOS. For more contemporary Macs, the small form factor and the speed of the Samsung T5 drive make it a great fit for this type of situation. This is very cool!

Update: Potential Trouble with Major Windows Update

I have been told that a major Windows update can encounter an error, and that a registry change is required to fix it. The following page has more information. In summary, you have to change the registry key PortableOperatingSystem from 1 to 0. This key can be found at registry location HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control. Thanks to Martin Little for this very helpful information.
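For reference, a sketch of making that change from an elevated Command Prompt instead of regedit (the value is a DWORD, as far as I know):

reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v PortableOperatingSystem /t REG_DWORD /d 0 /f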

Update: Macs with Secure Boot Using the T2 Chip

To allow a Mac with the T2 chip to boot from an external drive, certain settings have to be made with the Startup Security Utility. This utility can be accessed via the Mac’s recovery mode, under the Utilities menu. You want to disable Secure Boot and allow booting from external media. Since Secure Boot is disabled, set a firmware password to prevent a bad actor from booting their own operating system with their own live USB key.

Encrypted Live USB Stick

The goal is to create a USB key that contains a Linux based operating system. Any Linux compatible computer can then be booted with this USB key, temporarily borrowing the host computer. The hosted Linux OS can then access an encrypted partition that houses important private information that may be helpful in an emergency. This technique offers the maximum portability of accessible, private information such as your will, financial data, credentials, etc.

I previously had a USB key formatted with an encrypted Mac filesystem storing the same information. However, this was inconvenient because you would need to find a Mac in an emergency.

In the Linux community, you can create a Live USB key. The concept is an operating system that runs off the USB key on any computer you can plug the key into. However, many of these Live USB distributions do not remember any changes you make while using the operating system. The next time you boot from the Live key, all your previous changes are gone and the Linux environment reverts to its original, pristine state. To survive between uses, these changes have to be “persisted”.

I set out to find the best method for creating a Live Linux USB that operates with an encrypted persistent partition.

All the commands in this article were performed within an Ubuntu 18.04 LTS Desktop install. I installed this version on both VirtualBox and Parallels on the Mac. Both worked beautifully, but Parallels has smoother integration with the Mac.

I first tried the Kali distribution, using the instructions in this USB Persistence &amp; Encrypted Persistence article (Article 1). However, the USB stick I was using, a Kingston DTSE9 G2 USB 3.0 32GB, was simply way too slow on writes, making the Live USB almost unusable.

I searched for an alternative USB stick and settled on the SanDisk 64GB Ultra Fit USB 3.1 Flash Drive. This new stick’s write performance was 4x faster than the Kingston’s.

After learning more about initramfs hooks and boot loaders, and a refresher on the UEFI and BIOS boot processes and partition layout strategies for USB storage devices, I decided to roll my own Live USB using Ubuntu Desktop as a base, along with the mkusb tool for the initial layout. The reason for the change is that I already have Ubuntu elsewhere in the house, so standardization is probably a better bet.

To improve performance further, I decided that it was not necessary to encrypt the persistent partition where the system configuration updates are stored. Instead, I created my own private encrypted partition to hold only the data that requires protection. Article 1 also provided details on how to use LUKS to encrypt any Linux partition, so my exercise with Kali Linux was not a total waste of time.

Before I could run mkusb, I needed to install it first:

sudo add-apt-repository universe
sudo add-apt-repository ppa:mkusb/ppa
sudo apt-get update
sudo apt-get install mkusb mkusb-nox usb-pack-efi

I ran the mkusb tool (after sudo su -)1 with the following options:

We also chose msdos partitioning so that the key will boot on more computers. Once mkusb completed, we needed to perform some custom partition layout. We used the gparted program for this purpose; the completed partition layout looks something like this:

Final MBR Partition Table

We first deleted the original usbdata partition and grew the extended partition (/dev/sdb2) to about 18 GB: approximately 6 GB for casper-rw, where the system stores any custom configurations or upgrades made since the Live USB key was created. We then created another logical partition called Personal, around 12 GB in size, which will be encrypted; this is where we will store private, sensitive data for emergency use.

The remaining space was allocated to USBDATA, a last primary partition for normal USB data sharing, the typical use case for a USB stick. We also wanted to make sure that the other FAT32 (usbboot) partition is not visible in Windows, so we set the hidden partition flag, again with the gparted program.

Once the partition table was completed, we could encrypt the Personal (/dev/sdb6) partition. For this, we went back to Article 1, which gave us the following instructions.

~# cryptsetup --verbose --verify-passphrase luksFormat /dev/sdb6
 WARNING!
 This will overwrite data on /dev/sdb6 irrevocably.
 Are you sure? (Type uppercase yes): YES
 Enter passphrase for /dev/sdb6: 
 Verify passphrase: 
 Key slot 0 created.
 Command successful.

~# cryptsetup luksOpen /dev/sdb6 myusb
Enter passphrase for /dev/sdb6:

~# mkfs.ext4 -L Personal /dev/mapper/myusb

~# cryptsetup luksClose /dev/mapper/myusb

All done! Now we have a bootable USB stick that can boot any Ubuntu-compatible computer. I can store my personal data in a very safe and private way within the encrypted Personal partition, while any changes I make to the system are preserved between uses of the USB stick. On top of it all, the stick still has 40+ GB (~37.5 GiB) of storage for normal USB transfer usage.
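For day-to-day use, accessing the Personal partition goes roughly like this (a sketch; it assumes the stick still enumerates as /dev/sdb, which can differ from machine to machine, and "personal" is just the mapper name I chose):

sudo cryptsetup luksOpen /dev/sdb6 personal
sudo mount /dev/mapper/personal /mnt
# ... read or update the private files under /mnt ...
sudo umount /mnt
sudo cryptsetup luksClose personal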

I spent some time copying confidential information that I think I will need in an emergency into the Personal partition. I want to duplicate the finished Live USB key so that my wife and I will each have a copy always available on our physical keychains.

I did this on my Mac, and the command to duplicate the USB drive is:

sudo dd if=/dev/rdisk2 of=/dev/rdisk3 bs=4m conv=notrunc
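Before running that command, I had to identify the two disk numbers and unmount the sticks (disk2 and disk3 were mine; yours will almost certainly differ, and getting them wrong is destructive):

diskutil list
sudo diskutil unmountDisk /dev/disk2
sudo diskutil unmountDisk /dev/disk3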

If the USB key is ever lost, whoever picks it up will need to:

  • Recognize that this is a bootable USB key; otherwise it will just seem like a 40GB USB flash drive;
  • Get the password needed to log in to Linux; I thought about installing two-factor authentication but decided not to, because any good hacker can simply access the partition from another live key;
  • If they do mount the partition manually, they still need the LUKS key to decrypt it; I made the LUKS key different from the OS password and twice as long.

I think the risk is worth the benefit of having critical info around in case of an emergency.

Update: WiFi on MacBooks

It looks like MacBooks use Broadcom WiFi chips, and most Linux distributions do not ship with the required drivers. This can be easily solved by installing the following software:

sudo apt update
sudo apt install bcmwl-kernel-source

Even with the above software installed, there is still a little ritual:

  1. Launch the “Software and Updates” application;
  2. Select the “Additional Drivers” tab;
  3. Select “do not use this driver”, allow the process to complete, and reboot the system;
  4. Log back into the system, repeat steps 1 &amp; 2, and then select the Broadcom drivers;
  5. Without rebooting, WiFi networks should now be available for use.

Unfortunately the above ritual will have to be performed every time the Live USB stick is powered off.
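I have not verified this, but since bcmwl-kernel-source supplies the wl kernel module, I suspect the ritual is roughly equivalent to reloading that module by hand:

sudo modprobe -r wl
sudo modprobe wl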

Update: Tried Linux Live Kit

I wanted to further customize my Live USB key. Instead of keeping a persistent partition, I thought I would keep a Linux VM at home and ensure that it is up to date and customized. At certain intervals, I would then create a Live USB key from the VM install.

I tried Linux Live Kit, but the results were disappointing. I was able to create a bootable USB key that worked, but the OS did not recognize the MacBook’s keyboard or trackpad. For some reason, the required drivers did not get bundled during the process. I’ll have to read up on how to create a Live USB key from scratch rather than depending on these tools, but it is more complicated than I thought, so for now this idea is shelved until I have more time.

1For some reason, mkusb will not work with live persistence if I simply run sudo mkusb or run it under a non-root account. The only way I can get it to work is to run it within a root login session.

NAS RAID-1 Fail

This past weekend my media NAS server was intolerably slow. When I investigated, I found that one of the RAID-1 partitions was experiencing read errors and timing out. I decided to risk a reboot, and to my surprise the RAID-1 device did not come back with just one failed drive; instead, mdstat reported it as inactive, something like this:

md2 : inactive sdc1[0](S)

After some Google searching, I found that I had to do the following to resurrect the md2 device.

sudo mdadm --stop /dev/md2
sudo mdadm --assemble --force /dev/md2

This reactivated the md2 device. I replaced the failed drive and re-added the new drive to md2, and the RAID-1 array is now rebuilding.
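The re-add followed the same pattern as my earlier drive swaps (a sketch; sdX1 stands in for whatever partition the new drive receives):

sudo mdadm --add /dev/md2 /dev/sdX1
cat /proc/mdstat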

The inactive state is a new experience for me, so this was a bit of a surprise.

During this exercise I also found out that the SATA connectors on my SATA add-on card were loose causing intermittent connections. I will have to find a way to address this in the future.

NVMe SSD with LVM Cache

I have been a huge fan of Apple’s Fusion Drives. They are an excellent compromise: affordable mass storage that still gives you SSD performance. The concept is simple: pair a fast but small SSD with a large, slow, and much more affordable mechanical HDD. You get good performance and lots of storage without breaking the bank.

I had falsely assumed that this capability only existed with Apple’s macOS operating system. This week I was pleasantly surprised to discover that LVM Cache can do more or less the same thing on Linux. This newfound knowledge, along with an excellent deal on a 500GB NVMe Samsung 970 Evo Plus M.2 drive, gave me the itch to experiment this weekend with my NAS media server.

The hardware was easy enough to install, but I had to move one of the existing SATA connections because the M.2 slot on the motherboard shares a PCIe bus with a pair of SATA connections. Luckily I bothered to check the motherboard manual; otherwise I would have been scratching my head while the server failed to boot.

The software configuration was a bit more involved. Before I purchased the NVMe card, I did some experimentation with two external USB drives, one SSD and one HDD. I found this article to be super helpful in configuring LVM Cache with my test drives. However, these configurations were not fully restored after a reboot. After many hours of research on the Internet, I found this article indicating that my Ubuntu Linux distribution was missing the thin-provisioning-tools package. I also experimented with the two available cache modes, writethrough and writeback. I found that writeback mode was a bit buggy and did not sync the cache and the storage drive. Yet another article came to the rescue.

lvchange --cachesettings migration_threshold=16384 vg/cacheLV

I preferred the writeback mode due to its better write performance characteristics. Apparently, to fix the sync issue, I had to increase the migration threshold to something larger than the default of 2048, because the cache chunk size was too large for that default.

Here are the steps I took to configure my existing logical volume (airvideovg2/airvideo) to be cached by the NVMe drive I just purchased. I first had to partition the NVMe drive.
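I used the same parted recipe as in my drive-upgrade post (the first command opens the interactive prompt; the next two run inside it):

sudo parted /dev/nvme0n1
mklabel gpt
mkpart primary 2048s 100%

The resulting layout: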

Model: Samsung SSD 970 EVO Plus 500GB (nvme)
 Disk /dev/nvme0n1: 500GB
 Sector size (logical/physical): 512B/512B
 Partition Table: gpt
 Disk Flags: 
 

 Number  Start   End    Size   File system  Name     Flags
  1      1049kB  500GB  500GB               primary

Create an LVM physical volume on the NVMe partition created previously (/dev/nvme0n1p1) and add it to the existing airvideovg2 volume group.

sudo pvcreate /dev/nvme0n1p1

sudo vgextend airvideovg2 /dev/nvme0n1p1

Create a cache pool logical volume, set its cache mode to writeback, and establish the migration threshold setting.

sudo lvcreate --type cache-pool -l 100%FREE -n lv_cache airvideovg2 /dev/nvme0n1p1

sudo lvchange --cachesettings migration_threshold=16384 airvideovg2/lv_cache

sudo lvchange --cachemode writeback airvideovg2/lv_cache

Finally, link the cache pool logical volume to our original logical volume.

sudo lvconvert --type cache --cachepool airvideovg2/lv_cache airvideovg2/airvideo

Now my original logical volume is cached, and I have gained SSD performance economically on my 20TB RAID setup for less than $200. Below is my final volume listing.

$ sudo lvs -a
   LV               VG          Attr       LSize   Pool       Origin           Data%  Meta%  Move Log Cpy%Sync Convert
   airvideo         airvideovg2 Cwi-aoC---  20.01t [lv_cache] [airvideo_corig] 0.01   11.78           0.00            
   [airvideo_corig] airvideovg2 owi-aoC---  20.01t                                                                    
   [lv_cache]       airvideovg2 Cwi---C--- 465.62g                             0.01   11.78           0.00            
   [lv_cache_cdata] airvideovg2 Cwi-ao---- 465.62g                                                                    
   [lv_cache_cmeta] airvideovg2 ewi-ao----  64.00m                                                                    
   [lvol0_pmspare]  airvideovg2 ewi-------  64.00m      

We can also use the command below to get a more detailed listing.

sudo lvs -a -o+name,cache_mode,cache_policy,cache_settings,chunk_size,cache_used_blocks,cache_dirty_blocks

Upgrade completed. We’ll see how stable it is in the future.

Two New 8TB Drives for Our NAS

Our NAS has run out of space again. I saw that the Seagate IronWolf 8TB NAS Hard Drive was on sale at Newegg for $309 CAD, so I jumped at the chance and purchased two.

I am now following the same steps as I outlined in this post, replacing two old 4TB drives with these two new 8TB drives.

So far so good. Hopefully when all is said and done, my NAS will have a total of 18TB in a RAID-1 configuration across 6 hard drives: two 4TB, two 6TB, and the two new 8TB.

I noticed that I could fit two more drives in my chassis and may decide to re-add the two old 4TB drives, but first I’ll have to check whether my power supply can handle the demand.

I really like this mdadm and LVM setup.

Update: After two mdadm syncs, each around 8 hours, and a pvresize that took another 5 hours, I had to convert the filesystem from 32-bit to 64-bit using these very helpful instructions. Only after converting to 64-bit could I expand the existing filesystem beyond 16TB. It was a learning and yet rewarding experience. The next step is to reuse the two old 4TB drives in the same chassis and add them to the logical volume.
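For the record, the 32-bit to 64-bit ext4 conversion boils down to something like this (a sketch of the instructions I followed; it needs e2fsprogs 1.43 or newer, and the filesystem must be unmounted and checked first):

sudo umount /mnt/airvideo
sudo e2fsck -f /dev/airvideovg2/airvideo      # clean check required before conversion
sudo resize2fs -b /dev/airvideovg2/airvideo   # enable the 64-bit feature
sudo resize2fs -p /dev/airvideovg2/airvideo   # now the filesystem can grow past 16TB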

Linux LVM Super Simple to Expand

During the Boxing Day sales event of 2017, I purchased a couple of Seagate Barracuda ST4000DM004 (4TB) hard drives. The intention was to expand our main home network attached storage (NAS), which is managed by our Linux server using mdadm and the Logical Volume Manager (LVM).

However, I procrastinated until this weekend, when I finally performed the upgrade. The task went smoothly without any hiccups. I have to give due credit to the following site: https://raid.wiki.kernel.org/index.php/Growing. It provided very detailed information for me to follow and was a great help.

One major concern I had was whether I could do this without any data loss, along with the question of how much downtime the upgrade would require.

I had a logical volume, named /dev/airvideovg2/airvideo, that used 100% of a volume group made up of three RAID-1 multiple devices (md). Since I had run out of physical drive bays, to perform the upgrade I had to replace two older 2TB drives with the newer 4TB drives. The old drives were Western Digital WDC WD20EZRX-00D8PB0 (2TB) units; I can use them for other needs.

First I had to find the md device containing the 2TB pair in a RAID-1 (mirror) configuration. I did this with a combination of the lsblk and mdadm commands. For example:

$ lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb                          8:0    0  1.8T  0 disk
└─sdb1                       8:1    0  1.8T  0 part
  └─md2                      9:3    0  1.8T  0 raid1
    └─airvideovg2-airvideo 252:0    0   10T  0 lvm   /mnt/airvideo

$ sudo mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sat Nov 12 18:01:36 2016
     Raid Level : raid1
     Array Size : 1906885632 (1725.90 GiB 2000.65 GB)
  Used Dev Size : 1906885632 (1725.90 GiB 2000.65 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Mar 11 09:12:05 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : avs:2  (local to host avs)
           UUID : 3d1afb64:878574e6:f9beb686:00eebdd5
         Events : 55191

    Number   Major   Minor   RaidDevice State
       2       8       81        0      active sync   /dev/sdf1
       3       8       17        1      active sync   /dev/sdb1

I found that I needed to replace /dev/md2, which consisted of two partitions, /dev/sdf1 and /dev/sdb1, belonging to the old WD drives. I have 6 hard drives in the server chassis, so I needed the serial numbers of the drives to ensure that I was swapping the right ones. I used the hdparm command to get the serial numbers of /dev/sdb and /dev/sdf. For example:

$ sudo hdparm -i /dev/sdb

/dev/sdb:

 Model=WDC WD20EZRX-00D8PB0, FwRev=0001, SerialNo=WD-WCC4M4UDRZLD
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=7814037168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4 
 DMA modes:  mdma0 mdma1 mdma2 
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: unknown:  ATA/ATAPI-4,5,6,7

 * signifies the current active mode

Before physically replacing the drives, I first had to remove them from the md device by marking each partition as failed (-f) and then removing it from the array (-r):

sudo mdadm -f /dev/md2 /dev/sdb1
sudo mdadm -r /dev/md2 /dev/sdb1

After swapping out one of the old drives and replacing it with a new one, I rebooted the machine. I then partitioned the new drive using the parted command.

sudo parted /dev/sdb

Once within parted, execute the following commands to create a single partition using the whole drive.

mklabel gpt
mkpart primary 2048s 100%

I learned that starting at sector 2048 achieves optimal alignment. Once I created the /dev/sdb1 partition, I had to add it back into the RAID.

sudo mdadm --add /dev/md2 /dev/sdb1

As soon as the new partition on the new drive was added to the RAID, a drive resynchronization began automatically. The resync took more than 3 hours. Once it completed, I did the same thing with the remaining old drive in the RAID and performed another resync, this time from the first new drive to the second new drive. After another 3+ hours, we could finally grow the RAID device.

sudo mdadm --grow /dev/md2 --bitmap none
sudo mdadm --grow /dev/md2 --size max
sudo mdadm --wait /dev/md2
sudo mdadm --grow /dev/md2 --bitmap internal

The third command (--wait) took another 3+ hours to complete. After a full day of RAID resyncs, we now have the same /dev/md2 at 4TB instead of 2TB. However, the corresponding LVM physical volume still needed to be resized. We did this with:

sudo pvresize /dev/md2

Once the physical volume is resized, we can then extend the logical volume to use up the remaining free space that we just added.

sudo lvresize -l +100%FREE /dev/airvideovg2/airvideo

This took a few minutes but at least it was not 3+ hours.

Aside from the downtime needed to swap the hard drives, the logical volume was usable throughout the resynchronization and resizing process, which was impressive. However, at this point I had to take the volume offline by unmounting it and changing it to inactive status.

sudo lvchange -an /dev/airvideovg2/airvideo

Note that I had to stop smbd and the other services that were using the volume before I could unmount it.

The last step was to resize the file system of the logical volume, but before I could do that I was forced to perform a file system check.

sudo e2fsck -f /dev/airvideovg2/airvideo
sudo resize2fs -p /dev/airvideovg2/airvideo

I rebooted the machine to ensure all the new configurations held, and voilà, I upgraded my 8TB network attached storage to 10TB! Okay, it was not super simple, but the upgrade process was pretty simple and painless, and the downtime was minimal. The LVM and mdadm guys did a really good job here.

Three Cheers for Software RAID

Last year, I built a NAS machine, as described in my previous post here. This month, my media drive, an LVM logical volume of almost 5TB, is almost full.

This drive contains all our purchased media and home videos for Plex, as well as our Time Machine backups. To alleviate the storage shortage, I purchased two 4TB Western Digital WD40EFRX hard drives. This weekend I took the plunge and installed the two new drives into my NAS computer. In the spirit of the moment, without performing a backup, I proceeded to (a rough command sketch follows the list):

  • Use parted to partition the drives;
  • Use mdadm to create a RAID-1 device from the two drives;
  • Create a new LVM physical volume on the new RAID device;
  • Add the volume to the existing volume group;
  • Extend the logical volume to include the newly added 4TB;
  • and finally, extend the ext4 file system to include the 4TB.
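In command form, the sequence looked roughly like this (a sketch; sdX/sdY and mdN are placeholders for the actual devices, and the volume group and logical volume names are borrowed from my later posts, so treat them as illustrative):

# partition each new 4TB drive (repeat for the second drive)
sudo parted -s /dev/sdX mklabel gpt
sudo parted -s /dev/sdX mkpart primary 2048s 100%
# mirror the two new partitions into a new RAID-1 device
sudo mdadm --create /dev/mdN --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1
# turn the array into a physical volume and grow the volume group
sudo pvcreate /dev/mdN
sudo vgextend airvideovg2 /dev/mdN
# grow the logical volume and the ext4 filesystem (ext4 can grow online)
sudo lvresize -l +100%FREE /dev/airvideovg2/airvideo
sudo resize2fs -p /dev/airvideovg2/airvideo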

In the end, I now have a nearly 9TB network drive that should last me for quite some time.

In researching how to extend the logical volume, I also found out how to upgrade the drives in my existing physical volumes to higher-capacity drives without having to copy everything to another drive first. This should come in handy in the future, because my NAS computer box has no more drive bays.

I know I probably should have taken a backup before this, but everything worked and I couldn’t be happier. Hurray, hurray, and hurray for software RAID.

Thank you Gigabyte and Western Digital

This morning I upgraded my media server. The server was running on a seven-year-old motherboard, a Gigabyte GA-M61PME-S2P, with an AMD Athlon II X2 245 processor. The hard drive was the oldest component: a Western Digital WDC WD800JB-00JJ 80GB drive released back in 2007, which makes it 9 years old! At the time of this upgrade, all systems were nominal and operating without issues. Running Ubuntu 16.04.1 LTS (Xenial Xerus), next to my iMac it was the lowest-maintenance box I’ve ever put together.

The real reason for the upgrade was to reduce the power footprint of a box that is effectively running 7×24. Even when idling, the old components were clocking in at 100+ W of power usage.

I decided to replace the motherboard and CPU with an ASRock AM1H-ITX and a system-on-a-chip AMD Athlon 5350 APU, which got the system down to 55W. I also replaced the old WD800JB with a Seagate BarraCuda 7200.10 ST3500630AS 500GB hard drive, joining the existing 4 WD Green EZRX hard drives for a total of 5 traditional mechanical hard drives. Along with new Kingston HyperX Fury Black 8GB memory, the entire upgrade cost less than $220 CAD, taxes included.

I want to give a shoutout to Clonezilla. What an amazing job they did in creating a super simple piece of software for cloning drives and partitions. Of course, Linux is just so wonderful to work with. After changing the CPU and motherboard, the original Ubuntu installation booted up and ran without any major issues. The only wrinkle was that the ethernet port on the new motherboard had a new logical name (enp3s0 vs eth0). Luckily, I knew how to fix that. My LVM volume assembled without a hitch. Everything is now running fine with the new hardware and configuration.
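On Ubuntu 16.04 the interface-name fix amounts to updating the interface stanza (a sketch, assuming the box still used ifupdown rather than netplan at the time):

# /etc/network/interfaces
auto enp3s0
iface enp3s0 inet dhcp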

My next step in the coming months is to shop for a 500GB SSD. Perhaps I can find one during Cyber Monday or Black Friday sales. This should further reduce my power footprint and also increase my performance.