Adding Ceiling Fans to HomeKit

I have two legacy ceiling fans in the house, one upstairs and the second in the living room. Both use a radio frequency remote control. I could replace the fans or their remote control units with something more “smart”. However, I found out about the Bond Bridge product, which acts as a WiFi to RF bridge for these products. Both my Hampton Bay fans are supported.

Hampton Bay Fan

I had some issues connecting the Bond Bridge to my home WiFi network, but their customer support was extremely helpful. After setting up both of my fans in the Bond Home app and testing the light and fan speed controls, I integrated the Bond Bridge with my Homebridge server on my NAS.

I had to use the homebridge-bond plugin, and by now I am old hat at setting up these Homebridge plugins. A quick edit of the Homebridge configuration file as instructed by the plugin, and I can control the fans with Siri and the Home app.
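For reference, a minimal homebridge-bond platform block looks roughly like the sketch below. The IP address and token are placeholders, and the exact field names should be confirmed against the plugin's README for the version you install:

{
    "platform": "Bond",
    "bonds": [
        {
            "ip_address": "192.168.168.50",
            "token": "deadbeefdeadbeef"
        }
    ]
}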

The next step is probably to wait for Black Friday and get two HomePod minis, one for upstairs and one for the basement, so that our voice commands can be picked up throughout the house. All common accessories, save those in the basement, have now been integrated into HomeKit.

Teckin Smart LED Bulb

I found a pair of these on Amazon for around $30 CAD (after a $2 coupon saving). They looked like fun to install. I figured that, now that I know how to install Tuya devices with Homebridge, these would be great additions to the common areas of the house, should we need some colour added to our lives.

Amazon was extremely helpful and these bulbs came the next day. Amazon Prime is such a great service!

I added these devices to the TuyaSmart app without any issues and tested the lights using the app. I then logged into the Tuya developer site to ensure that the new devices were registered.

I provided the configuration to Homebridge using the template from the homebridge-tuya-lan plugin page. Unfortunately, the provided sample template did not work because of the datapoint identifiers. A datapoint identifier is a numerical id that uniquely identifies a specific device function, such as powering the device on or off. My vague understanding is that the OEM, in this case Teckin, can pick and choose the datapoint identifiers when creating their product, mapping specific numerical values to the various functions of their devices. I gleaned this from Tuya’s developer documentation here.

Therefore, the provided sample of:

"dpPower": 1

was simply incorrect. The dpPower setting needs the numerical value that actually points to the power on/off function of the device. The default of 1 was not working, so I now had the challenge of finding the right value.

Through much research on Tuya’s site and Google, I found out that each device has a signature / schema. I also found that I can get the current status by executing the following command (the key and id have been replaced with fake ones):

% tuya-cli get --ip 192.168.168.8 --key 8ddeadbeef5456ed --id 55deadbeefdeadbeef40 -a --protocol-version 3.3
{
  devId: '55deadbeefdeadbeef40',
  dps: {
    '20': false,
    '21': 'white',
    '22': 1000,
    '23': 188,
    '24': '00bc03e803e8',
    '25': '',
    '26': 0
  }
}

I then guessed that the datapoint identifiers started at 20 instead of 1. Based on the value of 1000 for dps '22', I also deduced that I had to change the colorFunction from HEXHSB to HSB, because the device was not using HEX to denote ranges. The last hint came from this comment on a forum. Consolidating all of the above, I arrived at the final configuration, which looks like this:

{
    "name": "Smart Bulb 2",
    "type": "RGBTWLight",
    "manufacturer": "Teckin",
    "model": "SB50 Smart Bulb",
    "id": "55deadbeefdeadbeef40",
    "key": "8ddeadbeef5456ed",

    "dpPower": 20,
    "dpMode": 21,
    "dpBrightness": 22,
    "dpColorTemperature": 23,
    "dpColor": 24,
    "minWhiteColor": 140,
    "maxWhiteColor": 400,
    "colorFunction": "HSB",

    "minBrightness": 10,
    "scaleBrightness": 1000,
    "scaleWhiteColor": 1000
}

The lights finally worked with Homebridge and therefore also with HomeKit. I thought adding this pair of bulbs would take a few minutes, but it took a little more effort than I expected.

I couldn’t be happier that they now work with Siri!

Gosund (Tuya) Smart Outlet with HomeKit

Recently I received an Amazon email and found the above Gosund Smart Socket promotion: four smart plugs for $33 CAD. Unfortunately, it was not HomeKit compatible. I did not want any of my smart IoT devices connected to Amazon or Google, so no thank you, Alexa and Google Home.

A few years ago, I built my own smart garage door opener and hooked it up to the Homebridge server that is running on my NAS media server. The Homebridge server allows non-certified IoT devices to be connected to HomeKit, my garage door opener being one of them. A cursory Google search suggested that it should be possible to connect the Gosund outlets to HomeKit using Homebridge, so I took the plunge and made the purchase.

The plugs came, I downloaded the Gosund app, and I set up one of the outlets. It worked like a charm through the Gosund app. As I was setting up this single outlet with Homebridge, I found tuya-convert, an alternative to Homebridge. Instead of registering the device with Homebridge, tuya-convert claims that I can just flash the firmware and add the Gosund plug directly to HomeKit. That sounded attractive, so I had to give it a shot. Long story short, I was successful in replacing the firmware, but when configuring the plug I provided the wrong configuration data, which locked me out of the plug and effectively bricked it. Nothing ventured, nothing gained. While doing this exercise, I learned a lot about how to use the esp-homekit-devices project to make any ESP8266-based device HomeKit compatible. This could be very handy in a future project, but for now let’s go back to Homebridge.

I found that the version of Homebridge on my NAS server was outdated, and so was the version of node. I backed up my existing Homebridge configuration and proceeded to uninstall Homebridge.

I installed the latest stable version of node as of the writing of this post, v12.19.0. I then followed these instructions and installed the latest version of Homebridge. This new version comes with a web-based UI as well. For convenience, this is what I did:

sudo npm install -g --unsafe-perm homebridge homebridge-config-ui-x
sudo hb-service install --user homebridge

I then reinstalled the Homebridge plugins that I previously had, which included homebridge-camera-ffmpeg, and my custom homebridge-kl-garage.

The Gosund plugs effectively use the Tuya IoT Platform. So instead of using the Gosund App, I downloaded the TuyaSmart App from the App Store. The user interface is nearly identical to the Gosund App, and I re-added the outlet with the TuyaSmart App.

Now I’m ready to install the homebridge-tuya plugin using the instructions here:

% sudo npm install -g --unsafe-perm homebridge-tuya

As per the instructions, I watched the YouTube video and followed its steps using the QR-Code method.

However, I found the video to be incomplete and I ran into issues when running the tuya-cli command. Essentially I got an error indicating that I did not have permission to run the API.

Using the information from the video and after some more Google searches, here are the steps I followed, which worked for me.

First, install the tuya-cli command:

sudo npm i @tuyapi/cli -g

Next I had to create an account with iot.tuya.com. The sign-up process was a bit tricky because the email containing the verification code seemed to take longer than the allotted 60 seconds, after which the whole process times out. It took me a couple of tries before I was able to create an account.

Once the account was created, I proceeded to create a project called HomeKit as part of Cloud Development. Below is a screenshot from the site.

After project creation

Click into the project, and under Device Management, link devices using the Link devices by App Account tab. When you click on Add App Account, a QR-Code will be presented, which needs to be scanned with the TuyaSmart App.

After linking the App account

To scan the QR-Code, open the TuyaSmart App and select the Me tab at the bottom, and tap on the scan icon in the upper right hand corner.

Use TuyaSmart App
to Scan QR-Code

Once the TuyaSmart App scans the QR-Code, you will see the account along with the count of devices that you previously linked to it.

You should be able to list all the devices that you previously registered / paired with the TuyaSmart App. I had to select America before I could see the devices. See below.

Devices previously added to the TuyaSmart App will be displayed

You will need the virtual id, which is the identifier below the device name. Go back to Project Overview and take note of the client id and secret:

The Client ID is the same as the API key for tuya-cli

Once you have these three pieces of information, you can then find the keys for your devices that you will need to configure the homebridge-tuya plugin. To do this, execute the following (note that the API key and secret below are fake):

% DEBUG=* tuya-cli wizard
? The API key from tuya.com: akkopy4vox723px9kcb23
? The API secret from tuya.com 3hfjodfu672kfm08711kpsnbvzzuyerk
? Provide a 'virtual ID' of a device currently registered in the app: 46616355e09806ca6ba7

The above command should yield something like (again the key is fake):

[
  {
    name: 'Mini Smart Plug',
    id: '46616355e09806ca6ba7',
    key: '823a8ee651beefdead'
  }
]

Now that we have the id and key for the Gosund outlet, we can then configure Homebridge using the homebridge-tuya plugin. We use the Homebridge web interface to do this.

Homebridge Web UI

The configuration for the plugin looks something like:

{
    "platform": "TuyaLan",
    "name": "TuyaLan",
    "devices": [
        {
            "name": "Mini Smart Plug",
            "type": "Outlet",
            "manufacturer": "Gosund",
            "model": "WP3 Mini Smart Plug",
            "id": "46616355e09806ca6ba7",
            "key": "823a8ee651beefdead"
        }
    ]
}

Once I restarted Homebridge on my NAS server, my Home App on my iPhone showed the smart plugs all configured. Below is what it looks like once I configured three of the Gosund smart plugs.

Home App

The integration is pretty good, and the plugs are cheap enough that I decided to buy four more.

Update: I had to add the "encoderOptions": "-preset ultrafast" property to the videoConfig object, as well as ensure the "audio" property is set to false, in the homebridge-camera-ffmpeg plug-in configuration to fix HomeKit camera streaming. With the latest version, 3.0.3, the picture freezes and I only get audio if this encoder option is not provided. Below is a complete sample for one of the Unifi cameras:

"cameras": [
                {
                    "name": "Dining Room",
                    "videoConfig": {
                        "source": "-rtsp_transport http -re -i rtsp://192.168.168.198:7447/5e42fef4a8faffa2326b5d38_0",
                        "maxStreams": 4,
                        "maxWidth": 1280,
                        "maxHeight": 720,
                        "maxFPS": 15,
                        "maxBitrate": 600,
                        "vcodec": "h264",
                        "packetSize": 188,
                        "mapvideo": "0:1",
                        "mapaudio": "0:0",
                        "audio": false,
                        "encoderOptions": "-preset ultrafast",
                        "debug": false
                    }
                }
]

Resizing LVM Volume with Cache

I had to increase the size of my media LVM logical volume again. In a previous post, I provided the instructions. I have done this many times. However, this time around, I ran into a snag.

Apparently this was the first time I tried to increase the logical volume after implementing LVM caching, which I wrote about in this post.

The steps in the “Linux LVM Super Simple to Expand” post are the same right up to and including the step involving the resizing of the physical volume. Afterwards, in order to resize the logical volume, we first have to disable the cache temporarily.

sudo lvconvert --splitcache /dev/airvideovg2/airvideo

Once the logical volume is no longer cached, then we can proceed with the resizing.

sudo lvresize -l +100%FREE /dev/airvideovg2/airvideo

Once the resize is completed, we can unmount the volume and perform the required resizing of the filesystem.

sudo systemctl stop smbd.service

sudo systemctl stop mpd.service

sudo systemctl stop apache2.service

sudo umount /mnt/airvideo

sudo e2fsck -y -f /dev/airvideovg2/airvideo

sudo resize2fs -p /dev/airvideovg2/airvideo

Note that e2fsck and resize2fs will take some time, between thirty minutes and an hour each. Once the file system is resized, we can reattach the cache.

sudo lvconvert --type cache --cachepool airvideovg2/lv_cache airvideovg2/airvideo

Usually it is a good idea to reboot the server after this just to make sure it mounts properly.
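If a full reboot is not convenient, remounting the volume and restarting the services stopped earlier looks something like this (assuming the volume is listed in /etc/fstab):

sudo mount /mnt/airvideo

sudo systemctl start smbd.service mpd.service apache2.service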

This is a small snag and LVM is still super simple to expand.

Ubuntu Server Missing Network Interface

I had an opportunity recently to install Ubuntu Server on a very old server, a Dell R710 that had 4 native network interfaces and 4 add-on network interfaces, resulting in a total of 8 network interfaces.

During the installation process, the installer did recognize all the physical network interfaces on the machine, but because it did not successfully acquire DHCP addresses, I was forced to install Ubuntu without networking.

After the installation, only the loopback (lo) interface existed and all the other physical interfaces were missing. I had to use netplan to create the interfaces. This article was of tremendous help; I pretty well just followed its instructions.

I first created the 99-disable-network-config.cfg file with the contents as instructed by the article.

sudo su -

echo "network: {config: disabled}"  >>  /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

This was followed by editing the 50-cloud-init.yaml file to have the following contents:

vim /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: true

Once netplan is configured, I then executed the following commands:

netplan generate
netplan apply

Once I rebooted the computer, the eno1 network interface now exists with a provisioned IP from my local DHCP server.
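The interface and its address can be confirmed with either of the following commands (both ship with a stock Ubuntu Server install):

ip addr show eno1

networkctl status eno1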

Much of the content in this post is a verbatim reference to the article, but I am providing it here so that it is easier for me to find if I ever need it in the future.

MacBook with Windows on External SSD

All of this started with one of my neighbours whose laptop broke down. The laptop stopped recognizing its internal SATA connection, so it would not boot. My neighbour ended up booting Windows from an external SSD using a Windows To Go solution to continue using his laptop.

MacBook Air (13-inch, Mid 2013) running Windows 10 Home

This somehow got me thinking about whether it is possible to boot Windows from an external SSD on a Mac. I knew Bootcamp allows you to create a dual boot setup on the Mac, but the default procedure requires you to repartition your internal drive to do so.

With external SSD drives coming down in price (for example, you can now get a 500GB Samsung T5 for less than $130 CAD), it seems a pretty sweet deal to have Windows on the side with your MacBook.

After doing some research, it seems like others have similar ideas. I am not going to detail all the steps, since you can find YouTube videos and other forums that have already done the deed. Instead, the high level process goes something like this:

  1. Use the Bootcamp Assistant App on the Mac to collect all the drivers on a USB stick or in a local folder on your Mac. Do not use the wizard; you will need to use the Action menu. See Figure 1 below.
  2. Download a Windows ISO and use a virtual machine (e.g. Parallels, VirtualBox, etc.) to install the Windows ISO onto an external SSD drive. I first tried VirtualBox but ran into Catalina permission issues that I could not circumvent. I ended up doing it with Parallels, which I will detail later.
  3. Copy the drivers from the USB stick created in step 1 onto the desktop of the freshly installed Windows on the SSD drive.
  4. Reboot your Mac, hold the Option key down before the Apple logo shows, and boot into the EFI entry that contains Windows.
  5. Make sure you have an external keyboard and mouse handy because the default Windows install may not recognize the native hardware yet. On my MacBook Air, I had no issues.
  6. Once Windows comes up, log in and run the Bootcamp setup from the desktop, copied there earlier from the USB stick.
  7. Once this is all done, you can dual boot into Windows on the Mac as long as you have that SSD drive handy.
Figure 1: Remember to use the Action menu

So far everything works, and it is happily installing Visual Studio 2019. I even tried Cortana and the mic and speakers are working well. I did a quick Skype test call and the webcam is working well too.

I do want to document the steps that I performed with Parallels when installing Windows 10 onto the SSD. Those steps were not intuitive.

Step 1: Choose the Install Windows or another OS
Step 2: Choose Manually
Step 3: Don’t choose anything, but check the “Continue without a source” box at the bottom left hand corner

After this, stop the virtual machine and make the following custom configurations:

Step 4a: Select Hardware and configure the Hard Disk
Step 4b: Make sure your external media is plugged in, and select it in the Source. For example, Physical disk: Kingston DataTraveler 3.0 Media (disk2)
Step 5a: Change the boot order so that you can boot from CD, and connect the CD to your Windows ISO (not shown)

Start the Virtual Machine and it will go through the first part of the Windows installation. Once that is completed, it will reboot. Instead of booting from the external media, it will boot from the CD ISO image again. Simply shut down the VM and change the boot order again.

Step 5b: Change the boot order again to Hard Disk first, and restart the VM to complete its second part of the install

Once Windows 10 completes its installation, it will go through a user account setup process. If you are connected to the Internet during this stage, Windows 10 will force you to either use an existing Microsoft account or create one. This is unfortunate, but go ahead and create a temporary one. Remember to create a local administrator account and remove this temporary Microsoft account as the final step of the Windows setup.

Remember to copy the Bootcamp drivers from the USB stick to the Windows desktop before completing and shutting down the virtual machine.

Now you are ready to restart the Mac and dual boot into the external drive by holding the Option key while the machine restarts. The final step is to run the Bootcamp Setup.exe program, which should be located inside the Bootcamp folder that you previously copied to the desktop. This is the last step of the Windows configuration on the SSD drive, and you can restart your Mac and dual boot into Windows one final time.

You are now running Windows natively on the Mac’s metal, without any emulators or virtual machines. This process is great for revitalizing old MacBooks lying around, especially for students who need a Windows computer for their curriculum but still want to retain macOS. For more contemporary Macs, the small form factor and the speed of the Samsung T5 drive make it a great fit for this type of situation. This is very cool!

Update: Potential Trouble with Major Windows Update

I have been told that a major Windows update could encounter an error and that a registry setting is required to fix this. The following page has more information. In summary, you have to change the registry key PortableOperatingSystem from 1 to 0. This key can be found at the registry location HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control. Thanks to Martin Little for this very helpful information.
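For convenience, the same change can be made from an elevated Command Prompt with something along these lines (a sketch; verify the key path before running it):

reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v PortableOperatingSystem /t REG_DWORD /d 0 /f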

Update: Macs with Secure Boot using the T2 Chip

To allow a Mac with the T2 chip to boot from an external drive, certain settings have to be made in the Startup Security Utility. This utility can be accessed via the Mac’s recovery mode, under the Utilities menu. You want to disable Secure Boot and allow booting from external media. Since Secure Boot is disabled, set a firmware password to prevent a bad actor from booting their own operating system with their own live USB key.

Encrypted Live USB Stick

The goal is to create a USB key that contains a Linux based operating system. Any Linux compatible computer can then be booted with this USB key, temporarily borrowing the host computer. The hosted Linux OS can then access an encrypted partition that houses important private information that may be helpful in an emergency. This technique offers the maximum portability of accessible, private information such as your will, financial data, credentials, etc.

I previously had a USB key formatted with an encrypted Mac filesystem storing the same information. However, this was inconvenient because you would need to find a Mac in an emergency situation.

In the Linux community, you can create a Live USB key. The concept is an operating system that runs off the USB key on any computer you can plug the key into. However, many of these Live USB distributions do not remember any changes that you make while using the operating system. The next time you boot from the Live key, all your previous changes are gone, and the Linux environment reverts back to its original, pristine state. To remember changes between uses, they have to be “persisted”.

I set out to find the best method for creating a Live Linux USB that operates with an encrypted persistent partition.

All the commands in this article were performed within an Ubuntu 18.04 LTS Desktop install. I installed this version on both VirtualBox and Parallels on the Mac. Both worked beautifully, but Parallels has smoother integration with the Mac.

I first tried the Kali distribution, using the instructions in this USB Persistence & Encrypted Persistence article (Article 1). However, the USB stick that I was using, a Kingston DTSE9 G2 USB 3.0 32GB, was simply way too slow on writes, making the Live USB almost unusable.

I searched for an alternative USB stick and settled on the SanDisk 64GB Ultra Fit USB 3.1 Flash Drive. This new USB stick’s write performance was 4x faster than the Kingston.

After learning more about initramfs hooks, boot loaders, and a refresher on the UEFI and BIOS booting processes and partition layout strategies for USB storage devices, I decided to roll my own Live USB using Ubuntu Desktop as a base, along with the mkusb tool for the initial layout. The reason for the change is that I already have Ubuntu elsewhere in the house, so standardization is probably a better bet.

To improve performance further, I decided that it was not necessary to encrypt the persistent partition where the system configuration updates will be stored. Instead, I would create my own encrypted partition to store only the private data that requires protection. Article 1 also provided details on how to use LUKS to encrypt any Linux partition, so my exercise with Kali Linux was not a total waste of time.

Before running mkusb, I needed to install it first by doing the following:

sudo add-apt-repository universe
sudo add-apt-repository ppa:mkusb/ppa
sudo apt-get update
sudo apt-get install mkusb mkusb-nox usb-pack-efi

I ran the mkusb tool (after sudo su - )1, with the following options:

We also chose msdos so that more computers will be able to boot from it. Once mkusb completes, we need to perform some custom partition layout. We use the gparted program for this purpose; the completed partition layout will look something like this:

Final MBR Partition Table

We first deleted the original usbdata partition and grew the extended partition (/dev/sdb2) to about 18 GB: approximately 6 GB for casper-rw, where the system will store any custom configurations or upgrades made since this Live USB key was created. We then created another logical partition called Personal, around 12 GB in size, which will be encrypted and where we will store private, sensitive data for emergency use.

The remaining space is allocated to USBDATA, a last primary partition for normal USB data sharing, the typical use case for a USB stick. We also want to make sure that the other FAT32 (usbboot) partition is not visible in Windows, by setting the hidden partition flag. We did that with the gparted program as well.
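If gparted is not handy, the same flag can be toggled from the command line with parted — a sketch, assuming the usbboot FAT32 partition is partition 1 on /dev/sdb:

sudo parted /dev/sdb set 1 hidden on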

Once the partition table is completed, we can now encrypt the Personal (/dev/sdb6) partition. For this, we went back to Article 1, which gave us the following instructions.

~# cryptsetup --verbose --verify-passphrase luksFormat /dev/sdb6
 WARNING!
 This will overwrite data on /dev/sdb6 irrevocably.
 Are you sure? (Type uppercase yes): YES
 Enter passphrase for /dev/sdb6: 
 Verify passphrase: 
 Key slot 0 created.
 Command successful.

~# cryptsetup luksOpen /dev/sdb6 myusb
Enter passphrase for /dev/sdb6:

~# mkfs.ext4 -L Personal /dev/mapper/myusb

~# cryptsetup luksClose /dev/mapper/myusb

All done! Now we have a USB stick that can boot any Ubuntu compatible computer. I can store my personal data in a very safe and private way within the encrypted Personal partition, while any changes I make to the system will be preserved between uses of the USB stick. On top of it all, the USB stick still has 40+ GB (~37.5 GiB) of storage for normal USB transfer usage.
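For day-to-day use, accessing the encrypted Personal partition is just a matter of opening the LUKS container and mounting it, along these lines (assuming the partition is still /dev/sdb6):

sudo cryptsetup luksOpen /dev/sdb6 myusb

sudo mount /dev/mapper/myusb /mnt

sudo umount /mnt

sudo cryptsetup luksClose /dev/mapper/myusb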

I spent some time copying some confidential information that I think I will need in an emergency into the Personal partition. I wanted to duplicate the finished Live USB key, so that both my wife and I will have a copy always available on our physical keychains.

I did this on my Mac, and the command to duplicate the USB drive is:

sudo dd if=/dev/rdisk2 of=/dev/rdisk3 bs=4m conv=notrunc
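Before running the dd above, the disk identifiers can be confirmed and the target disks unmounted with diskutil — a sketch, assuming disk2 is the source and disk3 the copy, as in the command above:

diskutil list

diskutil unmountDisk /dev/disk2

diskutil unmountDisk /dev/disk3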

If the USB key ever ends up lost, then whoever picks it up will need to:

  • Recognize that this is a bootable USB key, otherwise it will just look like a 40GB USB flash drive;
  • Get the password needed to log in to Linux; I thought about installing two-factor authentication but decided not to, because any good hacker can simply access the partition from another Live key;
  • If they do mount the partition manually, then they still need the LUKS key to decrypt the partition; I made the LUKS key different from the OS password and twice as long.

I think the risk is worth the benefit of having critical info around in case of an emergency.

Update: WiFi on MacBooks

It looks like MacBooks use Broadcom WiFi chips, and most Linux distributions do not ship with these drivers. This can be easily solved by installing the following software:

sudo apt update
sudo apt install bcmwl-kernel-source

Even with the above software installed, there is still a little ritual:

  1. Launch the “Software and Updates” application;
  2. Select the “Additional Drivers” tab;
  3. Select “do not use this driver” and allow the process to go through and reboot the system;
  4. Re-enter the system and repeat steps 1 & 2, and then select the Broadcom drivers;
  5. Without rebooting, WiFi networks should be available for use

Unfortunately the above ritual will have to be performed every time the Live USB stick is powered off.

Update: Tried Linux Live Kit

I wanted to further customize my Live USB key. Instead of keeping a persistent partition, I thought I would keep a Linux VM at home and ensure that it is up to date and customized. At certain intervals, I would then create a Live USB key from the VM install.

I tried Linux Live Kit, but the results were disappointing. I was able to create a bootable USB key that worked, but the OS did not recognize the MacBook’s keyboard or trackpad. For some reason, the required drivers did not get bundled during the process. I’ll have to read up on how to create a Live USB key from scratch rather than depending on these tools, but it is more complicated than I thought, so for now this idea will have to be shelved until I have more time.

1For some reason mkusb will not set up live persistence correctly if I simply do a sudo mkusb or run it under a non-root account. The only way I can get it to work is to run it within a root login session.

NAS RAID-1 Fail

This past weekend my media NAS server was intolerably slow. When I investigated, I found out that one of the RAID-1 partitions was experiencing read errors and timing out. I decided to risk a reboot, and to my surprise the RAID-1 partition did not recover with one failed drive; instead, mdstat reported it with an inactive status, something like this:

md2 : inactive sdc1[0](S)

After some Google searching, I found that I had to do the following to resurrect the md2 device.

mdadm --stop /dev/md2
mdadm --assemble --force /dev/md2

This reactivated the md2 partition. I replaced the failed drive and re-added the new drive to the md2 device. The RAID-1 partition is now rebuilding.
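For reference, re-adding the new drive looks something like this (the partition name below is a placeholder for whatever the replacement drive shows up as):

sudo mdadm --manage /dev/md2 --add /dev/sdc1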

The inactive state is a new experience for me, so this was a bit of a surprise.

During this exercise I also found out that the SATA connectors on my SATA add-on card were loose, causing intermittent connections. I will have to find a way to address this in the future.

NVMe SSD with LVM Cache

I have been a huge fan of Apple’s Fusion Drives. They are an excellent compromise, offering affordable mass storage while still giving you SSD performance. The concept is simple: pair a fast but small SSD with a large but slow, and much more affordable, mechanical HDD. You get good performance and lots of storage without breaking the bank.

I had falsely assumed that this capability only existed in Apple’s macOS operating system. This week I was pleasantly surprised to discover that LVM Cache can do more or less the same thing on Linux. This newfound knowledge, along with an excellent deal on a 500GB NVMe Samsung 970 Evo Plus M.2 drive, gave me the itch to experiment this weekend with my NAS media server.

The hardware was easy enough to install, but I had to move one of the existing SATA connections because the M.2 slot on the motherboard shares a PCIe bus with a pair of SATA connections. Luckily I bothered to check the motherboard manual, otherwise I would have been scratching my head while the server failed to boot.

The software configuration was a bit more involved. Before I purchased the NVMe card, I did some experimentation with two external USB drives, one SSD and one HDD. I found this article super helpful in configuring LVM Cache with my test drives. However, these configurations were not fully restored after a reboot. After many hours of research on the Internet, I found this article indicating that my Ubuntu Linux distribution was missing the thin-provisioning-tools package. I also experimented with the two different cache modes that are available, writethrough and writeback. I found that the writeback mode was a bit buggy and did not sync the cache and the storage drive. Yet another article to the rescue.

lvchange --cachesettings migration_threshold=16384 vg/cacheLV

I preferred the writeback mode due to its better write performance characteristics. Apparently, to fix the issue, I had to increase the migration threshold to something larger than the default of 2048 because the chunk size was too large.
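Incidentally, the thin-provisioning-tools package mentioned earlier installs the usual way on Ubuntu:

sudo apt install thin-provisioning-tools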

Here are the steps I took to configure my existing logical volume (airvideovg2/airvideo) to be cached by the NVMe drive that I just purchased. I first had to partition the NVMe drive.

Model: Samsung SSD 970 EVO Plus 500GB (nvme)
 Disk /dev/nvme0n1: 500GB
 Sector size (logical/physical): 512B/512B
 Partition Table: gpt
 Disk Flags: 
 

 Number  Start   End    Size   File system  Name     Flags
  1      1049kB  500GB  500GB               primary
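For reference, a layout like the one above can be created with parted along these lines (a sketch; adjust the device name if yours differs):

sudo parted /dev/nvme0n1 mklabel gpt

sudo parted -a optimal /dev/nvme0n1 mkpart primary 0% 100%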

Create an LVM physical volume with the NVMe partition created previously (/dev/nvme0n1p1) and add it to the existing airvideovg2 volume group.

sudo pvcreate /dev/nvme0n1p1


sudo vgextend airvideovg2 /dev/nvme0n1p1

Create a cache pool logical volume, set its cache mode to writeback, and establish the migration threshold setting.

sudo lvcreate --type cache-pool -l 100%FREE -n lv_cache airvideovg2 /dev/nvme0n1p1



sudo lvchange --cachesettings migration_threshold=16384 airvideovg2/lv_cache

sudo lvchange --cachemode writeback airvideovg2/lv_cache

Finally link the cache pool logical volume to our original logical volume.

sudo lvconvert --type cache --cachepool airvideovg2/lv_cache airvideovg2/airvideo

Now my original logical volume is cached, and I have gained SSD performance economically on my 20TB RAID setup for less than $200. Below is my final volume listing.

$ sudo lvs -a
   LV               VG          Attr       LSize   Pool       Origin           Data%  Meta%  Move Log Cpy%Sync Convert
   airvideo         airvideovg2 Cwi-aoC---  20.01t [lv_cache] [airvideo_corig] 0.01   11.78           0.00            
   [airvideo_corig] airvideovg2 owi-aoC---  20.01t                                                                    
   [lv_cache]       airvideovg2 Cwi---C--- 465.62g                             0.01   11.78           0.00            
   [lv_cache_cdata] airvideovg2 Cwi-ao---- 465.62g                                                                    
   [lv_cache_cmeta] airvideovg2 ewi-ao----  64.00m                                                                    
   [lvol0_pmspare]  airvideovg2 ewi-------  64.00m      

We can also use the command below to get a more detailed listing.

sudo lvs -a -o+name,cache_mode,cache_policy,cache_settings,chunk_size,cache_used_blocks,cache_dirty_blocks

Upgrade completed. We’ll see how stable it is in the future.

Two New 8TB Drives for Our NAS

Our NAS has run out of space again. I saw that the Seagate IronWolf 8TB NAS Hard Drive was on sale at Newegg for $309 CAD. I jumped at the chance and purchased two.

I am now following the same steps I outlined in this post, replacing two old 4TB drives with these two new 8TB drives.

So far so good. Hopefully, when all is said and done, my NAS will have a total of 18TB in a RAID-1 configuration across six hard drives: two 4TB, two 6TB, and the two new 8TB.

I noticed that I could fit two more drives in my chassis and may decide to re-add the two old 4TB drives, but first I’ll have to check if my power supply can handle the demand.

I really like this mdadm and LVM setup.

Update: After two mdadm syncs, each of which took around 8 hours, and a pvresize that took another 5 hours, I had to convert the filesystem from 32-bit to 64-bit using these very helpful instructions. Only after converting to 64-bit could I expand the existing filesystem beyond 16TB. It was a learning and yet rewarding experience. The next step is to reuse the two old 4TB drives in the same chassis and add them to the logical volume.
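From memory, the 32-bit to 64-bit conversion itself boils down to something like the following on the unmounted volume (a sketch; the linked instructions are the authority here, and the e2fsprogs version must be recent enough to support resize2fs -b):

sudo umount /mnt/airvideo

sudo e2fsck -f /dev/airvideovg2/airvideo

sudo resize2fs -b /dev/airvideovg2/airvideo

sudo resize2fs -p /dev/airvideovg2/airvideo

sudo mount /mnt/airvideo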