Wake-on-LAN for Linux

My sons upgraded their gaming computers last Christmas, and I ended up using their old parts to build a couple of Linux servers running Ubuntu Server. The idea is to use these extra servers as video encoders, since they have dedicated GPUs. However, the GPUs are also pretty power-hungry. Since the servers don’t need to be up 24 hours a day, I thought it would be good to keep them asleep until they are required. At the same time, it would be pretty inconvenient to physically power them up whenever I needed them, so the thought of configuring Wake-on-LAN came to mind.

I found this helpful article online, and I first confirmed that the network interfaces on the old motherboards support Wake-on-LAN (WOL). Below is the series of commands I executed to find out whether WOL is supported and, if so, to enable it.

% sudo nmcli connection show
NAME                UUID                                  TYPE      DEVICE
Wired connection 1  d46c707a-307b-3cb2-8976-f127168f80e6  ethernet  enp2s0

% sudo ethtool enp2s0 | grep -i wake
	Supports Wake-on: pumbg
	Wake-on: d

The line that reads,

Supports Wake-on: pumbg

indicates the WOL capabilities, and the line that reads,

Wake-on: d

indicates its current status. Each letter has a meaning:

  • d (disabled), or
  • triggered by
    • p (PHY activity), 
    • u (unicast activity),
    • m (multicast activity),
    • b (broadcast activity), 
    • a (ARP activity),
    • g (magic packet activity)

We will use the magic packet method. Below are the commands used to enable WOL based on the magic packet trigger.

% sudo nmcli connection modify d46c707a-307b-3cb2-8976-f127168f80e6 802-3-ethernet.wake-on-lan magic

% sudo nmcli connection up d46c707a-307b-3cb2-8976-f127168f80e6
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)

% sudo ethtool enp2s0 | grep -i wake
	Supports Wake-on: pumbg
	Wake-on: g

The above changes will persist even after the machine reboots. We put the machine to sleep by using the following command:

% sudo systemctl suspend

We need the machine’s MAC address and the network’s broadcast address to wake the computer up using the wakeonlan utility.

% ifconfig
enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.168.185  netmask 255.255.255.0  broadcast 192.168.168.255
        inet6 fd1a:ee9:b47:e840:6cd0:bf9b:2b7e:afb6  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::41bc:2081:3903:5288  prefixlen 64  scopeid 0x20<link>
        inet6 fd1a:ee9:b47:e840:21b9:4a98:dafd:27ee  prefixlen 64  scopeid 0x0<global>
        ether 1c:1b:0d:70:80:84  txqueuelen 1000  (Ethernet)
        RX packets 33852015  bytes 25769211052 (25.7 GB)
        RX errors 0  dropped 128766  overruns 0  frame 0
        TX packets 3724164  bytes 4730498904 (4.7 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The ifconfig output above gives us the broadcast address (192.168.168.255) and the MAC address (1c:1b:0d:70:80:84). Once we have the required information, we can wake the computer up remotely by executing the wakeonlan command from another computer.

% wakeonlan -i 192.168.168.255 -p 4343 1c:1b:0d:70:80:84
Sending magic packet to 192.168.168.185:4343 with 1c:1b:0d:70:80:84

Note that the IP address used above is the broadcast address, not the machine’s direct IP address. Now I can put these servers to sleep and wake them up remotely only when I need them.
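To make this even more convenient, the wake-up can be wrapped in a small helper script that sends the magic packet and then waits for SSH to answer. Below is a minimal sketch using the addresses gathered above; the script name and the 2-minute timeout are arbitrary choices of mine.

#!/bin/bash
# wake-encoder.sh - send a WOL magic packet, then wait for the server to answer on SSH.
BROADCAST=192.168.168.255   # network broadcast address
PORT=4343                   # port used for the magic packet
MAC=1c:1b:0d:70:80:84       # MAC address of the sleeping server
HOST=192.168.168.185        # the server's direct IP once it is awake

wakeonlan -i "$BROADCAST" -p "$PORT" "$MAC"

# Poll the SSH port every 5 seconds, for up to 2 minutes.
for _ in $(seq 1 24); do
    if nc -z -w 5 "$HOST" 22; then
        echo "Server is awake."
        exit 0
    fi
    sleep 5
done

echo "Server did not wake up in time." >&2
exit 1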

EXT4-fs Errors on NVME SSD

In my previous post, I replaced the NVMe boot disk on our media server, thinking that the disk was defective because the file system (EXT4-fs) was reporting numerous htree_dirblock_to_tree:1080 errors.

The errors continued with the new disk, so I can eliminate hardware as the cause of the issue.

I noticed that the htree_dirblock_to_tree:1080 errors were attributed to the tar command, and their timestamps coincided with when the media server is being backed up. Apparently, the backup process, which uses tar, is triggering these errors.

This backup process has remained unchanged for quite some time and has worked really well for us. I guess there is a bug in the kernel or in the tar command that is not quite compatible with NVMe devices.

I had to resort to finding an alternative backup methodology. I ended up using the rsync method instead.

sudo rsync --delete \
  --exclude 'dev' \
  --exclude 'proc' \
  --exclude 'sys' \
  --exclude 'tmp' \
  --exclude 'run' \
  --exclude 'mnt' \
  --exclude 'media' \
  --exclude 'cdrom' \
  --exclude 'lost+found' \
  --exclude 'home/kang/log' \
  -aAXv / /mnt/backup

It looks like this method is faster and can perform incremental backups. However, instead of backing up to an archive file, which I would later need to extract and prepare during the restoration process, I have to back up to a dedicated backup device. Since the old NVMe disk is perfectly fine, I reused it as my backup device and partitioned it with the same layout as the current boot disk.

Device          Start        End    Sectors   Size Type
/dev/sdi1        2048    2203647    2201600     1G Microsoft basic data
/dev/sdi2     2203648 1921875967 1919672320 915.4G Linux filesystem
/dev/sdi3  1921875968 1953523711   31647744  15.1G Linux swap

The only exception is that the first partition is not marked as boot and esp, so during the restoration process I will have to set those flags with the parted command, using the following commands:

set 1 boot on
set 1 esp on
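For reference, the non-interactive equivalent would look something like this, assuming the backup drive still shows up as /dev/sdi:

sudo parted /dev/sdi set 1 boot on
sudo parted /dev/sdi set 1 esp on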

The idea is that at 3am every night, I will back up the root filesystem to the second partition of the backup drive. If anything happens to the current boot disk, the backup drive can act as an immediately available replacement, after a grub-install preparation as mentioned in the previous article.
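As a rough sketch, the nightly job could be a cron entry along the following lines; the script path, mount point, and log file are my own assumptions, with the script simply wrapping the rsync command shown earlier:

# /etc/cron.d/backup-root (hypothetical)
# Mount the backup partition and run the rsync backup at 3am every day.
0 3 * * * root mount /dev/sdi2 /mnt/backup && /usr/local/sbin/backup-root.sh >> /var/log/backup-root.log 2>&1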

Let us see how this new backup process works and hopefully, we can bid a final farewell to the htree_dirblock_to_tree:1080 errors!

Update: 2023-12-22

It looks like even with the rsync command, the htree_dirblock_to_tree:1080 errors still came back during the backup process. I decided to upgrade the kernel from vmlinuz-5.15.0-91-generic to vmlinuz-6.2.0-39-generic. Last night (2023-12-23 early morning) was the first backup after the kernel upgrade, and no errors were recorded. I hope this behavior persists and it is not a one-off.

Replacing NVME Boot Disk

A few months ago, the boot disk of our media server began to incur errors such as the ones below:

Dec 17 03:01:35 avs kernel: [32515.068669] EXT4-fs error (device nvme1n1p2): htree_dirblock_to_tree:1080: inode #10354778: comm tar: Directory block failed checksum
Dec 17 03:02:35 avs kernel: [32575.183005] EXT4-fs error (device nvme1n1p2): htree_dirblock_to_tree:1080: inode #13500463: comm tar: Directory block failed checksum
Dec 17 03:02:35 avs kernel: [32575.183438] EXT4-fs error (device nvme1n1p2): htree_dirblock_to_tree:1080: inode #13500427: comm tar: Directory block failed checksum

The boot disk is an NVMe device, and I thought the errors might be due to overheating, so I purchased and installed a heat sink. Unfortunately, the errors persisted.

I decided to replace the boot disk with the exact same model, a Samsung 980 Pro 1TB. This should have been a pretty easy maintenance task: clone the drive and swap in the new one. However, Murphy is sure to strike!

My usual go-to cloning utility is Clonezilla; unfortunately, it did not like cloning NVMe drives and resulted in a kernel panic with every version I tried. I am not sure what the problem is here. It could be Clonezilla or the USB 3.0 NVMe enclosure that I was using for the new disk.

I resigned myself to using the dd command:

dd if=/dev/source of=/dev/target status=progress

Unfortunately, this would have taken way too long, something like 20+ hours, so I gave up on this approach.
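In hindsight, part of the slowness may have been dd’s small default block size; a larger block size, as in the sketch below, might have been worth a try before abandoning the approach (the bs value is just a guess):

dd if=/dev/source of=/dev/target bs=4M conv=fsync status=progress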

I decided to do a good old restore of the nightly backup. I started by cloning the partition table:

sfdisk -d /dev/olddisk | sfdisk /dev/newdisk

I then proceeded with restoring the nightly backup. Murphy strikes twice! The nightly backup was corrupted! I guess that is not surprising when the root filesystem’s integrity is in question, which is the whole reason we are doing this exercise.

Without the nightly backup, I had to resort to a live backup. I booted the system again and performed:

sudo su -
mount /dev/new_disk_root_partition /mnt/newboot
cd /
tar -cvpf - \
    --exclude=/tmp \
    --exclude=/home/kang/log \
    --exclude=/span \
    --exclude="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache" \
    --one-file-system / \
  | tar -xvpf - -C /mnt/newboot --numeric-owner

The above took about an hour. I then copied the /span directory manually, because this directory tends to change while the server is up and running.

With all the contents copied, I realized I had forgotten how to install GRUB and had to re-teach myself. I booted the machine from a live Ubuntu USB, and then mounted both the root and EFI partitions.

nvme1n1                              259:0    0 931.5G  0 disk
├─nvme1n1p1                          259:1    0     1G  0 part  /boot/efi
├─nvme1n1p2                          259:2    0 915.4G  0 part  /
└─nvme1n1p3                          259:3    0  15.1G  0 part  [SWAP]

And install GRUB.

sudo su -
mkdir /efi
mount /dev/nvme1n1p1 /efi
mount /dev/nvme1n1p2 /mnt
grub-install --efi-directory /efi --root-directory /mnt

I also had to fix /etc/fstab to ensure the root partition and the /boot/efi partition are referenced by their correct UUIDs. The blkid command came in handy to find the UUIDs. For the swap partition, I had to run the mkswap command before I could get its UUID.
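To illustrate that step, the workflow looks roughly like the following; the UUIDs in the fstab entries are placeholders, not the real values:

# Find the UUIDs of the new partitions
blkid /dev/nvme1n1p1 /dev/nvme1n1p2

# Initialize swap on the third partition and note the UUID it reports
mkswap /dev/nvme1n1p3

# Then reference the UUIDs in /etc/fstab on the new root, e.g.:
# UUID=XXXX-XXXX                            /boot/efi  vfat  umask=0077        0 1
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /          ext4  errors=remount-ro 0 1
# UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy none       swap  sw                0 0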

After I rebooted, I reinstalled GRUB one more time with the following as super user:

grub-install /dev/nvme1n1

I also updated the initramfs using:

update-initramfs -c -k all

For something that should have taken less than an hour, it took the majority of the day. The server is now running with the new NVMe replacement disk. Hopefully this resolves the file system corruption. We will have to wait and see!

Update: The Day After

The same errors occurred again! I noticed that these corruptions occur when we do a system backup. How ironic! I later confirmed that running the tar command on the root directory during the backup process can cause this error. I now have to figure out why. I will disable the system backup for the next few days to see whether the errors come back.

Giving Old MacBooks New Life

In the past, when MacBooks could no longer run the latest macOS operating system, I typically relegated them to the physical archive. I know that security patches can still be applied for some time, but missing the latest features can sometimes be an impediment to other shared devices within the Apple walled garden. For example, your latest iPhone may not work as well with an older MacBook.

Recently I found out about the OpenCore Legacy Patcher (OCLP). This is an excellent tool that intercepts the boot process so that newer operating systems can run on older hardware. OCLP’s explanation of the Boot Process does a much better job of explaining this than I can, so please go check it out.

I am not going to explain the step-by-step process of using OCLP. Mr. Macintosh does a much better job than I can.

Video has 2 Examples: 1) Fresh Sonoma Install & 2) Upgrade Install to Sonoma

I used the fresh install process to install Sonoma (the latest macOS at the time of this writing). I did this successfully on the following computers:

  • MacBook Air Early 2015
  • MacBook Pro 15″ 2016
  • MacBook Pro 15″ 2017

Once Sonoma is installed, the new operating system can also receive future updates from Apple. The one exception is that before installing such updates, one has to ensure that OCLP itself is updated first. The update process is explained here.

Since these computers are relegated to legacy status anyway, this process does not carry much risk, and it perhaps adds more life to your old hardware.

Rescue USB using Ventoy

In a previous post, I described how I created an encrypted USB as a mechanism to pass information to my sons should anything happen to me or my wife during our vacation last year.

Well we are about to go on another long trip, and I decided to streamline the process with Ventoy.

Instead of creating a custom Live image whenever a new Ubuntu distribution is released, I have decided to use Ventoy to separate the Linux distribution away from the encrypted data.

Even though Ventoy supports persistent live distributions, I stayed away from them because I want to be able to replace the current distributions on the USB with new ones with the least amount of work.

Below are the instructions that I used to create this Ventoy USB in an Ubuntu desktop environment.

Download Ventoy from https://github.com/ventoy/Ventoy/releases. Since we are on an Ubuntu operating system, we want to download the tar.gz file. Once the tar.gz file is downloaded, extract it and you should have a ventoy-X.Y.Z subdirectory, with X.Y.Z being the Ventoy version number.

Identify the target USB key device using the lsblk command (e.g. /dev/sdb), go to the ventoy-X.Y.Z subdirectory, and execute a command like the following:

sudo ./Ventoy2Disk.sh -I -r 10000 /dev/sdb

The above command reserves 10000 MB of space at the tail end of the USB key, which we can use for a LUKS (encrypted) partition. We can create this LUKS partition, called Succession, using the GNOME Disks app. We use a passphrase that is at least 24 characters long; it can be longer if you like, but it becomes quite cumbersome to type.
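GNOME Disks handles this nicely, but the same thing can be done from the command line with cryptsetup. A minimal sketch, assuming the partition created in the reserved space ends up as /dev/sdb3:

sudo cryptsetup luksFormat /dev/sdb3            # prompts for the passphrase
sudo cryptsetup open /dev/sdb3 succession       # unlock the LUKS container
sudo mkfs.ext4 -L Succession /dev/mapper/succession
sudo cryptsetup close succession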

Mount the LUKS partition, and then copy the private data to it. The partition was previously named Succession, and my private data resides on the bigbird host.

scp -r bigbird:/Volumes/Personal\\ Information /media/kang/Succession

After the copying is completed, ensure that the “Personal Information” directory has the proper permissions set (e.g. chmod 777 "Personal Information").

Assuming that all the ISO images are in the ISO directory, copy all the ISO images by executing the following command:

tar cf - ISO | (cd /media/kang/Ventoy; tar xvf - )

I included the following ISO images:

  • clonezilla-live-3.1.0-22-amd64.iso
  • kali-linux-2023.3-live-amd64.iso
  • kali-linux-2023.3-live-arm64.iso
  • ubuntu-22.04.3-desktop-amd64.iso
  • ventoy-1.0.96-livecd.iso
  • Win10_22H2_English_x64v1.iso

Note that not all of the above ISO images are required, but the live Linux distributions are convenient in case you need to access the emergency information in the Succession LUKS partition in a hurry. The other ISOs are just handy to have.

NOTE: When booting a live Linux distribution on a PC with a discrete GPU that may not be compatible with it, you may need to use the nomodeset boot option.

Below is a YouTube video I made that shows how to gain access to the private encrypted data on the USB.

Booting from the USB on a gaming PC using a discrete GPU

Experimental Machine for AI

The advent of the Large Language Model (LLM) has been in full swing within the tech community since the debut of ChatGPT by OpenAI. Platforms such as Google Colab, and similar offerings from Amazon and Facebook, allow software developers to experiment with LLMs. The hosted model of data-center-based GPUs makes training and refinement of LLMs tolerable.

What about using LLM on a local computer away from the cloud?

Projects such as llama.cpp by Georgi Gerganov make it possible to run Facebook’s open-sourced Llama 2 model on a single MacBook. The existence of llama.cpp gives hope of building a desktop that is powerful enough to do some local LLM development away from the cloud. This post documents an experimental build of a desktop machine using parts readily available from the Internet, to see if we can do some AI development with LLMs.
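To give a flavour of what that looks like, building llama.cpp with CUDA support and running a quantized model was, as of late 2023, roughly the following; the model file name is a placeholder, and the flags may change as the project evolves:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_CUBLAS=1                   # build with cuBLAS/CUDA offload
./main -m models/llama-2-7b.Q8_0.gguf -ngl 99 -p "Hello"   # offload all layers to the GPU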

Below is a list of parts sourced from EBay, Amazon, and CanadaComputers, a local computer store. All prices are in Canadian dollars and include relevant taxes.

  • NVIDIA Tesla P40 24GB GDDR5 Graphics Card (sourced from EBay): $275.70
  • Lian-Li Case O11D Mini -X Mid Tower Black (sourced from Amazon): $168.49
  • GDSTIME 7530 75mm x 30mm 7cm 3in 12V DC Brushless Small Mini Blower Cooling Fan for Projector, Sleeve Bearing 2PIN (sourced from Amazon): $16.94
  • CORSAIR Vengeance LPX 64GB (4 x 32GB) DDR4 3200 (PC4-25600) C16 1.35V Desktop Memory – Black (sourced from Amazon): $350.28
  • AMD Ryzen 7 5700G 8-Core, 16-Thread Unlocked Desktop Processor with Radeon Graphics (sourced from Amazon): $281.35
  • Noctua NH-D15 chromax.Black, Dual-Tower CPU Cooler (140mm, Black) (sourced from Amazon): $158.14
  • Asus AM4 TUF Gaming X570-Plus (Wi-Fi) ATX motherboard with PCIe 4.0, dual M.2, 12+2 with Dr. MOS power stage, HDMI, DP, SATA 6Gb/s, USB 3.2 Gen 2 and Aura Sync RGB lighting (sourced from Amazon): $305.09
  • Samsung 970 EVO Plus 2TB NVMe M.2 Internal SSD (MZ-V7S2T0B/AM) (sourced from Amazon): $217.72
  • Lian Li PS SP850 850W APFC 80+ GOLD Full modular SFX Power Supply, Black (sourced from CanadaComputers): $225.99
  • Miscellaneous 120mm case fans and cables purchased from CanadaComputers: $63.17

The total cost of the above materials is $2,062.87 CAD.

The Nvidia Tesla P40 (Pascal architecture) specializes in inferencing; its acceleration is geared toward INT8 operations and its FP16 throughput is very limited, so it may not be optimal for machine learning. However, recent claims have been made that INT8 / Q8_0 quantization can yield some promising results. Let us see what our experimentation will yield once the machine is built.

A custom fan shroud had to be designed and 3D printed because the P40 does not natively come with active cooling; it was originally designed to operate in a data center, where cooling is provided by the server chassis. The custom shroud design is posted on Thingiverse, and some photos of the finished shroud are shown below.

Note that M3 screws were used to secure the shroud to the P40 GPU card. The GDSTIME fan came with the screws.

I also made a mistake by initially getting a 1000W ATX power supply that ended up not fitting the case, because the case is built for SFX and SFX-L power supplies. Lesson learned!

Once the machine was built, I performed a 12-hour MemTest86+ run. It turned out that running the memory at its XMP profile was a bit unstable, so I had to clock the memory back from its 3200MHz rating to 3000MHz.

After more than 12 hours with 3 passes.

The BIOS settings had to be configured so that Resize BAR is ON. This is required for the P40 to function properly.

Turn on Resize BAR

The next step was to install Ubuntu 22.04.3 LTS along with the Nvidia GPU and CUDA drivers. The latter was quite challenging. The traditional way of installing via the package manager did not work. The best way is to go to this site and pick the runfile, as shown below:

Be sure to use the runfile

The runfile had to be run from the console in recovery mode, because the installation will fail if an X11 window manager is running. Also, all previous Nvidia drivers had to be removed and purged; the default Ubuntu installation process may have installed them.

A detail that was left out of the instructions is to set the appropriate shell paths once the installation is completed. The following changes were made in /etc/profile.d so that all users can benefit. If the login shell is zsh, then /etc/zsh/zshenv has to be changed as well. Without this change, nvcc and other CUDA toolkit commands will not be found. The same is true for CUDA-related shared libraries.

$cat /etc/profile.d/cuda-path.sh

export CUDA_HOME="/usr/local/cuda"

if [[ ! ${PATH} =~ .*cuda/bin.* ]]
then
    export PATH="${PATH}:/usr/local/cuda/bin"
fi

if [[ ! ${LD_LIBRARY_PATH} =~ .*cuda/lib64.* ]]
then
    export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/cuda/lib64"
fi

if [[ ! ${LD_LIBRARY_PATH} =~ .*/usr/local/lib.* ]]
then
    export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/lib"
fi
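After logging back in (or sourcing the file manually), a quick sanity check confirms the paths took effect:

source /etc/profile.d/cuda-path.sh
which nvcc        # should print /usr/local/cuda/bin/nvcc
nvcc --version    # should report the installed CUDA toolkit version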

In this hardware configuration the AMD CPU has integrated graphics, and the P40 does not have any HDMI or DisplayPort outputs. We need to change the X11 configuration so that it only uses the AMD integrated GPU while the P40 is dedicated to CUDA-based computation. The following configuration has to be placed in /etc/X11/xorg.conf:

$cat /etc/X11/xorg.conf

Section "Device"
    Identifier      "AMD"
    Driver          "amdgpu"
    BusId           "PCI:10:0:0"
EndSection

Section "Screen"
    Identifier      "AMD"
    Device          "AMD"
EndSection

The BusId can be obtained using the lspci command; be sure to convert any hexadecimal notation to decimal in the configuration file. Without this xorg.conf configuration, the Ubuntu desktop will not start properly.
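For example, if lspci reports the integrated GPU on bus 0a, the hexadecimal 0a becomes decimal 10, hence the BusId of "PCI:10:0:0" above. The output below is illustrative, not copied from this machine:

$ lspci | grep -i vga
0a:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cezanne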

When everything is done properly, the command nvidia-smi should show the following:

Fri Aug 25 17:33:31 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.10              Driver Version: 535.86.10    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla P40                      Off | 00000000:01:00.0 Off |                  Off |
| N/A   22C    P8               9W / 250W |      0MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

The machine is now ready for user account configurations.

A quick video encode using ffmpeg with hardware acceleration and CUDA was performed to test GPU usage. It was a bit of a challenge to compile ffmpeg with CUDA support; this is when I found out that I was missing the PATH configuration made above.
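A representative hardware-accelerated encode looks something along these lines (file names and preset are placeholders, not necessarily what I ran):

ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
       -c:v hevc_nvenc -preset p5 -c:a copy output.mkv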

For good measure, gpu-burn was run for an hour to ensure that the GPU is functioning correctly.

The next step is to download and set up the toolchain for LLM development. We will save that for another posting.

Update: The runfile (local) installation did not survive a system update via apt. I had to redo the installation using the deb (local) method. I guess since the GPU is not being used for the desktop, we no longer have to run the operating system in recovery mode when installing via the deb (local) method.

New 1.5Gbps Internet Service

On April 4th, I received a promotional offer from Rogers for Ignite Internet service at 1.5Gbps plus Streaming for $114.99 per month.

I procrastinated a bit because I wanted to make sure that I could actually make use of this service. However, when I checked my bill for April, I noticed that my total monthly charge is $102.99.

Note that the above price, prior to the discount, is $117.99. I was curious to see whether Rogers could get me a good deal without the Streaming service. I called the Rogers support line and reached a person who was not very helpful and simply quoted conditions and deals to me. AI will do a number on these types of jobs soon.

I decided to try an alternative route using Twitter (@RogersHelp). I direct-messaged Rogers on Twitter and received wonderful help. They offered me the 1.5Gbps service at only $104.99 (with a 24-month commitment). This will be roughly on par with my current payment, and I will get 50% more throughput.

There is another question: will my networking equipment make use of the 1.5Gbps? My networking setup has the Rogers Ignite WiFi Gateway (an ARRIS Group XB7 modem) connected with a Cat5e cable to my UniFi Dream Machine Pro, using one of its 1Gbps RJ45 ports.

Diagram: Internet → Rogers XB7 Modem → Cat5e → UniFi Dream Machine (UDM) Pro (firewall/router) → Home Network

How can we overcome the 1Gbps limit on the UDM Pro’s RJ45 port? Luckily, the UDM Pro also has a 10G SFP+ port. I went to Amazon and purchased a 10G SFP+ RJ45 copper module transceiver.

The module will auto-negotiate a 2.5Gbps to 10Gbps connection from the XB7 to my UDM Pro. Of course, I will not be getting 2.5G or 10G speeds; these are just the physical maximums of the respective devices. Rogers will throttle my inbound and outbound traffic to 1.5Gbps and 50Mbps respectively.

After installing the SFP+ module and rewiring the existing Cat5e cable, I had to reboot both the XB7 modem and the UDM Pro. Once everything came back up, I had another problem: how do I verify that I actually get 1.5Gbps? I cannot do it from any WiFi or wired device in my house, because they are all limited by the 1Gbps port speed of my networking switches. Once again, UniFi had thought of this already and provides a speed test on its management dashboard.

The tested speed seems to be better than expected.

As you can see from the above screenshot, we are now getting what we are paying for. I also performed a simultaneous test from two different machines that are routed through a switch with a 10Gbps connection to my UDM Pro, and each machine received a 700Mbps to 800Mbps download speed, which is around 1.5Gbps in aggregate. Mission accomplished.

UniFi just came out with a new firmware update that enables the UDM Pro to perform load balancing across more than one WAN connection. When the SkyLink service becomes more economically feasible, we can attach a satellite-based internet service as a complement to the existing Rogers service. This way, during a power outage, we can continue to get Internet.

Playing with Proxmox

Prior to the holidays in 2022, I upgraded my media NAS server as detailed here. After this upgrade, I repurposed the old server’s components and built another PC.

Originally, I was going to use this extra PC as a simple online media encoder, since encoding videos with the HEVC codec takes a lot of CPU power. I did this for about a month. My son Kalen had an old GTX 1060 6GB graphics card that he was going to list on Kijiji for resale. I offered to purchase the graphics card from him so that I could pair it up with this repurposed PC. The new idea was to turn this PC into my gaming PC. I don’t do much 3D-intensive gaming, so an older GPU is certainly good enough for me.

Off I went installing Windows 10 Pro on the PC. I also discovered the Windows Subsystem for Linux (WSL) at this time. I thought it would be a wonderful idea to have a gaming PC that could still double as a media encoder through a Linux distribution running under WSL. My hope was that Linux with WSL would yield near bare-metal performance. Long story short, the performance of ffmpeg, the tool that I use for video encoding, was disappointing. Apparently there is a bug in WSL v2 that forces ffmpeg to use only 50% of the CPU. There was nothing wrong with the concept of a dual-purpose PC for gaming with a handy Linux distribution for other endeavours.

The problem is with the Windows-hosted hypervisor, a software layer that runs between the hardware and the operating system. I know of another hypervisor called Proxmox, and this was a perfect opportunity to try it out. Before I installed Proxmox, I maxed out the memory of this repurposed PC to 64GB; it only had 16GB before, and I thought that would not be enough.

One of the worries I had was how to get raw GPU performance out of Proxmox. Apparently, there is a GPU passthrough option. Before installing Proxmox, I had to make some BIOS adjustments on the PC.

  • Enable IOMMU
  • Enable SVM Mode (same as Intel VT-x)
  • Enable AMD ACS

Only SVM Mode is required for Proxmox; the other two are required for GPU passthrough. After I installed the Proxmox server, I followed the instructions outlined in the following sites (a rough sketch of the host-side changes they describe follows the list):

  1. From 3os.org: GPU Passthrough to VM;
  2. From pve.proxmox.com;
  3. And from reddit.
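Distilled from those guides, the host-side changes are roughly the following; the GPU vendor:device IDs shown are placeholders and must come from lspci -nn on your own system:

# /etc/default/grub: enable IOMMU on an AMD system, then run update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# /etc/modules: load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# Bind the GPU (and its audio function) to vfio-pci, then rebuild the initramfs (run as root)
echo "options vfio-pci ids=10de:1c03,10de:10f1" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all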

The first site was the clearest and most helpful; I used the second and third sites as alternate and backup references. Thanks to the above sites, I was able to get Proxmox running and created two virtual machines (VMs). The first is an Ubuntu instance called workervm, and the second is a Windows 10 Pro instance with GPU passthrough, called win10. Below is a screenshot of the Proxmox administration site.

Proxmox control panel (click to enlarge)

Below is the workervm (Linux VM) configuration:

workervm configuration for Ubuntu instance

I had to make sure the processor type is set to [host] to get the most performance out of the virtual CPUs. The Windows VM configuration uses a different BIOS, specifically a UEFI BIOS, and we also have to ensure that the machine type is set to q35. The Windows VM additionally has the EFI Disk and TPM State configured, and of course the extra PCI Device representing the passed-through GPU. Check out the full configuration for the Windows 10 VM below:

win10 configuration for Windows 10 Pro instance
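For reference, these settings end up in the VM’s configuration file under /etc/pve/qemu-server/. An abridged, illustrative example follows; the VM ID, storage names, and PCI address are placeholders:

# /etc/pve/qemu-server/101.conf (abridged)
bios: ovmf
machine: q35
cpu: host
efidisk0: local-lvm:vm-101-disk-0,efitype=4m,size=4M
tpmstate0: local-lvm:vm-101-disk-1,size=4M,version=v2.0
hostpci0: 0000:01:00,pcie=1,x-vga=1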

After installing Windows 10 Pro, the network interface was not recognized. To remedy this, I had to install virtio-win as described by this site here. After the installation of virtio-win and a reboot, I had networking, connectivity to the Internet, and the Device Manager output from the Windows 10 Pro instance shown below. Notice that Windows recognized the native NVIDIA GeForce GTX 1060 6GB card.

Windows 10 Pro VM instance Device Manager

I tested the GPU throughput with some 3D rendering demos and a couple of games from Steam over Remote Desktop. The performance was okay, but not stellar. I did some more research, and apparently Parsec, a virtual desktop sharing tool, is better suited for remote gaming.

I went ahead and installed Parsec on both the Windows 10 Pro VM and my Mac mini, which I use to remotely play games on that VM. This worked out quite well.

Now the repurposed PC is a Proxmox server hosting as many VMs as the hardware can bear. The workervm instance can be used for video encoding and other generic Linux-oriented work or trials. The win10 instance will be used for gaming and for hosting our tax filing software, TurboTax, which only runs on Windows.

In the near future, I will also be testing out Proxmox containers instead of virtual machines. Containers are more lightweight and less resource-intensive. It will be another new adventure.

Panel Snow Coverage

Today is January 13, 2023. We had an icy snowstorm last night that lasted until this morning, and I was curious what the roof condition was like. Just how much of the panels was covered in snow?

Solar energy for today

Our peak energy production was at around 11am, when we generated a little over 800Wh, which is in line with what we typically get on a cloudy, misty winter day. In contrast, the best we have gotten so far was on January 7th at 1pm, when we generated 5,494Wh. That was a sunny day with no snow coverage on the panels.

A quick drone survey of our roof this afternoon at around 3pm.

I was kind of impressed that we got that much with so much of the panels covered. Watch the above video to see just how much of the panels are covered today. Our total production for today is only about 3,400Wh.

Below are the stats per panel.

Per panel generation statistics for today.

As you can see above, every panel contributed, even the covered ones! There will be two sunny days over the weekend, so we will see!

Update: 2023-01-14

I did another roof survey with my drone, since today was a clear, sunny day.

Roof survey on Jan. 14 (day after storm)
Solar energy production on Jan. 14

We generated over 10,000 Wh of energy today, about three times more than yesterday. The survey was conducted when it was still -6 ºC outside, well below freezing.

Managing Audio Books with Plex

Library Setup

I have a membership with Audible, and I sometimes get audio books from other sources as well. Recently I experimented with consolidating all of my audio books into a centralized place. Since I already have a Plex server running, I thought it would be a good place to do this.

I did a little research and came across a couple of very helpful articles:

  • A Reddit article;
  • and some really detailed information on GitHub.

The main points are:

  • I have a single folder to store all of my audio books. Inside the folder, each audio book is stored as an “m4b” file.
  • Ensure that audio books have a poster image and that their artist and album_artist tags are set to the author. Where appropriate, the audio book should also contain chapter metadata (see the tagging sketch after this list).
  • Download and install the Audnexus agent;
  • Create a music library on Plex by adding the audio book folder, and set the agent to Audnexus
Note the Agent setting
  • Ensure that the advanced option of “Store track progress” is checked.
Ensure that Store Track Progress is checked!
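On the tagging point above, a quick way to inspect or set the relevant tags on an m4b file is with ffmpeg and ffprobe. A minimal sketch, with the file names and author as placeholders:

# Inspect the existing tags and chapter metadata
ffprobe -hide_banner book.m4b

# Copy the audio untouched while setting the author tags
ffmpeg -i book.m4b -c copy \
       -metadata artist="Author Name" \
       -metadata album_artist="Author Name" \
       book-tagged.m4b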

Each book in the library will be represented as an album, and the author will be mapped to album artist. Once the library is created, you can download and play the audio books from the desktop using the Plex app. However, the more common use case is to listen to the audio books while on the go.

Using Prologue to Play Audio Books

We first have to download the Prologue app. I did not get any of the in-app premium functionality and just stayed with the free version.

Point the app to my Plex server’s URL, and all the audio books from the library become accessible and playable on an iPhone or iPad, with chapter, bookmark, and last-position support.

This is a really neat solution, and I am impressed how Plex and Prologue together formed a dynamic duo in this manner.