Plug-in Economics for Prius Prime

According to Toyota, our new 2020 Prius Prime PHEV gets around 4.3 L/100km in city driving. We will use this number since it is not too far off the combined driving figure of 4.4 L/100km. At the time of writing this post, the fuel price at our neighbourhood pump is $1.15/L. If you do some fancy math, the Prime will yield us 20.2 km per dollar invested at the pump (20.2 km/$).

Ontario Electricity Costs (Fall of 2019)

As depicted by the chart on the right, in Ontario we have three tiers of charging rates. In the winter the Prime can do about 35 km on a 9 kWh charge. The exact numbers are 40 km on 8.8 kWh, but that assumes perfect conditions, and we use some of the battery for heating the vehicle. This yields the following (a quick check of the math follows the table):

Tier       Yield
On-Peak    18.7 km/$
Mid-Peak   27.0 km/$
Off-Peak   38.5 km/$
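The yields are just 35 km divided by what a 9 kWh charge costs at each tier. Assuming the time-of-use rates on the chart are roughly 20.8, 14.4, and 10.1 cents per kWh (the Ontario rates in effect as of November 1, 2019), a quick bc check reproduces the table:

$ echo "scale=2; 35/(9*0.208); 35/(9*0.144); 35/(9*0.101)" | bc
18.69
27.00
38.50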

So by comparing the above numbers, it makes perfect sense to charge the vehicle during Off-Peak and Mid-Peak hours, and not so much during On-Peak hours. However, the On-Peak comparison is so close that if the mileage rating were 4.5 L/100km, it would be a wash.

With a bit more fancy math, you can calculate how much gas has to cost per litre before On-Peak charging makes sense. This turns out to be around $1.24/L.
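If you want to verify that figure, it is just a matter of solving 100 / (4.3 × price) = 18.7 km/$ for the pump price:

$ echo "scale=2; 100/(4.3*18.7)" | bc
1.24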

Hopefully you find this information helpful.

Let’s Plug-In

On October 30th, 2019, we purchased a Toyota Prius Prime 2020, choosing the Upgrade trim without the technology package. We traded in our 2012 Toyota Sienna 8 passenger Minivan with approx. 90,000km for $11,000. After all the government incentives, fees, taxes, and dealer’s rebates, we ended up forking out less than $27,000 for the vehicle. The only thing we opted for was the rust protection device.

We have now had this plug-in hybrid electric vehicle (PHEV) for almost a week. The vehicle is very comfortable to drive, and much more refined than my 2013 Subaru Impreza. The Prius offers three drive modes: Eco, Normal, and Power. I find the Eco mode too slow, with too much accelerator latency, so I prefer Normal mode. Power mode can be pretty fun, especially when you have a fully charged battery.

There are plenty of YouTube videos and written articles already talking about how the car drives, and I agree with their positive take on the Prius Prime. Therefore, I won’t repeat what has already been said. I will focus on what impact the ownership of a Prius Prime has on our residential electrical consumption.

We have yet to invest in a level 2 charger (240V, 16A) for the house, so we are just using a regular 120V outlet to charge the vehicle's 8.8 kWh battery. So instead of charging the vehicle in about 2 hours with a level 2 charger, we find that it takes around 5 hours to fully charge. Toyota's charging specification is pretty much dead on here.

I raided our utility company’s web site and was able to extract the following graphs. Either click on the image or this link to open the graphs.

The consumption graphs above show a day with no electric vehicle as a baseline, followed by three days of charging the Prius Prime in the evenings. It looks like charging the Prius only amounts to an average increase of 1.5 kWh over baseline per hour of charging. The graph shows about four hours of heavy charging followed by a lower-power charge during the last hour and a half.

At the current off-peak rate of ~$0.10 per kWh, we are looking at an increase of less than $1 per day, and that gives you a realistic 36 km of pure EV-mode (all-electric) range per charge. So for a month, $30 will give you around 1,000 km of range!

We have driven the car for about 5.5 days and racked up in excess of 300 km. We still have 7/8 of a tank of gas left, and the only reason we used any gas at all was a test drive to the Toronto Premium Outlet mall in Milton. Otherwise, our daily usage pattern, which consists largely of local errands, would allow us to keep running on the battery alone.

Now the game is on. How long will you have to wait for me to update this blog entry when I fuel up our new Prius Prime for the first time? Watch and see. Any wagers?

Leviton Decora Smart Dimmer with HomeKit

I purchased these DH6HD HomeKit-compatible dimmer switches from Leviton in February of 2018 (over a year and a half ago now).

When they work, they are great. BUT! My HomeKit app frequently reports these switches as “Not Responding”. The only remedy that I know of is to remove the accessory and then re-add it.

The process of adding the accessory is extremely frustrating and time-consuming. Getting the accessory onto the WiFi network is a hit-and-miss affair. It really is a crapshoot.

Today, after three tries at adding the Leviton without success, I almost gave up. Finally, I discovered the following process in this reddit article. Even the technique outlined by the article did not work until I restarted the avahi-daemon.service on my Linux server, figuring that it might be interfering with the Bonjour discovery process when adding the accessory.

Using the WiFi setup of the iPhone to add the Leviton device to the WiFi network definitely works more smoothly than using the Home app. Here are the steps:

  1. Reset the light switch by pressing and holding the on position of the switch until its LED flashes rapidly between red and amber. This can take more than 10 seconds.
  2. Set the iPhone to the appropriate WiFi network.
  3. Go to the iPhone WiFi menu, and you will see the Leviton switch available to add to the WiFi network. Add this device to the network.
  4. If an error is encountered when adding to the network, restart the avahi-daemon service (or any other mDNS service that may be competing); see the commands after this list.
  5. Once the switch is added to the network, proceed to add the switch with the Home app.
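For reference, step 4 on my Ubuntu server boiled down to the two commands below. The avahi-browse check is optional and assumes the avahi-utils package is installed; it simply confirms that HomeKit (_hap._tcp) accessories are being advertised on the network again:

sudo systemctl restart avahi-daemon.service
avahi-browse -rt _hap._tcp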

Apparently there is a firmware update for these switches. However, the update from 1.4.13 to 1.4.32 fails with the Leviton iOS app.

If you are thinking of getting a light switch for your home automation project, I would steer away from these switches!

NAS RAID-1 Fail

This past weekend my media NAS server was intolerably slow. When I investigated, I found that one of the RAID-1 partitions was experiencing read errors and timing out. I decided to risk a reboot, and to my surprise the RAID-1 partition did not come back up with one failed drive; instead, mdstat recorded it as inactive, something like this:

md2 : inactive sdc1[0](S)

After some Google searching, I found that I had to do the following to resurrect the md2 device.

mdadm --stop /dev/md2
mdadm --assemble --force /dev/md2

This reactivated the md2 partition. I replaced the failed drive and re-added the new drive to the md2 device. The RAID-1 partition is now rebuilding.
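For anyone following along, the drive replacement itself boiled down to something like the commands below. The device names are placeholders: /dev/sdc is the surviving member from the mdstat output above, and /dev/sdd stands in for wherever the new disk shows up.

sudo sfdisk -d /dev/sdc | sudo sfdisk /dev/sdd   # copy the partition layout from the surviving member
sudo mdadm --manage /dev/md2 --add /dev/sdd1     # add the new partition; the mirror starts rebuilding
cat /proc/mdstat                                 # keep an eye on the resync progress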

The inactive state is a new experience for me, so this was a bit of a surprise.

During this exercise I also found out that the SATA connectors on my SATA add-on card were loose causing intermittent connections. I will have to find a way to address this in the future.

Old Media Server with OpenVPN

I am in the process of building and configuring a media server for my parents. After my recent media server upgrade, I have extra gear lying around, so by purchasing a power supply and a small case, I can cobble together another media server with my old processor and motherboard. I will call this my parents' media server. The goal is to replace the Raspberry Pi unit running OSMC that currently acts as their media server. Although the OSMC solution on the Raspberry Pi has been working really well, it is too underpowered to play HEVC-encoded video at full 1080p HD resolution.

I wanted to convert the majority of our video media to HEVC simply to save storage space. If I do this with my media library, I will not be able to share our media with them because of their underpowered Raspberry Pi.

To solve this issue, I installed Ubuntu 18.04 along with Kodi on the media server I just built for my parents. I have been testing this solution for the past couple of weeks, and both the hardware and the media player work really well.

I also configured the box to auto-mount USB disks, and installed Samba so that both video and music files can be shared with other devices on the same network. The Samba share is primarily used by my parents' SONOS speakers.

With this media server at their location, I can also consider future upgrades, such as replacing their WiFi network with a Ubiquiti solution, and even ponder a site-to-site VPN between our two networks.

Perhaps that is looking too far into the future. My immediate concern is how to remotely administer the box. With the Raspberry Pi, I just had a simple SSH setup. However, with the extra horsepower and a full-blown Ubuntu distribution, I can now set up OpenVPN.

I followed these instructions on the DigitalOcean site, and they worked flawlessly. During the setup, I made one major error: I skipped the firewall (ufw) setup on the box, thinking that I did not need a firewall because an external firewall already exists. However, OpenVPN will not route external traffic to the internal private network if IP masquerading (NAT) is not set up properly. Thanks to a coworker's advice, I configured the firewall with IP forwarding and NAT, but also changed all default actions to ACCEPT so that the firewall only functions as a NAT router. Lesson learned!
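For my own notes, the NAT side of the ufw setup amounted to roughly the following sketch. The VPN subnet (10.8.0.0/24) and LAN interface name (eth0) are placeholders; they need to match the OpenVPN server.conf and the actual NIC on the box.

# 1. Enable kernel IP forwarding: uncomment net.ipv4.ip_forward=1 in /etc/sysctl.conf, then reload
sudo sysctl -p

# 2. Masquerade traffic from the VPN subnet by adding this block near the top of /etc/ufw/before.rules:
#      *nat
#      :POSTROUTING ACCEPT [0:0]
#      -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
#      COMMIT

# 3. Set DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw, then make every default action ACCEPT
#    so the firewall acts purely as a NAT router, and reload ufw
sudo ufw default allow incoming
sudo ufw default allow outgoing
sudo ufw allow 1194/udp
sudo ufw disable && sudo ufw enable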

Since this VPN will only be used by me for remote management, I will not configure any HTTPS tunnelling or install and configure ObfsProxy. We will continue to use UDP and stick with the default 1194 port.

We will do some final testing before deploying it to my parents' place.

NVMe SSD with LVM Cache

I have been a huge fan of Apple's Fusion Drives. They are an excellent compromise, giving you affordable mass storage while still delivering SSD performance. The concept is simple: pair a fast but small SSD with a large, slow, and much more affordable mechanical HDD. You get good performance and lots of storage without breaking the bank.

I had falsely assumed that this capability only existed with Apple's macOS operating system. This week I was pleasantly surprised to discover that LVM cache can do more or less the same thing on Linux. This new-found knowledge, along with an excellent deal on a 500GB Samsung 970 EVO Plus NVMe M.2 drive, gave me the itch to experiment with my NAS media server this weekend.

The hardware was easy enough to install, but I had to move one of the existing SATA connections because the M.2 slot on the motherboard shares a PCIe bus with a pair of SATA ports. Luckily I bothered to check the motherboard manual; otherwise I would have been scratching my head while the server failed to boot.

The software configuration was a bit more involved. Before I purchased the NVMe card, I did some experimentation with two external USB drives, one SSD and one HDD. I found this article to be super helpful in configuring LVM cache with my test drives. However, these configurations were not fully restored after a reboot. After many hours of research on the Internet, I found this article indicating that my Ubuntu Linux distribution was missing the thin-provisioning-tools package. I also experimented with the two available cache modes, writethrough and writeback, and found that writeback mode was a bit buggy and did not sync the cache and the backing drive. Yet another article came to the rescue:

lvchange --cachesettings migration_threshold=16384 vg/cacheLV

I preferred the writeback mode due to its better write performance characteristics. Apparently, to fix the sync issue, I had to increase the migration threshold to something larger than the default of 2048, because the chunk size was too large.
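For completeness, the other fix, the missing package that kept the cache configuration from being restored after a reboot, was just an apt install away on my Ubuntu box:

sudo apt-get install thin-provisioning-tools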

Here are the steps I took to configure my existing logical volume (airvideovg2/airvideo) to be cached by the NVMe drive I just purchased. First, I had to partition the NVMe drive, ending up with the layout below.

Model: Samsung SSD 970 EVO Plus 500GB (nvme)
Disk /dev/nvme0n1: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      1049kB  500GB  500GB               primary
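For reference, creating the single partition shown above went something like this; it assumes the disk is brand new (or safe to wipe) and shows up as /dev/nvme0n1:

sudo parted /dev/nvme0n1 mklabel gpt
sudo parted -a optimal /dev/nvme0n1 mkpart primary 0% 100%
sudo parted /dev/nvme0n1 print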

Create an LVM physical volume with the NVMe partition that was created previously /dev/nvme0n1p1 and add it to the existing airvideovg2 volume group.

sudo pvcreate /dev/nvme0n1p1


sudo vgextend airvideovg2 /dev/nvme0n1p1

Create a cache pool logical volume, set its cache mode to writeback, and establish the migration threshold setting.

sudo lvcreate --type cache-pool -l 100%FREE -n lv_cache airvideovg2 /dev/nvme0n1p1



sudo lvchange --cachesettings migration_threshold=16384 airvideovg2/lv_cache

sudo lvchange --cachemode writeback airvideovg2/lv_cache

Finally link the cache pool logical volume to our original logical volume.

sudo lvconvert --type cache --cachepool airvideovg2/lv_cache airvideovg2/airvideo

Now my original logical volume is cached and I have gained SSD performance economically on my 20TB RAID setup for less than $200. Below is my final volume listing.

$ sudo lvs -a
   LV               VG          Attr       LSize   Pool       Origin           Data%  Meta%  Move Log Cpy%Sync Convert
   airvideo         airvideovg2 Cwi-aoC---  20.01t [lv_cache] [airvideo_corig] 0.01   11.78           0.00            
   [airvideo_corig] airvideovg2 owi-aoC---  20.01t                                                                    
   [lv_cache]       airvideovg2 Cwi---C--- 465.62g                             0.01   11.78           0.00            
   [lv_cache_cdata] airvideovg2 Cwi-ao---- 465.62g                                                                    
   [lv_cache_cmeta] airvideovg2 ewi-ao----  64.00m                                                                    
   [lvol0_pmspare]  airvideovg2 ewi-------  64.00m      

We can also use the command below to get a more detailed listing.

sudo lvs -a -o+name,cache_mode,cache_policy,cache_settings,chunk_size,cache_used_blocks,cache_dirty_blocks

Upgrade completed. We’ll see how stable it is in the future.

Media Server Upgrade

Two and a half years ago, I performed a CPU and motherboard upgrade on my media server. You can read the account here.

Although the AMD Athlon 5350 APU was energy efficient, it proved to be underpowered for on-demand transcoding, which Plex performs whenever a player's device is not compatible with the video being played. For example, when an Apple TV (not the 4K model) wants to play 4K material from Plex on my media server, the server has to transcode the 4K material to a compatible 1080p format. Unfortunately, this is very CPU intensive, and if more than one person in the household tries to do the same thing, which is not unheard of, playback stutters.

Given the choice between saving a few dollars a year and usability, I chose usability. Therefore I started to research what I needed for the upgrade. My goal was to upgrade the system so that transcoding will not be an issue, and so that I can also use it for future video encoding of security camera footage, as well as background encoding of family videos.

I continue to prefer the AMD brand, and decided on the following combo:

  • AMD Ryzen 5 2400G Processor with Radeon RX Vega 11 Graphics (YD2400C5FBBOX)
  • GIGABYTE B450 AORUS M Motherboard
  • Corsair Vengeance LPX 16GB (2x8GB) DDR4 DRAM 2666MHz (CMK16GX4M2A2666C16)

The above were all purchased through Amazon and cost me a grand total of $473.24. The AMD CPU was the most expensive part costing almost $190.

Taking out the old motherboard and CPU combo and replacing them with the new parts went smoothly. The side-facing SATA connectors butted up against one of my HDD cages, so I opted not to use them and instead connected all of my RAID SATA cables to the SATA add-on card that I purchased and discussed in this post.

The last time I did an upgrade like this, the Ubuntu operating system had no problems and booted without any issues. Unfortunately, this time was very different. After the machine posted, Ubuntu booted into a blank, black screen. After some research, I learned to boot the Ubuntu kernel with the nomodeset option: press and hold the Shift key to bring up the GRUB menu and select the desired kernel, press the 'e' key to modify the boot options, and finally press F10 to boot with the custom changes (effective for one boot only).
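In case it helps, the one-time GRUB edit is just a matter of appending nomodeset to the end of the line that starts with linux; the kernel version and root UUID will be whatever your menu entry already shows:

linux /boot/vmlinuz-4.15.0-XX-generic root=UUID=... ro quiet splash nomodeset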

The above trick got me a login prompt. After I gained access to the command prompt, I noticed that the kernel did not recognize any ethernet devices, so I now had a machine that was not connected to the network. After some more Internet research, I found out that the 4.15 Linux kernel I was running is insufficient for the Raven Ridge architecture, the AMD code name for the Zen CPU and Vega GPU combination on a single chip. I had to upgrade to the 4.18 Linux kernel.

However, I could not upgrade over the Internet, because the machine was not on the Internet. I had to download the Debian packages onto a USB stick with another machine and install them manually. At this point, I learned that you cannot simply download a single package for this. I had to decide whether to go with the Linux mainline kernel packages or with the Ubuntu HWE (Hardware Enablement) packages. After reading through Ubuntu's LTS Enablement Stack article, I decided to go with the HWE packages. I found the linux-generic-hwe packages and their prerequisites on pkgs.org. This took several iterations, as I did not get all the dependent packages on the first try.

Once all the packages were installed, the machine booted without the need for the nomodeset option. However, the ethernet interface device was still missing. I had to run the command netpath to find out that the new motherboard's ethernet device's logical name was em1. To register the new logical name, I had to edit the /etc/network/interfaces file.
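The edit itself is tiny. A minimal sketch, assuming the box keeps getting its address over DHCP (which mine does, as the next paragraph shows):

auto em1
iface em1 inet dhcp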

Finally, the machine booted with an active ethernet connection. As a sanity check, I executed:

sudo apt-get install --install-recommends linux-generic-hwe-18.04 

This ensured that my new media server had all the required kernel packages. But we were still not done. The IP address of the server had changed: we now have a different MAC address, so the DHCP server provisioned a different IP. I tried to get the Unifi Controller to provision a static IP address for this new server, but I was unsuccessful. I suspect the fact that the new server is also running the Unifi Controller may have something to do with it. Since the IP address has changed, I needed to update the following configurations:

  • Firewall rules
  • Unifi Controller name space configurations
  • Samba configurations because we only allow for local machines to share

All of this took from 4:30pm to 11:00pm last night: 6.5 hours of hardware assembly, research with Google, trial and error, and finally success. I cannot imagine doing this if Google and the super helpful community forums did not exist. Fingers crossed that the new media server will run smoothly.

More Home IT Upgrades

This past weekend I continued to upgrade our NAS server. Last weekend, I upgraded my RAID array with an additional 8TB of mirrored storage. This freed up two old 4TB WD Blue HDDs. I noticed that my case has a total of 9 internal storage bays. One was used by my 500GB SSD boot drive, and six were populated by the HDDs making up the current RAID array. That left me 2 more storage bays. However, these remaining bays were meant for 5.25″ devices like optical disc drives, so to place my old 4TB WD Blue HDDs into them I needed a 5.25″ to 3.5″ bay converter. I had one, and purchased the other on Amazon; I ended up buying the ORICO Aluminum 5.25 inch to 2.5 or 3.5 Inch Internal Hard Disk Drive Mounting Kit.

I also did not have enough SATA ports, so I purchased the IOCrest SI-PEX40071 SATA III 8-port controller card. This card, along with the 4 built-in SATA ports on the motherboard, gave me enough SATA connections for my 9 drives.

Once I installed the old 4TB drives, I proceeded to create another md RAID-1 device and a matching physical volume, which I used to extend the current logical volume group. When the setup was completed, I ended up with a 20TB+ fully mirrored NAS server. I love LVM in combination with mdadm.
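In rough strokes, and with hypothetical device names (say the two old 4TB drives came up as /dev/sde and /dev/sdf, and the new array becomes /dev/md3), the procedure looks like this; airvideovg2/airvideo is the volume group and logical volume pair on this server:

sudo mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
sudo pvcreate /dev/md3
sudo vgextend airvideovg2 /dev/md3
sudo lvextend -l +100%FREE airvideovg2/airvideo
sudo resize2fs /dev/airvideovg2/airvideo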

I figured that while everything was fresh in my mind, I might as well proceed with the dreaded 16.04 to 18.04 Ubuntu Linux upgrade.

The upgrade was surprisingly smooth. However, the new version of OpenVPN caused some trouble: it no longer works with my old PureVPN configuration files, because the certificate files that came from PureVPN used an outdated and deprecated hash algorithm. After getting new configuration files from PureVPN, everything worked like a charm.

I also had to reinstall the Unifi Controller along with the Let’s Encrypt certbot utility.

Super happy with the outcome and the upgrades should last another 2 to 3 years.

Two New 8TB Drives for Our NAS

Our NAS has run out of space again. I saw that the Seagate IronWolf 8TB NAS hard drive was on sale at Newegg for $309 CAD, so I jumped at the chance and purchased two.

I am now following the same steps as I outlined in this post, replacing two old 4TB drives with these two new 8TB drives.

So far so good. Hopefully, when all is said and done, my NAS will have a total of 18TB in a RAID-1 configuration across six hard drives: two 4TB, two 6TB, and the two new 8TB.

I noticed that I could fit two more drives in my chassis and may decide to re-add the two old 4TB drives, but first I’ll have to check whether my power supply can handle the demand.

I really like this mdadm and LVM setup.

Update: After two mdadm syncs, each of which took around 8 hours, and a pvresize that took another 5 hours, I had to convert the filesystem from 32-bit to 64-bit using these very helpful instructions. Only after converting to 64-bit could I expand the existing filesystem beyond 16TB. It was a learning and yet rewarding experience. The next step is to reuse the two old 4TB drives in the same chassis and add them to the logical volume.
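The gist of the 64-bit conversion, as a sketch: it needs e2fsprogs 1.43 or newer, and the filesystem has to be offline, so unmount it first (the mount point below is a placeholder; the logical volume path is the same airvideovg2/airvideo used elsewhere on this blog):

sudo umount /mnt/media
sudo e2fsck -f /dev/airvideovg2/airvideo       # a full check is required before converting
sudo resize2fs -b /dev/airvideovg2/airvideo    # enable the 64-bit feature
sudo resize2fs /dev/airvideovg2/airvideo       # the filesystem can now grow past 16TB
sudo mount /mnt/media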

Creating DVD Video Discs

Recently I created a video to commemorate my mom’s 80th birthday. Of course, once the video is created, there is always the challenge of distributing it. People who are always online and have respectable bandwidth can simply view the video online, as I have made arrangements to post it here on my blog site; the video is embedded in The Grand Birthday post. But what about those who are not online savvy or still cling to their DVD players?

I usually use a program called Burn on my Mac to burn videos onto DVD Video discs. However, I find the process unsatisfying: I needed something that could be applied to mass processing, and I did not like the unprofessional DVD menu that Burn applies to the disc. The program is also quite old, and I fear it may not work with future versions of macOS.

I came across this Convert any Movie to DVD Video wiki link and found some really useful information. After reading through their process, I worked out, practiced, and proved the following trimmed-down version on my Mac.

First I had to install several utilities through the brew packaging system on my Mac.

brew install ffmpeg dvdauthor cdrtools

I use the above utilities to perform the following steps:

  1. Convert the source video (typically optimized for my Apple devices and my TVs) to an NTSC DVD-compatible format;
  2. Author a DVD directory structure using the video;
  3. Create an ISO from the DVD directory structure for archiving and burning purposes;
  4. Burn the ISO to a physical DVD-R disc.

The first step is to convert the video:

ffmpeg -i original.mp4 -target ntsc-dvd -r 29.97 -s 720x480 -aspect 16:9 -b 8000k -g 12 -mbd rd -flags +aic -trellis 1 -cmp 2 -subcmp 2 video.mpg

I ended up using the above command, which supposedly yields the optimum viewing quality. The output video.mpg is DVD compatible. The above command assumes an aspect ratio of 16:9, which is what most home videos are shot in today.

I then use the dvdauthor tool to create the DVD directory structure. Before running the tool, I first have to create an XML file describing how I would like the DVD to be configured. Below is the bare-minimum XML configuration I used to create a DVD disc containing a single movie. The tool gives me the option to add menus, chapters, etc. in the future.

    <dvdauthor format="ntsc">
        <vmgm />
        <titleset>
            <titles>
                <subpicture lang="en" />
                <audio lang="en" />
                <pgc>
                    <vob file="/Users/kanglu/Downloads/video.mpg" />
                </pgc>
            </titles>
        </titleset>
    </dvdauthor>

I then proceed to run the tool with the above XML file, which I named dvd.xml.

    dvdauthor -o dvd -x dvd.xml

This will result in a folder called dvd which will contain the contents of the DVD disc. Once I have the folder, I can then create the ISO file.

    mkisofs -dvd-video -udf -o dvd.iso dvd

The resulting dvd.iso file is a good archival format in case I want to make more DVD discs in the future. At this point, I no longer need video.mpg, dvd.xml, or the dvd folder. The ISO file is all I need to create a DVD Video disc containing my video. After sticking in a blank DVD-R disc, I executed the following command.

    hdiutil burn dvd.iso

I repeated the above hdiutil command with several more blank discs to make a batch of discs for distribution. The resulting DVD Video disc contains a single video without any confusing menu system, just the way I like it: keep it simple and stupid.
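Since the whole point was to have something I can repeat for mass processing, I also wrapped the steps into a small script. This is only a sketch: the script name is made up, and it assumes dvd.xml points at the video.mpg that the script just produced.

    #!/bin/bash
    # make_dvd.sh <source video>: convert, author, build the ISO, then burn it
    set -e
    SRC="$1"
    ffmpeg -i "$SRC" -target ntsc-dvd -r 29.97 -s 720x480 -aspect 16:9 \
        -b 8000k -g 12 -mbd rd -flags +aic -trellis 1 -cmp 2 -subcmp 2 video.mpg
    dvdauthor -o dvd -x dvd.xml          # dvd.xml as shown earlier, pointing at video.mpg
    mkisofs -dvd-video -udf -o dvd.iso dvd
    hdiutil burn dvd.iso                 # prompts for a blank DVD-R disc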

Too bad not everybody has Plex or Kodi. Even a Raspberry Pi with OSMC installed would be wonderful. That will make future distribution of family videos a lot easier!

However, I am now happy to have a workflow that works for me. I hope you will find this helpful.