Random Thoughts on Individualism

I liken individualism to cancerous cells in society: some are malignant and some are benign. Too much of the former sets society on an evolutionary course that may not be sustainable, especially when malignant forces consume social resources and then break the foundational bonds between individuals, acting as a divisive force. I would also include the extremely wealthy and successful here, since they contribute to the income gap and their personal morals act as tribal magnets, creating further division.

The benign go with the flow, or fail to gain enough influence or create enough havoc to have a social impact.

Like malignant cells, they group together and prosper, growing into tribes (tumors).

My current bet (I am not sure) is that a society that concentrates on individualism will ultimately end up with a two-class structure in which one class dominates the other. Its stability and growth potential will then depend on how well the tension between the classes is attenuated. Done correctly, the subjugated class may not even feel it. Done incorrectly, it results in a revolution.

All forms of government must face this reality of individualism; the difference lies in how they manage it. The big experiment of the West is to let the tumors grow. This experiment has reached the typical dynastic span of 200–300 years, so it will be interesting to see whether it is still sustainable. In the past, there have been self-imposed chemotherapies, such as wars, that performed resets on the malignant cells and created more room for individualism to flourish.

Vancouver Trip

On April 9th, 2024, my wife and I hopped on a flight to Vancouver to visit some friends and family. I got an excellent deal from Expedia for TD, paying $1035 CAD for two return tickets and a rental SUV for the week. We ended up with a fancy Nissan Rogue for the trip.

We did not do any sightseeing other than Port Moody. Our primary purpose while staying in the City of Vancouver was to sample the Chinese food, which meant hanging out largely in Richmond and doing some dim sum.

On the fourth day of our trip, we took the BC Ferries to Vancouver Island, visiting Nanaimo for two days and spending our final two days in Victoria. We finished the week-long trip by taking a return ferry to the Tsawwassen terminal, which is close enough to Richmond for us to get another good Chinese BBQ pork meal at HK BBQ Master, which I highly recommend!

Our rough schedule looked something like this:

  • Day 1 – Fish & chips along with ice cream at Port Moody; visited Natalie’s beautiful home on the hills and gave Zoey (her cat) a nice rub; then on to Burnaby to stay at our cousin’s place, where we were introduced to Maple, our new, small canine friend;
  • Day 2 – Lunch, snacks & dinner in Richmond; love the pineapple bun with its tasty, thick, cold butter; savored the lamb served at Hao’s;
  • Day 3 – Dim sum at Kirin restaurant in New Westminster and catching up with Agnes; followed by dessert at La Foret Jubilee with Natalie joining us;
  • Day 4 – Ferry from Horseshoe Bay to Departure Bay at Nanaimo; did some hiking and scenery;
  • Day 5 – More scenery in Nanaimo and shopping at The Old Country Market at Coombs; what, goats on the roof?
  • Day 6 – Drive to Victoria with a stop in Chemainus to inspect the murals, hike on the Kinsol Trestle bridge, with a late lunch at OEB Breakfast, and dinner at Finn’s;
  • Day 7 – Hiked Beacon Hill Park after a huge breakfast at Blue Fox Cafe, where we met a couple of new friends, one named Lynda; visited the Butterfly Gardens; drove along the scenic Malahat; experienced high tea at Pendray Inn and Tea House; and finally had dinner at Pagliacci’s.
  • Day 8 – Took the ferry back from Swartz Bay to Tsawwassen just in time for lunch at HK BBQ Master; said goodbye to Derrick and Maple; and hopped back on the afternoon flight to Toronto; Darci welcomed us back home at 2:00am the following day!

The most memorable part of the trip was of course getting to meet up with our family and friends. The best food from this trip had to be from HK BBQ Master, in my opinion, with an honorable mention of the Banh-mi sandwiches that Derrick got for us from his famous Vietnamese sandwich vendor.

What would we do differently? Knowing what we now know about Nanaimo, we would probably skip all of Nanaimo¹ and reallocate the days: one more in Vancouver and an extra day in Victoria. We would also switch the order of our visits to Vancouver and the Island, so we could spend more of the weekend with our friends and family.

With this trip and our visit to Victoria, the capital city of British Columbia, we have completed our tour of all of the provincial capitals in Canada. Checkbox checked!

Below is a summary video produced by Carol of our trip.

Produced, Directed, and Edited by Carol
  1. I decided to visit Nanaimo out of curiosity based on the Gweilo 60 YouTube channel that I follow.

Found Two HBA Cards for My NAS

About three weeks ago, I was casually browsing eBay and found this little gem: a Host Bus Adapter that can do PCIe 2.0 x8 (~4 GB/s per direction, 8 GB/s in both). This is way better than the one I purchased earlier (a GLOTRENDS SA3116J PCIe SATA Adapter Card), which operates on a single lane of PCIe 3.0, yielding only about 1 GB/s. I could not pass it up at a price of only $40 CAD, so I purchased two of them to replace the old adapter card.

LSI 6Gbps SAS HBA 9200-8i IT Mode ZFS FreeNAS unRAID + 2*SFF-8087 SATA

This new card, the LSI 6Gbps SAS HBA 9200-8i, only supports 8 SATA ports per card, so I had to get two of them to cover all of the hard drives that I have. These SAS HBA cards must have the IT (initiator target) mode firmware, because the default firmware (IR mode) supports a form of hardware RAID, which I did not want. In IT mode, the hard drives are presented to the host as individual devices and only share the physical bandwidth of the PCIe bus. This is a must for ZFS.

With these new cards, my write throughput to my NAS hard drives now averages around 500MB/s. Previously, I was only getting about half of this.
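As a quick sanity check, it is worth confirming that the card actually negotiated PCIe 2.0 x8 and watching per-disk throughput during a large transfer. A rough sketch (the PCI address 01:00.0 is only an example; iostat comes from the sysstat package):

# Locate the HBA and note its PCI address
lspci | grep -i lsi

# Check the negotiated link (look for "LnkSta: Speed 5GT/s, Width x8")
sudo lspci -vv -s 01:00.0 | grep -i lnksta

# Watch per-drive throughput in MB/s while a big copy is running
iostat -xm 5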

I wish I had found these sooner. Now I have two spare PCIe SATA expansion cards, one supporting 8 ports and the other 16 ports. I will put them in another server, perhaps in a future Proxmox cluster project.

Tesla FSD Trial

Thanks to my cousins who purchased Teslas using my referral code, I have some referral credits that I could use to select a three-month trial of Tesla’s Full Self Driving (FSD) capability. On March 1st, 2024, I turned this feature on.

Today, we went out for lunch at Mr. Congee in Richmond Hill, Ontario, and I thought I would give FSD a try.

It was pretty easy to set up. After agreeing to all the legal stuff and enabling the feature, I just set the destination, put the car in drive, waited for the autopilot icon to show, double tapped the right stalk, and away we went!

There are three settings for FSD: chill, average, and aggressive/assertive. I just left it on the default, average mode.

Pulling out of the driveway, the car was a bit jerky, but once it got on the road, it made all the right decisions. I overrode the mode on our neighbourhood feeder road off Leslie Street just to make sure that I could, and then quickly re-engaged FSD.

On this occasion, all travel was done on regular roads, no highways, so it was more challenging for the car. It made all the turns correctly, but I did have to override it once when it did not recognize the restricted Via Bus Lane on Yonge Street. It even pulled into the parking area at Mr. Congee, but did not fully complete the trip by parking the car; I had to park it manually.

On the way back, it hesitated too much on a left-hand turn. I had to press the accelerator to help it along. Doing this did not override the FSD mode.

I will be driving to Montreal in about four weeks, and I am looking forward to testing FSD on the highway.

My initial assessment is that I probably would not have paid any more money to gain this feature. Once again, thanks to my cousins who allowed me to experience this through the use of my Tesla redemption credits.

LVM to ZFS Migration

In a previous post, I described the hardware changes that I made to facilitate additional drive slots on my NAS Media Server.

We now need to migrate from an LVM system consisting of 40TB of redundant mirrored storage using mdadm to a ZFS system consisting of a single pool and a dataset. Below is a diagram depicting the logical layout of the old and the intended new system.

[Diagram: logical layout of the old and new storage systems.
Old system: an EXT4 file system for media storage on a Logical Volume (LV), backed by an LV group of four physical volumes (PVs), each PV an mdadm mirror pair (10TB, 8TB, 10TB, 12TB), for 40TB of redundant storage.
New system: a ZFS dataset on a ZFS pool of six mirror VDEVs (12TB, 10TB, 8TB, 10TB, 6TB, 4TB) plus a 512GB L2ARC cache, for 50TB of redundant storage.]

Before the migration, we must back up all the data from the LVM system. I cobbled together a collection of old hard drives and created another LVM volume as temporary storage for the content. This temporary volume has no redundancy, so if any one of the old hard drives fails, all the content goes with it. The original LVM system is mounted on /mnt/airvideo and the temporary LVM volume is mounted on /mnt/av2.
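For reference, a throwaway volume like that can be put together with a handful of LVM commands. This is only a sketch; the device names and the volume group name avtmp are placeholders, not the exact names I used:

# Label the spare drives as LVM physical volumes
sudo pvcreate /dev/sdx1 /dev/sdy1 /dev/sdz1

# Group them into a single volume group with no redundancy
sudo vgcreate avtmp /dev/sdx1 /dev/sdy1 /dev/sdz1

# One logical volume spanning all free space, formatted and mounted as the backup target
sudo lvcreate -l 100%FREE -n av2 avtmp
sudo mkfs.ext4 /dev/avtmp/av2
sudo mkdir -p /mnt/av2
sudo mount /dev/avtmp/av2 /mnt/av2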

I used the command below to proceed with the backup.

sudo rsync --delete -aAXv /mnt/airvideo /mnt/av2 > ~/nohup.avs.rsync.out 2>&1 &

I can then monitor the progress of the backup with:

tail -f ~/nohup.avs.rsync.out

The backup took a little more than 7 days to copy around 32 TB of data from our NAS server. During this entire process, all of the NAS services continued to run, so that downtime was almost non-existent.

Once the backup was complete, I wanted to move all the services over to the backup volume before starting to dismantle the old LVM volume. The following steps were taken:

  • Stop all services on other machines that were using the NAS;
  • Stop all services on the NAS that were using the /mnt/airvideo LVM volume;
    • sudo systemctl stop apache2 smbd nmbd plexmediaserver
  • Unmount the /mnt/airvideo volume, and create a soft-link of the same name to the backup volume at /mnt/av2;
    • sudo umount /mnt/airvideo
    • sudo ln -s /mnt/av2 /mnt/airvideo
  • Restart all services on the NAS and the other machines;
    • sudo systemctl start apache2 smbd nmbd plexmediaserver
  • Once again, the downtime here was minimal;
  • Remove or comment out the entry in the /etc/fstab file that automatically mounts the old LVM volume on boot. This is no longer necessary because ZFS mounts its own filesystems by default;

Now that the services are all up and running, we can start destroying the old logical volume (airvideovg2/airvideo) and volume group (airvideovg2). We can obtain a list of all the physical volumes that make up the volume group:

sudo pvdisplay -C --separator ' | ' -o pv_name,vg_name

  PV | VG
  /dev/md1 | airvideovg2
  /dev/md2 | airvideovg2
  /dev/md3 | airvideovg2
  /dev/md4 | airvideovg2
  /dev/nvme0n1p1 | airvideovg2

The /dev/mdX devices are the mdadm mirror devices, each consisting of a pair of hard drives.

sudo lvremove airvideovg2/airvideo
Do you really want to remove and DISCARD active logical volume airvideovg2/airvideo? [y/n]: y
  Flushing 0 blocks for cache airvideovg2/airvideo.
Do you really want to remove and DISCARD logical volume airvideovg2/lv_cache_cpool? [y/n]: y
  Logical volume "lv_cache_cpool" successfully removed
  Logical volume "airvideo" successfully removed

sudo vgremove airvideovg2
  Volume group "airvideovg2" successfully removed

At this point, both the logical volume and the volume group are removed. We say a little prayer that nothing happens to our temporary volume (/mnt/av2), which is currently in operation.

We now have to disassociate the mdadm devices from LVM.

sudo pvremove /dev/md1
Labels on physical volume "/dev/md1" successfully wiped.
sudo pvremove /dev/md2
Labels on physical volume "/dev/md2" successfully wiped.
sudo pvremove /dev/md3
Labels on physical volume "/dev/md3" successfully wiped.
sudo pvremove /dev/md4
Labels on physical volume "/dev/md4" successfully wiped.
sudo pvremove /dev/nvme0n1p1
Labels on physical volume "/dev/nvme0n1p1" successfully wiped.

You can find the physical hard drives associated with each mdadm device using the following:

sudo mdadm --detail /dev/md1
#or
sudo cat /proc/mdstat

We then have to stop all the mdadm devices and zero their superblock so that we can reuse the hard drives to set up our ZFS pool.

sudo mdadm --stop /dev/md1
mdadm: stopped /dev/md1
sudo mdadm --stop /dev/md2
mdadm: stopped /dev/md2
sudo mdadm --stop /dev/md3
mdadm: stopped /dev/md3
sudo mdadm --stop /dev/md4
mdadm: stopped /dev/md4

# Normally you also need to do a --remove after the --stop,
# but it looks like the 6.5 kernel did the remove automatically.
#
# For all partitions used in the md device

for i in sdb1 sdc1 sdp1 sda1 sdo1 sdd1 sdg1 sdn1
do
	sudo mdadm --zero-superblock /dev/${i}
done

Now, with all of the old hard drives freed up, we can repurpose them to create our ZFS pool. Instead of using the /dev/sdX reference for each physical device, it is recommended to use /dev/disk/by-id, which encodes the manufacturer’s model and serial number, so that the ZFS pool can be moved to another machine in the future. We also used the -f switch to let ZFS know that it is okay to erase the existing content on those devices. The command to create the pool, which we named vault, is:

zpool create -f vault mirror /dev/disk/by-id/ata-ST10000VN0008-2JJ101_ZHZ1KMA0-part1 /dev/disk/by-id/ata-WDC_WD101EFAX-68LDBN0_VCG6VRWN-part1 mirror /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1E8GW4-part1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1E8S0V-part1 mirror /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA2C69FN-part1 /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA2964KD-part1 mirror /dev/disk/by-id/ata-ST12000VN0008-2YS101_ZRT008SC-part1 /dev/disk/by-id/ata-ST12000VN0008-2YS101_ZV701XQV-part1

# The above created the pool with the old drives from the old LVM volume group
# We then added 4 more drives, 2 x 6TB, and 2 x 4TB drives to the pool

# Adding another 6TB mirror:

sudo zpool add -f vault mirror /dev/disk/by-id/ata-WDC_WD60EFRX-68L0BN1_WD-WX31D87HDU09-part1 /dev/disk/by-id/ata-WDC_WD60EZRZ-00GZ5B1_WD-WX11D374490J-part1

# Adding another 4TB mirror:

sudo zpool add -f vault mirror /dev/disk/by-id/ata-ST4000DM004-2CV104_ZFN0GTAK-part1 /dev/disk/by-id/ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0354579-part1

We also want to add the old NVMe drive as a ZFS L2ARC cache.

ls -lh /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_500GB_S4P2NF0M419555D

lrwxrwxrwx 1 root root 13 Mar  2 16:02 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_500GB_S4P2NF0M419555D -> ../../nvme0n1

sudo zpool add vault cache /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_500GB_S4P2NF0M419555D 

We can see the pool using this command:

sudo zpool list -v vault

NAME                                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
vault                                                  45.4T  31.0T  14.4T        -         -     0%    68%  1.00x    ONLINE  -
  mirror-0                                             9.09T  8.05T  1.04T        -         -     0%  88.5%      -    ONLINE
    ata-ST10000VN0008-2JJ101_ZHZ1KMA0-part1                -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD101EFAX-68LDBN0_VCG6VRWN-part1               -      -      -        -         -      -      -      -    ONLINE
  mirror-1                                             7.27T  6.49T   796G        -         -     0%  89.3%      -    ONLINE
    ata-ST8000VN0022-2EL112_ZA1E8GW4-part1                 -      -      -        -         -      -      -      -    ONLINE
    ata-ST8000VN0022-2EL112_ZA1E8S0V-part1                 -      -      -        -         -      -      -      -    ONLINE
  mirror-2                                             9.09T  7.54T  1.55T        -         -     0%  82.9%      -    ONLINE
    ata-ST10000VN0004-1ZD101_ZA2C69FN-part1                -      -      -        -         -      -      -      -    ONLINE
    ata-ST10000VN0004-1ZD101_ZA2964KD-part1                -      -      -        -         -      -      -      -    ONLINE
  mirror-3                                             10.9T  8.91T  2.00T        -         -     0%  81.7%      -    ONLINE
    ata-ST12000VN0008-2YS101_ZRT008SC-part1                -      -      -        -         -      -      -      -    ONLINE
    ata-ST12000VN0008-2YS101_ZV701XQV-part1                -      -      -        -         -      -      -      -    ONLINE
  mirror-4                                             5.45T  23.5G  5.43T        -         -     0%  0.42%      -    ONLINE
    ata-WDC_WD60EFRX-68L0BN1_WD-WX31D87HDU09-part1         -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD60EZRZ-00GZ5B1_WD-WX11D374490J-part1         -      -      -        -         -      -      -      -    ONLINE
  mirror-5                                             3.62T  17.2G  3.61T        -         -     0%  0.46%      -    ONLINE
    ata-ST4000DM004-2CV104_ZFN0GTAK-part1                  -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0354579-part1         -      -      -        -         -      -      -      -    ONLINE
cache                                                      -      -      -        -         -      -      -      -  -
  nvme-Samsung_SSD_970_EVO_Plus_500GB_S4P2NF0M419555D   466G  3.58G   462G        -         -     0%  0.76%      -    ONLINE

Once the pool was created, we wanted to set a pool property so that, when we replace these drives with bigger ones in the future, the pool will automatically expand.

zpool set autoexpand=on vault

With the pool created, we can then create our dataset (filesystem) and its associated mount point. We also want to ensure that the filesystem supports POSIX ACLs (acltype=posixacl).

zfs create vault/airvideo
zfs set mountpoint=/mnt/av vault/airvideo
zfs set acltype=posixacl vault
zfs set acltype=posixacl vault/airvideo
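To confirm the properties took effect before copying anything back, a quick check along these lines works:

# Verify the mount point and ACL support on the new dataset
zfs get mountpoint,acltype vault/airvideo

# Verify the pool will grow into larger replacement drives automatically
zpool get autoexpand vault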

We mount the new ZFS filesystem on /mnt/av because /mnt/airvideo is soft-linked to the temporary /mnt/av2 volume that is still in operation. We first have to re-copy all our content from the temporary volume to the new ZFS filesystem.

sudo rsync --delete -aAXv /mnt/av2/ /mnt/av > ~/nohup.avs.rsync.out 2>&1 &

This took around 4 days to complete. We can all breathe easy because all the data now has redundancy again! We can now bring the new ZFS filesystem live.

sudo systemctl stop apache2.service smbd nmbd plexmediaserver.service
sudo rm /mnt/airvideo
sudo zfs set mountpoint=/mnt/airvideo vault/airvideo
sudo systemctl start apache2.service smbd nmbd plexmediaserver.service

zfs list

NAME             USED  AVAIL     REFER  MOUNTPOINT
vault           31.0T  14.2T       96K  /vault
vault/airvideo  31.0T  14.2T     31.0T  /mnt/airvideo

The above did not take long, and the migration is complete!

df -h /mnt/airvideo

Filesystem      Size  Used Avail Use% Mounted on
vault/airvideo   46T   32T   15T  69% /mnt/airvideo

Checking the capacity of our new ZFS filesystem shows that we now have 46TB to work with! I hope this lasts for at least a couple of years.

I also did a quick reboot of the system to ensure it comes back up with the ZFS filesystem intact and without issues. It has now been running for 2 days. I have not collected any performance statistics yet, but the services all feel faster.
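When I do get around to collecting numbers, I will probably start with ZFS’s own counters rather than a synthetic benchmark; a sketch, assuming the usual zfsutils-linux tools (zpool iostat, arc_summary) are installed:

# Per-vdev bandwidth and IOPS, refreshed every 5 seconds
sudo zpool iostat -v vault 5

# ARC and L2ARC hit rates, to see whether the NVMe cache is earning its keep
arc_summary | less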

Media Server Storage Hardware Reconfiguration

Our media server has reached 89% storage utilization and needs an expansion. The server’s storage is built on the Logical Volume Manager (LVM) on top of software RAID (mdadm). I could expand it by swapping out the lowest-capacity hard drives for larger ones, as I have done previously.

I thought I would try something different this time around and switch from LVM to ZFS, an LVM alternative that is very popular in modern mass storage systems, especially TrueNAS.

All filled with 8 drives

Before I can attempt the conversion, I will first need to back up all of the content from the media server. The second issue is that I need more physical space in the server to house additional hard drives. The existing housings are all filled except for a single slot, which is not going to be enough.

A related issue is that I no longer have any free SATA ports available for the new hard drives, so I purchased a GLOTRENDS SA3116J PCIe SATA Adapter Card with 16 SATA ports. Once it is installed, I will have more than enough SATA ports for additional storage.

PCIe SATA Card Installed

One downside of the SATA card is that it is limited to PCIe 3.0 x1 speed, which caps data transfer at a theoretical maximum of about 1 GB/s. Given that the physical hard drives top out at around 200 MB/s each, I don’t think we need to be too concerned about this bottleneck. We will see how it holds up in practice.
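If you want to see where the ceiling really is, a crude comparison of one drive’s raw sequential read against a few drives read in parallel through the card tells the story; a sketch with example device names:

# Raw sequential read from a single drive behind the card
sudo hdparm -t /dev/sdi

# Read four drives at once; the summed rates should stay under the ~1 GB/s lane limit
for d in sdi sdj sdk sdl; do
    sudo dd if=/dev/$d of=/dev/null bs=1M count=4096 &
done
wait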

I am lucky to have extra SATA power cables and extension cables lying around, and my existing 850W power supply has ample headroom for the additional hard drives.

How do we house the additional hard drives with a full cabinet? I went to Amazon again and purchased a hard drive cage, the Jaquiain 3.5 Inch HDD Hard Drive Cage 8X3.5 Inch HDD Cage. I did not have to buy any new hard drives yet, because I had plenty of old ones lying around. After I put the cage together with 8 really old, used hard drives, it looked something like this:

With this additional storage, I am now able to back up the media content from my media server. However, before I do that, there is one last thing I need to do: experiment to find an optimal ZFS pool configuration that works with my content and usage. I will perform this experimentation on the additional storage before reconfiguring the old storage with ZFS. Please stay tuned for my findings.

After booting the system with 16 hard drives, I measured the power usage, and it hovered around 180W. This is not too bad, less than two traditional incandescent light bulbs.


Addendum:

During my setup, I had to spend hours deciphering an issue: my system did not recognize some of my old hard drives. After many trials, I finally narrowed it down to the GLOTRENDS card not being compatible with an old 2TB Western Digital enterprise drive. This is the first time I have come across a SATA incompatibility.

There is another possibility: these drives may have been damaged by the use of an incorrect modular power cable. I found that they also do not work with my USB 3.0 HDD external dock, which gives additional credence to the theory that the physical drives have been damaged.

The troubled drive with the SATA card

All my other drives worked fine with the card.

Another discovery is that not all modular power cables work with my ASUS ROG STRIX 850W power supply. Initially, I thought I would use an 8-pin PCIe to 6-pin adapter along with a 6-pin to SATA power cable designed for Corsair power supplies.

PCIe Power Adapter from Amazon
OwlTree PCI-e 6 Pin Male to 4 SATA 1 to 4 SATA Female Power Supply Splitter Supply Cable for Corsair Modular RM650X RM750X RM850X RM1000X

Using the above cables caused the power supply not to start at all. I had to hunt for the original cables that came with the STRIX power supply.

I learned a lot from rejigging this media server. My reward is seeing the server boot up with 16 hard drives and 2 NVMe SSDs all recognized. I have never built a system with so many drives and so much storage before.

What is the Chinese Government?

Recently I have been faced with questions and accusations about how corrupt and dictatorial the Chinese government is toward its population. Most of these comments come from people who have not visited or experienced China; their conclusions largely stem from a steady diet of Western mainstream media.

I offer this post as a source of “alternative” information so that anyone can get a quick introduction to what the Chinese government is and how it is run.

Take a look at this TED talk by Eric Li. He quickly summarizes the differences between the Chinese system, which is based on merit, and the Western one, which is based on votes.

Eric Li’s TED Talk: A tale of two political systems

Once you have finished the above video, I encourage you to read the answer to the Quora question included below. You can click on the question to go directly to the Quora site and gain additional insights from the other answers; I just happen to particularly like this one.


Don’t people in China wish to live in a democratic country?

Answered by YN Chen on Quora on Nov 5, 2023

I am a Chinese, have studied in the UK and traveled to many countries.

For me, China is democratic – probably even more democratic than western countries.

Of course, I am referring to the original meaning of the word democracy – the power of the state belongs to the people and the people have the right to rule the government.

Nowadays, democracy in the west often refers to multi-party competition, where the ruling party are elected by universal suffrage.

But this approach has some significant problems. As voters are ordinary people who has no specialized knowledge on managing the country, the core competitiveness of the election process becomes the ability to publicize public opinion, personal affinity, and persuasion, which have little to do with whether they can actually formulate and implement policies well, but are more relevant to the resources of the society and the media operation behind them.

In the west, the rule of the people is in a single choice question of political preference, and the frequency of being able to make a choice is once every four years. If you are the minority voter, you will not be able to get a satisfactory result in those four years.

In contrast, China’s “democracy” works like this:

  1. A huge system of officials that everyone can enter by studying and taking exams – from the smallest local township government to the central government, all within the same pyramid-tested promotion system. For Chinese graduates, it is a very common career selection to pick an official position related to their major from an open government list, take a test on logic and issue processing skills, and become a government official. All newcomers need to start from the basic positions and get enough practical results before they are internally elected with promotion.
  2. The criterion value of the government affairs is “people first”. The most important judgment dimension is whether they can improve the life of the majority and satisfy the people.
  3. Public opinion monitoring and feedback mechanism. All levels of government have set up channels to receive public opinion, such as emails, petitions reception, or social media. For every actual problem, the government must give feedback or specific plans within a period of time; and after a period of time, they must do regular follow-up visits to ensure that the problem has been solved satisfactorily. All this is counted in the KPIs of government staff. If the people are not satisfied with this government’s response, they can complain to a higher level of government, which has absolute power over the next level of government, and the government department complained against will be penalized and monitored.

In China, the rule of the people is in the government’s “people first” evaluation criteria, and in the mechanism of feedback and resolution of specific issues that are highly valued. However, if your opinion is detrimental to the interests of the underprivileged, or if you are not looking for a solution, but simply venting your negative feelings and trying to get more people to share your negative feelings, then your opinion might be refused or ignored, or be deferred in to future considerations.

I think this is why people say: in the west, you can change the government, but you can’t change the policies; however in China, you can’t change the government, but you can change the policies.

Of course, both mechanisms have their own drawbacks. For example, since the core competence of universal suffrage is the ability of influencing public opinion, so having control of the media and enough money is almost equivalent to having a high probability of obtaining the highest power in the country; in China, it is very difficult to make the complex internal promotion completely transparent, and it is not easy for the people to monitor inefficiencies and corruption inside the system.

But for me and at least 80%+ Chinese people, the current one party Chinese government is still very satisfactory.

As for the so-called “Communist Party is not the same as government”: in fact, the CCP is not the same as the Soviet Union type of “communism”, for example, China has its market economy system and is running well. Actually when there is only one political party, the notion of party advocacy would be extremely weakened. In the case of China, people would tend to feel that the Chinese system is more like a parliamentary system even within the government. China is a country with a secular culture, and ideology discussion is not really that important, what matters to this government council is simply about insisting with the people-oriented value, and making people living in better lives.

To be honest, I think that the vast majority of the world’s people don’t care about politics.

People care more about their own lives – whether they can live healthy and happy lives with the people they love, whether the society is fair, safe and free, whether they can enjoy their civil rights as a human being, whether their problems can be solved and whether their dreams can be realized.

Also, I agree that China is better for ordinary people, small and medium-sized entrepreneurs to live in, but not for the extremely rich guys. If you are a rich tycoon or celebrity and has no interest in benefiting ordinary people, then the Chinese government might supervising you with very strict rules, you will have more freedom and power in the West.

But as for me, China is not bad.


Finally, to get a deeper dive, I recommend the following book:

The New China Playbook: Beyond Socialism and Capitalism (by: Keyu Jin)

Wake-on-LAN for Linux

My sons upgraded their gaming computers last Christmas, and I ended up using their old parts to build a couple of Linux servers running Ubuntu Server. The idea is to use these extra servers as video encoders, since they have dedicated GPUs. However, the GPUs are also pretty power-hungry, and since the servers don’t need to be up 24 hours a day, I thought it would be good to keep them asleep until they are required. At the same time, it would be pretty inconvenient to physically power them up whenever I needed them, so the thought of configuring Wake-on-LAN came to mind.

I found this helpful article online. I first confirmed that the network interface on the old motherboards supports Wake-on-LAN (WOL). Below is the series of commands I executed to find out whether WOL is supported and, if so, to enable it.

% sudo nmcli connection show
NAME                UUID                                  TYPE      DEVICE
Wired connection 1  d46c707a-307b-3cb2-8976-f127168f80e6  ethernet  enp2s0

% sudo ethtool enp2s0 | grep -i wake
	Supports Wake-on: pumbg
	Wake-on: d

The line that reads,

Supports Wake-on: pumbg

indicates the WOL capabilities, and the line that reads,

Wake-on: d

indicates its current status. Each letter has a meaning:

  • d (disabled), or
  • triggered by
    • p (PHY activity), 
    • u (unicast activity),
    • m (multicast activity),
    • b (broadcast activity), 
    • a (ARP activity),
    • g (magic packet activity)

We will use the magic packet method. Below are the commands used to enable WOL based on the magic packet trigger.

% sudo nmcli connection modify d46c707a-307b-3cb2-8976-f127168f80e6 802-3-ethernet.wake-on-lan magic

% sudo nmcli connection up d46c707a-307b-3cb2-8976-f127168f80e6
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)

% sudo ethtool enp2s0 | grep -i wake
	Supports Wake-on: pumbg
	Wake-on: g

The above changes will persist even after the machine reboots. We put the machine to sleep by using the following command:

% sudo systemctl suspend

We need the IP address and the MAC address of the machine to wake the computer up using the wakeonlan utility.

% ifconfig
enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.168.185  netmask 255.255.255.0  broadcast 192.168.168.255
        inet6 fd1a:ee9:b47:e840:6cd0:bf9b:2b7e:afb6  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::41bc:2081:3903:5288  prefixlen 64  scopeid 0x20<link>
        inet6 fd1a:ee9:b47:e840:21b9:4a98:dafd:27ee  prefixlen 64  scopeid 0x0<global>
        ether 1c:1b:0d:70:80:84  txqueuelen 1000  (Ethernet)
        RX packets 33852015  bytes 25769211052 (25.7 GB)
        RX errors 0  dropped 128766  overruns 0  frame 0
        TX packets 3724164  bytes 4730498904 (4.7 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

We use the ifconfig output above to find the broadcast address and the MAC (ether) address. Once we have the required information, we can wake the computer up remotely by executing the wakeonlan command from another machine.

% wakeonlan -i 192.168.168.255 -p 4343 1c:1b:0d:70:80:84
Sending magic packet to 192.168.168.185:4343 with 1c:1b:0d:70:80:84

Note that the IP address used above is the broadcast address, not the machine’s direct IP address. Now I can put these servers to sleep and only wake them remotely when I need them.
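Since this will become a regular routine, I will probably wrap it in a tiny helper script on my desktop; a minimal sketch, assuming the wakeonlan package is installed and reusing the addresses above:

#!/bin/sh
# wake-encoder.sh - send the magic packet, then wait until the box answers pings
MAC="1c:1b:0d:70:80:84"
BCAST="192.168.168.255"
HOST="192.168.168.185"

wakeonlan -i "$BCAST" "$MAC"

# Poll until the machine is reachable again after resume
until ping -c 1 -W 1 "$HOST" > /dev/null 2>&1; do
    sleep 2
done
echo "$HOST is awake"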

EXT4-fs Errors on NVMe SSD

In my previous post, I replaced the NVMe boot disk on our media server, thinking that the disk was defective because the file system (EXT4-fs) was reporting numerous htree_dirblock_to_tree:1080 errors.

The errors continued with the new disk, so I can eliminate hardware as the cause of the issue.

I noticed that the htree_dirblock_to_tree:1080 errors were produced by the tar command, and that the times at which they occurred coincided with the media server’s backup window. Apparently, the backup process triggers these errors through tar.

This backup process has remained unchanged for quite some time and has worked really well for us. I guess there is some bug in the kernel or in the tar command that is not quite compatible with NVMe devices.

I had to find an alternative backup methodology and ended up using rsync instead.

sudo rsync --delete \
  --exclude 'dev' \
  --exclude 'proc' \
  --exclude 'sys' \
  --exclude 'tmp' \
  --exclude 'run' \
  --exclude 'mnt' \
  --exclude 'media' \
  --exclude 'cdrom' \
  --exclude 'lost+found' \
  --exclude 'home/kang/log' \
  -aAXv / /mnt/backup

It looks like this method is faster and can perform incremental backups. However, instead of backing up to an archive file, which I would later need to extract and prepare during restoration, I have to back up to a dedicated backup device. Since the old NVMe disk is perfectly fine, I reused it as the backup device and partitioned it with the same layout as the current boot disk.

Device          Start        End    Sectors   Size Type
/dev/sdi1        2048    2203647    2201600     1G Microsoft basic data
/dev/sdi2     2203648 1921875967 1919672320 915.4G Linux filesystem
/dev/sdi3  1921875968 1953523711   31647744  15.1G Linux swap
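The layout can be replicated with the same sfdisk dump-and-restore trick used in the boot disk replacement post below; a sketch, assuming the boot disk is /dev/nvme1n1 and the backup device is /dev/sdi:

# Copy the GPT layout from the boot disk onto the backup device
sudo sfdisk -d /dev/nvme1n1 | sudo sfdisk /dev/sdi

# Then create a fresh filesystem and swap area on the new partitions
sudo mkfs.ext4 /dev/sdi2
sudo mkswap /dev/sdi3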

The only exception is that the first partition is not marked as boot and esp, so during the restoration process I will have to mark that partition accordingly with the parted command, using the following commands:

set 1 boot on
set 1 esp on
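For the record, parted can also set those flags non-interactively, which is handy in a restore script; a sketch, assuming the backup device still shows up as /dev/sdi at restore time:

# Mark partition 1 as bootable and as the EFI system partition
sudo parted /dev/sdi set 1 boot on
sudo parted /dev/sdi set 1 esp on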

The idea is that at 3am every night, I will back up the root filesystem to the second partition of the backup drive. If anything happens to the current boot disk, the backup drive can act as an immediately available replacement, after a grub-install preparation as mentioned in the previous article.
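The nightly run itself is just a cron entry around the rsync command above; a minimal sketch (the wrapper script path and log file are hypothetical):

# /etc/cron.d/root-backup - back up the root filesystem at 03:00 every day
0 3 * * * root /usr/local/sbin/backup-root.sh >> /var/log/backup-root.log 2>&1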

Let us see how this new backup process works and hopefully, we can bid a final farewell to the htree_dirblock_to_tree:1080 errors!

Update: 2023-12-22

It looks like even with the rsync command, the htree_dirblock_to_tree:1080 errors came back during the backup process. I decided to upgrade the kernel from vmlinuz-5.15.0-91-generic to vmlinuz-6.2.0-39-generic. Last night (early morning of 2023-12-23) was the first backup after the kernel upgrade, and no errors were recorded. I hope this behavior persists and is not a one-off.

Replacing NVMe Boot Disk

A few months ago, the boot disk of our media server began to incur errors such as the ones below:

Dec 17 03:01:35 avs kernel: [32515.068669] EXT4-fs error (device nvme1n1p2): htree_dirblock_to_tree:1080: inode #10354778: comm tar: Directory block failed checksum
Dec 17 03:02:35 avs kernel: [32575.183005] EXT4-fs error (device nvme1n1p2): htree_dirblock_to_tree:1080: inode #13500463: comm tar: Directory block failed checksum
Dec 17 03:02:35 avs kernel: [32575.183438] EXT4-fs error (device nvme1n1p2): htree_dirblock_to_tree:1080: inode #13500427: comm tar: Directory block failed checksum

The boot disk is an NVMe device, and I thought the errors might be due to overheating, so I purchased and installed a heat sink. Unfortunately, the errors persisted.

I decided to replace the boot disk with the exact same model, a Samsung 980 Pro 1TB. This should have been a pretty easy maintenance task: clone the drive and swap in the new one. However, Murphy was sure to strike!

My usual go-to cloning utility is Clonezilla; unfortunately, it did not like cloning NVMe drives and ended in a kernel panic across the multiple versions I tried. I am not sure what the problem is. It could be Clonezilla or the USB 3.0 NVMe enclosure I was using for the new disk.

I resigned myself to using the dd command:

dd if=/dev/source of=/dev/target status=progress

Unfortunately, this would have taken way too long, something like 20+ hours, so I gave up on this approach.

I decided to do a good old restore of the nightly backup. I started by cloning the partition table:

sfdisk -d /dev/olddisk | sfdisk /dev/newdisk

I then proceeded with the restore of the nightly backup. Murphy strikes twice! The nightly backup was corrupted! I guess that is not surprising when the root directory’s integrity is in question, which is the whole reason we are doing this exercise.

Without the nightly backup, I had to resort to a live backup. I booted the system again and performed:

sudo su -
mount /dev/new_disk_root_partition /mnt/newboot
cd /
tar -cvpf - --exclude=/tmp --exclude=/home/kang/log --exclude=/span --exclude="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache" --one-file-system / | tar xvpf - -C /mnt/newboot --numeric-owner

The above took about an hour. I then copied the /span directory manually, because it tends to change while the server is up and running.

With all the contents copied, I realized I had forgotten how to install GRUB and had to re-teach myself. I booted the machine with a live Ubuntu USB and then mounted both the root and EFI partitions:

nvme1n1                              259:0    0 931.5G  0 disk
├─nvme1n1p1                          259:1    0     1G  0 part  /boot/efi
├─nvme1n1p2                          259:2    0 915.4G  0 part  /
└─nvme1n1p3                          259:3    0  15.1G  0 part  [SWAP]

And install GRUB.

sudo su -
mkdir /efi
mount /dev/nvme1n1p1 /efi
mount /dev/nvme1n1p2 /mnt
grub-install --efi-directory /efi --root-directory /mnt

I also had to fix /etc/fstab to ensure the root partition and the /boot/efi partition are referenced by their correct UUIDs. The blkid command came in handy for finding the UUIDs. For the swap partition, I had to run mkswap before I could get its UUID.
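The fstab fix-up amounts to grabbing the new UUIDs with blkid and substituting them in; a sketch of what that looks like (the UUIDs shown are placeholders):

# List the UUIDs of the new partitions
sudo blkid /dev/nvme1n1p1 /dev/nvme1n1p2 /dev/nvme1n1p3

# /etc/fstab then references those UUIDs, for example:
# UUID=XXXX-XXXX                             /boot/efi  vfat  umask=0077         0 1
# UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /          ext4  errors=remount-ro  0 1
# UUID=11111111-2222-3333-4444-555555555555  none       swap  sw                 0 0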

After I rebooted, I reinstalled GRUB one more time with the following command as superuser:

grub-install /dev/nvme1n1

I also updated the initramfs using:

update-initramfs -c -k all

For something that should have taken less than an hour, it took the majority of the day. The server is now running with the new NVMe replacement disk. Hopefully this resolves the file system corruption; we will have to wait and see!

Update: The Day After

The same errors occurred again! I noticed that these corruptions happen when we do a system backup. How ironic! I later confirmed that running the tar command on the root directory during the backup process can cause such an error. I now have to figure out why. I will disable the system backup for the next few days to see whether the errors come back.