Linux Boot with No Networking

GLOTRENDS PA09-HS M.2 NVMe to PCIe 4.0 X4 Adapter

I recently wanted to install an M.2 NVMe to PCIe 4.0 X4 adapter on an existing server. The idea was to add a new NVMe SSD, but the motherboard had no more M.2 sockets available.

The server is running Proxmox with Linux kernel 6.8.12. I thought this would be a 15-minute exercise. How wrong I was. After installing the hardware, the system booted up, but there was no network access. This was especially painful because I could no longer remote into the server. I had to go pull out an old monitor and keyboard and perform diagnostics at the console.

I used the journalctl command to diagnose the issue.
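
A command along these lines surfaces the messages logged by the networking service (-b limits output to the current boot, -u to the named unit):

journalctl -b -u networking

Scanning the output, I found the following entries: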

Feb 01 13:36:21 pvproxmox networking[1338]: error: vmbr0: bridge port enp6s0 does not exist
Feb 01 13:36:21 pvproxmox networking[1338]: warning: vmbr0: apply bridge ports settings: bridge configuration failed (missing ports)
Feb 01 13:36:21 pvproxmox /usr/sbin/ifup[1338]: error: vmbr0: bridge port enp6s0 does not exist
Feb 01 13:36:21 pvproxmox /usr/sbin/ifup[1338]: warning: vmbr0: apply bridge ports settings: bridge configuration failed (missing ports)

The above error messages indicate that enp6s0 no longer exists.
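
Listing the interfaces confirms which names actually exist now (ip is part of the standard iproute2 tooling; -br gives a brief one-line-per-interface view):

ip -br link show

When I looked at earlier messages in the journal, I noticed this one: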

Feb 01 13:36:15 pvproxmox kernel: r8169 0000:07:00.0 enp7s0: renamed from eth0

It looks like the interface name changed from enp6s0 to enp7s0. This makes sense: the predictable name enpXsY encodes the PCI bus and slot, and adding the new PCIe adapter shifted the NIC from bus 6 to bus 7. The remedy is therefore to edit /etc/network/interfaces to reflect the name change. Below is the new content of the file.

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp7s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.188.2/24
        gateway 192.168.188.1
        bridge-ports enp7s0
        bridge-stp off
        bridge-fd 0

iface wlp5s0 inet manual
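
After saving the file, the new configuration can be applied without a reboot. Proxmox uses ifupdown2, so (assuming that package is intact) a reload is sufficient:

ifreload -a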

This would have been very annoying if the old interface name were used in many other configuration files. One other reference I found on the Internet (https://www.baeldung.com/linux/rename-network-interface) details a way to pin the network interface name using udev rules. I did not try this, but it is something to keep in mind for the future.
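
For reference, such a rule would look roughly like the following (untested on my part; the MAC address is a placeholder for the NIC's real one), placed in a file such as /etc/udev/rules.d/70-persistent-net.rules:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="enp6s0"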

In a previous post, on another home server, I fixed the name using netplan, but Proxmox does not use it.

Simple File Transfer – NOT

Recently I needed to transfer a binary file from another household to my server. We wanted the transfer to remain private because the file contains sensitive content.

In the past, I set up a WebDAV server using Apache 2.4.

First, I had to enable the DAV modules with the following commands on my Ubuntu server:

sudo a2enmod dav
sudo a2enmod dav_fs
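
The configuration below also relies on mod_rewrite and mod_headers. If they are not already enabled, the same tool handles them, followed by a restart to load everything:

sudo a2enmod rewrite headers
sudo systemctl restart apache2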

I already had a directory set up on my file system at /mnt/Sites/public_share. I made the following changes to my Apache configuration files.

<VirtualHost *:80>
    ServerName share.lufamily.ca
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://share.lufamily.ca$1 [R=301,L]
</VirtualHost>

<VirtualHost *:443>
    ServerName share.lufamily.ca
    ServerAdmin xxxxxxxx@gmail.com
    DocumentRoot /mnt/Sites/public_share

    <Directory /mnt/Sites/public_share>
        AllowOverride All
    </Directory>

    <Location />
        AuthType None
        DAV On
        Options +Indexes
        RewriteEngine off
    </Location>

    Include /home/xxxxx....xxxxxxx/ssl.lufamily.ca
</VirtualHost>

I did not set up any authentication, because I restricted access to this directory with an .htaccess override file containing the following:

<IfModule mod_headers.c>
    Header set X-XSS-Protection "1; mode=block"
    Header always append X-Frame-Options SAMEORIGIN
    Header set X-Content-Type-Options nosniff
    Header set X-Robots-Tag "noindex, nofollow"
</IfModule>

<Files ".htaccess">
  Order Allow,Deny
  Deny from all
</Files>

<RequireAny>
    Require ip 192.168.0.0/16
    Require ip 172.16.0.0/12
    Require ip 10.0.0.0/8

    # Sending computer external IP
    Require ip AAA.BBB.CCC.DDD
</RequireAny>

With the above setup, the other party just needs to open Finder on macOS or File Explorer on Windows, connect to https://share.lufamily.ca, and copy, delete, and open files as they normally would. Access remains private because it is restricted to their external IP address. With macOS, copying many gigabytes via WebDAV posed no issues.
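
For what it is worth, a Linux client can mount the same share from the command line. A minimal sketch, assuming the davfs2 package is installed and /mnt/dav exists:

sudo mount -t davfs https://share.lufamily.ca /mnt/dav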

Unfortunately, Windows is another matter. It worked for small files, but for files in the gigabyte range, Windows appeared stuck at 99% complete. The Windows WebDAV client caches the transfer locally and reports 99% almost immediately, while the physical upload continues in the background. The actual copy across the Internet takes so long that Windows gets confused, decides it is copying a file that already exists, and throws an unwanted error.

I had to come up with an alternative. We briefly dabbled with the idea of using FTP, but after a few minutes it was clearly a non-starter. FTP passive mode requires a range of ports to be opened on my firewall, which is unrealistic as a long-term solution.

SFTP is a secure file-transfer protocol that runs over SSH, so my existing OpenSSH server can provide it. I also like this technique because access is governed by a pair of SSH keys: the private key stays with the remote user, and the public key is used to configure SSH on my server. I set up an SSH user called sftpuser. To restrict this user to SFTP-only access, I made the following changes to the sshd configuration file, /etc/ssh/sshd_config.

# Use the in-process internal-sftp server
Subsystem sftp internal-sftp

# Restrict the local user sftpuser to SFTP only
Match User sftpuser
    ChrootDirectory /home/sftpuser
    PasswordAuthentication no
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
    AllowAgentForwarding no
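
After editing sshd_config, it is prudent to validate the syntax before restarting the daemon, since a broken configuration can lock you out. On Ubuntu:

sudo sshd -t
sudo systemctl restart ssh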

I then created the sftpuser and set up the chroot-friendly directory layout using the following commands:

sudo adduser sftpuser
sudo chown root:root /home/sftpuser
sudo mkdir /home/sftpuser/uploads
sudo chown sftpuser:sftpuser /home/sftpuser/uploads
sudo chmod -R 0755 /home/sftpuser/uploads

This user will not be able to log in to a shell and can only use SFTP. I also disabled password authentication just in case. For the remote party to upload the file, they will need to provide a public SSH key, which is stored in the .ssh/authorized_keys file under the sftpuser home directory. Its contents look something like this:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCliK6NZx6JJBcK0+1GtEe8H6QpN1BHDRgq/vtiEAfwzcjN1dBtQhfplyDxEXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXF+OLV9qWMsE/g+1H4oyLRqzQnD8w7S4RBUJzrrZIpLEzYRf43pWSW9Y3220swlIEYxIOIcJIc8prgzDbECt3CR/BsRDYNZA5uxdPYLwh1YtTX8GEqoctJifLrC4OomKkczDek9k/MHdFbWZ0LdK3AB287nr/Q4Lb8GgfU3bEhF+AMSWM8r/OHC1QBPYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYbH8npyFsC3rADnjfFsB4VkkiNDDIZbZkV2vBf3sJ49Q1Y3uHugWxITWImKjfl+YUdGMalbSfP8UueKSx3sDGQQDXZjzrwnX3KPie0Qiz2rQtrppB7dA5CvOb86Q== guest

The above is just a single line in the file.
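
Because the chroot requires /home/sftpuser itself to be owned by root, the .ssh directory must be created by root and then handed to the user. Something along these lines, where guest_key.pub is a placeholder for whatever file the remote party sends:

sudo mkdir -p /home/sftpuser/.ssh
sudo tee -a /home/sftpuser/.ssh/authorized_keys < guest_key.pub
sudo chown -R sftpuser:sftpuser /home/sftpuser/.ssh
sudo chmod 700 /home/sftpuser/.ssh
sudo chmod 600 /home/sftpuser/.ssh/authorized_keys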

With the above setup, a Linux user can simply do the following to transfer a file to my server securely:

sftp -P55522 sftpuser@lufamily.ca <<< 'put /usr/bin/bash uploads/sample.bin'

The above command will upload the bash binary to my server.

An attacker trying to login using ssh will get the following:

❯ ssh -p 55522 sftpuser@lufamily.ca
This service allows sftp connections only.
Connection to lufamily.ca closed.

On Linux or macOS, the remote user can use ssh-keygen to create the key pair; the public key by default resides in ~/.ssh/id_rsa.pub. All I need to do is copy the contents of the public key and add it to sftpuser's .ssh/authorized_keys.

For Windows users, they can generate the key using Windows PowerShell. Below is an example:

> ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (C:\Users\kang/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in C:\Users\kang/.ssh/id_rsa
Your public key has been saved in C:\Users\kang/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:hV6vcChUwpxXXXXXXXXXXXXXXXXXXXXXXXXXX0aTkJZ2M kang@win10
The key's randomart image is:
+---[RSA 4096]----+
|  . Eo.==..      |
|   * *+++=+      |
|  . @ oo.=+* .   |
| . = o..B+=.* .  |
|  .   .oSO.o..   |
|       ..oo.     |
|          .      |
|                 |
|                 |
+----[SHA256]-----+

> cat .\.ssh\id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZQQWgIVShifqFxq78MWQEJrM2xrVQXlPHUncNosEm6P/l0LdWu1nRbIccKMNsmpPK7JOv9XF+CsrtlltnhwDqiuflCGftzhrlmBz8BOJRiwD0Fl1IfQ+Qg7Z1nvIo6+kpkBw7SGPN7fbJxDPPHmc9iPB4RnlG46v6ymd4KM0h1cGlReCly2PTxTG1dcPuDbrBIIdEHoN/40hojrooQf+cQNprvYZY59EjvC0NoZsfiKGDHHq3S7HRPGns9Oo4y8vFl1DrJZFIvBVdjjL28JsmIdeKbMhCynkzIkPLPvsiplxkEF0RQ9fFcIsucuD8leJmMDNPas+8EdueQ== kang@win10

To copy a binary, you can do the following:

> sftp -P55522 sftpuser@lufamily.ca
Connected to lufamily.ca.
sftp> put "C:\Windows\System32\tar.exe" uploads/junk.exe
Uploading C:/Windows/System32/tar.exe to /uploads/junk.exe
tar.exe                                                                               100%   54KB  13.1MB/s   00:00
sftp> ls
uploads
sftp> cd uploads
sftp> ls
junk.exe    sample.bin
sftp>

The above is very similar to the experience on Linux and macOS. Windows and its PowerShell have come a long way in adopting POSIX-like capabilities.

For those who want to use WinSCP, a much nicer GUI on Windows, you will need to convert the .ssh/id_rsa private key into PuTTY's .ppk format. Use the command below to achieve this.

"c:\Program Files (x86)\WinSCP\WinSCP.com" /keygen id_rsa /output=id_rsa.ppk

You can then set up WinSCP authentication and load the ppk file.

So what I thought would be a simple matter turned out to be quite a deep rabbit hole. Hopefully with this in place, future transfers can be done quite quickly and securely.

Possible Future of the Connected Watch

On Tuesday of this week, Apple announced the Apple Watch Series 3 with the ability to connect to the LTE data network, allowing the watch to stay connected to the Internet without an accompanying iPhone. This greatly enhances its functionality and removes its original handicap: the requirement to always be tethered to the phone. You can now get notifications and listen to the plethora of songs on Apple Music while on the go without your phone. The future is here.

Ever since the Apple Watch was released in the spring of 2015, it has been thought of as a companion device to the phone. However, I always thought it should be the other way around. The phone and tablet should be the companion devices to the watch!

The watch should be the only device with mobile networking radios, operating in an always-on manner while sharing a personal hotspot via its WiFi radio. Phones and tablets can then function as displays with WiFi connectivity to your watch. This would also simplify your cellular data plans. I just see this as a more convenient setup, and hopefully a cheaper way to go with your mobile carrier.

As the watch's memory capacity increases, personal identity, application data, and other confidential information can be stored on it, akin to the secure enclave on the iPhone today. This way, display-centric devices slaved to the watch can restore your last working state from it. Imagine a world where display slates are near-commodity devices sans your personal information. You can work with a shared slate in the office, all the while your data is centralized and stored securely on the watch. When you travel offsite, at the airport or in a hotel, you pick up another shared slate and continue where you left off.

It is also more difficult to lose your watch than your phone. When you do misplace your phone and cannot find it, just pick up another, because your personal data is stored on your watch.

Power consumption is probably going to be a major challenge for the watch in this scenario. But if the power challenge can be solved, then imagine having only WiFi display slates in any size of your choosing, with your watch carrying the only mobile data radio you need. Instead of the phone, the watch becomes your most personalized information device. I hope Apple has this vision in mind. Do you believe this to be a better future?

New iPhone 7 with A Big Scare

I just received my new black iPhone 7 today from the office. My first impression was that the Black (not Jet Black) finish is very nice. The black colour melds with the antenna bands, rendering them invisible. If the previous iPhone 6s had offered this colour scheme, I would have chosen it as well.

With every new iPhone, I perform the ritual of backing up my old iPhone and restoring the backup onto the new one. This time, however, I ran into a glitch that nearly gave me a heart attack!

Right after choosing “Restore from iCloud”, the new iPhone 7 informed me that I should update to 10.0.1. I guessed it was shipped with 10.0.0. I did not think much of it and, of course, proceeded with the upgrade. The upgrade completed without incident. What gave me a really unpleasant surprise was that it did not perform the restore! Arghh!!!

I kept calm and went to the iCloud settings, where to my surprise I found that iCloud Backup was turned off. I said to myself, “No big deal,” and went ahead and turned iCloud Backup on. Then the dreaded words “Last Backup: Never” came up. WHAT?!

These are the times when having more than one Apple device really helps. On my iMac, I went through System Preferences and made sure that the backup I had just performed on my iPhone 6s was intact. Sure enough, it was there. Whew! I decided to erase the new iPhone 7 and restart from scratch. The second time around, it found the backup and is now restoring. Fingers crossed, let's hope the restore goes well!

Another interesting thing is that I have two-factor authentication turned on, and my old iPhone 6s is the default trusted device. While setting up the new iPhone 7, I had to use my iPad Air to authenticate. Yet another case for having multiple iDevices handy.

Okay, I am exaggerating the perils of the above situation. In the worst case, I still had my old iPhone 6s, which I could plug into iTunes to perform a backup. Anyhow, everything is restoring now. I am awaiting my iPhone 7 so I can start playing with my new toy!