Home Lab III

The main cost with every new host is that it needs a monitored, smart uninterruptible power supply (UPS), preferably a dedicated one, given the limitation of a one-to-one physical connection to a single host. Securing multiple hosts where one provides the UPS monitoring data to the others can introduce points of failure that prevent the graceful shutdown of those other hosts, though I have set that up, untested, with my fingers crossed.
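For illustration, here is a minimal Network UPS Tools (NUT) sketch of that shared-monitoring arrangement; the UPS name “apc”, the host address and the credentials are placeholders, not my actual configuration:

# on the host with the USB cable to the UPS: /etc/nut/upsd.users
[monuser]
  password = secretpass
  upsmon secondary

# on each dependent host: /etc/nut/nut.conf
MODE=netclient

# on each dependent host: /etc/nut/upsmon.conf (use "slave" instead of "secondary" on older NUT releases)
MONITOR apc@192.168.10.5 1 monuser secretpass secondary

If the host serving the UPS data goes down, or the network between them does, the dependent hosts never receive the low-battery event, which is exactly the failure mode described above.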

This is why one computer that does the work of four appeals to me: say, one with 256 GB RAM, a 48 virtual core processor, and many more PCIe lanes for multi-port network interface and storage controllers that can be passed through to virtual machines. It might be prudent to consolidate even if such a system costs as much as the sum of the individual computers, since it eliminates the hassle of sharing a UPS, likely consumes less electricity with lower heat dissipation than four physical PCs, and allows greater runtime on battery power.

A reputable, well-supported, smart and especially a rack-mountable UPS is typically obtained through a B2B reseller, which translates to higher costs and hassle for both the initial acquisition and periodic battery replacement.

Consider these additional “costs” as well:

You need a cool, secure space away from possible water damage, with hardware either locked up or at least without accessible removable external storage that is easy to steal.

To access a PC at the console for maintenance, you might also need an IP KVM, usually one per PC, especially if the hosts are distributed due to space constraints or to mitigate the risk of physical loss. I use slower, somewhat glitchy, host bus-powered single-port nano KVMs for occasional management rather than an expensive multi-port unit, something I only ever had in the wired VGA/USB era. A higher-tier Pi KVM is reserved for accessing the Intel-based Mac mini server remotely, since there is no Apple Remote Desktop (ARD) client for non-Macs and a VNC viewer does not scale the display, resulting in a scrolling mess.

This of course makes sense only if one does not require physically separate hosts, such as for high availability or locational diversity. Having no remote location except a t2.micro (1 GB RAM) FreeBSD instance on AWS, I have chosen to separate hosts across floors, coupled with automated off-site backup. The basement utility closet with the water supply has a wall-mounted (hence off the floor) mini rack for the essentials at the utility point of entry, whereas the larger and noisier equipment, like a NAS with 7200 RPM spinning drives and a Proxmox host, sits across the passage on the same floor on a non-conductive wood stand with feet.

File-level backups of configuration files from the various services land on the NAS and are synced to TrueNAS SCALE (see Home Lab II) as a second copy two floors up. ZFS snapshots of that copy will go to TrueNAS CORE starting this quarter, once I order, receive and set up a TrueNAS Mini R in the full-sized rack upstairs: a very costly unit once outfitted with under-provisioned, power-safe, data centre grade SATA SSDs for performance storage in addition to quiet NAS HDDs.
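The snapshot hop is the standard ZFS send/receive pattern; in practice TrueNAS wraps it in a Replication Task, but a hand-rolled sketch looks roughly like this (pool, dataset and hostnames are placeholders):

# take a snapshot, then send only the delta since the previous one
zfs snapshot tank/configs@weekly-2025-01
zfs send -i tank/configs@weekly-2024-52 tank/configs@weekly-2025-01 | \
  ssh truenas-core zfs receive -F backup/configs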

I would not put noisier surveillance HDDs in a NAS unit that I can already hear from across the hallway, considering the ambient sound level is 32 dB without it and the Mini R is rated at 45 dB. Noise is a price I don’t ever pay, so I might have to move systems around if the security NVR project ever gets off the ground.

Off-site backups are automatically uploaded on a schedule to one or more cloud storage providers. There is often an additional cost for proprietary cloud backup when products are dissimilar, given that even S3-compatible storage integration does not work as universally as intended.
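When a provider’s proprietary client is not worth the cost, rclone is the generic fallback I would reach for; a hedged sketch, with the remote name, bucket and paths as placeholders:

# one-time: rclone config   (define an S3-compatible remote named "offsite")
rclone sync /mnt/tank/backups offsite:homelab-backups --transfers 8 --checksum
# crontab entry for a nightly run at 03:00
0 3 * * * /usr/bin/rclone sync /mnt/tank/backups offsite:homelab-backups --log-file /var/log/rclone.log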

Home Lab II

I added a new PC (System76 Thelio Prime) as a node to form a Proxmox cluster, and in doing so expanded the services and robustness of my home lab. It is equipped similarly to the original host, with 24 virtual CPU cores and 64 GB RAM, except that it is an AMD Ryzen 9900X with PCIe 5.0 for higher IOPS.

The second NVMe slot on the mini-ITX motherboard being PCIe 4.0, and hence likely in a separate IOMMU group, gave me the unexpected advantage of being able to make it my future network-attached SSD by passing that controller through to TrueNAS in a VM, hosting Nextcloud. This makes for an insanely powerful NAS compared to the Synology DS-1019+. A similarly powerful pre-built one would have cost thousands of dollars just to get the processing capability, and would otherwise be overkill in the number of drive bays, and thus in fan noise and power consumption. A single backed-up SSD is adequate for me; and as a courtesy reminder, RAID is not a backup plan.
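For what it’s worth, handing that NVMe controller to the TrueNAS guest is a one-liner on the Proxmox side once IOMMU is enabled; the VM ID and PCI address below are placeholders, not my actual values:

lspci -nn | grep -i nvme                  # find the controller's PCI address
qm set 100 -hostpci0 0000:02:00.0,pcie=1  # pcie=1 assumes a q35 machine type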

Next year will add a performance tier of onsite backups using ZFS snapshots to a TrueNAS Mini R RAIDZ pool (3 × 6 TB), in addition to the existing file-level onsite backups to a Synology DS-1019+ Btrfs pool (5 × 4 TB) over NFS.

Passing through the NVIDIA 4060 Ti GPU enabled a gorgeous Ubuntu remote desktop for when I need to work in Linux, so I don’t have to rely solely on the Windows Subsystem for Linux (WSL) on my portable computer. That setup proved unstable over RDP and has been set aside as of this writing. I use Spiral Linux (or Bodhi Linux for something slimmer) if I need a GUI but not necessarily a remote desktop.

These are of course just some of the building blocks and more services are becoming production ready for 2025.

Lessons learnt:

My next PC would definitely not have a motherboard that maxes out at 64 GB (effectively just over 60 GB usable) of RAM. 128 GB is more appropriate; otherwise you are leaving processor on the table for lack of RAM to utilize it under typical loads. The computer hardware itself is the relatively easy part; the bigger consideration when acquiring an additional host is power, both its consumption and its reliable availability, plus accessories, as I will discuss in my next post.

Home Lab I

I had bought a new PC in 2022 to revisit Linux after my initial troubles a quarter century prior.

The fan noise of the computer (a Dell XPS 8950) never let me use it stationed next to me for long. It went on a shelf in the rack cabinet as a Wake-on-LAN (WoL), remote access (RDP) Windows OS device. RDP on desktop-focused Linux distros did not work in a typical headless setup, i.e. without both a dummy display plug and a sacrilegious auto-login user. The PC in any event seemed wastefully underutilized; I have reservations about using capable hardware (24 virtual CPU cores) for trivial purposes, which made me both uncomfortable tolerating its power draw (Intel i9-12900) and resistant to the grating noise.
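That workflow was simply a magic packet followed by an RDP session; a minimal sketch from any Linux box on the same LAN, with the MAC address as a placeholder:

sudo apt-get install wakeonlan
wakeonlan 3C:7C:3F:AA:BB:CC   # then connect over RDP once the PC is up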

I was recently intrigued by a how-to for the Type 1 hypervisor Proxmox, which opened up a new world of PCIe passthrough: concurrent use of the Windows 11 OS with the NVIDIA 3060 Ti passed through, delivering an RDP experience equivalent to when that OS ran on bare metal, alongside one or more virtual machines (VMs) of Debian or any other Linux distro used as servers without a local GUI, using the virtualized on-board Intel graphics.
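For reference, the host-side preparation such passthrough guides walk you through is roughly the following on an Intel board; treat it as a sketch rather than my exact configuration, and note that hosts booting via systemd-boot use proxmox-boot-tool rather than update-grub:

# /etc/default/grub: enable the IOMMU (Intel)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules: load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

update-grub && reboot
# afterwards, confirm devices landed in separate IOMMU groups
find /sys/kernel/iommu_groups/ -type l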

I could now somewhat justify the existence and power draw, with the PC being productive 24×7, the ability to remotely fire up the Windows OS on demand, and the freedom to run experimental VMs alongside. Next was getting the PC to run quieter yet cooler, for which the quickest fix was replacing both 120 mm chassis fans with Noctua fans.
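Firing up the Windows guest on demand then amounts to a one-liner against the Proxmox host, with a matching clean shutdown when done; the hostname and VM ID below are placeholders:

ssh root@proxmox 'qm start 101'      # boot the Windows 11 VM on demand
ssh root@proxmox 'qm shutdown 101'   # ACPI shutdown when finished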

I tend to use hardware that was cutting edge two years ago, usually over-spec for the job but running at or below spec for reliability, so it is burned in and has mature Linux kernel support by the time it is deployed. This one was unusually new, given that the model underwent a motherboard change in its first year.

This is the kind of productivity I have always wanted: access to multiple machines over RDP and SSH, running in virtual desktops on my Windows on Arm portable, that I can swipe through, snapshot, roll back, back up and restore.

Year of the (Snap)dragon ’24

The most significant change in CPU architecture in my almost four decades of computing comes in the form of the Qualcomm-made (remember Eudora Pro?) SoC, the Snapdragon X Elite.

I had always wondered about RISC, having read about DEC Alpha, MIPS, Sun SPARC and Intel IA-64. None of those, if at all attainable, ran consumer operating systems. My first non-x86 experience was with the Apple M1 Pro-based MacBook Pro in 2021, which by 2024 had software support to virtualize the Windows on Arm OS. I was amazed at how smoothly x86-64 Windows apps ran under it and decided to go bare metal with the Samsung Book4 Edge.

I am quite certain after six weeks that this is a great choice for right now, since it has WiFi 7 and, as a Copilot+ PC, early access to Windows 11 24H2 features; otherwise the Samsung Book4 Ultra with Intel Meteor Lake and WiFi 6E [or the upcoming Book5 360 with Intel Lunar Lake and WiFi 7] would have served me better for native software availability.

Linux Revisited 2023

I bought a PC in the summer of 2022, after 15 years, with the intention of re-exploring Linux on a separate SSD. The hardware being Secure Boot capable, shipping with the Microsoft Windows 11 OS and having an NVIDIA graphics card narrowed the choice of Linux distribution to what literally “sort of works” with that combination, as I write this in early 2023.

I started with the universal choice: Ubuntu seemed to install fine, including the proprietary NVIDIA drivers with Secure Boot, but would not boot up post-install. Pop!_OS was pretty much the same, except it did not support Secure Boot.

Fedora Linux showed the most promise, but it was only after weeks of struggling that the Arch Linux wiki helped me wrangle a display from the graphical login manager. Rocky Linux and RHEL were a smoother experience, save for the suffocating dearth of basic software.

I have maintained since my first experience with Red Hat Linux in 1999 that Linux distributions, and by extension Android, are a hack job. The fragmentation in Linux led me to FreeBSD (for servers) twenty years ago. For a hot backup OS, I have a perfected Windows 11 Pro image on the original SSD that accesses the same external NAS data as the Linux install.

Read my follow-up post.

Should I profess my love?

If a person is clearly available and has not explicitly excluded you as a potential suitor, then conventional wisdom suggests: of course! The worst answer you can get for asking is “No”, and so you would be no worse off than if you had not asked. That is simply not the case in many situations, because one is usually not hitting on a stranger when professing one’s love, and so one risks jeopardising an existing relationship with this person.

So, well, “No”, and possibly they then immaturely go off and tell everyone that you hit on them (adding whatever makes you sound creepy) and to avoid you. In any event, there is a more evolved reason than rejection, shaming and embarrassment to keep one’s feelings to oneself.

It is about not burdening another person with the weight of your feelings for them just so you can get it off your chest, when there are clear indicators that this person would not, for whatever reason, be interested, able or willing to reciprocate; when you know it would not change a thing except the satisfaction of having vented, while risking the loss of the proverbial bird in hand.

PC vs Mac

My biggest argument for switching to a Mac 15 years ago was that I would rather be hardware-constrained by Apple than software-constrained by Microsoft. A lot has changed with Macs, from the T2 chip post-2015 to on-device scanning of images in 2021, plus forced obsolescence of software and, with Apple Silicon, missing macOS features on Intel hardware. I am now both hardware- and software-constrained by Apple.

The last straw was when, as per my previous post on this subject —

I would be dependent on Apple to release parts to me at their discretion and have to needlessly suffer downtime

Mac vs PC

Apple, in its high-handedness, refused to replace my Watch battery, since they require that the battery be down to 79% of its charging capacity before they will authorize a replacement.

The cost of Apple hardware apart, macOS holds little appeal.

Ownership
I would literally rather run Windows 11 on a Mac than macOS on a PC, considering how much slack Microsoft has recently allowed in activating Windows, thereby ensuring a user’s data isn’t held ransom when offline. In contrast, many essential third-party macOS apps have adopted subscriptions and online authentication to log in to locally installed apps, preventing or limiting their offline use.

Usability
macOS Messages and Mail show no contact names without native CardDAV support, whereas on Windows it can at least be hacked in to perform relatively flawlessly. Mail app activity never ceases when one or more non-Exchange Microsoft accounts are added, and it has been that way since Mac OS X Snow Leopard. Since then, Windows OS has evolved more into what Mac OS X Tiger and Leopard used to be: “it just works”.

Reliability
The infamous BSOD (Blue Screen of Death) or STOP error now seems rarer on Windows than a kernel panic on macOS since Catalina, and the odds of the latter seem to go up with additional processor cores. The sleep mode that Windows 98 never woke from is a thing of the past, whereas it’s the Mac Pro with macOS Big Sur that doesn’t automatically sleep when ‘Power Nap’ is disabled.

Performance
Mac Pro hardware feels underpowered, or at least not as optimized, judging by how slow Finder is at file operations over the network (NFS and especially SMB) compared to Windows Explorer. Add to that, both 3 x 3 MIMO 802.11ac on the Intel Mac Pro and 2 x 2 MIMO 802.11ax on the Apple M1 Pro MacBook Pro far underperform the 2 x 2 MIMO 802.11ax “Killer WiFi” in my Dell PC. Apple Remote Desktop and Screen Sharing also refresh slower and with poorer graphics quality to a host on 2.5 Gbps wired Ethernet than Microsoft Remote Desktop does to a host on 5 GHz WiFi 6, each accessed from a client on 5 GHz WiFi 6.

Recovery
I no longer find my previous objection, having to reinstall and reconfigure the operating system, updates and applications from scratch, a deal breaker in choosing the Windows OS. All considered, disk imaging is the least painful and most bulletproof [backup and] restore strategy for any modern OS.

Reactive TV Lights

Ambilight just too expensive or not cutting it for you? Same. I did it myself, and like most of my projects it involves a Raspberry Pi.

Here’s what you’ll need for my setup:

  • Raspberry Pi 4B
  • Pimoroni Mote LED Kit
  • 4K HDMI Splitter
  • HDMI-to-USB 3.0 Capture Card
  • Lots of USB outlets and cables, some included and some not.

I used HyperBian, which is basically Raspbian pre-flashed with Hyperion. This handy image does 90% of the work, so take a moment to appreciate how much you don’t have to do.

Note: Python 2.x has been deprecated, and I have therefore updated the scripts/syntax below to reflect that as needed.

I’m not gonna mention the parts that are standard, like setting up Mote lights. I will go over the wiring though:

Media > Splitter > TV (first splitter output)
Media > Splitter > HDMI Capture Card > Pi (second splitter output)

First, follow these instructions. I’ve listed the relevant portions below as well.

sudo apt-get install python3-dev python3-pip
sudo pip3 install twisted
sudo pip3 install pyserial mote numpy
git clone https://github.com/PaulWebster/artnet-unicorn-hat.git
cd artnet-unicorn-hat
sudo nano artnet-unicorn-hat-mote/artnet-server.service
sudo cp artnet-unicorn-hat-mote/artnet-server.service /lib/systemd/system/artnet-server.service
sudo systemctl enable artnet-server
sudo systemctl start artnet-server

Before copying over artnet-server.service, modify it with nano and make sure the path to the script matches where it lives on your installation.

For me, that meant changing the user folder in the ExecStart line from “osmc” to “pi”. The stock unit file looks like this:

[Unit]
Description = Artnet/OPC/FadeCandy Server - control of LEDs

[Service]
Type = idle
ExecStart = /usr/bin/python3 /home/osmc/artnet-server.py

[Install]
WantedBy = multi-user.target

When you start or enable the service, you will probably also hit this error: libf77blas.so.3: cannot open shared object file: No such file or directory.

This can be easily fixed by running the following command:

sudo apt-get install libatlas-base-dev

Go to the Hyperion dashboard. It should be at localhost:8090 on your Pi. Set the capture input to the detected USB device (the capture card), and the LED output format to “fadecandy”. Set the number of pixels to “64”. Save the settings and everything should work! Don’t forget to configure the lighting orientation to match the placement of your lights!
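If the capture card does not show up in that dropdown, a quick sanity check from the Pi’s shell is the following; /dev/video0 is an assumption and may differ on your system:

sudo apt-get install v4l-utils
v4l2-ctl --list-devices
v4l2-ctl -d /dev/video0 --list-formats-ext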

Build an Autonomous Unmanned Aerial Vehicle

Earlier this year, I set my heart on building a drone that can fly itself. I know I’m not the best pilot, so I wanted something that could more-or-less take care of things on its own. It was (and is) a very long road, which started with me buying a Raspberry Pi 4B.

Since then, I’ve constructed a vehicle that can hold an extremely steady position a few feet above the ground. That’s about the extent of its autopilot so far, but that is a remarkable thing in itself. Your standard remote-control quadcopters may be able to “stabilize” themselves in the air, but they have no idea where they are in relation to the world around them and therefore can’t “stand still”. If the wind blows them, they’ll drift unless the pilot intervenes. It may sound like a small difference but it’s much more substantial when you’re working with precise movement in an area full of obstacles.

Let me also take a moment to say that if anyone reading this encounters issues with any of the parts or workflows mentioned below, leave a comment and I’ll be happy to help out. So, let me list the salient components of my project (which I’ve christened W.A.T.N.E., in honour of the character Mark Watney from the novel The Martian by Andy Weir).

  • The Frame – DJI Flame Wheel F450 ARF Kit. It came with motors, electronic speed controllers (ESCs), a sturdy set of arms, and a top plate + bottom plate.
  • The Flight Controller – Navio2. Most people opt to buy a standalone FC and connect a “companion computer” (usually an SBC like a Raspberry Pi or Nvidia Jetson Nano) over UART. I took the path of least resistance and got what I knew best: a Raspberry Pi Hat. RPi Hats are basically daughter boards that connect to the GPIO pins on the Pi and give it extended functionality. In this case, it gives my model 4B the capability to be an integrated 2-in-1 flight controller and Linux companion computer. I’ll talk more about the implications of that later, but please do read what I have to say before you invest in a Navio2 or any other FC that runs off the Linux stack.
  • Cellular Uplink: Netgear 340U USB LTE Modem. A rare find and a royal pain to configure for Linux, but I got there in the end.
  • Tracking: Intel RealSense T265 Tracking Camera. (Tests are underway to implement this device for precision landing.)
  • Obstacle Avoidance: Intel RealSense D435i Depth Camera.
  • Intelligent Object Recognition: Intel Neural Compute Stick 2 (not currently set up).
  • Radio Receiver: FrSky R9
  • Radio Transmitter: FrSky Taranis Lite with R9M ACCESS (long-range module).
  • FrSky SBEC
  • PiJuice UPS/Power Management Hat
  • 4S Li-Po High-Discharge Power Cell
  • Lots of cables, wires, connectors, Flex Tape, and hair ties.

I’ll be honest with you: I had little to no clue what I was doing when I started. My dad, an Internet pioneer who founded this blog several years before my birth, raised me with an intermediate understanding of computers. However, this was my first foray into the world of UNIX commands. I familiarized myself with the syntax by messing around with online examples aimed at kids; like most things, the most intuitive way to learn coding is with materials aimed at kids. I started off with Scratch by MIT just a few months after its release, over a decade and a half ago.

When I thought I was ready to take on the big project (I was woefully misguided, but if I had known what I was in for I may not have started at all), I ordered the frame kit. That was my big initial commitment to the project. Assembling it was easy enough, once I got used to the soldering iron. You only need it for fusing the ESC cables to the bottom-plate. Pretty much everything else is plug-in if you play your cards right.

It’s not hard to build a quadcopter. We live in a blessed era of over-information; we need only the skill to discern the pertinent from the irrelevant.

  1. Flash an SD card with Emlid Raspbian (see the flashing sketch after this list). You’re stuck with their distro because the Navio2 needs a custom kernel.
  2. Assemble the frame kit. Only put the propellers on when you’re ready to fly it but make sure you know which props go on which motors.
  3. Assemble your Big Brain Module, in my case it’s a Pi and two hats. Oh, and put that SD card in there too.
  4. Connect the ESC wires to the servo rail, and the Navio2 Power Module to the port on the back-right corner of the FC. WARNING: I hate to be the guy to tell you that the boogeyman is gonna get you if you don’t eat your vegetables, but if you don’t follow the wiring diagram exactly, your drone will keep flipping when you try to take off and you’ll never know why. Trust me.
  5. Connect the power module to the flight battery. I mean, you can do it just to make sure power goes through, but all the motors will do is beep at you in a distressed fashion until they get a data input. Maybe just leave it disconnected for now.
  6. Connect the power module to the bottom-plate. I use XT60 connectors and I had to solder an XT60 connector to my bottom-plate for the power running to the motors.
  7. Connect the radio receiver to the port on the servo rail marked “1”. It’s the column of pins reserved for the radio, so don’t put your motors or anything else there.
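For step 1, a minimal flashing sketch from a Linux machine; the image filename and the /dev/sdX card device are placeholders, and Raspberry Pi Imager or balenaEtcher work just as well:

xzcat emlid-raspbian.img.xz | sudo dd of=/dev/sdX bs=4M conv=fsync status=progress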

That’s it for hardware, assuming everything was done right the first time. A lot of this stuff is trial and error, as it has been since time immemorial. Remember that everything you do is a result of the culmination of thousands of years of societal and technological research and infrastructure. We stand on the shoulders of giants, all of us. We are nothing without the efforts of our predecessors and keeping that in mind seems to temper me when my patience runs thin.

So far, I’ve described how to build something theoretically capable of flight. There’s not much to do software-wise, just some basic calibration done through the ground control application. I use QGroundControl, but if you use Windows I’ve heard Mission Planner is way better.
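The one bit of plumbing needed before the ground station sees the vehicle is pointing ArduPilot’s telemetry at it. On Emlid’s image this is, if memory serves, set in /etc/default/arducopter; the GCS address below is a placeholder for the machine running QGroundControl:

# /etc/default/arducopter
TELEM1="-A udp:192.168.1.20:14550"

sudo systemctl enable arducopter
sudo systemctl start arducopter

QGroundControl listens on UDP 14550 by default, so it should pick the vehicle up automatically.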

If you configured everything right, you have yourself a remote control quadcopter. It should arm in stabilize mode even without a GPS lock (don’t get me started on GPS just yet).

In the next post I’ll go over all the tweaks and tricks I’ve done to make it do more than just be an RC toy.

Self Help

I’ve always been into optimizing, so self-improvement came naturally. I’ve listened to practically every opinion and doctrine, and you know you have when you start hearing more or less the same things in rotation, over and over, from self-help gurus.

A friend once told me that I already had all the answers I needed. That was a wonderful revelation. We do indeed tend to discount our own counsel.

So after years and years of searching for more, with nobody coming up with anything noteworthy, I knew it was time to stop listening to everyone, because they had nothing left to contribute.

My takeaways are:

  • Stop being a lifelong learner. Decide how much knowledge is enough. Then start integrating what you’ve learnt.
  • Listen to yourself, logically, not self-indulgently, about what you really want. You knew when you were 10; they say go for what you wanted at that age, likely because people reprioritize, losing sight of what really makes them happy and pursuing corrupted ideas of it instead.
  • Have a sense of causality. It usually follows from recognizing that one has agency, the privilege of choice, and that exercising those choices causes outcomes for ourselves and others, both positive and negative.