Electron bloat vs RPi 4

Running a Linux desktop on a slowish RPi 4 with 4GB of RAM sure makes you appreciate just how bloated and slow Electron apps are. Should it really take 300MB to run a fancy IRC client or 150MB to grep a password file? Some example memory sizes, as reported by conky’s summary of free system memory and then ps_mem. Bolded numbers are what I think is the fairest measurement.

  • 1password: 181MB, 152MB, 196+124=320MB
  • webcord: 355MB, 308MB, 277+115=391MB
  • chromium (1 blank tab): 265MB, 250MB, 196+136=332MB
  • VS Code (1 .py file): ?, 550MB, 439+122=561MB

(For each app, the first number is what conky reported as memory in use when running just that app and no other Chrome-based software; the second is with other Chrome stuff already in memory. The third set of numbers is what ps_mem.py reports. These measurements are complicated by how you account for sharable memory, and the details of that are inscrutable.)
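If you want to reproduce the ps_mem numbers, something like this works (assuming the ps_mem.py script is installed; the app name is just an example):

sudo python3 ps_mem.py | grep -i webcord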

Memory aside, these things also take enormous amounts of CPU. And they are very slow; typing latency is quite apparent in some part of every app. I've known Electron was bloated but stopped noticing on high-powered desktops. We lose a lot when we give up on optimized native apps.

Raspberry Pi 4 as a Ubuntu desktop

Update: this whole post was written using 64 bit Ubuntu with Gnome on an RPi 4. I'm trying a second time now with Raspberry Pi OS (LXDE) and it's going much better; new post coming.

I decided to give a Raspberry Pi 4 a go as a Ubuntu desktop. It did not go very well. I think the hardware is just too underpowered, which is a little ridiculous given how much more capable this hardware is than the systems of 10 years ago. I'm using a stock RPi4 with 4GB of RAM and the basic boost overclocking enabled. The system is installed to an SSD connected with USB3 that gets 250MB/s in benchmarks. (Which is itself odd; it should be closer to 500MB/s. RPi4 I/O is limited, I guess.) My screen is 3440×1440.
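(About that 250MB/s figure: it's from a simple sequential read test, something like the below, assuming the SSD shows up as /dev/sda.)

sudo hdparm -t /dev/sda   # buffered sequential reads, reports MB/sec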

Installation is remarkably easy and clean. The Raspberry Pi people have a GUI app (Raspberry Pi Imager) that installs bootable operating system images. Basically plug and play. Newer RPi4s can boot directly off USB, no need for an SD card at all. Which is good; SD card I/O is slow. Graphics and sound basically worked out of the box. Even Bluetooth audio.

Performance is very rough though. There are regularly 5 second delays between typing something and it showing up on my screen: here in the WordPress editor, in the URL bar of a browser, pretty much in any GUI. Sometimes keystrokes get lost. It's much better in a Gnome terminal but even then there's sometimes a perceptible subsecond delay. I really don't understand the terrible latency on user inputs, even when several CPUs are idle and I have 1.0GB RAM available (currently allocated to buffers and cache). I wonder if something more serious is wrong?

I’m using the default Ubuntu 22.04 Gnome desktop. I might have luck with lighter GUIs – Lubuntu and Xfce are both recommended. I briefly tried them and they did seem faster but they were also uglier.

Update: I tried LXDE on Ubuntu, the desktop environment Raspberry Pi Desktop uses, and it's much better. The Ubuntu install I did is pretty janky but that's probably my fault; I bet the integrated Raspberry Pi Desktop experience is nicer. Responsiveness is much better and memory usage is much lower, although this WordPress editor still drags. LightDM is a big improvement too; the Gnome Display Manager is a huge hog for something that's just a freakin' login box.

Update 2: Raspberry Pi Desktop is indeed better. It's much lighter weight than Ubuntu Gnome and it's coherent in the RPi build, much better than my janky LXDE. Bluetooth config works! Youtube audio playback is mostly fine but pauses every once in a while when something else is happening. The WordPress editor is worse than ever though. Beginning to suspect a bug in their Javascript code exacerbated by the relatively low-powered system.

I started with Firefox which was a mistake; the aarch64 build of that is really slow and memory hungry. Chromium is a little better, apparently someone put some real effort into optimizing it for the RPi. It’s still not good though. Super sluggish, regularly pins all 4 CPUs at 100%, and uses a lot of memory per tab. I can’t watch a single Youtube video at 240p even with nothing else going on and the window minimized (sound, no picture). CPUs aren’t overtaxed but the sound playback is choppy and awful. Again, maybe a symptom of something more serious being wrong than just system load.

Update: the fix for Youtube is apparently the h264ify Chrome extension, which forces Youtube videos to H.264 instead of VP9. Which allows for hardware decoding. Shame YouTube isn’t smart enough to do that for you.

Gnome on Ubuntu 22.04 looks fairly good. But the moment you start pushing on stuff it breaks down. There are two separate programs called "Software Updater", but what you really want is a third program, "Ubuntu Software", which installs Snap apps. 1Password requires a manual install; they haven't built aarch64 packages (but they do have a tarball). Lots of weird off-brand Linux choices for ordinary user apps. I never could get the Gnome system monitor to show up in my task bar.

I had a much better experience using this environment on my fast desktop PC. Ubuntu Gnome does at least work on RPi 4 but it’s so slow that it is a miserable experience. Maybe next time I try this I’ll use the Raspberry Pi OS build instead.

PG&E solar billing; two channels

Did you know PG&E smart meters have multiple channels? “Channel A represents energy pulled from the grid” while “Channel C represents energy exported back to the grid”. Weird! Details below.

PG&E screwed up during my solar period: for 6 months they were billing me for $200 worth of electricity a month even though their own records showed I was sending them more power than I was receiving. I eventually got a refund but I made a CPUC complaint about what looked like abusive and fictitious billing.

To PG&E’s credit they got in touch with me to resolve my complaint and I had a chat with a knowledgeable customer agent who gave me a clear answer. Here’s how CPUC summarized it:

PG&E advised that NEM customers are billed with interval data using 2 channels on the SmartMeter: Channel A registers consumption, while Channel C registers exported generation.

The PG&E docs show pictures of bills with breakouts by channel, but I can’t find that in my bill. The docs I’ve found online are 15 years old though, many for NEM1 (I’m on NEM2), so maybe that breakout isn’t available now, or I won’t see it until my yearly true-up.

I can’t make any sense of how this would work electrically. AFAIK my solar feed was installed with vampire taps between the meter and the circuit breakers; there’s no separate wire coming in from my solar directly to the meter to record. Maybe the two channels are more of a synthetic thing, with A being the sum of energy over intervals where the net flow is positive and C the sum over the negative intervals. Or maybe there’s some magic in the energy grid that can detect flow? I doubt it.
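To be clear this is pure speculation, but the "synthetic channels" idea would be trivial to compute from interval data; a sketch, where net.txt is a made-up file of signed kWh per interval (positive = imported, negative = exported):

awk '{ if ($1 > 0) a += $1; else c += -$1 }
     END { printf "Channel A (import): %.2f kWh\nChannel C (export): %.2f kWh\n", a, c }' net.txt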

The funny thing about all these attempts to verify what PG&E is saying is that they come to nothing; you’d never prevail in a dispute over how they bill. My Sunpower monitoring regularly shows me about 1kWh a day better off than PG&E bills me. Flaky meters? PG&E skimming? Who knows. The PG&E data looks roughly accurate though and they are effectively the law, so there’s not much point second guessing it.

ffmpeg and hevc_qsv Intel Quick Sync settings

I’m revisiting my experiments using Intel Quick Sync to transcode video into H.265 using the hevc_qsv transcoder. I’m using jellyfin’s build of ffmpeg. Here’s where I ended up:

# Transcode "$1" to H.265 with Intel Quick Sync; every other stream is copied as-is.
# The input is assumed to be H.264 (hence h264_qsv as the hardware decoder).
# "$2" is deliberately left unquoted so it can pass extra ffmpeg options, or nothing.
function to65 {
  /usr/lib/jellyfin-ffmpeg/ffmpeg \
    -c:v h264_qsv \
    -i "$1" \
    -map 0 -c copy \
    -c:v hevc_qsv -preset slow -global_quality 22 -look_ahead 1 \
    $2 \
    "${1%.*} x265.mkv";
}

Doing this results in an H.265 video file about 1/4th the size of the H.264 file and takes about 1/8th the show length to transcode (i.e. 5-6 minutes for a typical 42 minute TV show). Quality looks about the same to me but I’m using fairly unexciting video sources to test.

ffmpeg options

The challenge is that this is all very poorly documented, particularly the quality settings. Keep in mind that hardware H.265 encoding is generally considered lower quality than the CPU-based x265 encoder; NVenc is said to be better than QSV. Also be aware that Google search treats hevc_qsv as a synonym for h264_qsv, so searches will often find irrelevant information if you’re not careful.

ffmpeg -h encoder=hevc_qsv is supposed to give you a list of all the codec specific options. You can see that list here. But there’s no real basic quality or bitrate setting, just a lot of mysterious technical-seeming options. Some of this might be documented in the Intel Media SDK docs or this white paper on how it all works.

Instead you want these ffmpeg docs and the QSV encoder help page. The key option here is global_quality. I think this works like crf (which is not an option for hevc_qsv and seems to be silently ignored if provided). The number is not documented but ranges from 1 (highest quality) to 51 (lowest), with folks online recommending about 21. With global_quality set you still have 3 algorithm options; -look_ahead 1 seems to be the fanciest, and I’m not sure but it might be the default. -preset is another option to consider; slow seems the right tradeoff to me. I did not consider the -profile option.

I still feel like I’m missing one clear set of docs for how this encoder works. I cobbled some of it together also using this cookbook page.

Testing

I did a very casual test of these settings using a 720p web rip (AMZN) of a TV cooking show. The original file is about 6000kbps; the video stream itself is 5800kbps H.264. I have a tin eye; I’m OK with fairly mediocre video encoding for most things like this. I’m just trying to keep some old TV shows around without taking up too much space.

For visual quality, -global_quality had an enormous impact; it’s the most important setting. Using the default settings instead was unacceptably bad, particularly on scene transitions. Honestly anything from -global_quality 18 to 26 looked about the same to me. The preset option didn’t have nearly as clear an impact on visual quality.

Preset does have an impact on encoding speed. medium: 9.5x. slow: 7.5x. slower: 3.7x. Quality settings didn’t have much impact on encoding speed.

Here are the file bitrates at different settings of -preset slow -global_quality N:

-global_quality     File bitrate (kbps)
18                  2911
20                  2309
22                  1777
24                  1395
26                  1114
original            6075
default hevc_qsv    1222

Running ffmpeg this way uses 1 CPU core and the GPU at 100%.
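For reference, a sketch of the kind of loop that produces those numbers; in.mkv stands in for the test episode and ffprobe comes from the same jellyfin build:

for q in 18 20 22 24 26; do
  /usr/lib/jellyfin-ffmpeg/ffmpeg -y -c:v h264_qsv -i in.mkv \
    -map 0 -c copy \
    -c:v hevc_qsv -preset slow -global_quality "$q" -look_ahead 1 \
    "out_q$q.mkv"
  # Overall file bitrate, for comparison with the table above.
  /usr/lib/jellyfin-ffmpeg/ffprobe -v error -show_entries format=bit_rate \
    -of default=nw=1 "out_q$q.mkv"
done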

Screenshots

Sorry for the lack of labels on the images. The first is the original, the second two are without global_quality (and unacceptable). The rest are progressively lower global_quality, although honestly they all look about the same to me. This was a frame with a lot of detail right after a scene transition. It’s not crystal sharp in the original so it’s not a big surprise many of the transcoded versions are roughly the same blur.

btrfs

One change in my new Linux system is I’m using Btrfs as the filesystem. I’ve been using ext2 through ext4 for 25+ years now; time for a change. Btrfs is a fancy modern copy-on-write transactional filesystem, similar in spirit to ZFS or ReiserFS. For some reason Btrfs has a reputation for not being entirely stable; it doesn’t help that the FAQ says “Is Btrfs stable? Maybe.” But that seems outdated, although there are still some cautions about using it for a filesystem spanning multiple devices. If I were doing something complicated I’d care more, but for a simple system it’s fine. Several Linux distros use it as the default.

Some nice features

  • Robust to failures thanks to copy on write design
  • Grow and shrink volumes, span across devices
  • Snapshots of the filesystem
  • Transparent compression. (But not encryption, sadly.)
  • Checksums for file integrity. Default is CRC32C; there are others.
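A couple of those in practice (just a sketch; /mnt/data is a hypothetical mount point):

sudo mount -o compress=zstd /dev/sda2 /mnt/data   # transparent compression, chosen at mount time
sudo btrfs scrub start /mnt/data                  # walk the filesystem and verify every checksum
sudo btrfs filesystem usage /mnt/data             # how space is actually allocated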

Some tidbits:

  • The docs (also) are pretty good. The sysadmin guide is a good conceptual overview. Subvolumes are a key concept.
  • The btrfs tool has various administration programs in it. btrfs filesystem is the main one I’ve looked at.
  • There’s a default commit timeout of 30 seconds. That seems kinda long!
  • GRUB boots fine off my simple Btrfs disk. There are some less than optimal support things though.
  • btrfs detects SSDs and behaves differently on them
  • There’s no lost+found directory! I wonder what fsck does if it finds an orphan file? Maybe that can’t happen.
  • My btrfs partition had a UUID but also a UUID_SUB. That second thing is the UUID of the subvolume and can be safely ignored unless you know you need it.
  • There’s a lot of docs about maintenance: defragments and balancing and stuff. I’m hoping Ubuntu just installed something reasonable for me. OTOH the btrfs-progs package doesn’t have any sort of cron jobs or other maintenance scripts, so maybe not?
  • The COW semantics are a little confusing. /bin/cp has a --reflink flag that seems important to understand if you want to be precise (example after this list).
  • The btrfs docs say it works best when 15-20% of the filesystem is free. Yowch! 10% was the old overhead.
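The reflink example I mean: a reflink copy shares data blocks with the original until one side is modified, so it’s instant and initially takes no extra space.

cp --reflink=always bigfile.mkv bigfile-copy.mkv   # CoW copy; fails if the filesystem can't reflink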

The main reason I picked Btrfs was the hope that snapshotting would lead to a better backup option. I think this is possible but there’s no obvious consensus script; I found a lot of 8 year old experiments. Folks are very excited to use snapshots as checkpoints so you can easily find a month-old copy of a file. rsnapshot does that, but awkwardly; a COW filesystem is a much better fit. But I also want redundancy: I want my files copied to a completely different disk. And a different filesystem; if you set up Btrfs to use two disks for redundancy and Btrfs itself blows up, you still lose your files. There’s something to be said for a separately mounted disk entirely. Another avenue is btrfs send / receive, a way to serialize a filesystem and restore it elsewhere. Anyway, I’ll look into Btrfs backups again some day; rsnapshot still does the job for me.
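The send / receive idea in a nutshell (a sketch with made-up paths; the snapshot must be read-only and the receive target must itself be a Btrfs filesystem):

sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-today
sudo btrfs send /home/.snapshots/home-today | sudo btrfs receive /mnt/backupdisk/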

One bad experience: converting from ext4 to btrfs. There’s a tool to do this safely in place, which is very cool. But it’s very slow. It seemed to read every single allocated block on the disk, then do about the same amount of work for every inode. It took about 5 hours for 2GB of data. And the resulting filesystem is in a less than optimal state; you have to do more work to clean it up. It would have been faster and better to simply clone the files elsewhere and reformat the disk.

Grub, recordfail, 30 seconds

My new Ubuntu 22.04 system was pausing for 30 seconds every boot in the GRUB menu. Despite GRUB_TIMEOUT=0 being set in /etc/default/grub. The fix is to also set GRUB_RECORDFAIL_TIMEOUT=5 in the same file. (Or set it to 0 if you’re feeling sporty.) Read below for the ugly details or see Ubuntu bug 1918736.
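Concretely, the relevant lines in /etc/default/grub end up looking like this (run sudo update-grub afterwards so the change takes effect):

GRUB_TIMEOUT=0
GRUB_RECORDFAIL_TIMEOUT=5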

GRUB has a nice thing where if a boot fails then on the next try it gives the sysadmin extra time to figure out what’s wrong. It’s common to have no menu or delay if the last boot worked. But if it fails then you get 30 seconds in a menu to press buttons.

The problem is Ubuntu has been broken for a very long time and every single boot GRUB uses this failure timeout. Check out this charming code from /etc/grub.d/00_header:

if [ \$grub_platform = efi ]; then
  set timeout=${GRUB_RECORDFAIL_TIMEOUT:-30}

What this says is “if your system boots with EFI then ignore the configured timeout and wait for 30 seconds”. They’ve been doing this for 3 years. EFI is the common case for PC hardware, and I think every installed Ubuntu system waits 30 seconds every single time it boots, for no reason other than someone hacked around some other problem in a clumsy way and never cleaned it up. I’m not sure but the underlying problem may have something to do with detecting whether a boot failed or not. A similar problem shows up with LVM systems.

Update: I misunderstood; this code only runs when recordfail_broken is set, which happens if GRUB determines the boot media is not writable. The check_writable function says diskfilter, lvm, btrfs, cpiofs, newc, odc, romfs, squash4, tarfs, and zfs are all not writable. I guess that’s for a good reason; GRUB’s not able to record that the boot worked so it assumes it failed. Sure is a kludge though.

I set the timeout to 5 seconds because in the case of a real failure I need that time in the menu. It’s still 5 seconds longer than it should be but at least it’s not 30.

I am sure cranky about this. I’ve been encountering a lot of bugs like this setting up my new Linux system, probably because I’m paying such close attention. Part of being a successful tech worker is ignoring all the broken stuff that’s not a big deal because you’re doing something more important. But I have time to get into the details of GRUB, or fstabs, or whatever and what I find is alarming. I always wonder “why doesn’t the engineer working on this give a shit?”

In the Old Days nerds who installed Linux were very proud of one thing: their system uptime. Later on we started accepting reboots were part of life and so the focus shifted to making fast boots (the primary reason for systemd!). Now apparently Ubuntu developers care so little about EFI boot that no one minds that for three years it’s been wasting 30 seconds every single boot. I suspect that’s because of the shift in focus to cloud computing; the serious work is now all on systems that probably aren’t using GRUB at all, just booting in a VM.

I do wonder what this means for folks on laptops though; Ubuntu still seems serious about desktop Linux. Maybe I misunderstand and somehow this bug is avoided in common installs? Update: see above, it only affects some types of boot volume.

systemd, fstab, and nofail

It’s always like this; I just got my new server all put away in the closet where it’s hard to get to, then it failed to boot. So I had to drag out an old monitor and plug it in and see what was going on. Turns out I’d put a disk in /etc/fstab with options “auto” and then reformatted it, changing the UUID. So it couldn’t be mounted on reboot. And by default systemd blows up in that circumstance because it doesn’t know if the disk is essential. The console showed I was in “emergency mode” with a logged in root shell.

Easy mistake to fix but how to prevent it in the first place? The solution is to add nofail to the mount options. You may want to change the device timeout too; it’s 90 seconds by default. So something like:

defaults,nofail,x-systemd.device-timeout=15s
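In context, a full /etc/fstab line might look like this (the UUID and mount point are placeholders):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/media  btrfs  defaults,nofail,x-systemd.device-timeout=15s  0  0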

For such an essential piece of the system, /etc/fstab and how systemd processes it are very poorly documented. The systemd docs should be authoritative. As usual the Arch Wiki is helpful. Also as usual the Ubuntu docs are actively harmful garbage, last edited in 2017 and not even mentioning nofail. The man page isn’t very good either.

Searching with Google offers an alternative of nobootwait but AFAICT that’s something that predates systemd so I avoided it.

To compound the hassle here: no trace of this emergency boot shows up in any log file. Not in dmesg, not in /var/log/syslog, not in journalctl. The only place I could find the message was on the console tty. WTF?

I sure wish there were some easy way to have a Linux console be available on a network. I know Linux supports a serial port console, I bet there’s some hardware device that emulates a serial port but is online. Having Linux itself use a network thing as a console would of course be tricky; how do you set up the networking before the system’s working?

New Ubuntu 22.04 homelab server notes

I set up my new home server with Ubuntu 22.04. Here are notes on what I did in August 2022. See also January 2019 and July 2019. Still doing all this stuff by hand like a craftsman; no Docker for me! The Ubuntu docs are worse than ever and Google has gotten much worse for looking for info. Adding “arch” to Linux search terms often helps find useful docs.

Generic system things:

  • Install Ubuntu 22.04 server image, UEFI boot, btrfs for the root partition
  • Use tmpfs for /tmp (fstab sketch after this list)
  • Enable universe packages, updated apt
  • Set up a static IP address
  • Install chrony
  • Install mainline kernel via the cappelikan PPA. 5.19 includes i5-12600K tweaks not in 5.15.
  • Set up watchdog daemon with iTCO_wdt hardware watchdog and watching ethernet. (Does not seem to actually be rebooting the system when needed.)
  • Install libnss-mdns so .local DNS queries work
  • Install mailutils and set up Postfix for Local only
  • Turn off sudo email spam
  • Install a bunch of command line software via apt
    joe stress-ng stress s-tui smartmontools zip unzip rsync openssl webp plocate ripgrep pigz
    python3 python-is-python3 python3-venv python3-dev pkg-config build-essential sqlite3
  • Install Samba, configure SMB1 (aka LANMAN) support. Gave up trying to make this work for Sonos.
  • Set up port forwarding on my router. Had to reboot the Ubiquiti Dream Machine; config changes weren’t taking.
  • Set up rsnapshot. I’ll eventually move to a btrfs-specific backup program if I can find one.
  • Verify BIOS allows virtualization
    apt install cpu-checker; kvm-ok
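The tmpfs /tmp item above is just one line in /etc/fstab; a sketch (the size cap and extra options are my own choices):

tmpfs  /tmp  tmpfs  defaults,noatime,nosuid,nodev,size=2G,mode=1777  0  0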

Special apps and personal configuration:

  • Set up my /home/nelson environment
  • Install jellyfin ffmpeg from a PPA for hardware accelerated transcoding
  • Install nodejs 18 from a PPA
  • Install plexmediaserver from the secret apt repository. Also install intel-opencl-icd. Reconfigure.
  • Set up a group nzb for various Usenet things to all belong to. Make sure to set up each system to use this group as its primary group and make things group writeable.
  • Install sabnzbdplus from the Ubuntu repository. Set up user nzb to run it. Copy config from old system. Manually install par2-tbb because there’s no jammy build in the PPA.
  • Install sonarr from Sonarr focal repository. Reconfigure and re-add all my data because they moved config files around too much.
  • Install radarr into /home/radarr by copying from my old install. They don’t have any sort of packaging for Linux.
  • Install syncthing from their apt repository. Copy config from old machine (which is risky); be sure the files are in place on the new system before starting to sync!
  • Install Caddy as a web server. Just couldn’t stomach another round of Apache config files. Caddy doesn’t support cgi-bin (easily) so it’s not a perfect fit for my “light dev server” need. But it’s definitely a lot simpler than Apache 2.4.
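For what it’s worth, the Caddy config for the basic file serving case is pleasantly short; a sketch (hostname, port, and directory are examples):

http://server.local:8080 {
    root * /home/nelson/public
    file_server browse
}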

Things I didn’t have to do this time:

  • Configure fsck at boot. The default is now “preen” which means “fix safe things automatically”.

I’m a little cranky how hard some of this stuff was to get working. Particularly anything outside standard Ubuntu packages. Lots of things haven’t been updated for jammy / 22.04.

APT changed things so apt-key add is now deprecated but the new method is not clearly documented. Apparently I’m supposed to manage my own keys using gpg? See my blog post for info on what to do, or example here or instructions here. What a UX regression.

Sonos server alternatives

I’m pulling my hair out trying to get Sonos to connect to my new SMB server. The various dark arts needed to enable SMB1 for Sonos’ ancient file sharing implementation aren’t working and there are no debug logs. Is there some way to share my local music files other than SMB? There’s no way to plug a USB drive into a Sonos directly :-(
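For reference, the SMB1 dark arts amount to an smb.conf along these lines (the sort of thing I was trying; it did not work for me, and the share definition is just an example):

[global]
   server min protocol = NT1
   ntlm auth = yes
   map to guest = Bad User

[music]
   path = /home/nelson/media/music
   guest ok = yes
   read only = yes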

One alternative is Plex. It has music capabilities. I’m finding it a little flaky and until I fiddle further it’s only finding files that it has metadata for, not the weird MP3 files that fell off the back of a truck.

Another alternative is Airsonic, another Plex-like media server. It has explicit Sonos support via the Sonos Music API (SMAPI), a new-to-me thing.

A third alternative is Bonob. I know nothing about it, just found it as a SMAPI server. I’ve seen it mentioned in concert with Navidrome.

SMAPI is an interesting back door into Sonos serving! A whole documented API. You can add a service of your own via a bare-bones HTML interface at http://192.168.0.50:1400/customsd.htm (the IP address is that of a Sonos player on my LAN). I wonder what the API schema is like; I suspect it is oriented to music with tidy metadata and not “giant pile of MP3 files in carefully curated directories”.

There are some small or abandoned GitHub projects that are SMAPI servers: YouTube, a NodeJS server, a C# server. This NPR One service looks active.

You know what would be really nice? A stripped-down anonymous SMB1 server solely for Sonos clients. Samba is so complicated; something minimal that works with Sonos’ ancient client might fare better. Alternately, some folks use a Raspberry Pi as a NAS/SMB gateway.

Ubuntu 22.04: apt keys

New version of Ubuntu has a massive usability regression in adding third party package repositories. There’s a legitimate security reason for the change but they’ve made no affordance for making it easy for users to migrate or understand the new way. I can’t even find docs on the official help site. This is bad usability even by Unix standards; I suspect Canonical doesn’t care because they’re pushing their Snap package store so hard.

There’s good information here. Bottom line: the old way meant that the new key you were adding was trusted for all repositories. Rather than fix that design flaw in some sensible way they’ve just deprecated the feature entirely and left users to run gpg --dearmor or some such nonsense to manage keyfiles themselves. Really. It’s 2022 and Ubuntu thinks “just use PGP from the command line!” is a reasonable thing to tell someone. Some tool to do this file management for me would sure be a better choice.
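For the record, the new-style dance looks something like this (the URLs and file names are placeholders):

curl -fsSL https://example.com/repo/key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/example-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://example.com/repo jammy main" \
  | sudo tee /etc/apt/sources.list.d/example.list
sudo apt update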

The change seems for the better but they sure didn’t do anything to build tools for the new way or explain the transition. Pretty sure the Ubuntu devs just told me to go fuck myself.