TCL 6 Series TV

Just got a new TV, a TCL 6 Series, and couldn’t be happier. I bought it after this glowing review. Pretty much every other review says the same thing: excellent picture quality for the price. It’s quite cheap even for an LCD TV, and definitely way, way cheaper than an OLED.

I’m not able to judge picture quality except to say it looks great to me. It’s better than the 2013-era Samsung panel it replaced: brighter, bigger, sharper. The 4K 60fps demo reels look amazing. Of course the default picture settings are oversaturated garbage, but fixing that isn’t too hard.

The key picture setting for this screen is “movie mode”, which means “don’t blow out the colors”. However, for some dumb reason enabling it also defaults to “warm color temperature”, which shifts all the colors toward red; set that back to normal. It also turns off Local Contrast, which is one of the nice features of the set, so turn that back on and set it to high. Then turn off all the fancy motion interpolation stuff and it looks pretty great.

I particularly like that there’s a quick setting for “darker / normal / brighter”, which means you might actually change that based on time of day and room lighting. There’s also a game mode for HDMI inputs that has decent latency (19ms, or just about one frame).

One nice feature of this set is that its operating system is Roku. Not some dumb, badly designed TV manufacturer software, just good ol’ Roku with its open ecosystem of third party channels. I’ve been using Roku devices for years, mostly for Plex, Netflix, and Amazon, and it’s nice to have it native in the TV. I managed to get ARC (“Audio Return Channel”) working, so the sound is still being sent to my fancy AV receiver for decoding.

Speaking of the receiver, I’m less clear on what to do with it now. It’s limited to 1080p (or 2160p @ 30 fps, lol). It also seems to block HDR, or at least my PS4 won’t do HDR through it. I guess I could just plug all my inputs into the TV and use the receiver solely as a digital audio decoder + amplifier. The wiring is tidier if the receiver is the switch, but then I can’t pass 4K/HDR content through it. If I didn’t already have speakers installed in my walls I think I might just get a nice soundbar and call it a day.

Still haven’t faced the joy of making my old Logitech universal remote drive the new system. Need to settle how the receiver fits in first.

4K is still such a bandwidth hog I can’t imagine I’ll use it much, if at all, for watching video. I would like it for games though. HDR interests me much more; it’s great we can finally get higher contrast ratios on TV sets. Apparently there are competing standards for HDR, awesome. This panel supports both Dolby Vision and HDR10, which I gather are the two most common formats. Hopefully that’s good enough.

Upgrading from an unsupported Ubuntu (18.10 to 19.10)

Update (March 2020): I’ve heard from 10+ people who say these instructions work for them, so I think it’s reliable. See also this Ask Ubuntu post. Some folks are finding their way to this blog searching for “An upgrade from ‘cosmic’ to ‘eoan’ is not supported with this tool.” That makes sense; it’s the error the upgrade tool gives. Finally, if you see an error about “Upgrades from 18.04 on the i386 architecture are not supported”, that is sadly true; 32-bit support ended with 18.04. I don’t know how to help you with that.

A couple of weeks ago I blogged about being unable to upgrade from Ubuntu 18.10 (cosmic) to 19.10 (eoan), or even 19.04 (disco). Ubuntu 18.10 is fully end-of-life and deprecated, and the do-release-upgrade tool we normally use to upgrade refused to run. (That tool is bundled in the ubuntu-release-upgrader-core package.) Fortunately one of my readers, Alejandro, saw my post and suggested a solution. I just tried it and it works. Details below.

The underlying problem is that do-release-upgrade on 18.10 was trying to upgrade directly from 18.10 → 19.10. That’s not possible; skipping over a release is only supported between LTS versions. And it’s no longer possible to upgrade 18.10 → 19.04 the normal way because 19.04 is also out of support. So you’re stuck.

The gist of the solution is to hack a config file to force an 18.10 → 19.04 upgrade. It turns out Ubuntu’s do-release-upgrade tool is a generic program; the version installed on an 18.10 system is not 18.10-specific. Instead it downloads metadata files from the Internet and uses those to figure out what to do. Alejandro’s suggestion boils down to intercepting those metadata files after download and hacking them to point at 19.04 instead of 19.10. The packages needed to do the 18.10 → 19.04 upgrade are still online; do-release-upgrade just wasn’t finding them.
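
For reference, the meta-release file is a series of stanzas, one per release. From memory they look roughly like this; treat the field values as illustrative, not exact:

    Dist: disco
    Name: Disco Dingo
    Version: 19.04
    Supported: 1
    UpgradeTool: http://archive.ubuntu.com/ubuntu/dists/disco-updates/main/dist-upgrader-all/current/disco.tar.gz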

Needless to say, the instructions below come with a big warning: this may break your system. It worked for me, but I have no idea if it will work for you.

18.10 → 19.10 steps

  1. Run do-release-upgrade on the 18.10 system. This will give you an error about being unsupported. But behind the scenes, the tool will download some metadata files we want to modify.
  2. As root, go into /var/lib/update-manager and copy the file meta-release to a new file meta-release2. This file was downloaded by do-release-upgrade from the Internet and tells the upgrader how to upgrade.
  3. Edit meta-release2. Remove all entries for eoan entirely. Modify the disco entry so it says Supported: 1 (the stanza should look like the example above).
  4. Edit the file /usr/lib/python3/dist-packages/UpdateManager/Core/MetaRelease.py. Change this line of code
    self.metarelease_information = open(self.METARELEASE_FILE, "r")
    To read
    self.metarelease_information = open(self.METARELEASE_FILE + "2", "r")
    That will tell the upgrader to use your modified file instead of the original. (It will also avoid any redownloads overwriting your changes.)
  5. Run do-release-upgrade. It should now be doing an upgrade 18.10 → 19.04. Let that run as normal and reboot.
  6. Congratulations! You’re now running 19.04. Remove the /var/lib/update-manager/meta-release2 file you made. (The upgrade should also have installed a fresh copy of MetaRelease.py, undoing your edit from step 4; double-check that it did.)
  7. Since 19.10 is supported, all you have to do to upgrade 19.04 → 19.10 is run do-release-upgrade again. No hacks necessary, you’re back on the main path.

Like I said, it worked for me. I’m a little surprised Ubuntu doesn’t support this officially out of the box; it seems simple enough. I suspect they just want nothing at all to do with older systems. Still, if it were me I’d add some flag to enable at-your-own-risk upgrades for people like me who were stuck.

I think I also could have upgraded my system simply by editing the /etc/apt/sources.list files and relying on apt dist-upgrade. That’s the underlying tool doing all the work anyway. But the Ubuntu upgrade scripts have a little extra stuff in them, and it’s nice to be able to run them just in case it’s important.
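
Something like this, I believe, though I haven’t tested this route myself, so treat it as a sketch and back everything up first:

  • sudo sed -i 's/cosmic/disco/g' /etc/apt/sources.list
  • sudo apt update && sudo apt dist-upgrade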

I’m very grateful to Alejandro for emailing me the suggested fix.

Adding a new SyncThing node

I’ve been using SyncThing for a while now to keep several Windows PCs in sync: mostly the Documents folder, plus some saved games. It’s been working great since I set it up, so well I’ve nearly forgotten how it works.

I decided to add it to my laptop today. Which raises the question: what’s it like to bring a new machine into an existing sync cluster, one with possible conflicts in the folders? It mostly went OK, with some awkwardness.

To add a new machine to the cluster all you have to do is install SyncThing and then use “Add New Device” on the new machine. I have a hub-and-spoke system (three Windows spokes, one Linux hub holding the primary copy) so I just added the Linux hub device. Approve it in the GUI of the hub machine, accept the folders it offers, and it all worked OK.

Well, mostly. I had some file conflicts, as expected. There’s a conflict resolution tool but it’s very confusing. It talks in terms of the “original” and the “conflicted” file, and I wondered: original to whom? It seems to have no concept of three-way syncs or merges. I looked and realized I had a backup of all the files and none were important anyway, so I just blindly accepted “take the newest” for all the conflicts.

The uglier problem was that some files were not transferring, with an error in the GUI of “file modified but not rescanned; will try again later”. I looked online and found this is often a symptom of a case mismatch: Windows filesystems are case-insensitive but case-preserving, so maybe something got confused with the case-sensitive Linux copy. I finally resolved these by deleting the files everywhere across my cluster, restoring them to the hub from a backup, and letting the new copies propagate. Ugly but effective.
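
For what it’s worth, if you suspect a case collision, something like this on the Linux hub should list paths that differ only by case (GNU sort and uniq; a sketch, not tested against my exact setup):

  • find . -type f | sort -f | uniq -Di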

The last problem I had was one file that refused to copy: sync_ffs.db. That’s a file a different sync system (FreeFileSync) uses. It has some magic properties that make it unwriteable or something. I simply instructed SyncThing to ignore that file; no more problem.
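
For the record, ignores live in a .stignore file in the synced folder’s root (you can also edit them in the GUI under the folder’s Ignore Patterns). Mine is just the one line:

    sync_ffs.db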

So all in all, a bunch of ugly manual work. Which is in keeping with SyncThing; it’s a great tool but it’s not exactly civilian-friendly. The good news is that once it’s set up and syncing it seems quite reliable.

I can’t imagine what topologies more complex than hub-and-spoke behave like.

How big can an ethernet network be?

Switched ethernet is magic. You buy relatively cheap devices, plug everything into them, and packets get where they’re supposed to go without any complication. Well, that’s the simple version at least; modern networks with managed switches and VLANs can get complicated. But the basic system is simple. My question is how far that scales. How many switches and ports can I have on a switched ethernet network?

The classic folklore answer is apparently 7 hops: the network can be no bigger than 7 hops in diameter. That’s pretty big though. Assuming you had 8-port switches in a tree configuration, I think that’s on the order of 8^7 = 2,097,152 devices. But IIRC each switch has to maintain a table of the MAC address of every one of those 2M devices; I imagine in practice there’s not enough RAM for anywhere near that many.

The 7 hops number apparently comes from IEEE 802.1d, which formally defines the Spanning Tree Protocol that underlies Ethernet bridging. Basically magical elves run around your network and divine its topology, then construct a minimal spanning tree so everyone can reach everyone. From what I’ve read it’s not clear that 7 hops is a hard limit so much as guidance about practical limits. I suspect it’s mostly governed by latency.

802.1d is very old though and has been superseded. 802.1Q codifies VLANs and now incorporates the Multiple Spanning Tree Protocol (originally 802.1s). 802.1aq adds Shortest Path Bridging, which I think is a way to let an ethernet use multiple physical routes to increase bandwidth. I’ve never used any of this stuff, but I suspect it changes the assumptions underlying that 7 hop rule of thumb. OTOH I wonder about practice. With VLANs and managed switches it seems reasonable to build much larger networks at layer 2, without IP routing involved.

Cheap PC hardware watchdog

I have a Linux PC server that flakes out and locks up solid every few weeks. I tried using a software watchdog to fix it but it didn’t help. My guess is the whole CPU is freezing, so even the watchdog can’t run.

I’ve fixed the problem for now with a hardware watchdog. Some $10 anonymous Chinese hardware designed for Bitcoin mining rigs.

It looks like a USB device, but the USB is only for power. The main I/O is two pairs of wires: one that connects to your hard drive activity LED, one that connects to your hardware reset switch. Yes, it’s that dumb. Basically it just watches the LED and if it hasn’t flashed in a while (no idea how long, maybe a minute?) it sends a reset to the motherboard.
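
In pseudo-firmware terms I assume the logic is something like this little Python sketch; the one minute timeout is a guess and the pin functions are hypothetical stand-ins:

    import time

    TIMEOUT = 60  # seconds; a guess, the device doesn't document it

    def led_is_lit():
        # Hypothetical stand-in: the real device samples the LED wire pair.
        return False

    def pulse_reset_pins():
        # Hypothetical stand-in: momentarily short the reset switch pair.
        pass

    last_activity = time.monotonic()
    while True:
        if led_is_lit():
            last_activity = time.monotonic()
        elif time.monotonic() - last_activity > TIMEOUT:
            pulse_reset_pins()
            last_activity = time.monotonic()  # give the reboot time to start
        time.sleep(0.01)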

It seems to work, although I haven’t had a real test yet. Most importantly, it hasn’t caused a false reboot in 36 hours of testing. It did reboot the computer when I ran /sbin/halt though, which is a good sign. I had to stare at the front panel of my computer for a while to verify the HDD activity light was actually flashing. Apparently this light works even for modern SSDs plugged in via M.2!

(Fun fact: /sbin/shutdown -h, the command I always use to halt computers, doesn’t actually halt. See this systemd man page for details. “-h” is a synonym for “--poweroff”. You want “-H” or “--halt” to actually halt the CPU without also shutting off the system power. I imagine this is for historical reasons; back in the old days you couldn’t turn the power off with software. Then when that became possible it became the default, because it’s almost always what you want.)
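
For reference, with systemd’s shutdown:

  • /sbin/shutdown -H now (halt the CPU but leave the power on)
  • /sbin/shutdown -P now (power off; what plain -h actually does)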

There are a bunch of versions of this hardware watchdog idea; this one is reasonably well documented. Power could also come from an old-school Molex connector or the 9-pin USB motherboard header, maybe with an adapter. I just routed the wires outside the case and used an external USB port. It’s all spectacularly dumb.

The one hassle with the device I bought is that it doesn’t come with cable splitters; it replaces the existing LED and switch connections. I bought some ridiculously overpriced splitters. (Turns out this kind of cable and connector is called “Dupont Line” or “Dupont Cable”.) Now my LED isn’t flashing any more; either I screwed up the wiring or there’s not enough power once split.

When I saw the USB port I assumed this device worked by monitoring the USB port itself, testing whether the USB host was still alive. But that’s a fairly high-level test, and you can imagine scenarios where it isn’t appropriate. I imagine most computers doing useful work still tickle the hard drive regularly enough that watching the LED is the better approach, if for nothing else than logging.

Wanderings tracker vs GPSLogger

Summary: after 9 months, the GPS data being logged by GPSLogger on Android looks good to me. At least good enough for Wanderings’ location tracking and heatmap.

One casualty of my switch from iPhone to Android is my own Wanderings app, a lightweight GPS tracker that Brad and I worked on together. There’s an iPhone app that tracks your movements and records the data, and a webapp that displays it. But we don’t have an Android version of our tracker, and I’ve been unmotivated to learn enough Android to write one.

I have been running mendhak’s excellent GPSLogger though, and it is faithfully recording my location every day to GeoJSON and CSV files in Google Drive. I finally got around to collecting all that data and visualizing it with our web app. Qualitatively it looks the same as the data collected a year ago by the Wanderings tracker. That’s good! See screenshots below.

I haven’t done a deep quantitative comparison. But GPSLogger recorded 218,451 points over 284 days, of which 90,877 points were more than one minute apart from each other (a cutoff we use). By comparison our own tracker stored only 6,248 locations for me over 574 days. That’s 319 points a day compared to 11! GPSLogger calmed down a whole lot starting in September though, down to about 70 points a day. I suspect that’s when I disabled passive location tracking, resulting in many fewer points logged.

Still, GPSLogger is recording 7x as many points as Wanderings. The extra points are harmless to the visualization output but they do incur some storage and processing overhead. The Wanderings tracker uses the iOS Significant Location Change service and only seems to record points when you’re really moving. GPSLogger’s recording granularity is configurable; I have it set somewhat aggressively: GPS/GNSS locations, network locations, and a 60 second logging interval with a 10 meter distance filter for logging a new point.

I went ahead and looked at the distances between successive points in the GPSLogger output, using geopy’s distance calculation. A whole lot of pairs of points are less than 20 meters apart. I don’t really know what’s correct here, since you do want to capture data with fidelity when the subject is moving. I’m probably overthinking it. Here’s a quick and dirty histogram of distances for all pairs less than 100 meters apart.
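
For the curious, the measurement was roughly the following sketch. I’m assuming the GPSLogger CSV names its latitude and longitude columns "lat" and "lon" (check your export’s header), and the filename is made up:

    import csv
    from geopy.distance import geodesic

    # Load (lat, lon) tuples from a GPSLogger CSV export.
    with open("gpslogger.csv") as f:
        points = [(float(row["lat"]), float(row["lon"]))
                  for row in csv.DictReader(f)]

    # Geodesic distance in meters between each successive pair of points.
    dists = [geodesic(a, b).meters for a, b in zip(points, points[1:])]

    # Quick and dirty histogram of the gaps under 100 meters.
    for low in range(0, 100, 10):
        n = sum(1 for d in dists if low <= d < low + 10)
        print(f"{low:2d}-{low + 9:2d} m: {n}")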

And here’s the promised gallery of app screenshots. Two similar time periods, one year apart. The older is the iPhone tracker and the newer is GPSLogger on Android. They really do look similar.

Getting at Google Drive files from Linux

Google Drive has no Linux support. That’s hilarious. Lots of folks have hacked things up though. I settled on rclone as a command line tool because it seems maintained and is available in Ubuntu. It’s a generic cloud storage tool that can get at files from many different providers.

The one-time setup for Google Drive access is awful. It requires OAuth, which is a nice idea, but then you get the lovely usability of OAuth vs. command line tools: lots of copying and pasting tokens. Even worse, they really want you to set up your own API key. The end result is I had to do a bunch of “register as a Google developer” stuff, including maybe even generating a review request for my OAuth app called “Nelson’s private experiment do not use” (since private apps are only available to GSuite customers). The irony is this is somewhat my own fault; I built the very first Google API key system back in 2001. No good deed goes unpunished.

Anyway, once set up it seems to work. My credentials are apparently stored in ~/.config/rclone/. Example commands:

  • rclone ls 'Google Drive:/GPSLogger for Android/'
  • rclone --progress sync 'Google Drive:/GPSLogger for Android/' './gdrive'

I’ve found some awful behavior from Google though: Google Drive seems to be converting my CSV text files into Google Sheets, and I can’t get the CSV back no matter which Google Drive client I try. I filed a bug on GPSLogger about it, but I wonder if it’s my own mistake.