Windows and High DPI / Retina

It’s better than it used to be!

I’ve been thinking how nice it’d be to have a higher DPI screen. My current monitor is 109 dpi which is just low enough that I can run things at 100% scaling and it’s all readable. (Sometimes I bump the font size up +1 on things). But I’d love to get a 200 dpi or higher screen so everything looks crisper. This works great on MacOS but historically has been a mess on Windows, in large part because sometimes you are running 20 year old code that is rendering pixels on its own and is totally DPI unaware. But things have improved.

The testing below is with Windows 21H1. I learned a lot from this blog post which explains how to configure Windows high DPI, in particular how to override settings on a per-app basis. I haven’t needed to do that but it’s nice that it’s an option.

I set Windows to 125% scaling and tried a bunch of things. My metric for “works OK” is “readable”; for “works right” it’s “readable and not blurry”. Example blurry window to the right. (Note that the app itself works right; only the installer is blurry.)

Most of the apps I use just worked right. A few had to be restarted; they were blurry, or in the case of Steam just didn’t work right at all. A few apps were slightly blurry: not “pixel scaled 125% blurry” but more like “awkward font size”, maybe just something not quite optimized. A few older apps worked but were blurry like 125% pixel scaling: Audacity 2.3.3, NZBGet 21.1 (Win UI), SFTP Net Drive. The only thing that was consistently blurry was NSIS, the Nullsoft installer system a lot of software uses. There might be a fix package builders can use. I don’t much care anyway; you run an installer once and forget about it. I didn’t test very many games; most games fully do their own rendering anyway, so I expect they’re unaffected by the DPI setting in Windows.

I was honestly surprised, and impressed, at how much worked perfectly. 125% scaling works almost everywhere in Windows now. None of it feels quite perfect and there are some rough edges, but then that’s typical of the Windows UI experience in general. It’s fine and totally usable.

Microsoft has put more thought into display scaling than I realized. This helpful developer doc shows the evolution; there was reasonable support starting in Windows 8.1. Windows 10 1703 stabilized on “Per-Monitor V2” with various APIs. The part that surprises me is “Desktop applications must tell Windows if they support DPI scaling”: without that, Windows just bitmap-scales, which makes for blurriness. I’m surprised almost all the apps I tried are apparently DPI-scaling aware. It seems to work even with very old Win32 APIs; you don’t have to use fancy new UWP apps to get scaling. This guide makes it sound not very hard to write basic scaling-aware apps, although it does take some extra work to do it nicely.

There is a way to inspect the scaling-awareness of running apps. Oddly a couple of apps (Everything, the PowerToys launcher) report “System” status, which means basically unaware, and yet they look right on my screen. There are a few apps that are fully “Unaware”, but they are mostly things like Steam that seem to have their own bizarre rendering for everything.

One thing I can’t tell easily is whether the apps are non-blurry because they aren’t scaling at all. I think PingInfoView may be like that; it says it’s “System” scaling but the window was non-blurry and the font seemed too small. But the difference from 125% to 100% is hard to be sure about. I’d care a lot more if I were running at 200%.

I wish I had a simple program that called the various APIs for inspecting display characteristics and just showed me the info. I guess I could write one.
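A minimal sketch of such a program in Python with ctypes, assuming the Windows 10 1607+ user32 APIs (GetDpiForSystem, SetProcessDpiAwarenessContext); on other platforms it just defines the formatting helper:

```python
import ctypes
import sys

def describe(width, height, dpi):
    """Summarize a display: resolution, dpi, and the Windows scale
    factor (96 dpi = 100%)."""
    return f"{width}x{height} at {dpi} dpi ({dpi * 100 // 96}% scaling)"

if sys.platform == "win32":
    user32 = ctypes.windll.user32
    # Opt in to per-monitor-v2 awareness (-4) so Windows reports real
    # pixels instead of virtualized, pre-scaled ones
    user32.SetProcessDpiAwarenessContext(ctypes.c_void_p(-4))
    print(describe(user32.GetSystemMetrics(0),   # SM_CXSCREEN
                   user32.GetSystemMetrics(1),   # SM_CYSCREEN
                   user32.GetDpiForSystem()))
```

Per-monitor values would need GetDpiForMonitor and an EnumDisplayMonitors loop; this only reports the primary display.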

Higher DPI monitors

Anyway the software mostly seems to work. Now I need the hardware. I currently have a 34″ 3440×1440 monitor that I bought for about $1000. I like it very much. It’s 31 × 13 inches, for 109 dpi. There are a bunch of higher density displays but they get pretty pricey.

The Dell UltraSharp 32″ 8K gives you 7680 × 4320; the calculator says that’s 275 dpi. $3750 is a little more than I had in mind though.
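That 275 number is just geometry; a quick sketch of the calculation (monitor sizes are nominal diagonals, so real panels land within a dpi or so):

```python
import math

def dpi(width_px, height_px, diagonal_inches):
    """Pixel density: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(dpi(7680, 4320, 32)))   # the Dell 8K: 275
print(round(dpi(3440, 1440, 34)))   # my current 34" ultrawide
```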

5K monitors may be more reasonable. Here are a couple; the first is pretty much a straight DPI upgrade at my existing screen size.

LG 34″: 163 dpi, 5120×2160, 32″ × 14″, $1224.
LG 27″: 218 dpi, 5120×2880, 24″ × 15″, $1600.

Newegg has a lot of choices for 4K gaming monitors at 3840×2160 (a 16×9 aspect ratio).

Looking at this another way… a 43″ 3840×2160 monitor is a pretty common size for $1300 or so. That’s 105 dpi, so basically I’d get a bigger monitor at the same dpi. Not interested. To go higher density there’s 32″ 3840×2160. That gives 138 dpi, or about 25% more pixel density. These monitors are 27″ × 15″ and I’d definitely notice the loss of width.

What would be a clear DPI upgrade on what I have now is a 5120×2160 monitor. That gets me back to that LG 34″; I think it’s the closest to a direct dot density upgrade. It’s one of only two monitors at this size; the other is the MSI PS341WU.

The other thing I should consider is pixel depth; 10 bits is common now, although I don’t know of any software that supports it. And also refresh rate; I like my GSync up to 100Hz. The LG 34″ that looks like a good match for me is 10 bit and supports 75 Hz. No indication of GSync or FreeSync support though. The MSI monitor is 60Hz only.

Just for fun, Apple’s Pro Display XDR is 6016 x 3384 for 218 dpi at $5000 (!). The screen is 28 x 16″. I can’t find anyone selling another brand with this panel.

Bottom line, the LG 34WK95U-W looks like the best option for me and a fairly unique one at that. $1224 on Amazon is not a lot but I don’t think it’s a big enough upgrade to warrant replacing my existing monitor.

Dyson Sphere Program

I just finished a first playthrough of a really great early access game, Dyson Sphere Program. I don’t normally talk about games here but this game is so much like programming it seems relevant. Also it’s pretty and it’s fun.

DSP is a Factorio clone. Or to be more fair, a “factory automation game” since it’s now kind of a genre. But DSP is a lot like Factorio and it’s quite apparent when you play.

The big innovation is that it’s a lot more beautiful, as the screenshot above demonstrates. Full 3d graphics. And it’s not just a gimmick; not only does it look great, it performs well. And you even play in 3d a bit: the conveyor belts can be woven at different altitudes, letting you make some nice bus designs. Also planets are actually spheres, which has some implications.

The other big change is that it’s a multi-planet game. You start out building a factory on a single planet, but there are resources you’ll need to fetch from another planet in your system, and later on you’ll bring in rare resources from other stars. Honestly this doesn’t add a lot of complexity, but it does add some beauty and variety. (There’s a good Factorio mod called SpaceX that creates something similar.)

And the third big change is there’s a grand goal, swallowing the sun in a Dyson Sphere to generate power. You can see my nearly completed sphere rising over the planet in the picture above. The Sphere gives a purpose to all the construction that’s a bit more tangible than Factorio’s “build a rocket”. Also it’s real pretty and makes a useful power resource when it’s complete.

One nice improvement in DSP is a lot of quality-of-life stuff, UI improvements that make building things easier. OTOH the game is still in early access – the blueprint system was only released a week ago and has some rough edges. One of my favorite features is the devops dashboard, er, the production monitoring graphs. Here are some details of the growth of my empire over the 65 hours it took me to win.

The game is really impressively implemented. It’s simulating a lot of individual components, and rendering them in 3d, and it runs smoothly on my fairly high end PC with a 1080Ti GPU. I read somewhere that the Dyson sphere components themselves, tens of thousands of them, are simulated with GPU code. Makes sense. Folks complain that this part greatly slows their PC but I didn’t have any trouble at all; maybe a function of how beefy a GPU they have.

The game is still early access and is missing some things I’d like to see. Some UI for monitoring and controlling the logistics network would be nice; it plays a bigger role in DSP than in Factorio. It’d also be nice to have more game systems to design; there’s no analog to fluids or trains, for instance. Also the DSP folks have promised to add some defense / military aspect to the game. I’m fine with a peaceful builder but it could be interesting to have the challenge. Finally there’s no multiplayer, and none promised; that seems like a mistake to me.

There’s a small but active mod scene for the game. No official mod support yet, but the game is in Unity, which is fairly amenable to hacking. There’s a few resources for writing your own mods. One point of confusion: there’s a global scoreboard called the Milky Way and various rumors that some mods keep you out of it. I asked on the Discord and was told there’s no whitelist or anything. Instead the game has its own cheat protection: it inspects its own data structures, and if something (say, a mod) modified them then you are disqualified. That means info-only mods are OK; it’s determined by behavior. Unfortunately there’s no published list or info on which mods are allowed or not; you have to guess.

It’s a good game!

GAN art

My friend Tom has been playing with GAN art, AI-generated images. Thanks to him I was able to try out a working system; you just provide a text prompt and out pops some art. I don’t really know how it works, something along these lines apparently. Keywords: VQGAN, CLIP. The results are awfully neat though.

Update: you can play with it yourself in this notebook. No support provided, sorry.

Anyway I played around with it. Literally all I did was provide some text prompts, all the art is coming from the software. Which raises an interesting question about artistic input. I give the large majority of the credit to the folks who developed these techniques and trained the networks. But there’s a lot of skill in tuning the systems (something I think Tom is expert at). Of course the original image input creators have an influence too but when your input set is literally all images on the Internet, well, the credit gets a bit diffuse. A lot of these images are kinda samey but by no means are all of them. I imagine there’s a lot of latitude in how you train and tune these things.

Technical notes: it’s a giant pile of Python software with PyTorch as a big piece, also OpenAI CLIP. It’s running on Google Colab with a GPU at the free tier, so presumably a modest amount of compute resources. The small images take about 5 minutes to generate; the slightly bigger ones take 10.

Here’s the images I made. This is all of them, so you can see failures as well as success. I’m putting the ones I like first.

Known Unknowns

De Chirico landscape
Yves Tanguy
Australian aborigine art
church pixel art
Gay Orgy
Gay Orgy #2
Homosexual Party
Statue of Narcissus
Malevich square
Classic male nude bathing
Water slides
Rotten banana
Dyson sphere
Has Anyone Really Been Far Even as Decided to Use Even Go Want to do Look More Like?
When the moon hits your eye like a big pizza pie

WireGuard: very simple setup

Now that Starlink is working reliably I want to bridge the network between my two houses to make it easier to share stuff across them. I have an elaborate setup of ssh tunnels and autossh for NAT busting and it’s awful. A VPN would be better. Some notes on getting there; this is very WireGuard 101 stuff.

The SF network is at and is on fast fiber. I have a NAT router but I completely control it so with port forwarding it’s easy to set up a server. The GV network is at and is behind double NAT; my own and also Starlink’s carrier grade NAT. The latter prevents listening on sockets pretty much entirely, so running a server behind it is awful and even elaborate NAT traversal seems dicey.

At first I thought I wanted a full bridging setup so the two houses appear in the same place. But in retrospect I think that is a mistake. It’d be bad for Sonos, for instance; it seems to use LAN broadcast to discover its own stuff, and I’d much rather it found the local server for music files than tried to use the VPN. Same goes for Plex video. OTOH I would like some discovery-based things to work; Windows filesharing for sure. I don’t really understand what I need.

To start simple I decided just to create a basic tunnel on a new subnet ( between Linux boxes in the two houses. It doesn’t really solve my problem, but those tunnels could later be gateways for house traffic. I am ignoring my Ubiquiti routers entirely. Folks have gotten various VPN solutions working on them, including Wireguard, but I’m concerned about the longevity of any hackery for doing that. Maybe later.

Anyway, setting up a simple tunnel between two Linux boxes is easy. I used WireGuard for this, apparently the modern VPN solution. It really is a breeze both to install and to understand. I mostly used this guide, with some consulting of this other guide. Note WireGuard itself doesn’t really have good docs. For instance I can’t find anywhere a man-page-style reference for the conf files that wg-quick uses; I never could find a definition of exactly what “Address” or “Endpoint” mean. It’s sort of clear from context and examples but…

Anyway, following the guides it all basically just worked. Very simple too. Wireguard’s model is peer to peer; peers are identified by a public/private key pair (and an internal ID). Tunnels between peers then get created as needed, and each end of the tunnel is given an IP address. The guides are written in terms of a client/server setup so I’ll use that language from now on.
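For reference, the client/server shape of the two wg-quick conf files. Every key, address, and hostname here is a placeholder for illustration, not my actual setup:

```ini
# "Server" side (the end with a forwardable port), /etc/wireguard/wg0.conf
[Interface]
Address = 10.99.0.1/24      # tunnel IP, deliberately on its own subnet
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.99.0.2/32   # traffic to/from these IPs uses this peer

# "Client" side (behind Starlink's CGNAT, so it dials out)
[Interface]
Address = 10.99.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.99.0.1/32
PersistentKeepalive = 25    # keeps the NAT mapping open from the inside
```

The asymmetry is the whole trick for double NAT: only the client needs an Endpoint, and the keepalive means the server can always reach back through the mapping the client opened.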

The main gotcha I had is when I first set up the server in I gave the Wireguard tunnel itself an IP address in the same subnet. That’s definitely a bad idea; my Linux box decided to use that new route instead of the ethernet and I had to reboot it. That’s why I put the Wireguard tunnel in a new subnet ( instead. The other gotcha is I set SaveConfig=true. It’s a neat idea: you configure the thing on the fly with the command line and then the config file is updated. But at the same time I was editing the config file to fix things and restarting WireGuard, and my changes kept getting clobbered. Pick one or the other style of creating a config; don’t do both.

Anyway, now I have a working tunnel so my two Linux boxes can talk to each other. The next step is using this tunnel to do “other things” like mount Windows fileshares, etc. Still not sure how I’ll go about that.

Updates below

I’m going to keep updating this post as I hack more. I don’t have the discipline to mark every update, so just expect stuff to be added over time.

I realize I don’t really know what I want from this VPN setup. I don’t want full bridging; I want some things like media servers to not be discovered over the WAN link. But I do want a lot of transparency. I’m going to start by making sure every device in SF ( can talk to every device in GV ( I’ll worry about broadcasts and discovery later, if at all.

Several folks have asked “why not Tailscale”? That is probably an excellent answer; Tailscale is a nice VPN product built on top of WireGuard. I was under the impression it was client oriented, not network oriented, so you had to install Tailscale on every device. But that’s not really true; they have a “subnet router” capability that works for other devices. At this point I’m mostly not using it because I got WireGuard itself running OK and I want to tinker more.

Starting to think I should run Wireguard on my Ubiquiti boxes after all. One of my routers is a Ubiquiti Dream Machine, which runs somewhat different software: here’s a port of Wireguard to Dream Machine.

Netcat for testing

I want to be able to test if packets get through, but without the full round trip ICMP ping requires. That’s what netcat is for. On the server side: nc -klu 3333. On the client side, to test sending a packet: echo foo | nc -u 3333. Note the client netcat doesn’t hang up after sending the packet; it keeps the socket open looking for input.

Routing the subnet

The basic wireguard setup gets a tunnel going. You can ping the tunnel endpoint IP address ( But not the “real” IP address of the machine at the other end ( To add that, we need a route. Doing this manually is a mistake (wg-quick does this…) but it was the first thing I tried and it half works, so I mention it here:

route add -net gw wg0

Doing that is enough to let UDP packets through. nc -u 3333 works! However traceroute (and presumably ping) fails with the mysterious message “send: Required key not available”. Turns out that’s an error message from Wireguard, and in my case it’s caused because is not one of the “Allowed IPs” in the peer definition for the Wireguard tunnel. Wireguard is blocking the traffic.

The solution is to add the subnet to AllowedIPs. AllowedIPs =, Add that and restart, and pings get through and get replies. Moreover there’s no need to manually add the route because wg-quick does it for you: “It infers all routes from the list of peers’ allowed IPs, and automatically adds them to the system routing table”. Nice! Long story short, to get a machine on each of two subnets talking to the other via Wireguard, it’s sufficient to add each side’s subnet to the other’s AllowedIPs.
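Concretely the peer stanza ends up with this shape (tunnel address and LAN subnet here are placeholders, not my real numbering):

```ini
[Peer]
PublicKey = <other-end-public-key>
# the peer's tunnel /32 plus its whole LAN subnet;
# wg-quick adds a system route for each entry automatically
AllowedIPs = 10.99.0.2/32, 192.168.2.0/24
```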

Next step: get the rest of the LAN on each side to route traffic for the other subnet through Wireguard. I know this means adding a static route (probably in my router). I’m less clear on what, if anything, I have to do to configure the Linux box to act as a router that forwards packets for that destination. When searching for info on this I found a lot of advice about using iptables masquerading for a NAT setup. Yuck, no!

Update: turns out basic routing just works already with no extra config. It may help that my Linux box, which is the Wireguard endpoint, already has IP forwarding turned on. Anyway on my Windows box I just did route add mask and now my Windows machine can talk to the other end of the Wireguard tunnel on its “real” address Can’t talk to anything else on the remote network yet, but that’s because they don’t know a route back.

Of course I don’t want to have to manually add this route to everything! A better solution is not to set the route on my Windows desktop but instead to add a static route on my Ubiquiti Dream Machine. It’s hidden under Settings / Advanced Features / Advanced Gateway Settings / Static Route. The Destination Network is, the next hop is, where the WireGuard endpoint is.

C:\Users\Nelson>tracert -d

Tracing route to over a maximum of 30 hops

  1    <1 ms    <1 ms    <1 ms
  2    <1 ms    <1 ms    <1 ms
  3    60 ms    59 ms    59 ms

That’s it! Now everything on my LAN can initiate connections to Nothing else in though because I haven’t added the route on that side yet. The big thing is my Windows box can now access the remote fileshare at \\\. I doubt it will show up as discoverable but a direct connection works.

This is all working well enough that I’m close to needing to document the final setup and make it persistent. It may actually be persistent already; I just ran the systemctl enable command that gets Wireguard to run at startup.

Typesetting multiline text in SVG

Having worked on rendering text on my AxiDraw, the next question is how to typeset text. The challenge is that SVG 1.1 text elements are explicitly single line: no automatic wrapping. There’s no text flow in an SVG renderer; you have to do it yourself. (SVG 2.0 promises to add this, and I guess there was an ill-fated effort in SVG 1.2.)

There is a nice way to express multiline text in SVG using the <tspan> element.

    <text class="verse">
        <tspan dy="1.2em" x="10"
               >How doth the little crocodile</tspan>
        <tspan dy="1.2em" x="10" dx="1em"
               >Improve his shining tail,</tspan>
        <tspan dy="1.2em" x="10"
               >And pour the waters of the Nile</tspan>
        <tspan dy="1.2em" x="10" dx="1em"
               >On every golden scale!</tspan>
    </text>

Note the use of dy to do the line spacing; that’s nearly automatic. (The dx parts are for some fancy indentation. x is set explicitly on every line because a tspan otherwise continues from wherever the previous one left off; resetting x returns the caret to the left margin.) All you have to do is decide where to break the lines! Ha.
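Generating that markup from lines you’ve already broken is mechanical; a small sketch (the class name and the alternate-line indent are just echoing the verse example):

```python
def tspans(lines, x=10, line_height="1.2em", indent_alternate=True):
    """Render pre-broken lines as SVG <tspan>s: dy does the line
    spacing, x resets the left margin, dx indents alternate lines."""
    out = ['<text class="verse">']
    for i, line in enumerate(lines):
        dx = ' dx="1em"' if indent_alternate and i % 2 else ""
        out.append(f'  <tspan dy="{line_height}" x="{x}"{dx}>{line}</tspan>')
    out.append("</text>")
    return "\n".join(out)

print(tspans(["How doth the little crocodile",
              "Improve his shining tail,"]))
```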

There’s also some support for automatic fitting with the textLength attribute, which stretches or squeezes the glyph spacing to hit a target width.

I can’t find a Python library that will break lines for you or do more fancy automated typesetting. It is possible in Javascript, or at least you can write code to do it, using the getBBox() function to inspect the pixel size of some text and use that to break lines.

One approach in Python would be to calculate metrics for a font and then use that to estimate the width of a word. That won’t work if the kerning isn’t predictable. Also it’s a lot of work to do right for arbitrary fonts.
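That first approach boils down to greedy line breaking over estimated word widths; a sketch, with width_of standing in for whatever metric source you end up with (font metrics, a renderer, even len() for monospace):

```python
def wrap(words, width_of, max_width, space_width=1.0):
    """Greedy line breaking: pack words onto a line until the next
    one would overflow. width_of is any callable returning a word's
    rendered width in the same units as max_width."""
    lines, current, current_width = [], [], 0.0
    for word in words:
        w = width_of(word)
        if current and current_width + space_width + w > max_width:
            lines.append(" ".join(current))
            current, current_width = [word], w
        else:
            current_width += (space_width if current else 0) + w
            current.append(word)
    if current:
        lines.append(" ".join(current))
    return lines
```

The breaking itself is the easy part; the whole problem is getting a width function you trust.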

A second approach is to use something to render SVG in Python and use that to measure widths. Something like the pyrsvg wrapper for librsvg. That’s a lot of code to bring in though and I’m not confident it would perform very well. py3rsvg looks promising as a lighter / standalone option. See also this discussion.

A third option is to skip SVG text entirely and work directly with HersheyFonts generated paths in Python. The library has support for inspecting the width of glyphs; bounding box, etc. But this library is a separate work from the Evil Mad Scientist / AxiDraw code for Hershey text, the one that’s an Inkscape extension. They’ve done some nice work on fonts, etc so I’m not sure how much you’re giving up going with this other code instead. SVG Fonts, for one; I think the HersheyFonts library is using the actual Hershey format. That means no Unicode. I’m not sure the ESM HersheyText Inkscape code can be repurposed though, it seems to be written in a very Inkscape-specific way and relying on their renderer. I could be wrong about that though.

I think that last option may be best; it’s closer to the actual SVG I want to send to the plotter.

AxiDraw and italic pens

On the back of my text experiments I decided to give italic calligraphy a try. Unlike other calligraphy which requires managing pen pressure, italic calligraphy is done with a simple stub nib (or italic nib) of fixed width. The drawing end is a short line, not a circle. It’s usually held at a 45 degree angle so that both vertical and horizontal lines are wide but a 45 degree line is very thin. It looks beautiful when you draw curves.

Italic calligraphy is a whole art, one of which I am mostly ignorant. But this website has a good introduction to the basics and there’s lots more resources on line. For a pen I am using the Sakura Pigma Calligrapher; their Pigma Microns are my favorite ordinary pens (needle point).

Italic pen nibs need to be held at a specific 45 degree angle to work well; this adapter from AxiDraw makes that possible. Easy to install, although note it’s not compatible with their other pen-holding addon, the one that lets you adjust angles. It took me a while to understand why they mounted it in the direction shown in the photos. That keeps the pen tip out from under the robot arm, a good thing. However it means you need to place the AxiDraw on the right edge of the page; if you’re used to the AxiDraw being above your design (“on top”) then you need to rotate your images 90 degrees. (This orientation is natural for the AxiDraw; it’s designed for A4 pages in landscape orientation, wider than tall.)

The hardest thing is getting the rotation of the pen correct. You want it positioned so the entire flat line of the nib is in full contact with the page. That’s very fiddly. What worked best for me was to set the pen drop height fairly high, like 50%, position the pen manually, and screw it down tight. Then go back and adjust the pen drop lower, say to 40%. That way it won’t ever drop the pen to just barely above the paper, particularly if the paper is not entirely flat.

The other thing I learned on my first experience is it works better drawing very slowly, like at 10%. I don’t know if that’s a limitation of the pen I’m using or just a reality. I also suspect it would work better with more pressure, ie a weight attached to the pen itself.

Anyway, after about 15 minutes of fiddling I got some plausible looking things. Here’s a test pattern, also some simple test text. The text is a 10mm high font (with a 2mm wide nib) converted to a stroke font via Hershey Text, using the EMS Readability font. Simple stroke fonts are a virtue here; the pen nib itself supplies the interesting design. The test circles show some ink flow problems I still need to work on. Also I think the font size is a bit too small for the nib.


Made some nice progress.

I figured out how to align the pen more easily. Gently! Let the pen just drop to the page and find its own level. Then don’t overtighten the grip on it, since that will move the pen. It’s still really fiddly though! I also got some mileage from a simple pen-alignment test print.

The 2mm Pigma Calligrapher definitely needs to write slowly. Even 20% speed is too fast; 10% seems OK but very occasionally skips ink. I wonder if that’s a deliberate decision; calligraphy artists tend to draw slowly. Or maybe it’s a requirement of the very wide ink aperture. Anyway it’d be nice if it were faster!

Below, on the left, I made a nice font sample using 10mm sans-serif as my input to Hershey Text and a variety of fonts. All of these look decent, although not all are equally readable. The italic / slanted fonts look better (duh! italic calligraphy is usually written with a slant). The hand script fonts with curves look nicer too. 10mm with a 2mm nib is still too small. On the right is the same image shrunk down 75% (so, 7.5mm?) with a 1mm nib. Looks way, way better.

Anyway this all looks pretty plausible as a basis for fancy looking robot handwriting. I think my next interest is getting some nicer stroke paths for the pen to follow, something more like a real calligrapher would do. I feel this stuff is very well codified by instruction. I should try programming that myself from videos! Or look into digitizing an actual expert’s motions.

In the sample images above, the font for “10mm sans-serif” is EMS Pancake. I included that as a joke but actually it looks pretty good.

GoodReads for games

I love GoodReads. It’s terrific having a giant list of books I’ve read and my thoughts about them. Also it’s got a nice community, a low key social network. There’s a lot of reasons to complain post-Amazon-acquisition, particularly about how little development the site gets, but it’s still good.

I’ve never managed to get into Letterboxd, the equivalent popular site for movies. I’ve started using it again recently and am doing a little better but I sure wish I had notes on the 500+ movies I’ve watched in my life, particularly back when I was watching 3 movies a week.

So what about video games? I went looking and found various options. See also this list. “Reviews” refers to folks who at least entered a rating for the game. I’m using three games to compare user populations: Ratchet & Clank Rift Apart (2021), No Man’s Sky (2016), and Majora’s Mask (2000).

Grouvee

No Man’s Sky: 3005 users, 1026 reviews
Rift Apart: 274 users, 116 reviews.
Majora’s Mask: 5404 users, 2754 reviews

Steam import implemented
Export implemented

GG

No Man’s Sky: 350 users, 44 reviews.
Rift Apart: 337 users, 129 reviews
Majora’s Mask: 60 users, 51 reviews

Steam import on roadmap
Export not mentioned?

Backloggd (numbers approximate)

No Man’s Sky: 1500 users, 900 reviews.
Rift Apart: 446 users, 380 reviews
Majora’s Mask: 4200 users, 2500 reviews

Steam import on roadmap
Export on roadmap

HowLongToBeat (backlogs + beat)

No Man’s Sky: 2700 users, 256 reviews
Rift Apart: 1000 users, 285 reviews
Majora’s Mask: 2500 users, 516 reviews

Steam import implemented
Export implemented

The main conclusion from all that: GG has a much smaller userbase. Anecdotally, Backloggd shows up in recent discussions on ResetEra, etc.; Grouvee and HowLongToBeat have been around longer. It also looks like Backloggd is as popular as Grouvee now but missed out a few years ago when No Man’s Sky came out. I like how many folks on these platforms rate Majora’s Mask; that’s a classic and sophisticated game for folks to know.

I like that Grouvee and HowLongToBeat already have the import and export features I want.

Sleep position tracking and OSCAR

Update: Torena now exports Somnopose-style CSV data right in the app. Yay!

I’m interested in tracking the position my body is in when I sleep: on my side or back, mostly. I’ve found a good solution for that on Android, the app Torena. You strap your phone to your chest with a runner’s belt, run the tracking app all night (30% of battery), and it produces nice little reports. It’s a good app, well designed, and worth the $10 if you need this kind of tracking. Somnopose is a similar app for iOS.

Pretty straightforward. But to make this really useful I want to pull the data in to OSCAR, the CPAP / sleep apnea tracking app that’s also recording all sorts of information about my breathing patterns, my blood oxygen, etc. I want to answer questions like “does sleeping on my back cause breathing problems?” and “do apnea events cause me to turn over, or does turning over cause apnea events?”. I succeeded! Code is in this gist.


I made the import via OSCAR’s Somnopose CSV support. The code linked above converts Torena’s JSON export to a format enough like Somnopose data that OSCAR will import it. It’s a bit of an impedance mismatch. Somnopose basically just records the phone’s rotational position in two axes, so it’s a data stream like “-89 -93 -92 -73 -23 4 2 1” with timestamps (that’d be rolling over from one side to stomach). Torena classifies the raw numeric data and exports categorized data, like “Left… Back… Right” with timestamps. So I just converted that back to numbers, albeit coarse ones. One extra trick: I had to add some extra CSV rows to mark the end of old positions so that OSCAR would draw square waves, not sawtooths.
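A sketch of the idea (the category names, angle mapping, and one-second “close out” offset here are illustrative choices, not necessarily what my gist or Somnopose actually uses):

```python
# Hypothetical mapping from Torena-style position categories back to
# the coarse roll angles a Somnopose-style recording would contain.
ANGLE = {"Back": 0, "Left": -90, "Right": 90, "Prone": 180}

def to_somnopose(events):
    """events: [(timestamp_seconds, category), ...] in time order.
    Returns (timestamp, angle) rows, re-emitting the old angle just
    before each change so OSCAR draws square waves, not sawtooths."""
    rows = []
    for i, (t, category) in enumerate(events):
        if i > 0:
            previous_angle = ANGLE[events[i - 1][1]]
            rows.append((t - 1, previous_angle))  # close out old position
        rows.append((t, ANGLE[category]))
    return rows
```

Without the close-out rows, OSCAR would interpolate a slow drift from -90 to 90 over an hour instead of a turn at the moment it happened.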

Next steps

I did this converter as a one-off just to see if it’d work. It does! To make this useful there are three options:

  1. Turn my converter into a nice reusable program
  2. Ask the Torena devs to export an OSCAR-compatible Somnopose CSV in the app
  3. Design a new input format for OSCAR along with the Torena dev

Option 2 is probably easiest, if the Torena dev is amenable. Option 3 would probably be better but requires some product design effort with the OSCAR team.

FWIW I think having the raw numeric data is probably more useful. You can measure partial positions that way, and maybe infer movement activity from the magnitude of changes. But really the data display needs to be rethought: coding position as an angle from -180 to 180 is not intuitive. The classification Torena does in-app makes sense here, but maybe it could be done in OSCAR instead.

Another specific problem to solve is time synchronization; Torena has its own clock, independent of the CPAP machine and O2 sensor that OSCAR is getting data from. OSCAR doesn’t seem to have a good solution for that, although you can at least input a manual offset for some things. It’s important to get it right at least to the minute, if not the second.

See also the discussion on Apnea Board.

ez Share hacking notes (WiFi SD cards)

tl;dr: I found a way to download arbitrary files off an ez Share, but no way to list the files. Also the ez Share only runs as an access point with a DHCP server, which makes it super awkward.

I’m trying to get a working WiFi SD card. This is a thing that acts like a normal SD card, typically for a camera, but also has an embedded system so you can download files from it over WiFi with the card sitting in your camera. I want to use this to get data off a device that writes to an SD card and has no network interface.

Eye-Fi pioneered this tech way back in 2005. They’re gone now. Transcend made a line of WiFi SD cards for a while. So did Toshiba with their FlashAir. They’re all off the market or only available at ridiculous prices. The only thing left on the market is a Chinese product called ez Share.

Long story short, ez Share is not suitable for general file usage. It’s got a very restricted design that makes it only good for photos, and only in an awkward WiFi way. I’ve got some detailed notes below on my hacking attempts. Meanwhile I found a FlashAir I could get from Germany (via eBay) for $50, so I’ve ordered that; it should be more amenable.

ez Share limitations

The WiFi only runs as an access point. You can’t make the card act like a client on your existing WiFi network. That means you have to swap your WiFi adapter to its network to download from it.

The other big problem is the card only presents a Web UI. And that web server seems hellbent on making sure you only download images from specific directories. It only wants to serve files in directories named DCIM/100CANON or DCIM/EOSMISC. And those files need to have image or video extensions. See below for a partial workaround to download arbitrary files. But I can’t get a file listing.

I couldn’t find any sign of a firmware update capability. Too bad, that might be a way to hack the thing to be more useful. Update: Biorn says there’s a way to update the firmware via USB, with some details here.

ez Share resources

These ez Share cards have been around for 8 years or so, but surprisingly there’s precious little information in English from folks trying to hack them. Either because they weren’t sold in the US/Europe or because they’re so inflexible. Here are some links I did find:

  • A download script in bash. Seems to be written for a different firmware than mine; the methods don’t work for me.
  • GitHub repo for that download script.
  • A clever hack to use the card with a microscope that writes to another directory. He corrupts a FAT32 filesystem to create the equivalent of a hardlinked directory.
  • Hardware notes on using the card without a camera. Doesn’t address software problems.
  • Game console hackers at GBA Temp have tried using it to run games without much success I could find.
  • An official manual.
  • The default root password is “admin”.
  • The ez Share creates a WiFi access point with SSID “ez Share”, password “88888888”.
  • My card is version 4.4.0. In detail:
    LZ1001EDPG:4.4.0:2014-07-28:62 LZ1001EDRS:4.4.0:2014-07-28:62 SPEED:-H:SPEED

Download files

I poked around at the web server trying to coerce it to give out more info than it wanted to. The big success is I was able to download arbitrary files with URLs like this:


Note the ..\ in the path name, to get out of DCIM. The filename has to be the 8 character short version; it’s a FAT32 filesystem but the firmware doesn’t seem to understand long filenames. Also there’s no ftype specified; that’s needed to avoid the “file extension must be a .jpg or other image/movie format” check.

Unfortunately trying similar tricks to get a listing of files via the web interface at /photo did not work. Neither did trying to download a whole folder; you can do that with images in the DCIM directory but apparently not with arbitrary files.

Below are some very detailed notes on exactly what I tried and learned.

ez Share file and directory URLs

List photos

List videos

Alternate file list? From another firmware:

Download an image

Hack to download any file:

Download selected photos:

Download folder:

Doesn’t work but looks like it should: hack to download any folder

other ez Share URLs

First page:

Redirects to /publicdir/sendir
Redirects to

Change config screen (password)

Admin page:

Cloud Lab advertisement

Version request (reported)
LZ1001EDPG:4.4.0:2014-07-28:62 LZ1001EDRS:4.4.0:2014-07-28:62 SPEED:-H:SPEED

Starlink improvements

I’ve been running the TIG stack monitoring my Starlink connection for about 3 months now. I made a couple of posts to Reddit recently about the results. Bottom line: both latency and packet loss have improved significantly. Latency dropped from 44ms to 37ms average around July 15. And in the last month or so packet loss dropped from 2.1% to 0.6%.

Here’s a copy of the two reddit posts.

Latency improvement around July 15: 44ms to 37ms.

Noticed something interesting in my automated monitoring: my latency has improved. I’m measuring with pings sent every few seconds using Telegraf. The average has been about 44ms for a long time, dropped to about 42ms starting July 9, and starting July 15 dropped to 37ms.

It’s hard to accurately measure performance with a connection as variable as Starlink, but I have enough consistent data I think this is a real trend. Graph is here:

I don’t know where latency improved. The changes in the graph don’t seem correlated to firmware updates. There’s no obvious change in bandwidth, although measuring that accurately is really hard. The latency improvement could well be in the terrestrial links between Starlink ground stations and Google DNS; that’s not very interesting. Or maybe it’s in the satellite network itself. No way of really figuring it out from here I think. But FWIW I’ll append a current MTR.

 Host   Loss%   Snt   Last   Avg  Best  Wrst StDev
 1.      0.0%   189    0.5   0.4   0.2   0.6   0.1
 2.      0.0%   188   35.9  31.1  21.2  55.4   6.6
 3.      0.5%   188   33.2  30.8  19.5 141.2  10.5
 4.      0.0%   188   33.5  30.3  19.8  50.4   6.2
 5.      0.0%   188   34.5  31.4  21.5  59.8   7.7
 6.      0.0%   188   35.1  30.6  19.2  56.8   7.4

Packet loss improvement: 2.1% to 0.6% over three months

My packet loss has improved over the last three months: from about 2% to 0.6% by my measurement. Don’t get hung up on the absolute number but I think the improvement trend is real. See below for more numbers and the graph here:

Nothing significant has changed at my site. What I’m seeing is consistent with what Starlink has been telling us: reliability should be improving as they fill out the constellation. My antenna is mounted on a roof in Grass Valley, CA with some minor tree obstructions. I’ve been hoping with more satellites that Starlink could work around the obstructions; there was a promise a few months ago of a firmware update to do that.

Anecdotally I’ve been using Starlink exclusively on my desktop computer for about 8 weeks now. It used to be that about 10 times a day I’d have to wait 5-10 seconds for the network to respond again, which was annoying. Now it feels like 2 times a day. But that’s just perception; the numbers in my report say more.

Unfortunately I’m having a hard time getting a great graph of the trend but it should be visible in the graph linked above. I’m also seeing a similar trend in Dishy’s own reported drop rate. Here’s the average packet loss percentage from my pings along with Dishy’s average drop rate over 7 day periods starting at various times:

  • Week starting 5/1: 2.1% / 0.024
  • Week starting 6/1: 2.0% / 0.027
  • Week starting 7/1: 1.0% / 0.008
  • Week starting 7/13: 0.6% / 0.006

FWIW the same ping tests on my old fixed wireless WISP showed about 1% packet loss. But those were just one-off lost packets. Starlink’s multi-second outages have been the annoying part. They’re getting better!

Appendix: my ping data comes from Telegraf’s ping plugin. I’ve got it configured to send 5 pings at a time, every second. As always, when measuring with ICMP ping all sorts of other things could explain changes in behavior. But in this case the ping data confirms my general perception and also Dishy’s own reported drop rate.

Obstruction improvement: 0.7% to 0.07% in the last month

The recent info email update got me looking closely at my obstruction reporting stats. Historically I’ve had about a 0.74% obstruction rate thanks to a few trees. In the last 7 days it’s been 0.070%, or 1/10th the rate. You can see the graph of Percent Obstructed here: A second related graph is Seconds Obstructed:

Note these are Dishy’s own reported numbers, similar to the graph you get in the app. This trend is entirely consistent with my own monitoring with pings. I think in the last month the planned “Connecting to the Best Satellite” feature has finally come into full use.

(One in a series; I’ve recently made other posts here on packet loss and latency improvements. I’d like to do one more post measuring lengths of outages, those 10 second failures we used to get. I think those have also improved significantly, but they haven’t gone away entirely.)

A false bandwidth result

I noticed the following graph (which turned out to be false) of Speedtest by Ookla results showing my bandwidth decreased around August 1.

I wrote a whole long-ass exploration of why my Starlink bandwidth might have decreased. It’s all wrong! Turns out Speedify somehow got re-enabled on my server August 1; I don’t know how. And Speedify’s VPN servers typically cap out at about 120 Mbps for me. That’s it, that’s why my bandwidth went down.

Unfortunately this invalidates all my network monitoring data from August 1 to August 6. Doesn’t invalidate any of the data posted above, that was all in July before this change. But I will need to remember going forward to ignore all the weird changes I started seeing August 1.

I’ve removed Speedify entirely from the Linux box.