Here’s where it gets complicated. The cable is Micro-B SuperSpeed to Type-C. That’s the wide Micro-B connector with the extra set of pins, not the single small one. The cable has “SS 10” printed on it, which is a promise it’ll do 10 Gbps. I’m guessing the drive is USB 3.2 Gen 1×1, which is SS 5, or 5 Gbps. There’s a chance it’s USB 3.2 Gen 1×2, which is 10 Gbps but is not called SS 10. In that case the USB 3.2 Gen 1×2 device is probably not 10 Gbps compatible with the USB 3.1 Micro-B SS 10 cable; it should presumably fall back to 5 Gbps.
Given the drive is spinning rust, even a 5 Gbps interface is roughly 3-4x faster than the drive can actually deliver.
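Back-of-the-envelope, assuming a typical spinning drive sustains around 160 MB/s (my assumption for a modern drive, not a measurement of this one):

```python
# Rough headroom check: 5 Gbps link vs. a spinning hard drive.
# The ~160 MB/s sustained rate is an assumed typical figure.
drive_mb_per_s = 160
drive_gbps = drive_mb_per_s * 8 / 1000          # 1.28 Gbps
link_gbps = 5
print(f"link is {link_gbps / drive_gbps:.1f}x the drive's best case")
# prints: link is 3.9x the drive's best case
```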
On the back of my problems with a flaky WAN link I’ve had a different problem with my Ubiquiti router: it would fail over from the WAN link (when it was down) to my LTE backup, but then not return once the WAN link came back up. Not every time, only three or four times in several months. But that’s obviously no good.
Long story short, I got escalated to tier 3 support. I’ve made two changes at their suggestion. One was to switch the watchdog ping address from ping.ubnt.com to 18.104.22.168, mostly to skip DNS resolution in the “is the network up?” test. The other was to turn on smart queuing, which apparently might magically fix the routing problems I was seeing.
I changed the ping address Nov 21, and the smart queueing was enabled Dec 9. Meanwhile I haven’t seen my WAN link flake out once since Dec 1. No explanation for that new stability, the router was last rebooted Nov 29 or so and there was flakiness after that. But with the WAN link behaving solidly there’s no chance of the routing failover messing up.
So.. still confused as to what caused the problems I saw. I suspect they’ll come back.
Update: they came back. WAN started flaking out today, Dec 20, at 11:59am local. No hint the failover isn’t working though.
Windows’ model of sound output has always confused me but I think I finally figured it out. Particularly how it interacts with apps like Zoom and games like World of Warcraft. Here’s some notes.
Jump down to the section on “Application output device” for the juicy bits on “why isn’t sound playing where I want?”
Your Windows system has a set of output devices: places it can send sound out on. Typically these correspond to hardware devices you own like headphones or speakers, but they can also be virtual sound devices for capturing audio, modifying it on passthrough, etc.
The easiest way to see your sound devices is to open the Sound settings. Right click on the volume / speaker icon in your taskbar, or open Settings, click System, then click Sound. You’ll see something like this:
The drop down (“Speakers”, here) lets you choose what your system default output device is. Most apps will play sound to that device. This notion of “default” is the cause of much confusion, read on for more.
Clicking through to “Manage sound devices” gives you a couple of extra options:
There’s a “test” option for each device, plus the ability to disable sound devices. That’s useful for annoying devices you never want to use. Like the Acer X34 entry, which sends sound out my video cable to my crappy monitor speakers. I never ever want to do that, so it’s disabled.
Windows has like ten options for setting the sound volume. The most complete one I know is “App volume and device preferences”, you can get there by pressing the Win key and typing “mixer”.
The “Master volume” is the output level for the current selected default output. Each output device has its own remembered master volume that you can change by switching output devices first. Also there’s a per-app mixer available for some but not all apps. I don’t know what populates that list but it’s obviously not very complete.
Application output device
Every app gets to choose which device it sends sound output on. Most apps just send to the system default and you have no control over that. A few specialty apps like Zoom or games with voice chat let you control the output device, typically in the System settings.
Choosing a new sound output can be handy particularly for voice communication systems. Maybe you want most system sound out the speakers but your friend’s voice on zoom in an earpiece. But for most apps “System default” is what you want. But here’s the gotcha:
The system default device can change while the app is running. Most apps will detect that change and switch sound to the new output device. You turn on your Bluetooth headphones and now sound is going out those headphones. Neat! But World of Warcraft is an exception. If “System Default” is selected it will not switch to a new output device even if the system default has changed; it will keep playing to whatever was the default when the game was launched. I suspect other apps do this too. It’s a bug IMHO.
Changing default output device
When you plug in a new device it usually becomes the default output magically for you. Often that’s what you want. But it can become confusing, particularly when you then unplug a device and the default output switches back to something else. If you’re not hearing sound, the first thing you should check is what Windows thinks your default output device is at the moment. 9 times out of 10 it’s that HDMI monitor you plugged in.
I’ll say it again: if an app is playing to the system default and you change the system default, most of the time the app will switch to that new output device. But some apps don’t detect the change. It’s very confusing.
A special note about wired headphones. The headphones I have are implemented as a separate output device. Both my speakers and headphones are plugged into separate motherboard jacks and the Realtek S1220A sound chip implements genuine separate devices for each. Other motherboards do something special with headphones where when you plug them in they electrically intercept the speaker output and divert sound to the headphones. But Windows is 100% unaware of this diversion. My old Realtek ALC887 motherboard did this. If yours does, you will not see a separate headphone device and there’s no way in software to switch between them.
Sound input devices
For the most part, all the notes I wrote about output devices apply to input devices as well. There are multiple input devices, there’s a default input device, and switching between them is a little confusing.
Except: Windows 7 had an idea that there were two default input devices; “default device” and “default communication device”. You can still change the latter if you go to the old Control Panel. In theory that communication device was for voice chat and the other default device was.. your high end microphone for studio recording? No idea. This default communication device concept also applies to output devices. I don’t think many apps ever used the idea much and it seems deprecated in Windows 10.
Also except: some microphone devices have a “Boost” option to make them sound louder. And/or a separate input gain control. I’ve never understood why, it has something to do with line levels or something. Anyway if no matter how loud you talk you’re too quiet, the boost can help you. Only my Realtek headphones have this (not the Microsoft LifeCam), I get there via Settings / System / Sound, then clicking “Device properties” on the microphone, then Additional device properties. That puts me in an old Win7-style app, where the Levels tab has a boost option. I suspect every sound driver is different.
Also also except: you have your choice of sound format that a Realtek microphone puts out. Bitrate, bits per sample, etc. Mine got set to 32 bit output somehow and no other apps worked; the mike was responsive but all you could hear was static. Setting it to 44.1kHz, 16 bit fixed it. I’m not sure what’s required to have 32 bit audio working in Windows, but I sure don’t need it for my voice chat.
In this post: how to play PC games on your TV or tablet computer.
Steam Link is Valve’s half-abandoned technology for streaming PC games. The original product idea was you’d install PC games on your fancy gaming PC, then stream them to a cheap streaming video device on your TV so you could play on your couch. They got it working reasonably well at 1080p on a wired home LAN, but it never succeeded in the market and you can’t buy the hardware anymore.
For me the real problem was the Steam Controller, which was not only a bad game controller, but also a really bad fit for PC keyboard-and-mouse user interfaces. They did their best but it sucked.
But what if you don’t use the Steam Controller and use a regular keyboard and trackpad instead? Turns out that’s pretty easy, particularly with the excellent Logitech K830 wireless keyboard/trackpad. I’ve now been playing games like Dicey Dungeons and Crusader Kings 3 three different ways.
A long HDMI cable is by far the easiest way. Just set your TV up as a second display for your PC, no Steam Link required. Use the K830 as a second keyboard/mouse for the PC. Obviously only works if you’re in range to run the cable. HDMI 2.0 (required for 4k) only works to 35 feet or so; beyond that you need some sort of expensive fiber or ethernet conversion cable. Or wireless.
Steam Link hardware. You can’t buy this new anymore, but if you have one lying around or buy a cheap used one it still works perfectly well for 1080p display. You can plug the K830 wireless adapter into the Steam Link’s USB ports and control it from there. Works a lot better on wired LAN but wifi is possible. Resolution matching is a little fiddly but I’ve had good luck if I run my PC at 1920×1080 first, Steam Link will sync to that pretty well.
Steam Link on a tablet. Just tried this on my Samsung Galaxy Tab S5e and it’s pretty brilliant; there are Steam Link clients for both iOS and Android. No easy wired LAN option but the WiFi seems pretty good other than some minor graphics and sound glitchiness. The mixed bag is the control; with no other hardware, Steam emulates a simple gamepad with touch controls. But it also emulates a mouse with touch controls which is all you need for playing a fully mouse-UI game like Dicey Dungeons. For more complex UIs like Crusader Kings 3 I’m back to keyboard-and-mouse. Pairing the K830 via bluetooth to the tablet works great, it becomes fully usable in the Steam Link Android client.
I’m amazed how well all this works. But in retrospect it’s pretty simple. The only drawback is that while the K830 is a nice portable keyboard/trackpad, it’s really not a gaming mouse. Not even a full size keyboard. Not great for faster games that require precise mouse input. One alternative is a Corsair lapboard with matching wireless keyboard and mouse of your choice. That gives you a full size desk-like setup, but be aware it’s pretty big.
After fooling around with DEM data yesterday my buddy Mike did some nice work for me and gave me LIDAR data. Not raw LIDAR data but rather processed data; gridded DEMs of various sorts. The raw data comes from an August 2018 LIDAR survey of my area. I’m no expert at LIDAR at all but I learned enough to be dangerous, so here it is.
The raw data comes in LAZ files, which are basically point readings; a laser reflected off of something at that specific XYZ. Mike’s Makefile munges that through las2las, las2ogr, and ogr2ogr to get it into a specific reprojected GeoPackage file. Then gdal_grid does the real magic of looking at all those random points in XY space and mapping them to a uniform XY grid suitable for a raster DEM. From there you can use normal DEM tools.
The neat thing is LIDAR gives you multiple Z values for a single X, Y coordinate. I.e., there were reflections for a particular spot at, say, both 2103.37 meters and 2122.93 meters. What could that mean? In a word: trees. Sometimes the laser penetrates the leaves and reflects off the ground, sometimes it bounces off the leaves. That sounds noisy, but it’s super useful.
The trick is if you take the minimum elevation at every X, Y then you get a DEM of something like ground level. Not exactly; buildings will show higher, so will the middle of a tree trunk. But it’s pretty close. Similarly if you take the maximum then you get the top of the canopy at that spot. And if you take the difference max−min you get the tree height. More or less.
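The min/max idea is easy to sketch in code. Here’s a toy stand-in for what gdal_grid’s minimum/maximum metrics do, with made-up points; this is my illustration, not Mike’s actual pipeline:

```python
# Toy LIDAR returns as (x, y, z). Two returns land in the same 1m cell:
# one low (ground under a tree), one high (canopy). Points are invented.
points = [
    (10.2, 5.1, 2103.37),   # ground return under a tree
    (10.7, 5.6, 2122.93),   # canopy return in the same cell
    (11.4, 5.2, 2104.10),   # bare ground, next cell over
]

cell = 1.0
zmin, zmax = {}, {}
for x, y, z in points:
    key = (int(x // cell), int(y // cell))   # which grid cell this return is in
    zmin[key] = min(z, zmin.get(key, z))
    zmax[key] = max(z, zmax.get(key, z))

# max - min per cell approximates tree height:
# ~19.56m under the tree, 0m on the bare ground.
heights = {k: zmax[k] - zmin[k] for k in zmin}
```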
So that’s what’s shown on my map above; tree heights from 0m to 25m rendered white to green. Isn’t that neat? The choice of 25m is important; too low and all the trees look the same, too high and you can’t see all of the forest for trying to emphasize the one tree.
A quick Google Search suggests it’s possible to write a tree classifier that looks solely at LIDAR data and knowledge about the biome and estimates a tree inventory by species. Helps if you can compare summer and winter images.
I tried also using a discrete color scale to color the tree height in categories. Flat ground (white), low brush (brown), small trees (light green), and big trees (dark green). It sort of works, I think the data picture below is fairly accurate, but it’s also pretty hard to read. Also it doesn’t tell me anything about what I really need to know, which is where there’s brush underneath trees.
I also experimented with using the LIDAR data to make contours. Honestly the result isn’t better than what I got with the 1/3 arc second DEM file. It’s much noisier and the extra accuracy doesn’t much help. LIDAR contours in red, DEM contours in grey. I suspect if you really want to get a proper elevation map from LIDAR data alone you need to work harder at smoothing the data and discarding errors.
My neighborhood association is assessing what we need to do for fire safety on a plot of land we own. I’ve been having fun in QGIS mapping various aspects of the land. Particularly like this abstract example:
That’s showing the lot boundary (purple), the river (blue), a trail (mauve), and a powerline (black). And then 1m contours in light grey. Not a great map for navigation, but it does a nice job showing the terrain contours. Also it’s just kind of abstractly pretty.
Couldn’t be easier to do this stuff. Grab NED 1/3 arc second data off the National Map. Crop that DEM to a small region in QGIS, then have QGIS calculate contours, and Bob’s your uncle. QGIS is such a powerful tool. And cheers to GDAL doing all the work underneath.
I just finished a fun project; making a couple of maps of Papua New Guinea for a friend’s academic book chapter. I’ve never really collaborated with someone this way for a print publication and it was a good experience. Also this may sound goofy but the audacity of mapping PNG was part of the appeal; it’s a famously unmapped place.
The main map is a regional overview of Enga Province, in the PNG highlands. Also a detail map of the Porgera Valley area; the mine and various villages. Enga is not huge; 4500 square miles, a bit smaller than Connecticut, and with 432,000 people. It’s also very remote and mountainous and very thick with vegetation. It’s in that part of PNG that’s classic “uncontacted tribes” territory, although that was all 50+ years ago and a lot of folks there now have cell phones and Facebook. (David Attenborough’s A Blank on the Map is fun for the flavor, although it’s a different part of PNG.)
Online maps are pretty limited, but there are some beautiful paper maps going back to ground surveys in the 40s to 60s. To give you an idea of how limited digital maps are, here’s Google, Bing, and OSM. (They do get a bit better if you zoom in to the population centers like Wabag and Porgera.)
So my job was to produce an overview map of Enga to help readers orient to places named in the text. Also a detailed schematic view of Porgera. I don’t feel I can just publish my full map here, at least not until my friend’s book comes out, but here’s a couple of samples of my result.
Data: OSM and SRTM
The first challenge was getting data. The key things I wanted to draw were roads, rivers, and settlement locations. Secondarily I wanted to convey the terrain somehow, and for the Porgera area also map specific features related to the mine.
I went through a lot of data sources; Natural Earth, PNG government data, academic datasets, random shapefiles that fell off the back of a truck. Most of it proved not very good: incomplete, or old, or limited. Some exceptions though; the PNG government has a great dataset for the names of rivers, for instance.
I ended up mostly using OpenStreetMap data for rivers and roads. Several people have done a lot of excellent work mapping very detailed specifics of Enga Province; the street map of Paiam is a thing of beauty. The rivers were a bit spotty, I think there may have been some data loss at some point in OSM’s history. But the data was a comprehensive, uniform collection and I could fill in a few missing rivers with data from the Humanitarian Data Exchange. (I’m very curious where their data came from!)
I got the OSM data itself from the French OSM extracts. Excellent simple data source, grouped by country. I imported the PBF directly into QGIS, no conversion. Working with its unusual schema is a bit awkward but once you get used to it it’s not so bad.
For settlements I couldn’t find a dataset anywhere that worked great. I ended up getting a list of placenames and rough locations from the author I was working for, then painstakingly placing the dots precisely in the center of each of what looked like the “right cluster of houses” on aerial imagery. I also went back to some of those old 60s maps for ideas of where things might be, lots of fun. A similar process was used for the mine maps, although it helps that other people had done similar maps so I was never trying to do unique original data work.
That leaves terrain. This part’s easy! SRTM v3 for the basic elevation data, colored by 500m contours. Then overlaid on top of that a greyscale hillshade (courtesy of Mike) blended with hard light. I like the result, although it’s a bit noisy for print. Looks great in color on a screen though. (Extra detail: a 2600m contour line from OpenDEM to highlight the limit of cultivation of staple food crops.)
The visual design is the part I’m not entirely happy about; my skills are more with data wrangling than aesthetics. I mostly went with very basic choices where I could; plain Arial font, plain standard map symbology, etc. I’d say the result is “adequate” but not beautiful.
One thing I learned more about; text halos or buffers. It’s common to make labels stand out against busy backgrounds by rendering the text in black with a small white outline. Or so I thought. Actually it works better if the outline is light grey with 50% or more opacity; the white/black is too contrasty. QGIS lets you create buffers for labels directly. If you need to do them yourself in HTML/CSS, something like -webkit-text-stroke: 1.5mm rgba(255,255,255,0.5) works.
I did the whole map from start to finish in QGIS, mostly version 3.14. QGIS has come a long way and I’m very impressed with it, particularly for an open source tool. The UI is mostly accessible and if you can’t figure out how to do something (like, say, smooth a polygon) you can find an answer online. It has some baffling UI choices, like moving a vertex of a polygon isn’t animated. Also some bugs and weird limitations. I never could figure out how to edit the colors in my legend, custom symbols just wouldn’t take. But QGIS has a lot of complexity and power in a fairly usable package.
My favorite thing with QGIS is how great it is for experimental mapping. Early on I just loaded every single data layer I could find. Six different sources for rivers, four for roads. A bunch of scanned paper TIFFs I’d georectified. 30+ layers for me to tinker with, overlay, all to figure out which data sources I’d really use in the end. Lots of fun.
The part that sucks in QGIS is the Print Composer. The main map editor is lovely and taking screenshots from its display works great. But the moment you want to export a precisely defined PDF or a large raster image, you need to use the Print Composer. The composer is also where you add things like titles, scale bars, legends, etc.
Most of the composer works OK but it’s clumsy. The real problem is it’s got a whole different renderer! And this renderer does very different things for labels. I must have misunderstood something fundamental about print scaling, because a label would be a tiny little thing on my edit map and then a huge blob of words on the print map. And because labels try very hard not to be on top of each other, there was an enormous amount of hand tweaking and verification and testing to get everything correct. (Also a couple of map layers of fake curved lines just so I could put text on them.) Seriously, I was pulling my hair out with labels. I gather this is always the case in mapmaking but the inconsistent rendering algorithms really make it hard.
OSM and Wikipedia?
So now I’m done; map submitted to the author. For now. I’d guess I spent 50-80 hours on this project, although someone focused and better at QGIS could have done it in under 20. I spent a lot of time learning things I didn’t need to; PNG is fascinating!
We’ve talked about having a goal of making maps for all the PNG provinces for Wikipedia. I’d like to do that and I’m pretty well set up for it. The main thing stopping me is Wikipedia culture. The last thing I want is to do a bunch of hard work and have some gatekeeper come in and invalidate all my work without even an explanation.
I went looking for Wikipedia mapping standards and didn’t get real far. This conventions page has useful guidance but I can’t tell if it’s authoritative; a lot of it hasn’t been updated meaningfully since 2012 or so. When you dig into the tutorials, etc for mapmakers a lot of it ends in 2008 with excellent guides written in French for Inkscape. Back then QGIS wasn’t really realistic, now it is. That or even a custom automated software render pipeline, no hand editing at all.
What I really need is a Wikipedia map mentor, someone to work with me to get that first submission right so my work is accepted. Either that or maybe I’ll just put the map I have now on the page and see what response it gets.
Can you spot the moment PG&E cut the power to our area? The network (and my house) are on backup power, so I’m still online. Looks like a whole lot of other customers aren’t though. Pings dropped roughly from 52ms to 42ms. It’s the fastest I’ve ever seen, even faster than at 3am.
It’s possible that what’s really going on is a different route is being used now that power is down. Here’s the route I have right now:
Edit: one weird trick to increase latency! Note graph is 30 hour time scale.
The long steady part at left is Smarter Broadband, you can barely see the little dip in latency from 50ms to 40ms I reported above. The outage is when both they and AT&T cellular were down. Then I switched to AT&T cellular with 150-400ms ping times to Google DNS overnight. Just about 7am that leapt up to 500ms ping times. It’s still working, but they must be severely congested or something.
Many maps follow a sort of standard for symbology where roads are drawn as brown-orange-red lines with black casings around them. Like this, from a Wikipedia style guide.
QGIS has almost no support for drawing lines with casings, as common as the style is. It’s something of a rite of passage for map makers to figure out how to do it. I think I finally figured it out but I’m not an expert; please correct me in comments if I need it. This StackOverflow question is also helpful.
The basic idea is to draw two lines; a thick black line, then the thinner orange line on top of it. That’s simple enough. But it’s subtle and if you do it wrong all sorts of ugly things can happen.
Part of what makes doing this hard is that in most GIS data sets, roads aren’t just one single line. Particularly in OSM data roads are many line objects that are separate, little segments of road, and they just happen to overlap / join up right. Those joints can look terrible if you draw things wrong, see below. Also the intersections between roads need special attention.
I should note up-front that it’s important to test your results in whatever output format you’re really going to use. The on-screen preview in QGIS looks nice. But if you use the layout manager to compose a print run, know that the layout manager renders differently than the preview. Worse, you’re probably really rendering to PDF or SVG, so you really have to look at those output files. And if you care about print, well, print it; the printer will change the way it looks once again. Most of the screenshots here are going to be in the main QGIS edit / preview mode.
The simplest thing is to just style the lines for your roads with two overlapping lines, like this in the Symbol Selector. Note the order matters!
In this case I have the black line as 1mm and the orange line as 0.6mm. Something like this is available as a default symbol in QGIS, the “Topo Main Road”.
This mostly works, and you should try it first and see if it’s acceptable because it’s so simple. However if you look closely, you’ll see small errors in the black casing. Particularly for roads made of many line segments. You can fix some of these by fiddling with the cap style; “flat” on the black line and “square” on the orange line works best for me. Here’s what square/square looks like blown up to 4mm to exaggerate the problems:
Another drawback with this simple approach is you don’t have any control over road intersections. They can end up looking terrible.
The more complicated drawing option is to have two layers for roads; one for the casings and one for the orange lines. You don’t actually need two full GIS layers though, you can also just duplicate the line data in the symbology classifier. In this case I’m using an OSM PBF extract with a Rule-Based Layer. (You might also be able to do this with a Categorized layer, but that type is less amenable to having multiple symbols for one type of data.)
If you just try this straight it might or might not work. The problem is that the order of these lines is not consistent in QGIS; they don’t necessarily draw in the order you have the rules in. To fix that you have to enable symbol levels (more info). This lets you specify the z-order of the symbols to control what order they are drawn in. Higher numbered levels are drawn on top of lower levels.
Note I’ve added in some extra complexity here; multiple sizes of roads. The general rule is you want to draw small roads first and then bigger roads on top of them. So the primary road is given the highest level, 15, so it draws on top of secondary and tertiary roads.
In this case I drew the casings right before the roads; doing things in that order makes it so the primary road has an uninterrupted casing. Here’s a sample of some roads of different sizes blown up to exaggerate the lines. (And the glitches in the casings, sigh. It looks better at normal resolution.)
Another option is to draw all the casings last, after all the roads. (Ie: have secondary’s orange line higher priority than the primary casing.) Doing that results in something that looks like this:
I’m not actually certain which is correct. I like the former better, so I’m doing that. Google Maps and the beautiful hand-rendered maps by Imus Geographics do the latter, so maybe I’m wrong. Honestly at normal map sizes you’d need a magnifying glass to see the difference.
One last note: symbol levels seem to only be relevant within their map layer. A symbol with a high level but in a lower map layer will not overdraw a symbol in a higher map layer.
So there you have it, roads with casings. And imperfections. I don’t know if it’s possible to actually do better in QGIS; I should really ask an expert. I sure wish QGIS just had support for “line with casing” as a symbol style. The “topo road” thing doesn’t cut it.
I’m trying to improve my Internet uptime monitoring. Right now I have a junky script that sends an ICMP ping every 1 or 5 seconds to a few carefully chosen IP addresses; my router, my upstream link, Google DNS, etc. It logs a text summary when a string of pings fail in a row. It works OK but is janky.
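For flavor, here’s roughly the shape of that kind of junky script. This is my reconstruction of the idea, not the actual code; the host, interval, and threshold are placeholders.

```python
import subprocess
import time

FAIL_THRESHOLD = 3   # consecutive failures before logging an outage
INTERVAL = 5         # seconds between pings

def ping_once(host):
    """Send one ICMP echo with a 1 second timeout (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def next_streak(streak, success):
    """Count consecutive failures; any success resets the count."""
    return 0 if success else streak + 1

def monitor(host="8.8.8.8"):
    """Loop forever, logging a line when a string of pings fail in a row."""
    streak = 0
    while True:
        streak = next_streak(streak, ping_once(host))
        if streak == FAIL_THRESHOLD:
            print(time.strftime("%F %T"), f"outage: {FAIL_THRESHOLD} pings failed")
        time.sleep(INTERVAL)
```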
So I asked on the Internet for a better ping monitoring system and was reminded about smokeping. I have it sorta set up on my Ubuntu box now and it’s OK. Before I deep dive on that I also want to give a shout out to vaping, a rewrite of smokeping in Python. It’s not as featureful but then it’s not as old and idiosyncratic, either.
So the setup. I have smokeping sending 3 pings every 5 seconds to a total of 8 hosts. It makes graphs of latency for the ping replies along with highlighting packet loss. Eazy peazy.
Why 3 pings every 5 seconds? Well smokeping is really designed to send a whole lot of pings at once and show the distribution of latencies. The default is 20 pings! Also the default is only to measure every 5 minutes, which is fine for long term monitoring but I’m trying to catch short failures. It turns out 3 pings is the minimum smokeping is willing to send. 5 seconds was a compromise for me not wanting to spam too much. Works out to 3 * 8 / 5 = ~ 5 ping packets a second.
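For reference, those knobs live in smokeping’s config file. A sketch of the relevant lines only (target names and the fping path are placeholders; check your distro’s config layout):

```
*** Database ***
# a round of 3 pings every 5 seconds (defaults are 20 pings / 300 seconds)
step  = 5
pings = 3

*** Probes ***
+ FPing
binary = /usr/bin/fping

*** Targets ***
probe = FPing
menu = Top
title = Latency

+ GoogleDNS
menu = Google DNS
title = Google DNS
host = 8.8.8.8
```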
I’m using the default FPing probe. It’s fairly smart about handling timeouts, spacing an interval between pings, rotating through different hosts, etc. There’s also an FPing_Continuous probe option in smokeping whose main difference is it uses the “--loop” option in FPing. Near as I can tell that just moves the control loop from smokeping to fping, not sure that really buys you anything. Maybe it spreads the pings out more evenly?
As near as I can tell there’s no way to get smokeping to ping once a second. I found some years-old discussion about architectural assumptions in smokeping about the minimum interval between pings, how much time the pings should take, etc. that make it sound like “20 pings every 5 minutes” really is the intended use case. Also I’m sure I now have a giant RRD file, and the code seems to be regenerating all the graphs every 5 seconds whether I need them or not. It’s all a bit silly and overkill but I think harmless for now.
I might also yet replace smokeping with vaping. It’s not nearly as fancy, but then it seems designed precisely for my 1 ping-per-second desired scenario.