python 3.6 on Ubuntu 16.04 LTS

I want to use python 3.6 on my Ubuntu 16.04 LTS box; they only ship python 3.5. It's pretty simple following these instructions, but here are the details.

Step 1: install python 3.6 from Jonathon Fernyhough’s PPA.

sudo add-apt-repository ppa:jonathonf/python-3.6
sudo apt-get update
sudo apt-get install python3.6 python3.6-venv python3.6-dev

Step 2: set up a 3.6 venv

python3.6 -m venv venv
source venv/bin/activate
pip install -U setuptools pip
python -V

Now when you activate venv, you’ll automatically get python 3.6 in that environment.

In addition to the jonathonf PPAs there’s also the deadsnakes PPA which has old versions of Python as well as new ones.

Leaflet heatmap options considered

I mentioned wanting a heatmap for some geographic data. Leaflet has a variety of heatmap plugins, I spent a couple of hours looking at them.

First, a word about heatmaps. A true heatmap has a feature where if you keep adding points (“heat”) to a spot, that heat diffuses and spreads out. In theory you could cover the whole world in hot red just adding to a single lat/lon because the heat diffuses (unless you have cooling/decay enabled). I want this feature. Calculating diffusion like this is complicated and slow. So most “heatmaps” cheat and instead of running a diffusion algorithm just draw blurry transparent dots. When they overlap it looks a lot like a heatmap, but you never get the full diffusion. It’s a reasonable compromise for a lot of applications (and way faster, particularly since it’s GPU-friendly) but it’s a compromise I’m hoping to avoid. See also this technical discussion.
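A toy illustration of the difference (hypothetical code, not from any of the libraries discussed): true diffusion lets heat from a single source reach arbitrarily far, while a blurred dot has a fixed footprint no matter how much intensity you pile on.

```python
def diffuse(grid, steps, rate=0.25):
    """Explicit 1-D diffusion: each step, heat leaks to neighbouring cells."""
    for _ in range(steps):
        new = grid[:]
        for i in range(1, len(grid) - 1):
            new[i] = grid[i] + rate * (grid[i - 1] - 2 * grid[i] + grid[i + 1])
        grid = new
    return grid

grid = [0.0] * 21
grid[10] = 100.0                    # dump all the "heat" on one spot
heated = diffuse(grid, steps=50)

# After diffusion, cells far from the source are warm: heated[2] > 0.
# A blurred dot of radius 3, no matter how intense, never warms cell 2:
blurred = [max(0.0, 1 - abs(i - 10) / 3) * 1e6 for i in range(21)]
# blurred[2] is exactly 0.0
```

That unbounded spread is what the blurry-transparent-dot cheat gives up, and why it's so much cheaper to compute.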

Another key thing is how map zooming is handled. The heat blobs are really rendered in terms of pixels, not meters or geographic degrees, so the heatmap picture needs to be re-rendered at every zoom level. Some of the libraries look like they just render one image and raster scale it to match the zoom. The behavior I want is visible in this demo: notice how the blobs get more detailed when you zoom in.

Anyway, here’s my quick review of the 7 plugins:

  • Leaflet.heat; first one I’ve used. Canvas based, seems to work for 20,000 points no problem. Not a true heatmap, just fuzzy circles, but it looks good. Last update a year ago.
  • heatmap.js: canvas based, I think a true heatmap. Demo doesn’t recalculate on zoom the way I want and I don’t think there’s code for doing it. The library is mostly used for rendering user attention on web pages.
  • webgl-heatmap-leaflet. I gotta think WebGL is a good idea. New code, mostly 3 months old. Unfortunately it doesn’t seem to have the zooming behavior I want. There’s some scaling code (units and size option) but that seems to be about projecting circles of a consistent size at different latitudes.
  • Leaflet-solr-heatmap, see examples here. It’s pretty impressive being able to handle 10M points but the grid display isn’t what I’m looking for.
  • leaflet-div-heatmap is clever for using CSS and divs. No live demo I could find though, and no commits in 4 years, so I stopped looking.
  • MaskCanvas: not a heatmap, more of a “reveal the map” display. It’s neat though! Last real update was 2 years ago.
  • HeatCanvas: true heatmap, a bit too oriented towards static images. Not sure if it works well in zooming. Last real update was 4 years ago, I didn’t look further.

In summary: Leaflet.heat is the one I want. It’s the only library that recalculates on zoom. The downside is it’s not a true heatmap. I think that actually matters for my application (I expect a lot of dwell time in one place) but it’s a compromise I can accept.

Worth keeping an eye on mourner’s WebGL work for mapbox.js too; I’d rather just use vanilla Leaflet though. But his demo looks great.

Heatmaps for geopoints?

Trying to figure out how to generate actual heatmaps for a bunch of geopoints. The usual hack is to draw blurry dots at 10% opacity and let them overlap. I want the real deal. The Sethoscope tool works in Python, but is pretty slow, see example below. Mourner at Mapbox is working on a WebGL thing.

The hard part here is heatmaps aren't scale invariant; you need to render them differently depending on the scale the user is zoomed to. Realistically that means it needs to be in-browser Javascript, and therefore fast. Maybe GL is necessary.


Razer Blade 2017 opinions

I bought a laptop for my trip to Germany. Difficulty level: ultrabook, but able to play Windows games. Turns out you don’t have a lot of options. I ended up with a 2017 Razer Blade. Some thoughts:

It’s accomplished its goal, I like the machine. In ordinary use it’s a quiet nice laptop. Good keyboard, nice screen, and it’s silent. Light 2d games (Necrosphere, Heat Signature) also run silently on the Intel GPU. When I engage a demanding game (XCOM 2 mostly these days) the NVidia GPU kicks in and soon after, the giant ridiculous loud fans.

The fan noise is really awful. Like don't-make-a-phone-call-in-the-same-room awful. It seems to mostly be turbulence from sucking air in through the 1/8″ gap under the laptop. But with earplug-style headphones, and in a different room from other humans, it's OK. Heat is acceptable. This is the compromise one has to live with putting a GPU this big in a laptop. I kinda wish I could have bought a lower power / cooler GPU. Or maybe figure out a way to underclock this one. Games autodetect highest settings; when I turned them down it got a bit quieter.

My main complaint as a sorta-Ultrabook is the weight. 4 lbs is noticeably heavier than my old 3 lb Macbook Air. But then the machine is much more powerful, too. I also wish the screen didn't have such an enormous bezel. I think most laptops are now sized to the panel, but I suspect this Razer machine is sized to something else and they picked the biggest panel that'd fit, and it's about an inch shy. The speakers in the machine aren't great either, another unappreciated thing Apple does.

But pretty much this Razer Blade is a nice machine. I think I chose right. I continue to wish someone made a true ultrabook with just a slightly nicer GPU in it than stock. At this rate Intel’s gonna catch up before that happens.

MQTT, OwnTracks, and location tracking

I want a passive location tracker on my phone, something to tell me where I’ve been over the span of years. Google Timeline + Android does this pretty nicely and this blog also has previous notes on Fog of War and other gamified apps. I was using OpenPaths on my iPhone to do something similar, but it was abandoned. The backing server is already gone and the app will die with iOS 11.

So what to do? A friend suggested OwnTracks, an iOS tracker. This isn’t civilian software; there’s no public service. The docs are all about running your own custom server to collect your location data. But the iOS tracker is capable. In particular you can choose battery vs. accuracy. You can either record your position every N seconds / every M meters, which takes about 10% of your battery an hour. Or you can use a more passive “significant movement” mode that is less sensitive but uses very little battery, maybe 1% an hour. I think that’s what OpenPaths did; I ran it for years without a problem. (There’s also a newer iOS API for “Visits” that also looks useful).

OwnTracks publishes data via MQTT, a message queue protocol designed for Internet of Things apps. It seems awfully complicated and a bit fusty, but if they’ve thought through reliable, low power connections for limited devices I’m glad to use it. The OwnTracks docs have some info on running your own MQTT server but that seemed complicated, so I’ve started with CloudMQTT which has a nice free level.

I created a CloudMQTT account and then just plugged in the Instance Info username/password into OwnTracks and set it to logging. CloudMQTT has a “Websocket UI” that shows you the message stream to verify the plumbing is working. OwnTracks reports look like this:
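The report is a small JSON object, along these lines (values illustrative, field set trimmed):

```json
{"_type": "location", "lat": 52.520, "lon": 13.405,
 "tst": 1504443600, "acc": 10, "batt": 85, "tid": "np"}
```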


So that all works. Now what about capturing that data for myself? After some false starts I ended up at paho-mqtt, a Python client. The example client on that page more or less works, except you have to modify the on_connect() function to take a fourth flags parameter. (That’s a hint this code is musty.) The example connects to some public MQTT endpoint with a bunch of events flying by so you know it worked. Eventually one sent non-UTF8 and broke the Python code, but at least it was working.

Last step was to reconfigure the sample client to connect to my CloudMQTT instance. To get that working I had to create a new user and set it up with the ACL controls. (Wildcard access, read/write). For some reason the main user/login didn’t work; maybe because it was in use by OwnTracks? I also had to call client.subscribe() to get messages on my specific OwnTracks topic; not sure if that is supposed to be necessary.

Cloud MQTT also has support for forwarding events to Amazon Kinesis. I set this up hoping it was an easy way to get at the data but it just adds complexity and another system for me right now. There’s probably some way to feed the data into a database with AWS plumbing that might be nice. And it’ll scale!

Things to learn more about with MQTT:

  • Is MQTT a living tech, or is it abandoned?
  • CloudMQTT user management
  • paho.mqtt: naked TCP vs TLS vs Websockets
  • MQTT topics and subscriptions
  • Is Amazon Kinesis interesting for my application? Would be nice to have less code + servers to manage


Berlin home Internet link

More details on my Berlin Internet sitch. My apartment has a Technicolor router that comes from PrimaCom’s cable modem service. It was down entirely for several days (and not just for me) when we got here, which is not so awesome. Mobile hotspots are a poor alternative.

The downtime revealed PrimaCom manages the router WiFi. When the Internet was down the WiFi was also turned off. You couldn’t even turn it on in the admin pages (default login: username blank, password admin). Once Internet service was working again and I rebooted the router, I had WiFi. Weird.

The bummer about the router, or the larger network, is that it just forgets about idle TCP sockets after about 5.5 minutes. Idle ssh sessions and other quiet TCP links just stop working. Not even a reset, just a timeout, and it’s consistently timed so it’s not a FIFO queue filling up. There’s nothing in the router config that’s obviously causing the problem.

Of course web browsers never even notice, so most of their customers have no idea. I wonder if it breaks notifications via WebSocket though. For ssh I work around it with ServerAliveInterval=50. I suppose TCP keepalives would fix it in general but that’s a half-abandoned tech and the default interval in Windows is 2 hours (!) so you have to fiddle the registry. Ken has some weird TN3270 sessions he uses, I think SSL and telnet. We fixed his timeouts by just using Tunnelbear VPN which papers over the problem. I really should run my own VPN server.
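For software you control, the keepalive fix looks like this — a sketch in Python using the Linux socket option names (they're spelled differently on Windows and macOS); the timing values are illustrative, just chosen to beat a ~5.5 minute idle timeout:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only constants
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 240)  # idle secs before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # secs between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)     # failed probes before giving up
```

With a 240-second idle threshold the first probe goes out well before the router's 5.5-minute cutoff, so the connection never looks idle to it.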

We never could connect to an old school PPTP VPN. The TCP control channel for setup seems to have worked (according to Wireshark) but no GRE packets ever arrived.

The fun thing is I have an honest-to-goodness IPv6 address and all of it works. Sometimes I seem to be connecting to Google via IPv6. I’m not quite sure what to do with this capability, but it’s novel for me. I wonder if the router is still doing NAT for IPv6? The IP address a test site shows me is the same one my Windows box tells me I have, so I guess not. The router does have an “IPv6 firewall” enabled though, perhaps that provides basic protection from outside attacks? I should set up a remote IPv6 box to play around with this.

Update: Primacom seems woefully underprovisioned. In the morning I get about 5000 kbps, bursting to 10 Mbps. In the evening I'm doing good to get 500 kbps, 1/10th the speed. I assume that's contention with the other cable modem users. 500 kbps isn't really enough to even stream video.


Here’s a traceroute on IPv4

|                                      WinMTR statistics                                   |
|                       Host              -   %  | Sent | Recv | Best | Avrg | Wrst | Last |
|                    -    0 |    3 |    3 |    2 |    5 |   10 |    5 |
|                     -    0 |    3 |    3 |   10 |   20 |   41 |   41 |
|                   -    0 |    3 |    3 |   15 |   24 |   42 |   42 |
|                   -    0 |    3 |    3 |   15 |   24 |   42 |   42 |
|                    -    0 |    3 |    3 |   16 |   25 |   43 |   43 |
| -    0 |    3 |    3 |   24 |   30 |   43 |   43 |
| -    0 |    3 |    3 |   30 |   34 |   44 |   44 |
|    -    0 |    3 |    3 |   27 |   35 |   50 |   50 |
|     -    0 |    3 |    3 |   74 |   81 |   93 |   93 |
|    -    0 |    3 |    3 |  114 |  124 |  143 |  143 |
|     -    0 |    3 |    3 |  143 |  156 |  163 |  143 |
|    -    0 |    3 |    3 |  141 |  154 |  162 |  141 |
|      -    0 |    3 |    3 |  142 |  155 |  163 |  142 |
| -    0 |    3 |    3 |  142 |  155 |  163 |  142 |
| -   34 |    3 |    2 |  160 |  161 |  163 |  160 |
| -    0 |    3 |    3 |  146 |  156 |  163 |  146 |
|                -    0 |    3 |    3 |  145 |  157 |  163 |  145 |

Powerline ethernet

I’m spending a few weeks in an apartment in Berlin that’s a large sprawling place. The single wifi router at one end has no hope of reaching the other end and there’s no ethernet in the house. So I took a $50 gamble on a powerline ethernet kit and it seems to work, at least up to 30Mbps. We’ll see if it lasts; my friend says in his experience these things fail after a month or two.

The plug-n-play experience out of the box is pretty great. It acts more or less like a bridged ethernet: plug one end into your router and the other into a computer, press the pair button, and you're done.

One wrinkle here is security. Your network is being carried out on power lines and who knows how far they’ll go. So the products all have a shared key encryption mechanism (pairing is key exchange) to keep your network private. They probably leak all sorts of RF out the modulated wires, oops.

TP-Link makes a huge variety of these devices. My box says “AV600 TL-PA4020P Kit”, a variant I can’t even find anywhere, but it seems about the same as the AV500 versions only 100Mbps faster. The choices seem to be power passthrough or not, 1-3 ethernet ports per device, and then more expensive for higher bandwidth (up to 1.2Gbps). Different versions for different countries too, both the electrical plug shape and radio frequency interference concerns. Some versions also have a WiFi access point built in, I kind of wish I’d bought one but I wanted to keep it simple and cheap.

The relevant network standard is HomePlug, some vendor consortium for powerline networking. (They just dissolved last year and put all the specs in the public domain.) In theory there's vendor interoperability. I don't quite know what's going on at the physical layer; the Wikipedia article talks about 1000+ channels from 2 to 86 MHz.

The really interesting play here is if all your Internet of Things devices supported this kind of networking. Just plug them into the power; no wifi or ethernet required. I wonder why that’s not more popular, I suspect it’s because of the rumored flakiness of powerline ethernet.