Interesting study summarized over at The Economist. The researchers tracked the movements of 40,000 people as they went about their daily lives and found that the number of places people go to regularly is about 25. That set of 25 places changes over time, but when someone adds a new place they tend to stop going to an old one. The result is something like Dunbar’s number, but for places instead of relationships.
The article is a good summary but the paper, “Evidence for a conserved quantity in human mobility”, is of course more detailed. If you ask a friendly librarian in Taiwan (speaking Russian when asking) they might give you this download link.
To be honest I didn’t get a lot more out of the paper than the Economist article. The statistical methods are unfamiliar to me and I’m too lazy to figure them out. But some details:
- They define a “place” as anywhere someone dwells more than 10 minutes. These are characterized as “places offering commercial activities, metro stations, classrooms and other areas within the University campus”
- People discover new places all the time. The fit is a power law, roughly locations = days ^ 0.7, over a span of ~1000 days.
- The probability a new place becomes part of the permanent set is somewhere between 7% and 20%. The Lifelog dataset (their largest) yields 7%; the others are 15-20%.
- There are four separate datasets. Sony Lifelog is the big one; it’s like Google Timeline combined with a fitness tracker. There are also several academic datasets. One of those, the Reality Mining dataset from the MIT Media Lab, is publicly available and covers 94 people.
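That fit is easy to play with. Here’s a tiny sketch of what locations ≈ days ^ 0.7 growth looks like — my own toy code, not the paper’s; the only number taken from the fit is the 0.7 exponent:

```python
def discovered(days: float, exponent: float = 0.7) -> float:
    """Approximate count of distinct places discovered after `days` days,
    using the paper's rough power-law fit: locations = days ** 0.7."""
    return days ** exponent

for d in (10, 100, 1000):
    # prints: "10 5", then "100 25", then "1000 126"
    print(d, round(discovered(d)))
```

Growth never stops over the ~1000-day span, it just slows down; meanwhile the set of places visited *regularly* stays pinned at ~25.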
Interesting research. I wonder if it’s really true? It seems plausible enough and matches my personal experience. Particularly since I split time between two cities; I go to fewer places in San Francisco regularly now that I am half time in Grass Valley.
This Hacker News discussion got me diving in to enable smart queuing on my Ubiquiti EdgeMAX routers, the ones running EdgeOS. There’s a quick-and-dirty explanation of how to set it up in these release notes; search for “smart queue”.
Long story short, under the QOS tab I created a new policy for eth0 and set bandwidth numbers a little higher than the 100/5Mbps my ISP says it sells me, then tested with the DSLReports speed test:
- No smart queue: about 105/5.5 with an F for bufferbloat (300ms+ of added latency).
- Smart queue at 100/5: I get 90/4.5 with an A+ for bufferbloat.
- Smart queue at 110/6: I get about 100/5.3 with an A for bufferbloat.
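For reference, the same policy can be created from the CLI. This is a sketch based on the smart-queue syntax in the release notes, using the 110/6 numbers above; the policy name is my own:

```shell
configure
# shape eth0, with rates slightly above the advertised 100/5 Mbps plan
set traffic-control smart-queue wan-queue wan-interface eth0
set traffic-control smart-queue wan-queue download rate 110mbit
set traffic-control smart-queue wan-queue upload rate 6mbit
commit ; save
exit
```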
Update: several folks have pointed out to me that smart queuing causes problems if you have a gigabit Internet connection. The CPU in a Ubiquiti router can only shape 80-400Mbps of traffic, depending on how new/expensive a router you bought. If you enable smart queueing on a gigabit Internet connection you will probably lose a lot of bandwidth.
The docs note that connections are throttled to 95% of the maximums you set, which probably explains that 90/4.5 reading. I think the only harm in lying a little here is that I might get a few ms of lag from buffers.
The Hacker News discussion has a bunch of other stuff in it too. Apparently “Cake” is the new hotness in traffic shaping and can be added to EdgeOS, but it’s awkward. Also it’s apparently hard to buy a consumer router that can really do gigabit speeds, particularly if you want traffic shaping. Huh.
Every single time I’ve enabled QoS I’ve ended up regretting it, because it breaks something I don’t figure out until months later. I wonder what it will be this time? I hate that I have to statically configure the bandwidth throttles.
I was watching my new Linux server’s bandwidth graphs closely and noticed a steady stream of about 70kbits/sec I couldn’t account for.
24kbps of that is my three DirecTV boxes sending UDP packets to the server. The packets go to a random port that changes each reboot: 34098, then 59521. There’s never a single response from the server. It’s the only traffic I see from the DirecTV boxes to my Linux server. Each UDP packet has text in it like this:
HTTP/1.1 200 OK
Server: Linux/18.104.22.168, UPnP/1.0 DIRECTV JHUPnP/1.0
So it looks to be UPnP junk. Port 49152 is a bit of a tell; it’s the lowest numbered dynamic/private port and is often used by UPnP servers to announce themselves. Sure enough the Location URL in the packet serves XML gunk advertising a DLNA server or something. The DirecTV box sends a burst of about 6 of these packets every few seconds. All three boxes do it.
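These datagrams are SSDP announcements, which are just HTTP-style header blocks sent over UDP, so they’re easy to pick apart. A minimal sketch — the payload text is copied from the packet above, the parser is my own:

```python
# One captured datagram's payload, as seen in the packets above
payload = (
    b"HTTP/1.1 200 OK\r\n"
    b"Server: Linux/18.104.22.168, UPnP/1.0 DIRECTV JHUPnP/1.0\r\n"
    b"\r\n"
)

def parse_headers(data: bytes) -> dict:
    """Parse the HTTP-style header block of an SSDP datagram into a dict."""
    head = data.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    headers = {}
    for line in head.split("\r\n")[1:]:  # skip the status line
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return headers

print(parse_headers(payload)["server"])
```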
I wonder why my Linux box is so lucky as to get these? My Windows box doesn’t seem to get them. I suspected it’s because I’m running a Plex server, which might conceivably be interested in DLNA hosts. But I turned off DLNA in Plex and rebooted, and it’s still getting them.
Oh well, it’s not much bandwidth. Not sure where the rest of the 70kbps is going. There’s a lot of broadcast chatter on port 1900, more UPnP stuff. Nothing else focused like this.
I’m setting up a new server and wanted to re-enable autossh, the magic “keep an ssh tunnel open at all costs” tool that’s useful for poking holes through firewalls. I ended up following my old notes pretty much exactly, down to using /etc/rc.local. That file doesn’t even exist in Ubuntu 18, but if you create one and make it executable, systemd will run it at boot. There are a couple of systemd processes left hanging around (systemd --user and sd-pam) but what’s a few megabytes wasted between friends.
I did try to do this the harder, more proper way. I spent some time trying to make this systemd service work, for instance, but never could get it working: it failed at boot, and also failed when started manually after the first time I used the tunnel. systemd is doing more magic than I understand.
It’s not clear you even need autossh if you’re using systemd. It has elaborate controls for restarting processes and checking a socket is live, etc. Here’s a sample of using systemd to keep an ssh tunnel open. I didn’t try it though. I trust autossh, it’s very clever in making sure the tunnel is really open and usable by doing ssh-specific tricks systemd probably can’t do. Also I already had certificate management working, so screw it.
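If you want to try the systemd-only route, a unit might look something like this. A sketch only, untested by me; the remote host, user, and ports are placeholders:

```ini
[Unit]
Description=Persistent reverse ssh tunnel
After=network-online.target
Wants=network-online.target

[Service]
# -N: no remote command, -T: no tty.
# ExitOnForwardFailure makes ssh exit (and get restarted) if the tunnel dies.
ExecStart=/usr/bin/ssh -NT \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -o ExitOnForwardFailure=yes \
    -R 2222:localhost:22 tunneluser@example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```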
It’d be nice if the Ubuntu package for autossh included a working systemd unit file. Ubuntu switched to systemd back in 15.04, three years ago, but I guess not all the packages have been updated to support it.
Ubiquiti makes nice routers with good firmware. But there’s one missing feature: the router doesn’t generate local DNS hostnames for DHCP clients. I.e., my Linux server registers itself with the name “gvl”, but I can’t then “ssh gvl” on my network because gvl isn’t a valid DNS name, not even on the Ubiquiti itself.
The right solution for this is to make dnsmasq your DHCP server. dnsmasq is already running as the DNS server on the router; all this does is let it also be the DHCP server. As a side effect you then get nice local DNS service. The downside is that it requires some slight tomfoolery to enable, and you end up with a less-than-standard router. See the discussion here.
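The switch itself is one setting in the EdgeOS CLI; this is the documented use-dnsmasq flag, though I haven’t flipped it on my own router:

```shell
configure
set service dhcp-server use-dnsmasq enable
commit ; save
```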
Another possible solution is to enable the “hostfile-update” option, which tells the stock DHCP server to write entries in /etc/hosts. There are some possible bugs with this approach.
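That option is also a one-liner in the CLI (again, untested by me):

```shell
configure
set service dhcp-server hostfile-update enable
commit ; save
```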
The hacky solution is to leave DHCP alone and just create a static IP address and DNS name for the host. I’m too lazy to post screenshots, but basically…
- Boot the DHCP client you want to give a DNS name to. Let it take an automatic address.
- In the web GUI for the router, go to Services / DHCP Server. Click the little “Actions” dropdown button and select “View Leases”.
- Find your DHCP client and click “Map Static IP”. This computer (well, MAC address) will now get a static IP every time it boots.
- Go to Config Tree. Open system / static-host-mapping / host-name.
- Click “Add”. Type a hostname. Click “Update List”.
- In the config tree look under host-name for your new name. Click it, then under “inet” add a field which is the IP address you set up in step 3.
- Click “Preview” to verify the config is right. It should be something like this. If it looks right, Apply it.
set system static-host-mapping host-name gvl inet 192.168.3.75
- Test DNS on another system. dig @192.168.0.1 gvl any
Steps 4-8 can also be done via the command line configurator, which is probably less clumsy than the config-tree GUI.
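Using the same example name and address, the whole CLI version of the mapping is:

```shell
configure
set system static-host-mapping host-name gvl inet 192.168.3.75
commit ; save
# then verify from another machine:
# dig @192.168.0.1 gvl any
```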
Now that I’ve typed all this out I think perhaps switching to dnsmasq for DHCP is the better solution. But this works as a one-off.
I put together my new home media server. This is my first time working with a tiny Intel NUC; here are some thoughts. See also this review.
- The core hardware kit is very nicely designed.
- There are a lot of ports: 4 USB, HDMI, Ethernet, Thunderbolt / USB-C, analog audio.
- Building it is easy, way easier than building a custom PC. Case, power supply, motherboard, CPU are all already assembled. All you need to do is plug in RAM and an SSD and an optional spinning drive.
- Installing the RAM is super easy. Just be sure to get SODIMMs (laptop RAM) and not regular desktop DIMMs like some friend of mine I heard about did.
- Plugging in the SSD is a little fiddly because you have to anchor it with a tiny screw. It looks like the screw could get trapped underneath the motherboard. It’s too bad they didn’t come up with some clip solution for it.
- Installing the hard drive is very easy too. There’s a nice tray that pulls out. Also requires screws, but these are much less fiddly.
- The case is very nice. The power supply brick is pretty good.
- There are fancy blinkenlights on the front of the case you can configure, including driving them from software.
It was pretty easy to put the NUC together but I’d consider paying $50-$100 to someone to pre-select and install all the hardware for me. System76 will do that but for more like $300, at least for the system I priced out.
- The Arch Linux page on NUCs has some useful info.
- Ubuntu 18’s server installer kernel doesn’t seem to include a driver for the WiFi; it only recognizes the onboard Ethernet adapter at install time. Once installed though, there’s an iwlwifi driver that seems to work. I didn’t bother trying to configure it.
- By default the fan runs even when the machine is sitting idle at the BIOS screen. This seems to be a particular problem with the i5 NUCs, but you can adjust the BIOS settings to let it run hotter and quieter. I chose the “quiet” preset, which isn’t too aggressive but seems to let the fan spin down when idle and keeps the CPU at about 36C.
- lm-sensors doesn’t detect a lot of useful info, just basic CPU temperature. No fan speed, and apparently reading it is impossible because the interface is Intel proprietary.
- It boots really fast. Like < 20 seconds from power on to ssh login.
Transcoding performance, GPU encoding, temperature
- Plex supports hardware acceleration of video transcoding but it requires you be a paying subscriber, which I’m not.
- I think general Linux hardware support for video encoding is enabled via libva, but it’s complicated. I got as far as verifying with vainfo that it can use “Intel i965 driver for Intel(R) Kaby Lake - 2.1.0”. I didn’t test ffmpeg; that would be the next thing to do.
- With software encoding Plex converts 1080p HEVC to 1080p H.264 just fine. It bursts up to 400% CPU at the start and then seems to settle down around 180% CPU. Fine for a single stream, wouldn’t work for 10.
- CPU temperatures when transcoding vary from +50.0°C to +78.0°C. The fan is definitely running but is not very loud at all, I’d be fine with it sitting on my desk.
- I don’t have my power meter handy, but this review says the machine takes 13-50W.
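For when I do get around to testing ffmpeg: the standard VAAPI hardware transcode invocation looks something like this. Untested on this box; the file names and bitrate are placeholders:

```shell
# Hardware decode + hardware H.264 encode via VAAPI (untested here)
ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi \
    -vaapi_device /dev/dri/renderD128 \
    -i input.mkv \
    -c:v h264_vaapi -b:v 8M \
    output.mp4
```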
So I need a new media server at home for music files and Plex. My friends over at daily.co have had good luck with Intel NUCs, so I’m going to give that a try. Running Linux, of course. I’ll also sometimes use it as a generic Unix system and server for other things, but it’s not for heavy duty computation.
- $343 Intel NUC7i5BNH
The main system, based on an i5-7260U. This one is the “tall” version with room for an extra internal 2.5″ drive.
- $163 16GB of DDR4-2133 RAM, Corsair CMK16GX4M2A2133C13
FAIL. This is desktop RAM and will not fit in the NUC.
- $190 16GB of SODIMM DDR4-2133 RAM, HyperX HX421S13IB2K2/16
16GB may be overkill, but I like RAM. Also this is remarkably low-latency RAM (CL13). $90 for 8GB of Crucial RAM is probably just fine.
- $108 250GB SSD, Samsung 970 EVO
The current hotness in SSDs for the operating system.
- $50 1TB 2.5″ hard drive, WD Blue
A cheap internal drive for media files. It could be bigger or faster, but I don’t need it. I think the NUC is limited to 9.5mm-high drives.
Total without tax is $690. FWIW Apple’s similar system is $700 for a slightly faster CPU but no SSD and only 8GB of RAM. So I’d put the Apple tax at about $120 which isn’t that bad when you think about it.
I’m a little bummed that Intel hasn’t updated this NUC in a while; this model first went on sale a year and a half ago. But nothing really important has changed in the intervening time, and there’s something to be said for a tried-and-true system.
One confusing thing: Intel also sells NUCs with “16GB of Optane memory installed”. This is not system RAM; Optane here is more of a cache for the hard drive. There’s no Linux support, and anyway I’m going with a real SSD.
I’m going to build this myself. A reasonable alternative would be to get a System76 Meerkat, a rebranded NUC where they do the installation work for you. A similar system from them is about $1000.