Ubiquiti EdgeMAX / EdgeOS local DNS

Ubiquiti makes nice routers with good firmware, but one feature is missing: the router doesn’t generate local DNS hostnames for DHCP clients. I.e., my Linux server registers itself with the name “gvl”, but I can’t then “ssh gvl” on my network because gvl isn’t a valid DNS name, not even on the Ubiquiti.

The right solution for this is to let dnsmasq be your DHCP server. dnsmasq is already running as a DNS server in the router; all this does is let it also be the DHCP server. As a side effect you now have nice DNS service for DHCP clients. The downside is it requires some slight tomfoolery to enable, and you end up with a less-than-standard router. See the discussion here.
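
For reference, here’s a minimal sketch of enabling that from the EdgeOS CLI, assuming your firmware version has the use-dnsmasq option (verify against your version before relying on it):

configure
set service dhcp-server use-dnsmasq enable
commit ; save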

Another possible solution is to enable the “hostfile-update” option. This tells the stock DHCP server to write entries in /etc/hosts. There are some possible bugs with this.
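
This is also a one-liner in the CLI; a sketch, assuming the hostfile-update node exists in your EdgeOS version:

configure
set service dhcp-server hostfile-update enable
commit ; save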

The hacky solution for this is to leave DHCP alone and just create a static IP address and DNS name for your host. I’m too lazy to post screenshots, but basically…

  1. Boot your DHCP client that you want to give a DNS name to. Let it take an automatic address.
  2. In the web GUI for the router, go to Services / DHCP Server. Click the little “Actions” dropdown button and select “View Leases”.
  3. Find your DHCP client and click “Map Static IP”. This computer (well, MAC address) will now get a static IP every time it boots.
  4. Go to Config Tree. Open system / static-host-mapping / host-name.
  5. Click “Add”. Type a hostname. Click update list.
  6. In the config tree look under host-name for your new name. Click it, then under “inet” add a field which is the IP address you set up in step 3.
  7. Click “Preview” to verify the config is right. It should be something like this. If it looks right, Apply it.
    set system static-host-mapping host-name gvl inet 192.168.3.75
  8. Test DNS on another system. dig @192.168.0.1 gvl any
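    If the mapping took, dig’s answer section should include something like this (the TTL will vary):
    gvl.    [ttl]    IN    A    192.168.3.75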

Steps 4-8 can also be done via the command line configurator, which is probably less clumsy than the config-tree GUI.
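
For the record, a sketch of the equivalent CLI session, using the same name and address as in the steps above:

configure
set system static-host-mapping host-name gvl inet 192.168.3.75
commit
save
exit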

Now that I’ve typed all this out I think perhaps switching to dnsmasq for DHCP is the better solution. But this works as a one-off.

NUC thoughts

I put together my new home media server. This is my first time working with a tiny Intel NUC, so here are some thoughts. See also this review.

Hardware install

  • The core hardware kit is very nicely designed.
  • There are a lot of ports: 4 USB, HDMI, Ethernet, Thunderbolt / USB-C, analog audio.
  • Building it is easy, way easier than building a custom PC. Case, power supply, motherboard, CPU are all already assembled. All you need to do is plug in RAM and an SSD and an optional spinning drive.
  • Installing the RAM is super easy. Just be sure to get SODIMMs (laptop RAM) and not regular desktop DIMMs like some friend of mine I heard about did.
  • Plugging in the SSD is a little fiddly because you have to anchor it with a tiny screw. It looks like the screw could get trapped underneath the motherboard. It’s too bad they didn’t come up with some clip solution for it.
  • Installing the hard drive is very easy too. There’s a nice tray that pulls out. Also requires screws, but these are much less fiddly.
  • The case is very nice. The power supply brick is pretty good.
  • There are fancy blinkenlights on the front of the case you can configure, including driving via software.

It was pretty easy to put the NUC together but I’d consider paying $50-$100 to someone to pre-select and install all the hardware for me. System76 will do that but for more like $300, at least for the system I priced out.

Software, BIOS

  • The Arch Linux page on NUCs has some useful info.
  • Ubuntu 18’s server installer kernel doesn’t seem to include a driver for the WiFi; it only recognizes the onboard Ethernet adapter at install time. Once installed, though, there’s an iwlwifi driver that seems to work. I didn’t bother trying to configure it.
  • By default the fan is running even when idle in the BIOS screen. This seems to be a particular problem with the i5 NUCs but you can adjust the settings in the BIOS to let it run hotter and quieter. I put it in the “quiet” preset which isn’t too aggressive but seems to let the fan spin down when idle and keep the CPU at about 36C.
  • lm-sensors doesn’t detect a lot of useful info, just basic CPU temperature. No fan speed, and apparently that’s impossible because the interface is Intel-proprietary.
  • It boots really fast. Like < 20 seconds from power on to ssh login.

Transcoding performance, GPU encoding, temperature

  • Plex supports hardware acceleration of video transcoding but it requires you be a paying subscriber, which I’m not.
  • I think general Linux hardware support for video encoding is enabled via libva but it’s complicated. I got as far as verifying with vainfo that it can use “Intel i965 driver for Intel(R) Kaby Lake – 2.1.0”. I didn’t test ffmpeg; that would be the next thing to do (see the sketch after this list).
  • With software encoding Plex converts 1080p HEVC to 1080p H.264 just fine. It bursts up to 400% CPU at the start and then seems to settle down around 180% CPU. Fine for a single stream, wouldn’t work for 10.
  • CPU temperatures when transcoding vary from +50.0°C to +78.0°C. The fan is definitely running but is not very loud at all, I’d be fine with it sitting on my desk.
  • I don’t have my power meter but this review says the machine takes 13-50W.
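
The ffmpeg test mentioned above would look something like this, the usual VAAPI recipe as I understand it; the file names are placeholders and the render device path may differ on other systems:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
  -i input-hevc.mkv -c:v h264_vaapi -b:v 8M -c:a copy output-h264.mp4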

New home media server

So I need a new media server for home for music files and Plex. My friends over at daily.co have had good luck with Intel NUCs, so I’m going to give that a try. Running Linux of course. I’ll also use it sometimes as a generic Unix system and server for other things, but it’s not for heavy-duty computation.

Parts list:

  • $343 Intel NUC7i5BNH
    The main system, based on an i5-7260U. This one is the “tall” version with room for an extra internal 2.5″ drive.
  • $163 16GB of DDR4-2133 RAM, Corsair CMK16GX4M2A2133C13
    FAIL. This is desktop RAM and will not fit in the NUC.
  • $190 16GB of SODIMM DDR4-2133 RAM, HyperX HX421S13IB2K2/16
    16GB may be overkill, but I like RAM. Also this is remarkably low-latency RAM (CL 13). $90 for 8GB of Crucial RAM is probably just fine.
  • $108 256GB SSD, Samsung 970 EVO
    The current hotness in SSDs for the operating system.
  • $50 1TB 2.5″ hard drive, WD Blue
    A cheap internal drive for media files. Could be bigger or faster but I don’t need it. I think the NUC is limited to 9.5mm high drives.

Total without tax is $690. FWIW Apple’s similar system is $700 for a slightly faster CPU but no SSD and only 8GB of RAM. So I’d put the Apple tax at about $120 which isn’t that bad when you think about it.

I’m a little bummed that Intel hasn’t updated this NUC in a while; this model was first sold a year and a half ago. But nothing really important has changed in the intervening time, and there’s something to be said for a tried-and-true system.

One confusing thing: Intel also sells NUCs with “16GB of Optane memory installed”. This is not system RAM; Optane here is more of a cache for the hard drive. There’s no Linux support, and anyway I’m going with a real SSD.

I’m going to build this myself. A reasonable alternative would be to get a System76 Meerkat, a rebranded NUC where they do the installation work for you. A similar system from them is about $1000.

Understanding smartctl

I have what I think is a failing hard drive in my little Linux server. Unfortunately it’s a 2012 Mac Mini, so replacing the drive is a huge PITA. So… just how bad is that drive? Do I really have to replace it?

The first thing to do is enable SMART on the drive. It might already be on, but if it’s not, no big deal; just turn it on. Some of the historical data the drive has been tracking will suddenly be available, although the tables of attributes may not be. It’s safe to enable SMART and run tests while the drive is active. Here’s how to turn on SMART and also enable some automatic SMART testing.

smartctl --smart=on --offlineauto=on --saveauto=on /dev/sda

You can then do a quick health check, but beware that a drive with some errors may still show as healthy:

smartctl -H /dev/sda
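
On a drive that hasn’t tripped any thresholds, the output looks something like this (a sketch from memory; note a drive can say PASSED and still be logging errors):

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED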

Attributes

The next thing to do is dump all the SMART data for the drive:

smartctl -x /dev/sda | less

There’s a lot of data there and you can kind of read it. The confusing thing is the table of attributes; this guide helped me understand it better.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     PO-R--   100   100   062    -    65536
  2 Throughput_Performance  P-S---   100   100   040    -    0
  3 Spin_Up_Time            POS---   208   208   033    -    1
  4 Start_Stop_Count        -O--C-   095   095   000    -    7921
  5 Reallocated_Sector_Ct   PO--CK   043   043   005    -    1490
  7 Seek_Error_Rate         PO-R--   100   100   067    -    0
  8 Seek_Time_Performance   P-S---   100   100   040    -    0
  9 Power_On_Hours          -O--C-   029   029   000    -    31378
 10 Spin_Retry_Count        PO--C-   100   100   060    -    0
 12 Power_Cycle_Count       -O--CK   100   100   000    -    158
160 Unknown_Attribute       -O--CK   100   100   000    -    0
191 G-Sense_Error_Rate      -O-R--   100   100   000    -    0
192 Power-Off_Retract_Count -O--CK   100   100   000    -    22
193 Load_Cycle_Count        -O--C-   001   001   000    -    1673246
194 Temperature_Celsius     -O----   200   200   000    -    30 (Min/Max 14/48)
195 Hardware_ECC_Recovered  -O-R--   100   100   000    -    0
196 Reallocated_Event_Count -O--CK   038   038   000    -    1614
197 Current_Pending_Sector  -O---K   100   100   000    -    152
198 Offline_Uncorrectable   ---R--   100   100   000    -    0
199 UDMA_CRC_Error_Count    -O-R--   200   200   000    -    0
223 Load_Retry_Count        -O-R--   100   100   000    -    0
254 Free_Fall_Sensor        -O--CK   100   100   000    -    3
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

Long story short, “Value” is a normalized score; most attributes start at 100 (some vendors use 200 or other values) and count down as things get worse, so higher is better. “Worst” is the worst it’s ever been; maybe forever, maybe since SMART was enabled. “Thresh” is the threshold below which the drive is considered to be in “pre-fail” state.

So how’s my drive? A lot of values are still 100, so it’s not a total loss. Reallocated_Sector_Ct is at 043 against a threshold of 005, which is troubling but not the end of the world; drives are designed to map over the occasional bad sector.

Error log

The other thing I find useful is the error log.

Error 518 [1] occurred at disk power-on lifetime: 31377 hours (1307 days + 9 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 08 00 00 31 41 12 30 01 00  Error: UNC at LBA = 0x31411230 = 826348080

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 08 00 a8 00 00 31 41 12 30 40 00     00:07:11.085  READ FPDMA QUEUED
  60 00 40 00 a0 00 00 31 40 11 00 40 00     00:07:11.074  READ FPDMA QUEUED
  60 00 08 00 98 00 00 31 00 21 00 40 00     00:07:11.062  READ FPDMA QUEUED
  60 01 00 00 90 00 00 31 00 12 c0 40 00     00:07:11.061  READ FPDMA QUEUED
  60 01 00 00 88 00 00 31 00 11 c0 40 00     00:07:11.060  READ FPDMA QUEUED

The error details are not very useful. UNC means “Uncorrectable” and probably means some data was corrupted at that block. But what’s super useful is the power-on-hours timestamp on each error in the log. You can use that to estimate the rate of errors, whether it was one-time bad luck or something really wrong.

Running tests

SMART has a self-test built in.

smartctl --test=short /dev/sda

This runs for a couple of minutes asynchronously. Then you can see the result:

smartctl -l selftest /dev/sda

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       30%     31378         728636768
# 2  Short offline       Completed: read failure       30%     31378         728636768

I’ve run two short tests and both show a read failure. Yep, my disk is definitely not entirely healthy.

Monitoring

The clever thing to do is run smartd to track SMART for you and send an email alert when the disk is failing. I’ll be honest; I do this and then don’t know how to act on the email. Replacing a disk the very first time anything goes wrong is not crazy, but it is expensive, and sometimes you can accept that you might lose a file or two because it’s a backup disk or something.
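
A sketch of what that looks like in smartd.conf (location varies by distro), based on the example config that ships with smartmontools; the schedule string and email address are placeholders to adjust:

# monitor all attributes, enable offline tests and attribute autosave,
# run a short self-test daily at 2am and a long one Saturdays at 3am, email on trouble
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m you@example.com

Then restart smartd and it watches the drive in the background.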


AIS boat tracking

My sister now lives on a sailboat and is traveling the world. Good for her! I’d like to keep track of her boat and follow the adventure. This turns out to be totally doable thanks to the fact that the boat has an AIS transceiver. Here are some notes on tracking its location.

The two best free sites for AIS tracking I’ve found are MarineTraffic.com and VesselFinder.com. There are also many more sites that do similar tracking with varying user interfaces and premium subscriptions for extra features like historical data. But the big difference with these sites is their source of AIS data.

AIS tracking mostly works by relying on a network of receivers installed on the shoreline. Receivers typically have a 15-20 mile range. MarineTraffic runs its own network of receivers; they have 3488 of them, and you can make a map of them. Many other sites, including VesselFinder, cooperate via AISHub. That’s a hacker-friendly project to try to get many AIS sites to share data. AISHub coverage is spotty; they currently have 543 receivers. But sometimes they have one in a spot MarineTraffic doesn’t. Here are some other sources of AIS data, including historical archives.


There are also satellite AIS receivers in orbit, so you can track a ship worldwide. Apparently AIS wasn’t designed for this, but with some careful radio engineering the signals can be detected from space. I don’t think there are any free satellite AIS sites, but MarineTraffic sells tracking for a single ship for $16.50/month using ORBCOMM satellites. A lot of the other products I saw cost $1000s and were aimed at commercial customers.

I think a receiver that relays to the Internet can be pretty cheap. Here’s some discussion about doing it with a simple SDR and a Raspberry Pi. But for a proper installation you probably want a real antenna and a robust outdoor deployment and a cellular uplink, so I imagine a robust receiver is a few hundred bucks. MarineTraffic gives receivers away to people who can expand the network, more details in their FAQ.

AIS transmitters come in various flavors. My sister’s boat has a Class B transceiver; expensive commercial operations use something fancier. It transmits at around 162MHz at 2W. Data is in NMEA format and includes a position update (from GPS) every 30 seconds. The devices are full transceivers; they receive positions from other ships too and can be interrogated for more detailed info. Messages include a unique identifier called an MMSI. Ships also broadcast extra data like their name, ship type, etc. every 6 minutes. It’s all pretty similar to ADS-B used in aviation but seems rather further along in actual deployment.


Easier archived article access

I’m liking the Get Archive Firefox extension. It adds a toolbar button and some other UI for easily getting an archived copy of a web page from the Wayback Machine or from archive.today. It’s not doing anything fancy, just makes it one-click to do something that previously would be several steps.

I mostly use it for bypassing paywalls like at the Wall Street Journal or the Washington Post. But now that I have it installed I’m finding I’m using it pretty frequently, particularly for Wayback history on pages.

The addon is a bit over-featureful IMHO, it has a lot of keyboard shortcuts and other hooks that seem unnecessary to me. But you can turn those off and strip it back to just being a toolbar button.

Privacy Badger experience

I’ve been running Privacy Badger lately. That’s EFF’s tracking blocker; it tries to stop third-party websites from tracking you. The clever thing is that instead of having some admin-maintained block list, it’s a learning system that builds its own list based on what it observes in your browser:

Privacy Badger keeps note of the “third party” domains that embed images, scripts and advertising in the pages you visit. Privacy Badger looks for tracking techniques like uniquely identifying cookies, local storage “supercookies,” and canvas fingerprinting. If it observes a single third-party host tracking you on three separate sites, Privacy Badger will automatically disallow content from that third-party tracker. In some cases a third-party domain provides some important aspect of a page’s functionality, such as embedded maps, images, or stylesheets. In those cases Privacy Badger will allow connections to the third party but will screen out its tracking cookies and referrers (these hosts have their sliders set to the middle, “cookie block” position).

I’ve run this for a couple of months now and it has identified 1134 potential tracking domains. Note I run this along with uBlock and some other blockers, and I’m not sure how they interact.

  • 629 green (allow, not enough data)
  • 327 yellow (allow because it’s on a white list of essential services, but try to block personal tracking)
  • 76 red (block)

Here’s a shell snippet to dump all the domains that are fully blocked (red), starting with Privacy Badger’s JSON export:


# pull out the fully-blocked (red) domains; the trailing rev | sort | rev
# sorts on the reversed names so related subdomains end up next to each other
jq '.action_map | to_entries[] | select (.value.heuristicAction == "block") | .key' PrivacyBadger_user_data-5_28_2018_5_28_59_PM.json |
tr -d \" | rev | sort | rev

Once I eliminate subdomains (i.e. www.metafilter.com vs metafilter.com) I’m left with 41 domains, listed below. Some of these are surprising. Metafilter, for one; they’re a pretty benign company. Also Google. Clearly Privacy Badger isn’t blocking everything ever loaded as a third party from google.com; half the Internet would break. So I’m kind of confused about what this is really doing.
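
For what it’s worth, here’s a rough sketch of the subdomain collapsing, naively keeping only the last two labels of each name (it breaks on suffixes like co.uk, so it’s an approximation rather than exactly how I produced the list):

jq -r '.action_map | to_entries[] | select (.value.heuristicAction == "block") | .key' PrivacyBadger_user_data-5_28_2018_5_28_59_PM.json |
awk -F. '{print $(NF-1)"."$NF}' | sort -u

The blocked domains: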

stripe.network
spot.im
ooyala.com
quora.com
viafoura.com
gigya.com
dynamicyield.com
adobe.com
google.com
nymag.com
scorecardresearch.com
oath.com
facebook.com
zendesk.com
onesignal.com
amazon-adsystem.com
medium.com
lightboxcdn.com
linkedin.com
politico.com
yahoo.com
newyorker.com
metafilter.com
leagueoflegends.com
sharethis.com
tinypass.com
salesforceliveagent.com
pinterest.com
washingtonpost.com
jwpsrv.com
spotify.com
embedly.com
yummly.com
viafoura.co
brightcove.net
doubleclick.net
liveperson.net
po.st
digitru.st
pscp.tv
embed.ly