Nest thermostat hacking

I got a Nest thermostat a few months ago. I mostly like it; even if it is a bit overdesigned, it’s a lovely product. But a few days ago the battery stopped charging. Turns out charging is complicated on these things; it vampires power off the furnace control lines. Mine reports running at about 35V and 20mA. That works out to 0.7W, not a huge amount, although it’s been enough so far and Nest tells me it should be sufficient.

Anyway, the diagnostics on the unit for battery charging are not very good. But the Nest helpfully uploads its battery state every few minutes, at least if it has enough battery charge to do that. Unhelpfully the webapp doesn’t show battery history. So I have to make my own.

I ended up using python-nest and writing a simple script to get the battery state.

import nest

username = 'nelson@monkey.org'
password = file('password').read().strip()

with nest.Nest(username, password) as napi:
    # structures are fetched lazily; iterating triggers the HTTP request
    for structure in napi.structures:
        for device in structure.devices:
            target = device.target            # thermostat set point
            temperature = device.temperature  # current temperature
            # battery level and timestamp aren't exposed as properties;
            # they're hidden in the raw device._device hash
            ts = device._device['$timestamp']
            battery = device._device['battery_level']
            print "%.3f,%.3f,%.1f,%.1f" % (ts/1000.0, battery, temperature, target)

Boy, that Python Nest API is funky. And undocumented. I have no idea what a “structure” is supposed to mean. The object doesn’t really exist until you query it with the iterator protocol, at which point it lazily makes an HTTP request. Once it does, I think you can also access it as an array, but not before. Then there are the devices, which you access with properties, but those only expose some of the data; the battery level is hidden in the device._device object, which you access as a hash table. And while the command line tool can print whether the heat is actually running at the moment, I never could find that in the data structures. At least I got my battery recording.
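
To turn those readings into history, the output just needs to be appended to a CSV file every few minutes. Something like this crontab entry would do it (the script name and paths here are hypothetical):

# log one battery reading every 5 minutes
*/5 * * * * python /home/nelson/nestbattery.py >> /home/nelson/nestbattery.csv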

The Internet of Things enables whole new worlds of yak shaving.

bash: $@ vs “$@”

Brad Fitz’s tweet about how bash treats $@ vs “$@” had me curious. The man page explanation is confusing but indicates these are different.

When the expansion occurs within double quotes, each parameter expands to a separate word. That is, “$@” is equivalent to “$1” “$2” …

So what does this mean in practice? Some code:

printargs.py:

#!/usr/bin/python
"Print out how many command line arguments there are"
import sys
print "%d %r" % (len(sys.argv)-1, sys.argv[1:])

quote.sh:

#!/bin/bash
echo With Quotes
./printargs.py "$@"

echo Without Quotes
./printargs.py $@
$ ./quote.sh foo "bar baz"
With Quotes
2 ['foo', 'bar baz']
Without Quotes
3 ['foo', 'bar', 'baz']

To me the quoted “$@” does what I expect: it does its best to pass the arguments along verbatim, exactly as the shell script was invoked. That vindicates some advice I learned 20 years ago: always quote “$@” in shell scripts and the right thing will happen. But if I think about it harder I get confused. For matters of shell quoting and expansion, it’s best not to think too hard.

Rant: why is the Chrome Web Store so awful?

I accept that Google has locked down Chrome extensions and made them hard to install from anywhere other than their own store. Security issue (also, vendor lock-in). But why does the store have to be so awful?

[Screenshot: Chrome App Store]

The thing I want, the extension I clicked on, is there, in the middle, in a 980×700 box. That is the web page I requested. Why is it locked inside a tiny frame with an ugly, slightly faded-out set of icons beneath it?

Why is the URL for the extension cepmakjlanepojocakadfpohnhhalfol instead of some human readable name?

Why is the 980×700 box so badly laid out? The top gutter is too small compared to the left and right gutters, and there’s no bottom gutter at all, just more ads for other products I don’t want. And so much of the box is given over to a screenshot. I appreciate wanting to demo the wares, but often screenshots are not the best way to understand what a Chrome extension does. There’s so little room left for the text! And screenshots can be so visually noisy they look hideous embedded in the rest of the tiny box inside the app store page, like this one.

The navigation is also funky; clicking tabs like “reviews” does some Web 2.1 trick where the content and URL change but the page doesn’t actually navigate. This part works reasonably well, and it’s certainly faster than reloads, but it reinforces how this thing is not really a web page but a bad app trapped inside a browser, on top of a blurred out version of a store I do not want to shop at.

In the long long ago the Chrome app store had a much nicer simple page per extension. They changed it a couple of years back and I assumed they’d swiftly recognize their mistake.

Installing openaddress-machine on a new EC2 system using Chef

No need to install stuff manually; Mike already wrapped up scripts to set up an EC2 system with Chef for us. Here’s how to use it on a brand new EC2 micro server:

sudo bash
apt-get update
apt-get upgrade
apt-get install git
git clone https://github.com/openaddresses/machine.git
cd machine/chef
./run.sh

Done! The shell command openaddr-process-one now works and does stuff.

In brief, this:

  1. installs Chef and Ruby via apt
  2. runs a Python setup recipe. That installs a few Ubuntu Python packages with apt (including GDAL and Cairo), then does a “pip install” in the OpenAddress machine directory. This tells pip to install a bunch of other Python stuff we use.
  3. runs a recipe for OpenAddresses. This uses git to put the source JSON data files in /var/opt.
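
Step 2 is the interesting one. In shell terms it presumably amounts to something like this sketch (package names borrowed from my manual install notes elsewhere, not from the actual recipe):

# a guess at the Python recipe's effect, not the actual Chef recipe
apt-get install -y python-dev python-gdal libgdal-dev libffi-dev libcairo2
cd machine && pip install .   # pulls in the rest of our Python dependencies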

Update

But really, that’s so manual. If you just pip install openaddr-machine, it installs a /usr/local/bin/openaddr-ec2-run script that will do the work for you. That in turn invokes a run.py script, which you run on your local machine. Among other things, it runs a templated shell script to set up an EC2 instance and run the job on it.
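
So the fully automatic path is just two commands (I haven’t checked what arguments openaddr-ec2-run actually takes):

pip install openaddr-machine
openaddr-ec2-run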

The shell script that is run on EC2 is pretty basic. It:

  1. Updates apt (but does not upgrade)
  2. Installs git and apache2
  3. Clones the openaddress-machine repo into /tmp/machine
  4. Runs scripts to set up swap on the machine, then invokes Chef to set up the machine
  5. Runs openaddr-process to do the job
  6. Shuts the machine down
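
Reconstructed from that list, the templated script presumably looks something like this sketch (only the overall shape is grounded in the list; the swap script name in particular is a guess):

#!/bin/sh
# sketch of the EC2 user-data script, reconstructed from the list above
apt-get update
apt-get install -y git apache2
git clone https://github.com/openaddresses/machine.git /tmp/machine
/tmp/machine/chef/setup-swap.sh   # hypothetical name for the swap step
/tmp/machine/chef/run.sh          # invoke Chef to set up the machine
openaddr-process                  # do the job
shutdown -h now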

The run.py script you use on your own machine is mostly about getting an EC2 instance.

  1. De-template the shell script and put it in user_data.
  2. Use boto.ec2 to bid on a spot instance
  3. Wait for up to 12 hours until we get our instance

The details of how the EC2 instance is bid for, created, and waited on are a bit funky but seem well contained.

Installing openaddresses-machine on a new EC2 instance

Just documenting some work I did. The pattern here follows my notes on making openaddresses work in virtualenv, although I didn’t use venv here. This is really how to get the minimal stuff installed on a new Ubuntu box to run our code. Just a bunch of shell commands.

Set up an Ubuntu 14.04 AMI. The smallest one will run it OK.

Do the following as root:

# update Ubuntu
apt-get update
apt-get upgrade

# install pip
cd /tmp
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py

# install gcc and python dev
apt-get install gcc g++ python-dev

# install openaddresses
pip install Openaddresses-Machine

# install Cairo
apt-get install libffi-dev libcairo2
pip install cairocffi

# install GDAL
apt-add-repository ppa:ubuntugis/ubuntugis-unstable
apt-get update
apt-get install python-gdal libgdal-dev
pip install GDAL

I haven’t run these commands a second time yet, but they should be close. The last line, “pip install GDAL”, is probably not necessary, and it’d probably be better to install Openaddresses-Machine last, although it may not matter.

Next step: automate this with Chef Solo.

After that: set up Honcho and a Procfile to run my queue service worker.

Ubuntu guest VM frustrations

I’m trying to set up Ubuntu in a virtual machine. My goal is to recreate locally an environment a little like I might get from Amazon EC2, so I can do rapid testing of deploy scripts. I don’t need an exact Amazon AMI, but a clean Ubuntu image is close enough I can get stuff working before dealing with a remote environment.

I’ve got a VM host running. It’s Ubuntu 14.04 running KVM. Seems to work.

I’ve previously gotten Ubuntu’s new Snappy working as a guest. It boots fine and I can access ports. Unfortunately this environment is a strange Ubuntu 15, which has replaced apt with the “snappy” tool. Not enough like an EC2 host to be useful.

I tried using an Ubuntu Cloud image as a guest. They ship 14.04. But it doesn’t boot cleanly:

2015-05-03 18:25:26,540 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: request error [(<urllib3.connectionpool.HTTPConnectionPool object at 0x7f0f325e58d0>, 'Connection to 169.254.169.254 timed out. (connect timeout=17.0)')]
2015-05-03 18:25:27,542 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds
2015-05-03 18:25:27,550 - url_helper.py[WARNING]: Calling 'http://10.0.2.2//latest/meta-data/instance-id' failed [0/120s]: bad status code [404] [404]

The init scripts are waiting on some network resources to load. The IP addresses are on the local network, and I guess it’s some sort of Ubuntu cloud server configuration thing. I don’t want all that! I just want a basic, unmanaged Ubuntu instance to boot. The boot does finally give up waiting, but it takes about 4 minutes, way too long for the rapid iteration I have in mind. And then I can’t log in anyway: ssh refuses my logins, and apparently there is no default password. This image is really not designed for the quick and dirty hackery I want.

I did learn something about how QEMU/KVM’s networking works. The default user-mode networking is provided by slirp, which among other limitations does not support ICMP. So that’s why ping doesn’t work! Also it allows outbound connections only, although the “-redir” flag to kvm seems to offer some basic port forwarding.
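
I haven’t tried it here, but the legacy -redir syntax looks something like this (the disk image name is made up; newer QEMU versions replace -redir with hostfwd):

# forward host port 2222 to the guest's ssh port 22
kvm -m 1024 -hda guest.img -redir tcp:2222::22
# then, from the host:
ssh -p 2222 localhost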

I found this guide to using libvirt sort of interesting; it’s simple stuff for managing the virtual machine image, although again way overkill for what I want.
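
I assume the guide boils down to the usual virsh round trip, something like (guest name hypothetical):

virsh list --all            # show defined guests
virsh start trusty-guest    # boot one
virsh console trusty-guest  # attach to its serial console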

At this point I think I’m going to give up on running my own KVM guests and hope that EC2 gives me a new machine fast enough that testing machine setup scripts using throwaway instances isn’t too annoying.

Update: my friend Jeff says that I should look at ubuntu-vm-builder. It will apparently make me an image. I sure would be glad to just download one someone made already though.

postgres: setting up permissions for a user

I wanted to give a CGI script access to one specific postgres database, both read and write. This is remarkably complicated. I ended up creating a password-protected role named “oarunone” and giving it access to a database named “oarunone” (yes, confusing) and one table named “queue”. Something like:

create role oarunone with login password 'secret';  -- needs LOGIN to connect
grant all privileges on database oarunone to oarunone;
grant all privileges on table queue to oarunone;

But that’s not good enough. Ubuntu’s default postgres only identifies users via their Unix UID. So you also have to add a line like this to pg_hba.conf, and it has to occur before the catch-all “all” rule:

local oarunone oarunone md5

That says allow the role “oarunone” to authenticate with a password (MD5ed), but only from a local Unix domain socket and only to the database “oarunone”.
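
One more gotcha: postgres only re-reads pg_hba.conf on a reload, so apply the change and then test the login (assuming Ubuntu’s service wrapper):

sudo service postgresql reload   # pick up the pg_hba.conf change
psql -U oarunone oarunone        # should now prompt for the password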