I’m using Google Cloud’s Postgres implementation. I thought I had it all working except my AppEngine server couldn’t connect to it in production. For dev work I’d just authorized my home IP address and connected directly to it from my dev box. But that didn’t work in production.
Turns out there are complex docs for connecting to Google Cloud SQL. Here’s what I had to do when following those instructions. (This was made significantly more difficult by intermittent outages either at Google or in my network connection; lots of failures and timeouts.)
- Download cloud_sql_proxy to my dev box
- Get some application default credentials on my dev box
gcloud auth application-default login
This does some OAuth thing with my Google account. In retrospect I have no idea how I was able to gcloud deploy, etc. before doing this.
- Authorize my home IP address to connect to the SQL instance via the cloud console.
- Run cloud_sql_proxy and have it listen on 127.0.0.1:6432 (port 5432 is busy with my own Postgres instance).
Note the instance connection name has three parts separated by colons. “gcloud sql instances list” does not show you names in this format, but the console does in the “Instance details” page for that specific SQL server.
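The three-part format is easy to sanity-check before pasting it anywhere — a quick sketch, with a made-up name (the proxy is then pointed at such a name with something like cloud_sql_proxy -instances=NAME=tcp:6432):

```python
# An instance connection name has three colon-separated parts:
# PROJECT:REGION:INSTANCE. This one is hypothetical.
name = "myproject:us-central1:dbhost"

parts = name.split(":")
assert len(parts) == 3, "expected PROJECT:REGION:INSTANCE"
project, region, instance = parts
print(project, region, instance)  # myproject us-central1 dbhost
```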
- Test I can connect to the database via local proxy
The proxy will log a line about the connection.
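Before pointing psql or application code at the proxy, it's handy to confirm something is actually listening on that port. A minimal sketch, assuming the 6432 port from above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Is the local Cloud SQL proxy up?
print(port_open("127.0.0.1", 6432))
```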
- Add the beta_settings / cloud_sql_instance stuff to my app.yaml. I have no idea what this does or if it’s necessary, but the docs said I should.
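For reference, the fragment the docs describe looks something like this (the instance connection name here is made up; the documented key is the plural cloud_sql_instances):

```yaml
# app.yaml fragment per the Cloud SQL docs (instance name hypothetical)
beta_settings:
  cloud_sql_instances: myproject:us-central1:dbhost
```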
- In my dev environment, configure my code to connect to its database via 127.0.0.1:6432, so it uses the proxy. It’s not clear the proxy is necessary. I can connect directly to the database machine via IP address. I guess it’s nice in that the proxy is configured just with the instance connection string, not an IP address.
- In prod, configure my code to connect to the database via a connection string with a ?host= parameter.
I have no idea what this ?host= argument does or why it works. Is this a standard thing or did Google fiddle with its Postgres drivers somehow?
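To make the dev/prod split concrete, here's a sketch of the two connection URLs side by side. All the names are hypothetical; dev goes over TCP to the local proxy, prod uses the ?host= form:

```python
# Sketch of dev vs. prod Postgres connection URLs (names are made up).
INSTANCE = "myproject:us-central1:dbhost"

def dev_url(user: str, pw: str, db: str) -> str:
    # Dev: TCP to the local Cloud SQL proxy on port 6432.
    return f"postgresql://{user}:{pw}@127.0.0.1:6432/{db}"

def prod_url(user: str, pw: str, db: str) -> str:
    # Prod: no host:port at all; instead a ?host= query parameter
    # naming /cloudsql/<instance connection name>.
    return f"postgresql://{user}:{pw}@/{db}?host=/cloudsql/{INSTANCE}"

print(dev_url("app", "secret", "mydb"))
print(prod_url("app", "secret", "mydb"))
```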
That’s more or less the process described in Google’s docs, but it took me an hour of trial and error to make it work.
Update: I asked Stack Overflow how Cloud SQL connections work and got a very quick answer from a Google Cloud engineer named Vadim: “It’s a UNIX socket backed by Cloud SQL Proxy”.
In detail, /cloudsql/projectname:us-central1:dbhost is a Unix domain socket, not a TCP socket. There’s an actual file (or directory?) named /cloudsql/projectname:us-central1:dbhost in the Unix machine’s filesystem. The ability to connect to Unix domain sockets with a special host= parameter is implemented in libpq, the Postgres connection library. The Cloud SQL Proxy code talks about how it can proxy from a Unix domain socket.
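A small demonstration of the point: binding an AF_UNIX socket creates a real entry in the filesystem, and clients connect to it by path rather than by IP and port (the path here is a temp file, standing in for /cloudsql/projectname:us-central1:dbhost):

```python
import os
import socket
import stat
import tempfile

# Binding a Unix domain socket creates an actual file on disk.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)  # this is what puts the socket file in the filesystem
server.listen(1)

print(os.path.exists(path))                  # True
print(stat.S_ISSOCK(os.stat(path).st_mode))  # True

# Clients connect by path; no IP address or port involved.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = server.accept()
conn.sendall(b"hello")
data = client.recv(5)
print(data)  # b'hello'

client.close()
conn.close()
server.close()
```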
It’s a nice optimization; it’s silly to run data through a TCP/IP stack between two processes on the same computer. I assume Linux has a highly optimized localhost path, but even so Unix domain sockets should be better. This discussion suggests 1.5–2x the throughput.
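The comparison is easy to sketch yourself. Numbers vary wildly by machine and kernel, so this makes no claim about which wins on your box; it just pushes the same bytes through a loopback TCP connection and a Unix domain socket and times both:

```python
import os
import socket
import tempfile
import threading
import time

def throughput(family, addr, total=8 * 1024 * 1024, chunk=65536):
    """Bytes/sec pushing `total` bytes through a local socket pair."""
    srv = socket.socket(family, socket.SOCK_STREAM)
    srv.bind(addr)
    srv.listen(1)

    def drain():
        conn, _ = srv.accept()
        while conn.recv(chunk):
            pass
        conn.close()

    t = threading.Thread(target=drain)
    t.start()
    cli = socket.socket(family, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    buf = b"x" * chunk
    start = time.perf_counter()
    sent = 0
    while sent < total:
        cli.sendall(buf)
        sent += chunk
    cli.shutdown(socket.SHUT_WR)  # signal EOF so the drain thread exits
    t.join()
    elapsed = time.perf_counter() - start
    cli.close()
    srv.close()
    return total / elapsed

tcp = throughput(socket.AF_INET, ("127.0.0.1", 0))
unix = throughput(socket.AF_UNIX, os.path.join(tempfile.mkdtemp(), "bench.sock"))
print(f"TCP loopback: {tcp / 1e6:.0f} MB/s, Unix socket: {unix / 1e6:.0f} MB/s")
```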