New Windows gaming PC

After five years of running Macs I’m going to try Windows again. I only use the computer in front of me as a fancy terminal: mostly web browsers and SSH sessions. And gaming. But I’m not really using the OS as a real computer, so I can suffer with Windows. Also I’m fed up with a bunch of stuff related to Macs. macOS is not a very good Unix; I went back to doing all my real work on Ubuntu last year. Mac gaming is just awful. And Apple has apparently abandoned the idea of updating iMac hardware.

Anyway, I ordered a PC. I went full max spec with brand new Kaby Lake parts because I intend to use this machine for 4+ years. Top end CPU, top end GPU, 16GB RAM, 500GB SSD, and some fancier cooling / quiet components. Here’s the parts list.

  • Motherboard: Asus Z270-P
  • CPU: Intel Core i7-7700K 4.2GHz
  • GPU: EVGA 08G-P4-6286-KR NVIDIA Geforce GTX 1080 FTW
  • RAM: C769-V16B Corsair CMK16GX4M2B2800C14 16GB (2x8GB)
  • Drive: Samsung 960 EVO 500GB PCIe NVMe M.2 SSD
  • Power supply: Corsair CP-9020092-NA RMx RM750X 750W
  • Case: Fractal Design Define R5
  • CPU cooler: Noctua NH-D15S
  • Monitor: Asus ROG PG348Q 3440×1440

So far so good! The motherboard was a compromise; it doesn’t support SLI. The CPU and GPU are too far up the price vs. performance curve, but I was feeling spendy. The case is nice; it has sound baffling. The CPU cooler is absurdly huge. I don’t intend to overclock, but large = quiet.

Shopping for the monitor really made me appreciate what Apple has done with the iMac displays. It’s very hard to find anything of that quality in PC parts. I decided not to go with the 4k/5k/Retina high density displays: too expensive, and no GPU can fill that many pixels in a game.

There are some neat new things in PC hardware. The M.2 form factor for SSDs dispenses entirely with the legacy of hard drives; the SSD looks something like a RAM module. (Which raises the question: why are PC cases still so giant? Cooling.) And GPUs have a nice new feature called G-Sync or FreeSync. For 20+ years we’ve run the screen at 60Hz and hoped the GPU could keep up with 60fps vsync interrupts. Now the GPU just renders frames as fast as it can and the LCD only updates when there’s a new frame to draw.

I had Central Computers, a local screwdriver shop, build it for me. They did a great job on the quote and build process for $80 labor plus maybe a 10% markup on parts. The computer priced out at about $2150, the monitor at $1100. Expensive! But I use this literally all day every day, and like I said I don’t intend to upgrade for a long while.

 

Rendering 10M+ points from S3 to a map

Michal Migurski is working on a new project, rendering previews of OpenAddresses datasets in slippy maps. We’re using S3 to store stuff and trying for stateless servers. He just described the architecture plan to me; I’m writing it up here. He’s doing all the work, currently in a test git repo. The fun part is a FUSE file system for MBTiles on S3. Read on.

OpenAddresses is a collection of address data points: big CSV files full of latitude, longitude, and street name. We collect the data from government sources. Every time someone finds a new government source they create a pull request like this one for Luzern with metadata for the source: stuff like the URL and how to parse it into our format. Mike has some GitHub integration hooks that look at the pull request and render a preview image of the data file. It looks cool, but it’s also a useful debugging tool. We’d like to transform that static preview image into a slippy map. Here’s how it’s going to work.

  1. GitHub integration hook: use Tippecanoe to boil down the address points into a tiled dataset, then write the resulting MBTiles file to S3 somewhere. Note this hook and its processing run on a transient server that disappears once the processing is done.
  2. Web page: slippy map using Leaflet or the like to request OpenAddresses tiles and render the points in the browser.
  3. Tile server: persistent Flask server that uses TileStache to serve the MBTiles file to the web browser.
  4. Tile server: a FUSE filesystem that mounts the S3 MBTiles file and provides it to TileStache.

 

It’s that last step that’s particularly clever. Serving an MBTiles file locally is easy. But what do you do if your MBTiles file is on S3 somewhere? It might be quite large, 10 million points or 100 megabytes. But each map view only needs like 16 tiles or a megabyte of data. You’d rather not have to copy the whole thing from S3 first.

So instead of caching the whole file locally, Mike has written a simple read-only FUSE driver to remotely mount the S3 file. To normal Linux processes it just looks like a file, but behind the scenes each read request is turned into an HTTP Range request that fetches just those bytes from S3 and returns them.
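
The core of that idea, sketched very roughly below. This is not Mike’s driver, and a real llfuse handler involves a lot more (inode bookkeeping, caching, error handling); it’s just to show how little is conceptually needed. A read of (offset, size) becomes a Range header; the URL and names here are made up.

    # Sketch of the read-only idea: every read() against the mounted file
    # becomes an HTTP Range request to S3. URL and names are hypothetical.
    import requests

    S3_URL = "https://s3.amazonaws.com/example-bucket/preview.mbtiles"

    def read(offset, size):
        """Return `size` bytes of the remote file starting at `offset`."""
        headers = {"Range": "bytes=%d-%d" % (offset, offset + size - 1)}
        resp = requests.get(S3_URL, headers=headers)
        resp.raise_for_status()  # expect 206 Partial Content
        return resp.content

    def size_bytes():
        """Total file size, so the filesystem can report a correct st_size."""
        resp = requests.head(S3_URL)
        return int(resp.headers["Content-Length"])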

Why jump through the hoops of FUSE? The challenge here is that MBTiles files are actually SQLite databases, and SQLite really wants to open an actual file down in the depths of its highly optimized C code. So we give it a file.
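
And once it has a file, everything downstream is ordinary SQLite. Here’s a sketch of what the tile-serving side boils down to if you strip TileStache away; illustrative only, not Mike’s code, with a made-up route and mount path. TileStache does the real work of configuration, caching, and content headers.

    # Illustrative only: a bare-bones tile endpoint standing in for TileStache.
    # The MBTiles path points at the FUSE mount described above.
    import sqlite3
    from flask import Flask, Response, abort

    app = Flask(__name__)
    MBTILES_PATH = "/mnt/openaddresses/preview.mbtiles"  # hypothetical mount point

    @app.route("/tiles/<int:z>/<int:x>/<int:y>.pbf")
    def tile(z, x, y):
        tms_y = (2 ** z - 1) - y  # MBTiles stores tiles in TMS order, so flip y
        db = sqlite3.connect(MBTILES_PATH)
        row = db.execute(
            "SELECT tile_data FROM tiles"
            " WHERE zoom_level=? AND tile_column=? AND tile_row=?",
            (z, x, tms_y)).fetchone()
        db.close()
        if row is None:
            abort(404)
        # Tippecanoe's tile_data is, I believe, gzip-compressed protobuf, so a
        # real server would also set Content-Encoding: gzip.
        return Response(row[0], mimetype="application/x-protobuf")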

The big question here is performance. It seems to be OK on first testing! SQLite should be pretty efficient about reading data. I’m a bit more concerned about llfuse, the Python FUSE driver framework Mike is using. It seems to have a single global lock, so only one S3 read request can be active at a time, maybe per mounted MBTiles device. So this might not work so well; in practice multiple tile requests happen in parallel, even for a single user looking at a single slippy map. But we don’t imagine many users, so it may not be too bad.

 

League of Legends server IPs

Just writing this down so I don’t lose it. I haven’t verified these. But most of the old lists I’ve seen are no longer valid since Riot started doing its own peering.

  • NA – ping 104.160.131.3
  • EUW – ping 104.160.141.3
  • EUNE – ping 104.160.142.3
  • OCE – ping 104.160.156.1
  • LAN – ping 104.160.136.3

Source is this Reddit discussion about yet another ping checker. I wish Riot would build a small network diagnostic into the client, before a game starts. Many of these third party tools don’t measure round trip time correctly and none that I’ve seen measure packet loss.
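
In that spirit, here’s a rough sketch of what a minimal check could look like: wrap the system ping and scrape its summary for average RTT and packet loss. Untested, and it assumes a Unix-style ping output; it’s only here to show how little is needed.

    # Quick-and-dirty RTT / packet loss check against the servers listed above.
    # Wraps the system `ping`; parsing assumes Linux/macOS output format.
    import re
    import subprocess

    SERVERS = {"NA": "104.160.131.3", "EUW": "104.160.141.3",
               "EUNE": "104.160.142.3", "OCE": "104.160.156.1",
               "LAN": "104.160.136.3"}

    for region, ip in SERVERS.items():
        out = subprocess.run(["ping", "-c", "20", ip],
                             capture_output=True, text=True).stdout
        loss = re.search(r"([\d.]+)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max summary line
        print(region, ip,
              "loss=%s%%" % (loss.group(1) if loss else "?"),
              "avg=%sms" % (rtt.group(1) if rtt else "?"))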

Also I was curious so I just did an mtr to the NA address. Average response to the end is 92ms, which is about the same number League reports as ping. Standard deviation is 12ms; best/worst is 73/140ms. That’s with ICMP probes; I don’t get responses from UDP probes.

Verizon tethering for real

I’m glad I did that tethering experiment; I had to use my cell phone for Internet in anger today after yet another local WISP outage. Big storm, power outages, but also some robustness problems in equipment and/or service. I do have sympathy; the ISP depends on $300 wireless devices installed in trees scattered all over the county.

Anyway my iPhone worked fine for tethering. I put it in an east-facing window. Only one or two dots of service, but my Mac’s connection through it was reliable and seemed fast. I didn’t measure, but it felt 10Mbps fast at least. I quickly forgot I was tethered.

Doing my ordinary daily routine my iMac alone seems to use about 100 MB / hour. Call it 1GB a day. My data plan is only 2GB, but with another 2GB carryover that’s good for 3-4 days before it costs money. Overage charges at Verizon are $15/GB, but you can also avoid overage by adjusting your data plan size; no switching cost and it’s retroactive for the whole month. The plan price is about $5/GB.

It’d be $135/month if I wanted to just buy 30GB a month. But I really use more like 100GB / month once you factor in other devices, discretionary downloads, etc. That’d be $450/month (!).
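
Just to spell out the arithmetic (using the figures above, not Verizon’s actual rate card; $135 for 30GB works out to about $4.50/GB):

    # Back-of-envelope tethering cost using the figures above.
    PLAN_PER_GB = 135 / 30   # ~$4.50/GB if you resize the plan to fit
    OVERAGE_PER_GB = 15.0    # or $15/GB if you just blow past the plan

    daily_gb = 1.0           # the iMac-alone estimate above
    print("backup-only month, plan resized: $%.0f" % (30 * daily_gb * PLAN_PER_GB))  # ~$135
    print("full 100GB month, plan resized:  $%.0f" % (100 * PLAN_PER_GB))            # ~$450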

The plan flexibility makes Verizon a pretty good low cost backup option.

 

Overwatch netcode

I’ve been playing a lot of Overwatch, on PS4. It’s fun! Also it feels super responsive and reliable on the network, I almost never perceive lag. Blizzard is uniquely good at building games like this so it’s no surprise, but what are they doing to make an Internet game feel like real time?

This video from Blizzard developers is good info (but outdated; keep reading). It goes into detail on server prediction, client prediction, and perceived lag. I think they’re mostly doing what other games have done in the past, but there are a lot of choices and tradeoffs they make. One choice they say is new: they assume that most clients have reliable, fast networking. That means they don’t need to buffer nearly as much. Specifically they say that with most games you may be buffered 4 ticks (or 80ms), but if they think your network is good they’ll buffer you just 1 tick (20ms), so it feels even more real time.

The server runs the simulation at 62.5Hz. When that video was posted, clients got network updates at 20.8Hz. In August they pushed an update so PC clients get updates at 62.5Hz. PS4 and XBone also got the faster tick rate on September 13. The rate is adaptive, so your client will drop down if you have low bandwidth.

The game uses a UDP protocol. I’m not sure about bandwidth usage. Back in the 20.8Hz days this post claimed it was about 60-100Kbps in both directions, so maybe triple that now? Bandwidth consumption was the reason Blizzard originally went with the lower tick rate.

I’m really curious why it’s 62.5Hz and not the obvious 60Hz. Maybe you want the network updates to be just a bit faster than the screen updates for smoothness?

For comparison, League of Legends runs at 30Hz on the server. I’m not sure about the network / client update rate; it may be the same. CS:GO is 64Hz (or even 128Hz).
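
Just as arithmetic, here are those tick rates as per-tick times. One thing that jumps out: 62.5Hz is exactly 16ms per tick, and the old 20.8Hz client rate is a third of that (one update every three simulation ticks), which may be all the answer to the 60Hz question there is.

    # The tick rates mentioned above, converted to per-tick times.
    rates = [("Overwatch server sim / new client rate", 62.5),
             ("Overwatch old client rate", 20.8),
             ("League of Legends server", 30.0),
             ("CS:GO", 64.0),
             ("CS:GO (high tick rate servers)", 128.0)]
    for label, hz in rates:
        print("%-40s %5.1f Hz  ->  %4.1f ms per tick" % (label, hz, 1000.0 / hz))
    # 62.5 Hz -> 16.0 ms exactly; 20.8 Hz -> ~48 ms, one update per 3 server ticks.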

Internet in the mud

This is what happens when my Ubiquiti wi-fi antenna outside falls off its mast into the mud.

The network was surprisingly useful in the bad period. I didn’t really notice for basic web stuff, although I did notice video streaming was unreliable. The main thing I could measure was that my ping time to the antenna (as measured by mtr) had a standard deviation of 28ms, with 5% packet loss too. So something was wrong, but the hardware handled it robustly. What’s with the delayed pings, btw? Is there some retransmit logic involved somewhere, maybe in the wireless devices?

“In the mud”, btw, mostly meant on the ground. Fortunately it fell facing my house, but with little or no line of sight to it. Mind you, the other antenna in the house is literally behind a wall; it works well enough that I’ve never bothered to mount it in the window.

Local stream flows: Yuba and Deer Creek

We’re about to get a lot of rain, and like every year I get interested in the local rivers and water flow around Grass Valley, CA in Nevada County. Here are some rough notes and resources.

Yuba River, particularly the South Fork

  • Yuba River at Marysville: flow history and forecast. This is below Englebright Dam, so it includes the choices of how much water they’re releasing there. But it’s excellent data from NOAA.
  • South Yuba at Highway 49: flow history, forecast. From Dreamflows.
  • Yuba River at Jones Bar: flow history. From CDEC. This is either the exact same sensor Dreamflows is showing at Highway 49 or else one a mile downstream.
  • Yuba River flow history along the whole river. This is California state data (CDEC) and the website has lots of neat stuff.
  • Watershed map
  • Wild and Scenic Rivers Act protects the river from Lang Crossing to Bridgeport. This is the much-loved stretch of scenic river with lots of swimming holes, etc.
  • BLM management plan

Deer Creek in Nevada County

This ArcGIS map of the World Hydro Reference Overlay does a good job of showing labelled rivers against towns and major roads.