Two unrelated PS4 gripes

Sony’s online store requires a login, and the login requires third party cookies: the sign-in flow sets a cookie from a different domain, so it won’t work if you have third party cookies disabled. This breaks the Brave Browser, which disables third party cookies by default. It also breaks in Chrome if you disable those cookies. I thought Safari blocked third party cookies by default; I wonder if Sony has a workaround for it? The error displayed is:

{"error":"invalid_client","error_description":"Bad client credentials","error_code":4102,"parameters":[]}

The PS4 operating system, or its apps, have a bug where putting a game in the background (say, by opening the system menus) makes the fan get really loud. I assume that instead of suspending entirely, the game keeps spinning, consuming lots of CPU or GPU and generating heat, which makes the fan run. It’s dumb. A fan theory is that the games have no framerate cap, so in the background they effectively render at 1000 FPS or whatever, but I think the problem may be broader than that.


LA Times and ads

The LA Times is a good newspaper and is currently doing the best political coverage in California. It is also the most aggressive ad-shoveling website I have ever seen. Their ad blocker blocker and paywall work, preventing me from reading articles. I even tried installing an ad blocker blocker blocker, which didn’t work.

So I open articles like this in incognito mode, let the page run its ads, close the popups, mute the videos, and try to ignore the visual distraction. But boy, that page does not go quietly. Here’s how they reward their readers.


That’s a timeline of 30 seconds of page activity, captured about 5 minutes after the article was opened. To be clear, this timeline should be empty: nothing should be loading. Maybe one short ping, maybe one extra ad. Instead the page requested 2000 resources totaling 5 megabytes in those 30 seconds, and it will keep making those requests as long as I leave the page open. That’s 14 gigabytes a day.

There’s no one offender in the network log; it’s a wide variety of different ad services. The big data consumer was a 2 megabyte video ad, but it’s all the other continuous requests that really worry me.

A lot has been written about the future of journalism and the importance of businesses like the LA Times being profitable as a way to protect American democracy. I agree with that in theory. But this sort of incompetence and contempt for readers makes me completely uninterested in helping their business.

Edit for pedantic nerds: for some unfortunate reason this blog post ended up on Hacker News, where people raised eyebrows at my 14 gigabytes / day estimate. To be crystal clear: I did a half-assed measurement for 30 seconds and measured 5 megabytes. 5 MB * 2 * 60 * 24 = 14,400 MB, or about 14GB. That’s all I did.
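The arithmetic can be checked in one line of shell, using the same numbers: 5 MB per 30-second sample, so two samples a minute:

```shell
# 5 MB every 30 seconds -> 2 samples/minute * 60 minutes/hour * 24 hours/day
echo "$((5 * 2 * 60 * 24)) MB/day"   # prints "14400 MB/day"
```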

Yes, I know that extrapolation is possibly inaccurate. If you want to leave a browser window open for 24 hours to measure the true number, be my guest; I’ll even link to your report here. (Hope you have lots of RAM and bandwidth!) I’m a web professional with significant experience measuring bandwidth and performance; I know how to do this right. But that’s the LA Times webmaster’s job, not mine, and I don’t have a lot of confidence that they much care.

3.2 hours is the current record for measuring traffic before something crashes.

Edit 2: for many people visiting my blog, I think this is their first view of a network timeline graph like this one. Generating your own timeline for any web page is easy in Chrome: it’s a standard view in the Network tab of Developer Tools. See the docs, but briefly: open dev tools, click on Network, and watch the waterfall graph. You want to open it before loading the page so it captures everything from the start. Be warned, a normal web page’s activity will end very soon and be boring; the LA Times is a very special website.

Here’s a larger view of 200 seconds of that same LA Times article page, and a direct link to the full-size image. (It’s funny redoing it; every time I look at this page it does some new insane thing.)


Upgrading a PS4 hard drive

I just upgraded the hard drive in my PlayStation. It was remarkably easy following these very thorough directions. Sony designed the system to be easily upgraded, which was quite a surprise.

The hardware part is super simple: slide off the cover, remove a single branded screw, and slide out the drive tray. Swap in the new drive (4 screws), slide the tray back in, done. I replaced the original with a 2TB hybrid drive; a full SSD would be faster but costs 5x as much. These Seagate SSHD drives have a 64GB SSD caching the 2TB platter and seem like a reasonable compromise.

The software is also surprisingly simple. Back up the PS4 to a spare external USB drive, swap in the blank new drive, then boot to recovery mode. You can then reinstall the firmware from a download Sony provides, restore your backup, and you’re done. It took all of 30 minutes of my time, plus 5 hours of waiting for the backup and restore copies.

My main reason for the upgrade was more space. The old units shipped with a 500GB drive, which after overhead allowed for only about 6 games installed at once. That was a significant nuisance; I don’t want to download 30GB a second time for a game I want to revisit.


Microsoft aggression

This is bold.


I’d agreed to SumatraPDF’s request to be the default PDF handler and approved it with Administrator privileges. Microsoft decided it knew better.


Non-ASCII filenames

I’m on a quest to clean up my music library. I decided to give up and make all my music filenames ASCII: no µ-Ziq, no Sigur Rós, no préludes. I feel bad, but after 10 years of struggles I still occasionally run into problems with Unicode filenames. Also, most music players get their display info from metadata tags anyway, so the filenames don’t matter so much.

Anyway, here’s a quick trick for finding non-ASCII filenames:

LC_ALL=C find . -name '*[! -~]*'

The find pattern matches any name containing a character outside the range 0x20 (space) through 0x7e (~). The LC_ALL=C setting disables locale-aware text processing, so the matching is done on bytes, not characters.
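Here’s a quick way to convince yourself the pattern works, using throwaway files in a temp directory (the filenames are just examples):

```shell
# Make a scratch directory with one ASCII and one non-ASCII filename.
dir=$(mktemp -d)
touch "$dir/plain.mp3" "$dir/Sigur Rós.mp3"

# Only the name containing bytes outside 0x20-0x7e is reported,
# i.e. "Sigur Rós.mp3" but not "plain.mp3".
LC_ALL=C find "$dir" -name '*[! -~]*'
```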

The bigger project here is the music library grooming. I had this all locked down in 2008 or so, when almost all my music was ripped by a service. But since then I’ve acquired a bunch of music of uncertain provenance with all sorts of random tags. I think it was when I discovered I had a genre named “drownstep” that I realized it was time to clean things up.

Putting lolslackbot on the back burner

I’ve decided to stop working on lolslackbot, my social project for League of Legends players who use Slack or Discord. I wrote it originally for a few friends, then slowly expanded it to a few hundred users. But I’ve never put the work in to make it a consumer product and now am not motivated to do it.

The main missing feature is any sort of web interface so people could sign up for themselves. I’ve been maintaining it by hand with database update scripts, doing ~30 minutes of one-off work every few weeks instead of one focused month-long engineering project. This blog is full of bold plans to port the whole thing to Django and get going on a web interface, but I never did it. It’s too much product work I don’t really know how to do well: designing interactive web UI. Hell, I don’t even have a proper name for the project.

There are also some deeper technical problems. The Django port seems doable but requires database schema changes, specifically in how many-to-many relations work. And I got part of my core schema wrong: an assumption that an individual only belongs to one group. Fixing that would require redoing pretty much all the tests and half the business logic. At some point I’d also have to migrate from sqlite to Postgres, and that doesn’t sound like fun at all. In retrospect it’s too bad I didn’t start with Postgres+Django, but that seemed complicated at the beginning, when I was thinking of this as just a cron job.

My real reason for the lack of enthusiasm is the market. I like games and I like the idea of making game playing more social. But League of Legends is a hard community to build humble tools for. Most of the energy there goes to highly polished and well-marketed sites like LolKing, and I’m just not that ambitious. There’s not much money in it (Riot’s API terms require that you don’t charge for services) and not a lot of love either. My gaming buddies and I are on a bit of a LoL break too, which makes it harder to stay personally motivated. I’m also bummed that Riot hasn’t done anything more with Clubs, their social feature; my hope was to springboard off of that to build out the bot.

I did get some data from the last user population of Learning Fives, a cohort of ~80 people playing games together for a few weeks. 50% said they found it useful, 20% said it wasn’t, and 30% didn’t know what it was (despite seeing it in their channel). Not sure what conclusion to draw from that.

Anyway, it’s a weight off my mind to just say I’m not going to do further work on this, at least for now. Truthfully, my mind is on political work right now; I’d really like to do some sort of progressive activism combining data processing and GIS. (I’m following Mike’s work on redistricting closely.) To the extent I do anything for games, it’s about time to revisit Logs of Lag, which 2.5 years later is still running just fine and still uniquely useful. But I have some bugs to fix and maybe some improvements to make.


Synchronizing Windows directories

I’ve been using Allway Sync to keep my two new Windows machines in sync, and it’s pretty good. I sync one machine to an external hard drive, then sync that drive to the other machine, so really there are three copies. Allway Sync also supports various Internet services for syncing, like Dropbox or its own cloud service.

The sync algorithm seems relatively robust. It’s a bidirectional sync, so changes on both sides can propagate at the same time. There are safety features, like a sanity check if too many files changed. It tracks metadata and deleted files so deletes can propagate, and it keeps old versions of things in a hidden _SYNCAPP folder. I haven’t delved into the details of how it handles a single large binary file that’s only partially changed; I don’t know if it does binary diffs or just copies the whole file over.

The UI is pretty awkward. Mostly it’s just ugly, but it’s also a bit confusing. I believe you have to define a new job for each folder you want to sync. (There is a way to sync one source folder to multiple destinations.) I finally found that if you enable the toolbar, there’s an icon for “sync all jobs at once”. I haven’t tried automatic syncing yet; it seems to rely on a service watching for file change events and/or a timer.

The trick now is figuring out what in Windows I can safely sync. My Steam apps folder is safe, and syncing it saves me from having to download a 30GB game twice. Syncing my Documents folder seems to work too, and propagates some app settings and game save files, not to mention my actual work output like source code and text documents.

The part I’m on the fence about is syncing my Roaming profile. That should keep many more app settings in sync, and my first copy seems to have worked OK. But there’s a lot of random stuff in there that doesn’t quite seem like it should be copied, like Slack’s cache files. OTOH, the Roaming profile was designed to be copied from one machine to another, so it should work? Edit: after trying it, I think syncing the whole Roaming profile is a bad idea. Parts of the folder, like Microsoft, seem to be treated specially. Also, it makes the most sense to sync when nothing is open that writes to that directory, i.e., before you log in, and that’s awkward.

The program is free, but with a limit on the number of files. A license is $26, but that covers only a single machine; a second license is $16.

There are a bunch of other sync options too. For a long time I used Unison to sync file systems. It works great and I’d still use it from a Unix command line. There are Windows builds, but I couldn’t get the GUI version to work. I also get nervous using Unix tools to manipulate Windows filesystems.