Got a crazy deal on a lightly used BMW iX to replace it and will be saving a ton, the car is far nicer inside and out, and I actually trust BMW’s engineering and service. Sad it didn’t work out with the Rivian, but excited for the new ride for Lacey.
We have decided to get rid of our Rivian. At the price we paid, it shouldn’t feel like we are driving a beta test where service appointments are multiple months out. Over it.
Also, For All Mankind always finds a way to become even more joyfully unhinged every single season.
Only halfway through the first episode of For All Mankind season five, and I’m calling it — they’re going to cryogenically freeze Ed Baldwin.
After the OpenClaw Anthropipocalypse, I have been struggling to find a suitable alternative. Started with OpenAI Codex, and while it matches Opus 4.6’s 1M token context window, it just isn’t well suited to my use case of orchestration plus friendly assistant. It has a tendency to hallucinate, and its projected demeanor is… weird. It’s like concentrated Mark Zuckerberg from a personality perspective. Decently good at technical tasks, tho.
I am currently using z.ai with their “Coding” plan, and I’m impressed. GLM-5.1 is remarkably similar to Opus 4.6 in my experience thus far. The 200k token window is tiny, unfortunately, but with some creative use of subagents, it’s manageable. I’ve also kept Codex around for now, modifying my standard operating procedures to encourage the use of Codex subagents for grunt work that requires a large token count.
I’ve started blocking out an hour at the end of the day for prompting: capturing tasks and projects and codifying them into Markdown files placed in an “ingest” directory. I then run a custom command that tells my agent to create a context fork for each file, autonomously executing each task in a sandbox, and I head home. If a fork encounters a blocker for any reason, it sends me a message on Signal that I can reply to or ignore.
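For the curious, the dispatch step is roughly this shape. A minimal sketch, assuming hypothetical `run_task` and `notify` callables standing in for the agent’s context-fork/sandbox machinery and the Signal hook (neither name comes from my actual setup):

```python
from pathlib import Path


def dispatch_ingest(ingest_dir: Path, run_task, notify):
    """Run each Markdown task file in the ingest directory.

    run_task(text) returns None on success or a short blocker description;
    blockers are forwarded via notify (e.g. a Signal message) instead of
    halting the rest of the batch.
    """
    results = {}
    for task_file in sorted(ingest_dir.glob("*.md")):
        blocker = run_task(task_file.read_text())
        if blocker:
            notify(f"{task_file.name}: {blocker}")
        results[task_file.name] = "blocked" if blocker else "done"
    return results
```

The key design choice is that a blocked task reports and moves on, so one stuck fork never stalls the overnight batch.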