• The good news, I think, is that the two things are constitutionally linked: in order to make “AI” more powerful we will collectively also have to (or get to) relinquish centralized control over the shape of that power. The bad news is that it won’t be easy. But that’s very much the tradeoff we want: hard problems whose considered solutions make the world better, not easy problems whose careless solutions make it worse.

  • These will not yet have been the resonant vibes. All these performative gyrations to vibe-generate code, or chat-dampen its vibrations with test suites or self-evaluation loops, are cargo-cult rituals for the current sociopathic damaged-brain LLM proto-iterations of AI. We’re essentially working on how to play Tetris on ENIAC; we need to be working on how to zoom back so that we can see that the seams between the Tetris pieces are the pores in the contours of a face, and then back until we see that the face is ours. The right question is not why a brain the size of a planet can’t put four letters onto a 15x15 grid; it’s what do we want? Our story needs to be about purpose and inspiration and accountability, not verification and commit messages; not getting humans or data out of software but getting more of the world into it; moral instrumentality, not issue management; humanity, broadly diversified and defended and delighted.

  • We do not seek to become more determined. We try to teach machines to play games in order to learn or express what the games mean, what the machines mean, how the games and the machines both express our restless and motive curiosity.