• tools and techniques — steel, coal, oil, welding, riveting, boiler-construction — or use them in particularly literate and skilled ways. They wrapped up their careers with a joyless and unrewarding last chapter, complaining about the decline and fall of civilization due to the brutish new technology, and remaining unshakably nostalgic for the Age of Sail. And a final, sufficiently supple-minded or accidentally prepared group developed a new ludic relationship with ship-building. The three legacy groups were, respectively, selected out, adversely selected (due to being grandfathered in), or adaptively suited to the new technological mode and ludicity.

  • The specific elements of the craft they happen to be attached to may survive the transition, but the importance of those elements will not. Note that while all current writers will continue to also write without AIs (as I am doing with this essay), this group is made up of people who refuse to use LLMs, who reject the new ludicity. If you love walking too much, you might refuse to learn to drive.

    The refusal to engage with LLMs may be rooted in something deeper than aesthetic preference — a desire to see the original notions of craft as heroic. If the importance of those elements won’t survive the transition, the identity built on them is also at stake. The shame is not about the tool but about what the tool reveals about the permanence of one’s skills.
  • So LLMs are a kind of AI that I think is deeply misunderstood by a lot of people who, pre-AI, had weak or non-existent relationships to text. Because LLMs allow those who find no joy in the ludic affordances of text to still develop radically superior instrumental relationships with text, they (and the critics watching them) make the critical error of thinking that the ludic quality can be dispensed with entirely. They use LLMs in ways that suggest an intent to reduce text to some sort of pure industrial intermediate that will eventually sink out of sight beneath more sensorial modes of information processing. I am now convinced this won’t happen. The ludic qualities of both text, and AI as a technology, are load-bearing at the human interaction level. You must play to unleash their potentialities. While LLMs may well develop their own cryptic latent languages to communicate among themselves, that does not mean text will disappear between humans, or between humans and machines. A highly expressive symbol system with a grammar is a necessary part of both human-human and human-machine relations.

    The deeper point: prose has structure and purpose beyond being identifiably “human” or “AI.” Most people scan text looking for fingerprints of authorship rather than engaging with the ideas. This detection reflex — trying to discern whether a model or a person wrote the words — distracts from the actual value of the text. The problem compounds when someone both lacks appreciation for the form of text itself and feels threatened by a model’s ability to imitate it.
  • Because AI is very much a management technology, some maturity and experience with command is extremely valuable. So the usual age correlation of technology adoption might be different here. Those who pivot and learn to enjoy toy-making may be middle-aged like me rather than young. Not only do we typically have more experience with supervision, we also appreciate the ability to delegate tasks we are no longer as skilled at as we used to be.

    AI as a management technology requires familiarity with creative process — with supervision, delegation, and the taste to know when the output is good. Building AI tools for mass adoption misses this: the value comes from creative maturity, not accessibility alone.
  • This means counterprogramming the three instinctive attitudes that work against playfulness — perceived threat to humanness, inexperience with command, and overly realistic expectations. And beyond that, we have to account for the fact that AI vastly increases the textual agency of humans who, through some mix of lack of aptitude and poor nurture, never learned to have fun with text.

  • Players, though, should have at least a minimal understanding of the toy-making side in order to cultivate their tastes and capacities for ludic immersion beyond a point. You can’t have the right expectations and relationships with a thing if you don’t quite know what it is or how it was made. Most adult fans of Lego know, for instance, about the very tight tolerances of the bricks, and the rules for legal and illegal builds that emerge from stress/strain limits on the material. But for the writer/toy-maker, the ludic aspect must extend to the irreducible toy-maker side. Presumably, engineers at Lego do have fun nerding out over injection molding.

  • This is one reason I haven’t experimented much with system prompts or “professional” tooling. Those feel like chores, comparable to putting toys away neatly.

  • This difference in the ludic qualities of pre-AI and AI textualities leads me to make a prediction: The people who will enjoy writing with LLMs will not be the same kinds of people who enjoy writing without them, except by accident, as in my case. You see this with every major technology. Let’s use ship-building as an example, focusing specifically on the ludic aspect of the transition from sail to steam for builders.

  • In other words, whether you’re making toys or production-grade things, it is naturally easier to find ludic flow, in the sense of Mihaly Csikszentmihalyi, with text as an engineering medium than with 3D printers, soldering irons, or IDEs. Truth be told, this is the reason I switched from engineering to writing. I like that it’s easier to find flow with fewer specialized skills, regardless of how “serious” the task. It is always possible to play with text. It’s like food that way.

    Text may be uniquely ludic now, but the ease of finding flow with it could extend to other craft domains once agentic workflows mature. If AI makes 3D printers, soldering irons, and IDEs as playful as text editors, the advantage of text as a flow medium narrows.
  • If you think poorly produced AI writing can be a pain to read, you can imagine how much more of a pain it can be while the text toy is still under construction, unless you develop enough supervisory ludicity.

  • It takes some time to learn to derive a craft-like pleasure in observing the mechanics of a complex text toy coming together in a chat session. It’s not the same pleasure you might find in typing the words yourself, or in reading them.

    The barrier to seeing AI output as a toy to be molded is that it does not pull from the user’s own archive — it lacks the personal material that makes craft-like pleasure possible. Until AI can draw on what someone has already thought and written, its output remains instrumental: good enough on its own but not malleable enough to play with. The archive is the missing ludic ingredient. pkm
  • By reading with LLMs, I mean either loading a specific text into an LLM and accessing it indirectly (interrogative play), or using an LLM on a parallel track to support direct reading (perspectival play), with the relative loading of each mode being a function of your expertise and/or enjoyment level in the subject. Within the texts as toys mental model, either mode of reading with LLMs must be fun. I’m beating this point to death for a reason. AI is a ludic technology. If you’re not having fun with it, playing with it, you’re doing it wrong.
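    To make the two modes concrete, here is a minimal, hedged sketch of how they might look as prompting patterns. It assumes the OpenAI Python client; the model name and the two helper functions (interrogative_play, perspectival_play) are illustrative placeholders rather than anything prescribed above, and any chat-capable LLM API would work the same way.

    ```python
    # A minimal sketch of the two modes of reading with an LLM.
    # Helper names and the model name are placeholders, not prescriptions.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def interrogative_play(text: str, question: str) -> str:
        """Load a text into the model and access it indirectly, by interrogating it."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Answer questions using only the supplied text."},
                {"role": "user", "content": f"TEXT:\n{text}\n\nQUESTION: {question}"},
            ],
        )
        return response.choices[0].message.content

    def perspectival_play(passage: str, angle: str) -> str:
        """Read the text directly yourself, and ask the model for a parallel take on a passage."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "user", "content": f"Give a short take on this passage from the angle of {angle}:\n\n{passage}"},
            ],
        )
        return response.choices[0].message.content
    ```

    The “relative loading” of the two modes is then simply how often you reach for one pattern versus the other as you read.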

  • In a mature technological era, such as that of pre-LLM novels or screenplays, slop is almost entirely the writer’s fault. In an emerging technological era, it is often equally the fault of the reader, for not trying to learn to play in new ways.

    Slop’s signature is excessive parallel structure — it takes weakly substantiated ideas and creates the appearance of elucidative contrast. The reader’s challenge is that the AI fingerprint (the structure itself) distracts from the potential of the ideas being explored. In a mature era the writer bears the blame for poor craft; in an emerging era the reader also bears responsibility for not yet knowing how to read through the new medium’s artifacts.
  • Perspectival play is an extension of the kind of pleasure you get from using Google or Wikipedia to go down bunny trails suggested by the main text. But with an LLM, you can also explore hypotheses, ask for a “take” from a particular angle or level of resolution, and so on. The LLM becomes a flexible sort of camera, able to “photograph” the context of the text in varying ways, with various sorts of zooming and panning.

  • Interrogative play works better with technical materials right now, for two reasons. First, the texts are typically less engineered for pleasure to begin with. Academic papers are rarely fun to read (the big paper behind all this technology, “Attention Is All You Need,” is horribly painful to read, despite its alluring title). But LLM-assisted reading can produce a layer of fun as a friendly recoding.

    Artistic work is often intentionally obtuse — not meant to be “read quickly” but to be engaged with. LLM-assisted reading threatens this by making the inaccessible accessible. For personal archives, the dynamic is different: the accumulated material is so vast that materializing it into artwork feels impossible. The editorial motive — how to frame the archive — becomes the art itself. Letting others engage with raw material without the artist’s lens as filter has a capitalistic logic; art resists this by insisting on curation as a form of hiddenness. discovery hiddenness capture editorial pkm
  • This mode could be called kit-mode. In toy-engineering terms, it’s a bit like the build-operate-transfer (BOT) pattern in big industrial deals — somebody builds a factory for you, runs it for a while to work out the kinks, then hands it over to you. Or if you don’t mind the paternalism, it’s like helping a child play with a toy that’s just beyond their skill level to play with by themselves.

  • The “final boss mode” is writing to create entire extended universes that readers can explore. These are currently painful to create with unassisted text, and generally require entire large communities working collaboratively (such as SCP) to pull off. I think we’ll soon see single-author extended text universes that are as fun to get into as the MCU.

  • Speed reading exploits the affordances of what we might call the access mechanics of a text, comparable to how a magnetic disk is laid out for the read/write head to access it. Summarization exploits the affordances of what we might call the articulation mechanics of a text — how the parts are set up and move relative to each other. Sometimes access and articulation mechanics are legible (lots of headings and sub-headings, “characters” move through a “plot”) and other times they are illegible.

    Parallel structure makes text speed-readable, but people are drawn to landscapes of dots, graphs, and networks precisely because they look illegible — there appears to be more to explore than is immediately useful. Legibility is important for translation; illegibility is important for discovery and the sense that there is more to uncover. A single tag is legible and uninteresting. A collection grouped by a tag is illegible and inviting — the same way someone might rename a playlist even though it’s essentially “rock.” The renaming is the editorial act that makes content discoverable on the discoverer’s terms.
  • Even a few hours of playing — that’s the operative verb — with AI reveals something that fearful critiques from a distance seem to miss. The primary mood of the medium is playfulness. The message of the medium, at its current stage of evolution at least, is let’s play. AI is a ludic technology. I haven’t had this much fun with a new technology since I discovered Legos as a kid. Why, then, do so many people seem entirely oblivious to the ludic aspect of AI? I think there are three reasons: threat to humanness, inexperience with command, and overly realistic expectations.

  • Human users of AI seem to approach it with an unusual seriousness. It is hard to play with a technology when you approach it with the idea that you are dealing with a world-destroying monster that’s out to devour your humanity. It is more natural to approach it in a metaphorical hazmat suit, in a high state of stress and wariness, armed with suitable weapons. And because, by their very nature, AIs mirror the attitudes of the humans shaping their contexts, the playfulness gets repressed in the responses as well.

  • The ludic quality gets missed because people inexperienced with command have no idea how to approach AI as a collaborative dialogue that can be fun, rather than as a contest of wills that must, when done right, feel like Serious Adult Dominance (SAD).

  • It is also activity at a scale most humans are not used to supervising. Even if it responds with an apparently individual voice, an AI at the other end of a chat is orchestrating a set of resources closer to a team of humans than to a single individual. So it is perhaps not surprising that evidence is emerging for a rather curious phenomenon: Somewhat older people are, uncharacteristically, taking to AI better than younger people.

  • One gets the impression that they approach AI with the imperious air of Pharaoh Rameses in The Ten Commandments: “So let it be written; so let it be done.” Supervision of any agent, even a literal slave, does not work like that. AIs are no exception. It’s hard to coax playfulness out of even the most playful entity if you’re trying to dominate it like a Pharaoh.

  • Playful humans, from infancy on, also tend to be sensitive to such misregistrations, but via a different ludic sensitivity. Even a three-year-old understands that the toy car is not the real car, and that a certain amount of imaginative make-believe is necessary to engage with it. You act as if it is a real car, for fun. Somehow, even though AI is also a modeling technology, we don’t observe our usual epistemic hygiene practices. We act shocked, shocked that an AI can solve genius-grade math problems but make trivial common sense errors elsewhere. But we are not shocked when we land in a new country and find that it is not in fact entirely pink and one inch across the way it is on the paper map. We are not shocked to discover that, contra economists’ mathematical models, cows are not in fact spherical and do not live in a vacuum. Our expectations of AI end up mismatched with reality not because it is particularly poor as a modeling medium, but because the misregistrations arise from its toy-like nature.

    Receipts — ephemeral, low-stakes outputs — offer an alternative mode of playfulness with AI. Their transience lowers the mismatch between expectations and reality that makes AI frustrating. But the expectation still has to be set correctly: the receipt is a toy, not a record.