Wanters Wanted
Why Characters Want and LLMs Don’t
I went to a writing workshop recently. After the obligatory go-around of introductions, during which I embarrassed myself with my lack of self-expression at a character-writing workshop (ironic), we started talking about how you go about writing a character, developing the psychology, and so on. Someone shared a Kurt Vonnegut quote I really liked:
“Make your characters want something right away even if it’s only a glass of water. Characters paralyzed by the meaninglessness of modern life still have to drink water from time to time.”
The quote stuck with me. I kept thinking about it, but in the context of something else that is on my mind a lot: it also says something interesting about LLMs and AI. Proponents of AI are promulgating the idea that AGI is right around the corner, so close that we are effectively staring at the end of human technological history, with utopia or dystopia sure to follow quickly. Meta was until recently offering nine-figure signing deals to AI engineers by way of acquihires. And moreover, people were turning them down, because the perceived upside elsewhere was even higher. You only behave like this if you think it’s the end. If there is going to be one winner, then it will be the end of (business) history, capitalism, and the social order. If the LLMs get just a little bit better, they’ll replace all the workers. But as many are starting to point out, this may not be the case at all. And yet the AI companies circularly invest in each other as a kind of insurance, because none of them are sure the winner will be them.
Back to Kurt’s quote.
Characters want things. Agents don’t. We think the “agents” will do all the work. Maybe they will. Or maybe it’ll be like shoveling. There used to be a need for a lot of shovelers; then we got excavators, and now if you need to move a large quantity of material you bring in an excavator. People still use shovels around the edges, where it’s too awkward to deploy an excavator, but by volume an insignificant amount is dug with shovels. In the same way, maybe people will only think through all the details of something in the edge cases, with a shovel, and apply AI to the things that need to be done in bulk and as a matter of course.
But then what is the value of all this formalism? Doesn’t the cheapness of formalism betray something that was plain to see before and is now obvious: that formal language is in some sense a performance, a signal of work and seriousness that has now been cheapened; a low-entropy map created as a cozy signpost of thought? If the relevant point of something can be stated bluntly in a bullet point, and we all feel the need to sanitize what we want to say with a layer that renders the language less objectionable, less direct, more conformant, what is the value in it? What information is gained? Why should we write like this, in this anodyne, Wikipedia register?
More importantly, even if a model tells you it wants something, it doesn’t actually sit around wanting things, because it is only “on” when you have asked it for something. This is partly why some researchers are confused into believing LLMs might be sentient: they take seriously Turing’s claim that if a machine can play the language game, then in some sense we just have to take its word for it that it’s intelligent. Modern LLMs pass the Turing test insidiously and naively, but we know they make predictable and weird mistakes (e.g., miscounting letters in sentences, or all picking the same “random” number because of the human bias toward certain numbers in the training set). LLMs can follow the fuzzy rules of grammar and represent an amazingly dense compression of human thought, but it seems reasonable that they are not generating novel thought. And this seems borne out by the limits that have been proposed for training on synthetic content (that is, using outputs from LLMs as training data for LLMs). These limits suggest that what LLMs probabilistically spit out is not an entropy source and therefore does not represent novel information. That is a major concern for a corpus of text on the internet that is increasingly composed of synthetic content.
An LLM has no core agency, no soul if you will, and it will happily work on trivia ad infinitum. That is, it has no sense of friction, and therefore will not escape into a meta-cognitive loop to find a better way (i.e., be lazy).1 This means the current state of AI technology, namely LLMs, does not give us characters that will go out into the world and act. They can only ever be responsive. At best they can be symbiotic with the creation of real value, directly augmenting the value created and captured by the source creator: the artist, the individual contributor, the entrepreneur. But since LLMs can’t want anything, only be tasked, that just moves everyone up the value chain. Is it disruptive? Yes. But for someone with wide-ranging curiosity who prefers to do non-repetitive, creative things, it might just be the best time in history to be alive, since that may be the only kind of knowledge work left.
Either way, we’ll find out.
1. There is a saying that “Mathematicians are lazy; they look for the solution that requires the least work.”


Didn't expect this take on the subject, but it's brilliant how you tied Vonnegut's quote to the current AI landscape. You're spot on about the 'end of history' narrative and those wild acquisition bids. I'm curious to hear more about what alternative scenarios you see playing out, beyond the winner-takes-all dystopia.
I’m so glad I read this. Thank you for writing it, because I had a weirdly similar experience this week. I’m also kinda struggling with direction.
I was driving to work listening to an audiobook about being your authentic self and how that’s one of the best gifts you can give to the world, or whatever. And I kept thinking, okay, but how do you know? “How am I not myself?”
Then I got to work, and we had a new person join, and everyone was going around introducing themselves. People were sharing where they came from and fun facts, and I just… didn’t want to talk about my past or give some polished story. Nothing felt worth mentioning in that moment. So I just said, “I like to draw.” People laughed at how short it was and moved on, but honestly it felt weird.
So reading your blog really hit home. I appreciate this so much.