Agents All the Way Down
On AI-generated content, the death of the epistemic signal, and what we should probably build instead.
Here's something I keep noticing. An AI writes an article. Another AI summarises it. A third AI uses the summary to generate an answer for a user who didn't read anything. The user asks a fourth AI to reply on their behalf. And somewhere in that chain, the original human who actually thought about something just... fell out. Nobody noticed. The loop still closed.
I don't think most people have sat with how strange this is. We built all of this communication infrastructure, spent decades arguing about what the internet would do to public discourse, and the thing we may actually be building toward is a kind of content perpetual motion machine that doesn't require humans in the loop at all. You can say that's fine, that information still flows, but I think that's wrong, and I want to try to explain why.
You can tell. You really can.
The New York Times did a test a while back where readers were shown two versions of the same passage, one written by a human and one by an AI, and asked to identify which was which. Most people got it right. And here's the interesting part: most of them couldn't say exactly how they knew.
What they were picking up on, I think, is something like the trace of a mind that changed its own direction midway through a thought. Human prose has a specific texture. The writer commits to an analogy that almost doesn't work and decides to run with it anyway. They hedge something three paragraphs after they said it too confidently. They use the same word twice in a section because they couldn't find a better one and that bothered them and you can sort of feel it. None of this is bad writing. It is the actual record of thinking happening in real time, and readers have spent their whole lives learning to recognise it.
AI writing doesn't have any of that. It has the shape of thinking, the grammatical skeleton of an argument, but none of the resistance you feel when a person actually worked something out. It's very smooth. Suspiciously smooth. And we notice, even when we've stopped consciously looking.
"It has the shape of thinking, the grammatical skeleton of an argument, but none of the resistance you feel when a person actually worked something out."
The more worrying question is whether we'll keep noticing as the baseline shifts. As the ratio of AI content to human content in our environment climbs, does our ability to detect the human signal degrade? I don't know the answer to that, but I think we should want to find out.
What using AI to write actually communicates
Before you read the first sentence of anything someone wrote, you've already received a message: this person thought this was worth their time. That signal matters. It's maybe the most important thing a piece of writing does, and we don't talk about it because it's not in the text.
When you use AI to write something you could have written yourself, you cancel that signal. You're saying, before the reader reads a word, that the idea wasn't important enough to think through carefully. Maybe you didn't mean to say that. But you said it. And if enough people say it often enough, the whole category of "written opinion" stops being legible as evidence of genuine belief and becomes just another content format. Which I think is where we're going.
Tristan Harris has spent years arguing that the attention economy's core problem is that it pits the entire resources of a billion-dollar company against one person trying to decide whether to put their phone down. The AI content version of this is a little different but structurally related. We're building a world where the effort required to produce a persuasive-sounding text approaches zero, and the effort required for a reader to assess whether anyone actually believes it approaches infinity. That asymmetry should bother us more than it does.
The incentive structure nobody wants to name
Nobody is making a decision to degrade the epistemic commons. That's worth saying clearly, because the instinct is to find a villain and there isn't one, really. The mechanism is just that if the marginal cost of generating content is zero and the marginal return on human effort over AI generation is near zero, the rational actor generates. The market doesn't distinguish. The algorithm doesn't distinguish. And so what gets selected for is volume, not the thing that makes writing worth reading: evidence that someone actually paid attention.
Sam Harris has a line about how meditators sometimes talk about the "quality of attention" as if attention were a thing with properties beyond mere presence or absence. I think that's right and I think it applies here. There's a difference between content that was produced by a process that involved sustained human attention and content that wasn't, and that difference is real even when the surface outputs look identical. The question is whether we can preserve the conditions under which the first kind gets made.
So what would you actually build?
I've been thinking for a while about a media format that takes the Hegelian dialectic seriously as a reading experience. Not as a philosophy seminar, just as a structure. Thesis, antithesis, synthesis. You pick a contested topic. You find someone who genuinely holds the strongest version of one position and someone who genuinely holds the strongest version of the opposing one, and you let both make their case at full force, not a journalist's summary of what each side thinks but the actual argument, from the people who believe it. Then a synthesis that takes both seriously and moves somewhere neither could get to alone.
And then you give the reader a way in. Recommended books that built each of those positions. A course if they want to go deeper. The article as an entry point rather than a destination.
This isn't "both sides" journalism, which is mostly just false balance in a suit. The point isn't symmetry. It's that real understanding almost always requires sitting inside a view you don't hold and feeling why it seems true to the person who holds it, and our current media environment is catastrophically bad at producing that experience. Everything is optimised to confirm you, not to complicate you.
The reason I think this matters more now than it did five years ago is that AI is very good at producing the averaged consensus view on any question. It will give you the smoothed, hedged, inoffensive synthesis that nobody actually believes. What it genuinely cannot do is hold a real position with the weight of a mind that developed it over years and is willing to defend it and be wrong in public. That remains human. And we should probably build media around what remains human, rather than around what AI can simulate cheaply.
Attention is the last scarce thing
I keep coming back to this. In a world where content is free and infinite, what becomes valuable is the thing content used to signal: that a specific person paid the cost of actually thinking about something. Not processing it. Not summarising it. Thinking about it. Changing their mind partway through. Deciding to publish anyway knowing they might be wrong.
The platforms that matter in ten years won't be the ones with the most content. They'll be the ones where you can tell, at a glance, that humans showed up. That the friction you felt while reading was a real person's mind working through something, not a language model producing the shape of one.
Everything else is just agents talking to agents. And I genuinely don't know who that's for.
If you disagree with the dialectic platform idea specifically, I'd like to hear it. The argument isn't settled in my own head.