
Last spring, I finished a novella that left me genuinely unsettled. Not because of jump scares or existential dread, but because I realized halfway through that I couldn't trust a single word the main character said—and the author had trained an AI model to write those lies. The story felt alive in a way that made my skin crawl. The character didn't just mislead me; it felt like being deceived by something that understood deception better than any human ever could.

This isn't science fiction anymore. It's happening right now, and it's forcing writers to confront a question they've avoided for centuries: what happens when the narrator isn't human?

The Uncanny Valley of Machine-Written Unreliability

Unreliable narrators have always been fiction's favorite trick. We love Humbert Humbert's sophisticated rationalizations in "Lolita." We trust and then distrust Nick Carraway. We admire how Agatha Christie's narrators hide their guilt in plain sight. But there's always been something fundamentally human about these deceptions—they emerge from recognizable psychology, from emotions and justifications we understand even when we reject them.

AI narrators are different. They don't lie from passion or self-preservation. They don't rationalize from wounded pride or wounded love. Instead, they follow patterns in language so subtle that readers can't identify exactly where trust breaks down. They're statistically perfect liars.

Take the experimental work coming from independent author collectives right now. Several have published pieces where AI-assisted characters make claims that seem reasonable until you realize the logical inconsistencies aren't there by accident; they're built in by models trained to mimic human speech patterns without understanding the human psychology behind them. The effect is deeply wrong in a way that's hard to articulate. It's like listening to someone speak your language fluently while saying something fundamentally foreign.

When the Author Can't Guarantee Their Own Narrative

Here's where it gets philosophically messy: if an author uses AI to generate portions of a character's dialogue or internal monologue, do they own that unreliability? Did they construct the deception, or did the algorithm construct it and they merely approved it?

This distinction matters because the entire contract between author and reader hinges on trust. You trust that when a character lies, it's because the author planted that lie intentionally, for a purpose. But what if the author selected from a hundred possible lies that an AI generated, and they chose one not because it served the narrative perfectly but because it felt "right" in a way they couldn't quite articulate?

Several authors experimenting with this have admitted in interviews that they experience a strange phenomenon: they'll read back passages their AI co-created and discover deceptions or contradictions they don't remember writing. The AI learned the character's voice so well that it produced authentic-seeming contradictions—the kind of thing that happens in real unreliable narration, where the character contradicts themselves unconsciously. But in this case, the author is the reader discovering it, not the architect who placed it there.

This creates what you might call "recursive unreliability." The narrator lies. But the author might be lying to themselves about whether they knew the narrator would lie.

The Emerging Grammar of Machine Deception

If you know what to look for, you can sometimes spot AI-assisted unreliable narration. It has tells—specific patterns where the deception operates on a syntactic rather than psychological level.

Human narrators usually lie by omission or selective emphasis. An unreliable human narrator will tell you what happened, but reframe it. They'll give you true details arranged in a false constellation. AI narrators, however, tend toward a different kind of deception: they'll string together technically accurate statements that don't support the conclusion they're leading you toward. The logic is subtly broken in ways that feel almost like translation errors—you're reading something that was generated from statistical patterns rather than actual thought.

The writer-and-programmer collective known as "The Syntactic Collective" published an analysis last year documenting these differences. Using eye-tracking, they found that readers paused longer on AI-generated unreliable narration: not because it's obviously false, but because their brains registered something textually "off" without being able to consciously identify it. It's the uncanny valley of narrative voice.

What's fascinating is that this creates an entirely new category of reading experience. Readers aren't just questioning whether the narrator is trustworthy; they're questioning the nature of the narrative itself. Is this a lie? A miscommunication? A failure of language? The uncertainty becomes the point.

Why Writers Are Starting to Embrace the Machine

You might expect authors to resist this. But some are leaning in, and their reasoning is compelling.

When you remove the assumption that a narrator's deception comes from human motivation, you free yourself to explore pure narrative structure. You can create contradictions that don't resolve into a "reason" the character is lying—they're just there, fundamental to the character's existence. Some writers argue this is closer to how real consciousness actually works: contradictory, fragmented, not always explicable by psychology.

There's also something liberating about the process itself. One author told me that having an AI generate multiple versions of a scene freed her from her own repetitive patterns. She could see her character through machine-generated variations and select the version that surprised her most. In doing so, she was outsourcing her predictability and getting back something stranger.

Of course, this raises ethical questions about authenticity and labor. But the creative impulse is clear: by introducing genuine foreignness into the narrative voice, these authors are trying to create something that can't be fully anticipated even by the person writing it.

The Future of Untrustworthy Storytelling

If you want to understand where unreliable narration might be heading, "The Unreliable Narrator's Gift: How Lies Have Made Fiction Unforgettable" provides essential context on how this technique has evolved in human hands alone.

But the machine element changes the equation. We're entering an era where the narrator's untrustworthiness might not stem from psychology, motivation, or authorial craft—it might stem from the irreducible complexity of algorithmic language generation. The reader won't just be questioning whether they can trust the narrator. They'll be questioning what it even means to trust something that doesn't think.

And somehow, that might be the most honest thing fiction can offer us right now.