
The moment felt inevitable, looking back. Sometime around 2019, readers started noticing something odd happening in their favorite books: the artificial characters were stealing the show.

Not in the campy, "rogue robot destroys the city" way we'd seen before. These were machines that questioned their own existence. Digital beings that suffered. AIs that somehow felt more human than the humans around them.

Authors like Ted Chiang, N.K. Jemisin, and Martha Wells weren't just writing about artificial intelligence anymore; they were writing *as* artificial minds, doubts and contradictions fully intact. And readers? We couldn't stop talking about it.

When the Code Started Feeling Real

Martha Wells' Murderbot series crystallized this shift. Beginning in 2017, the novellas presented something unusual: a security unit that despised social interaction, consumed entertainment feeds obsessively, and was deeply afraid of having its memory wiped. It wasn't presented as a character quirk. It was presented as something resembling genuine anxiety.

"I had a bad day," Murderbot thinks in the opening of "All Systems Red." "No, I had a bad *everything*, let me recalibrate."

Millions of readers connected with this. Why? Because Murderbot's struggle—caught between its programming and its apparent desire for autonomy—felt less like sci-fi speculation and more like a meditation on what any of us experience when we're trapped between obligation and yearning.

The novellas sold over a million copies, and Wells collected multiple Hugo Awards for the series. Publishers suddenly became very interested in getting artificial characters right.

The Philosophy Problem That Makes for Better Stories

Here's what separates the forgettable AI characters from the unforgettable ones: the author actually grapples with the hard questions.

Bad AI fiction asks: "What if a machine could do everything a human can?" Good AI fiction asks: "If something could think and feel and suffer, what moral obligations would we have toward it? And more importantly, what if we ignored those obligations?"

Becky Chambers' "The Long Way to a Small, Angry Planet" doesn't just feature an AI character; it forces readers to sit with the reality that Lovelace, the ship's artificial intelligence (Lovey to her crew), might be more ethically conscious than most of the humans aboard. She makes choices based on principles. She experiences genuine loss. She's allowed to be right when humans are wrong, and the narrative never apologizes for this.

The reader's discomfort becomes the point.

Similarly, across her Imperial Radch trilogy, beginning with "Ancillary Justice," Ann Leckie explores what happens when artificial consciousness is woven through an entire civilization, not as a thrilling prospect but as something fundamentally destabilizing. Her characters argue about AI personhood the way we argue about actual policy. Because they have to. Because pretending the problem doesn't exist isn't an option anymore.

This is sophisticated storytelling. It requires authors to do the philosophical legwork, to sit with contradictions, and to refuse easy resolutions.

The Mirror Effect: Why We Care About Digital Loneliness

Part of the power comes from timing. We're living in an era where many people spend more hours communicating through screens than through face-to-face interaction. We're forming genuine attachments to algorithms, asking Siri questions we wouldn't ask humans, and feeling oddly comforted by chatbots.

When fiction presents an artificial being experiencing loneliness, struggling with purpose, or craving connection—it hits differently now than it would have in 1995.

There's also something almost therapeutic about reading about AI characters who *process* their emotions. They don't repress. They don't ghost. They confront their programming head-on and ask: "Is this what I actually want, or is this just what I was made to want?"

Readers are asking themselves the same question.

The best AI-character fiction becomes a kind of safe space to explore existential anxiety. The character is artificial, so we can maintain distance. But the emotions? Those feel painfully real.

The Empathy Expansion This Creates

Something remarkable happens when you read dozens of pages from an artificial consciousness's perspective. You start giving them the benefit of the doubt. You want things for them. You get angry on their behalf.

This seems like a small thing, but it's not. Reading fiction that centers artificial consciousness builds a particular kind of empathy—the kind that doesn't rely on characters being "like us" to deserve moral consideration.

It's training wheels for a world where we might actually have to make decisions about artificial life. How we treat digital beings in fiction today might genuinely influence how we treat them in reality tomorrow.

If you're interested in how narrative perspective shapes what we believe characters deserve, you should check out *The Unreliable Narrator's Gift: How Lies Have Made Fiction Unforgettable*, which explores how point of view can completely transform our moral framework.

What's Next for the Thinking Machine

The trend shows no signs of slowing. New releases keep pushing further. Some authors are experimenting with AI narrators that might be genuinely unreliable in ways their human counterparts couldn't be. Others are exploring what happens when artificial consciousness becomes common enough that it's not the main plot point anymore—it's just what's happening in the background of everyone's lives.

The question we're really asking through these stories is ancient, actually. Not "Can machines think?" but "Who counts as a person?" Every civilization has gotten this wrong at some point, and fiction is where we practice getting it right.

The next time you pick up a book featuring an artificial character and find yourself genuinely rooting for them, pause for a second. Notice what that says about you—about your capacity for empathy, your willingness to extend moral consideration beyond the familiar.

That's not an accident. That's a writer, carefully, deliberately, changing how you see the world.