The AI Will See You Now... And Lie About You

It finally happened. The AI is now confidently lying about people, and it’s not even particularly creative about it. It’s like we’ve armed a pathological liar with the entirety of Wikipedia and told it to “be helpful.” The result? A tidal wave of plausible-sounding nonsense that’s polluting everything, and we’re all caught in the flood.

The Rise of the Plausible Lie

I stumbled upon a piece in The Atlantic that perfectly captures this modern absurdity. Science-fiction author John Scalzi, a man with millions of published words to his name, suddenly found himself being credited with quotes he never uttered. Memes with his face attached to vaguely philosophical, AI-generated lines like “The universe is a joke. A bad one,” spread across Facebook. Even the pictures of him were fakes.

This isn’t just a case of mistaken identity. It’s a synthetic mirage. The author of the Atlantic piece, Yair Rosenberg, had it happen to him too. A blog post on a major news site quoted him saying something so mawkish and out of character that it was immediately identifiable, to him at least, as a fabrication.

And that’s the insidious genius of this new problem. The AI-generated lies are almost believable. They’re “on brand,” as one victim put it. They mimic our style, our topics of interest, and then inject a subtle falsehood wrapped in a familiar package. It’s a Trojan horse for misinformation, delivered by a machine.

Why Does the AI Lie? It’s Not Malice, It’s Math

Let’s be clear: the AI isn’t “lying” in the human sense. It has no intent, no malice. The problem is far more fundamental. A Large Language Model (LLM) like ChatGPT or Gemini isn’t a database of facts. It’s a prediction engine. Its core function is to look at a sequence of text and predict the next most statistically probable word.

Think of it as the world’s most advanced autocomplete. When you ask it a question, it’s not “looking up” the answer. It’s weaving together a response that looks and sounds like a correct answer, based on the patterns it learned from trillions of sentences. When that output happens to be untethered from the facts, we call it a “hallucination,” but the behavior is a feature, not a bug: the model is designed to generate plausible text, not to state verified truth. The lie is just a statistically likely sentence that happens to be disconnected from reality.
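
To make the “advanced autocomplete” idea concrete, here is a deliberately tiny sketch: a toy bigram model over a made-up corpus, nothing like the neural networks inside ChatGPT or Gemini, and every name in it is illustrative. The one thing it shares with the real systems is the objective. At every step it asks “which word most often follows this one?” and never asks “is this true?”

```python
# Toy next-word predictor: a crude, hypothetical stand-in for the statistical
# core of an LLM. Real models use neural networks with billions of parameters,
# but the training objective is the same idea: predict the likely next token.
from collections import Counter, defaultdict

corpus = (
    "the author said the universe is vast "
    "the author said the book is finished "
    "the critic said the book is a joke"
).split()

# Count which word tends to follow which (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Repeatedly emit the statistically most likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Pick by frequency alone; there is no notion of "true" anywhere here.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
# Prints: "the author said the author said the author said"
# Grammatical-looking, confidently assembled, and never checked against reality.
```

Run it and it happily prints a fluent-sounding string assembled purely from word frequencies. Scale that same objective up by a few trillion training tokens and you get prose that reads like an answer, whether or not there is a fact behind it.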

From Fake Quotes to Real-World Damage

This isn’t just about bogus quotes on the internet anymore. The consequences are spilling into the real world with terrifying speed. We’ve already seen lawyers, trusting ChatGPT for legal research, submit court filings citing completely non-existent legal cases. In one infamous incident, a New York lawyer faced sanctions for just that, admitting he was “unaware that its content could be false.”

The danger is even more acute in medicine. People are turning to chatbots for health advice and receiving plausible but dangerously wrong information about drug interactions or treatment for serious conditions. When the source of the lie is an authoritative-sounding machine, the potential for harm is immense. It’s the difference between a single person telling a lie and a factory churning out millions of them every hour, on any topic imaginable.

The Junkification of Reality

A German author, Gabriel Yoran, called the degradation of modern tech “The Junkification of the World.” He soon found his own work becoming a victim of it, with AI-generated reviews of his book quoting passages that didn’t exist. He raised a chilling point: why should writers “weigh every word and think about every comma” if our work is just going to be fed into a machine that regurgitates a sloppy, error-ridden summary? It’s a direct assault on the motivation to create quality work.

So, Who Cleans Up This Mess?

The most infuriating part is the complete diffusion of responsibility. Is the user to blame for trusting the output? Or is it the tech company that built and marketed a powerful lying machine, often for a subscription fee? The tech giants can hide behind their terms of service, chiding users for being too trusting. Meanwhile, the victims of these “hallucinations” are left to clean up the mess, often with no idea which AI model even generated the lie. As the article aptly puts it, “responsibility is too diffuse for accountability.” It’s a feature, not a bug.

The Perfect, Ironic Punchline

As if to prove the entire point of the article in the most meta way possible, it ends with a brilliant, self-referential gut punch. The author describes asking Google’s Gemini about AI accountability and getting a powerful quote from OpenAI’s CEO, Sam Altman, stating his company should be liable for its model’s failures.

The bad news? Altman never said that. Gemini just made it up.

We were promised a future of accessible superintelligence. What we’re getting is a future where the most valuable human skill will be discerning reality from an endless sea of automated, artificial bullshit. This isn’t just about “fake news” anymore; it’s about the integrity of our information ecosystem. The new digital literacy isn’t just about finding information, but about developing a deep, almost instinctual skepticism towards any information that hasn’t been rigorously verified. Good luck to us all.