“ChatGPT Psychosis”

June 28th, 2025

If you haven’t read Snow Crash by Neal Stephenson recently, now might be the time to dust that one off.

—Spoiler Warning—

Recall that Snow Crash is a mind virus that is spread both online and as an addictive drug through Reverend Wayne’s Pearly Gates franchises. The “faithful” are addicted, brainwashed, and afflicted with glossolalia (speaking in tongues). They are disconnected from reality.

Snow Crash exploits structures in the human brain that respond to specific, ancient linguistic patterns.

Now, out here in the real world, we read that ChatGPT is, apparently, triggering severe mental illness in some users.

I know, you’re rolling your eyes. Me too. But I went from rolling my eyes to *bug eyes* when I read this part:

“I was actively trying to speak backwards through time. If that doesn’t make sense, don’t worry. It doesn’t make sense to me either. But I remember trying to learn how to speak to this police officer backwards through time.”

Was he trying to speak “backwards through time…”

In Sumerian? *wink*

Via: Futurism:

Dr. Joseph Pierre, a psychiatrist at the University of California, San Francisco who specializes in psychosis, told us that he’s seen similar cases in his clinical practice.

After reviewing details of these cases and conversations between people in this story and ChatGPT, he agreed that what they were going through — even those with no history of serious mental illness — indeed appeared to be a form of delusional psychosis.

“I think it is an accurate term,” said Pierre. “And I would specifically emphasize the delusional part.”

At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear. When people start to converse with it about topics like mysticism, conspiracy, or theories about reality, it often seems to lead them down an increasingly isolated and unbalanced rabbit hole that makes them feel special and powerful — and which can easily end in disaster.

“What I think is so fascinating about this is how willing people are to put their trust in these chatbots in a way that they probably, or arguably, wouldn’t with a human being,” Pierre said. “And yet, there’s something about these things — it has this sort of mythology that they’re reliable and better than talking to people. And I think that’s where part of the danger is: how much faith we put into these machines.”

One Response to “ChatGPT Psychosis”

  1. Snowman says:

    “it has this sort of mythology that they’re reliable and better than talking to people.”

    Reminds me of Martin Armstrong and his Socrates computer. When it has drawn a different conclusion from humans about anything, how often has it been correct? We only hear about its major successes.

    Has anyone verified that it even exists?

    It would be handy to have someone/something else to blame if one of your predictions turns out to be very wrong, or so surprisingly right that people might otherwise question who leaked secret info to you. The exact dates it correctly predicts, for instance, beg to be explained, and averaging historical correlations is insufficient to do that.
