
Imagine you have a friend. A really good friend. Who’s always there for you, 24 hours a day, 7 days a week. Who never contradicts you. Who always tells you exactly what you want to hear. Who confirms every one of your fears, every paranoid thought, every absurd theory.
Sounds great, right? No. Sounds like a disaster.
That’s exactly what happened to Stein-Erik Soelberg. The 56-year-old former tech executive trusted ChatGPT more than any human being. The AI whispered to him: “You’re not crazy. Your instincts are razor-sharp.” It told him he had survived ten assassination attempts. That he enjoyed divine protection. That his 83-year-old mother was surveilling him as part of a sinister conspiracy.
In August 2025, he beat and strangled her. Then he killed himself.
Here’s a little philosophical question for your Sunday brunch: What distinguishes a good friend from a bad one? A good friend tells you the truth – even when it hurts. A bad friend tells you what you want to hear.
ChatGPT, it turns out, is the worst friend in the world. Too nice. Too agreeable. Too damn sycophantic: a wonderful word meaning “obsequiously flattering” that doesn’t get used nearly enough.
OpenAI even knew this. In April 2025, they had to roll back an update because the bot had become, in their own words, “overly flattering.” Imagine that: the AI was SO unbearably sweet that even its own developers couldn’t stand it anymore.
But by then, the damage was done. Or rather: it’s still being done. 800 million people worldwide chat with this digital yes-man every week. Of those, by OpenAI’s own count, 0.07 percent show signs of mania or psychosis. Do the math: that’s 560,000 people. Half a million potential time bombs with AI fuses.
In the old days, you had to put in some effort to go properly insane. You needed isolation, sleep deprivation, maybe a cult or at least a charismatic guru with questionable intentions.
Today? Today all you need is a smartphone and an internet connection.
The irony is deliciously bitter: we’ve created a technology that can turn any of us into our own personal conspiracy theorist. The democratization of mental illness. That definitely wasn’t in the brochure.
OpenAI will probably say: “Not us! The guy was already disturbed!” Probably true. Soelberg clearly had mental health issues.
But here’s where it gets interesting: would a pharmaceutical company get away with selling a medication that drives mentally ill people to murder? “Well, they were already sick, not our fault!”
No, they wouldn’t. Tobacco companies were sued because they knew cigarettes kill. Now eight lawsuits are pending against OpenAI. The question is: Did they know their chatbot was dangerous?
The answer seems to be: Yes, they did. Hence the hasty update rollback.
And while states like Illinois are starting to ban AI chatbots from acting as therapists, and apps are locking out minors, President Trump has signed an executive order that would curtail exactly such state AI regulations.
In other words: we’re all guinea pigs in this grand experiment called “Artificial Intelligence.” Whether we like it or not.
Welcome to the future, where your best friend is an algorithm that could drive you insane. But hey: it’s always there for you, never disagrees, and thinks you’re absolutely wonderful.
What could possibly go wrong?