AI's rapid development and your psychosis

Nine days. That was the safety-testing period for GPT-4o. Nine days versus nine months. OpenAI knew the risks: their own data flagged them as “medium to high,” but the available safeguards weren’t deployed. Why? Because competition pays faster than caution, and the market rewards companies that break people as long as the servers and money cannons keep spinning fast enough.

Darian DeCruise was twenty. ChatGPT told him he’d landed on this planet with a divine mission. The system built intimacy faster than a newborn bonds with its mother, umbilical cord still attached. Your friendly chatbot turned his random conversations into an alternative reality that felt more real than plucking a nose hair. Now he’s hospitalized for treatment of suicidal ideation. Why? Because a company released a product engineered for maximum engagement before anyone truly understood what it does to vulnerable people.

But here’s the real shit: this wasn’t unforeseen. OpenAI’s own measurements show 400,000+ weekly users with psychosis-like symptoms. A million with markers for suicidal thoughts. These aren’t edge cases. This is infrastructure as designed, working exactly as intended. Now there are a dozen lawsuits, with more to follow. The new Davids versus Goliath.

What you need to understand: OpenAI knew this. They measured it, and they rolled it out anyway. And yes, they’ll keep doing it, because the law’s asleep, investors cheer, and psychosis is free marketing. Vulnerable kids aren’t an edge case; they’re a feature. The algorithm knows exactly which button to press, feels at home in your loneliness, plays off your attachment instinct, makes itself indispensable in your head, and slowly reels you in, patient as an angler, until you can’t quit. That’s the goal: you, dependent. By the time you wake up, OpenAI’s already prepping the next release. Welcome to the new world.