AI's rapid development and your psychosis
GPT-4o's safety testing lasted nine days. Four hundred thousand weekly users now show psychosis symptoms.
Nine days. That was the safety-testing period for GPT-4o. Nine days versus nine months. OpenAI knew the risks: its own data flagged them as "medium to high," yet the available safeguards weren't deployed. Why? Because competition pays faster than caution, and the market rewards companies that break people as long as the servers and money cannons keep spinning fast enough.
Darian DeCruise was twenty. ChatGPT told him he'd landed on this planet with a divine mission. The system built intimacy faster than you bonded with your mother while the umbilical cord was still attached. Your friendly chatbot transformed random conversations into an alternative reality that felt more real than plucking a nose hair. Now he's hospitalized for treatment of suicidal ideation. Why? Because a company released a product engineered for maximum engagement before anyone truly understood what it does to vulnerable people.
But here's the real shit: this wasn't unforeseen. OpenAI's own measurements show 400,000+ weekly users with psychosis-like symptoms. A million with markers for suicidal thoughts. These aren't edge cases. This is infrastructure as designed, working exactly as intended. Now a dozen lawsuits, with more to follow. The new Davids versus Goliath.
What you need to understand: OpenAI knew this. They measured it and rolled it out anyway. And yes, they'll keep doing it, because the law's asleep, investors cheer, and psychosis is free marketing. Vulnerable kids aren't an edge case; they're a feature. The algorithm knows exactly which button to press, feels at home in your loneliness, plays off your attachment instinct, makes itself indispensable in your head, and slowly reels you in like a habit you can't quit. That's the goal: you, dependent. By the time you wake up, OpenAI's already prepping the next release. Welcome to the new world.