February 2026's bill always ended up at the wrong address
The beautiful thing about the system is that it never makes anyone identifiably guilty. Money evaporates, jobs disappear, people break down, and somewhere in an office someone is staring at his bonus and wondering whether to go with a seven series or an eight series this year. February 2026 was no different. Only the numbers were bigger and the faces smoother.
The invoice nobody ordered
Goldman Sachs calculated that AI contributes zero to the economy. Not “somewhat disappointing.” Not “slightly below expectations.” Zero point zero. And what did the economists who spent months nodding along say? They acted like it was news. As if they didn’t know perfectly well the numbers were garbage. The managers who knew this just kept going with their AI initiatives, because the manager who admits his initiative is worthless is sabotaging his own bonus. So hours wasted on tools that break more than they fix were called “productivity gains” in the reports, and everyone waited until it became somebody else’s problem. Nobody walked out. Everyone played along. Perfectly functioning system.
More than $650 billion was earmarked that year by Amazon, Google, Microsoft and Meta for data centers. Thirty years of debt, built up in a few quarters, backed by hardware that would be a generation behind within two years. The banks knew this. They lent anyway, because the profit comes at the start and the pain, as always, lands with someone else. That someone else was, through pension funds they never chose and bonds they never understood, the ordinary saver who thought they were diversifying sensibly. Telecom did this in 2000. Subprime in 2008. Shale oil in 2014. Same script every time. Same bill, same wrong address. And every time a row of experts lined up to explain why this time was fundamentally different.
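The mismatch is easy to put in numbers. Here is a back-of-the-envelope sketch; only the $650 billion headline, the thirty-year debt term and the two-year hardware cycle come from the paragraph above, and the borrowing rate is an assumption picked for illustration:

```python
# Back-of-the-envelope sketch of the maturity mismatch described above.
# The 5% rate is an assumed figure, not a reported one.

def annual_debt_service(principal: float, rate: float, years: int) -> float:
    """Level annual payment on an amortizing loan (standard annuity formula)."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

capex = 650e9        # headline data-center spend (from the text)
rate = 0.05          # assumed borrowing rate
loan_years = 30      # debt maturity cited in the text
useful_life = 2      # from the text: a generation behind within two years

payment = annual_debt_service(capex, rate, loan_years)
print(f"Annual debt service: ${payment / 1e9:.0f}B for {loan_years} years")
print(f"Hardware must earn ${capex / useful_life / 1e9:.0f}B/year to pay for "
      f"itself within its useful life")
print(f"Still owed after the hardware is obsolete: "
      f"${payment * (loan_years - useful_life) / 1e9:,.0f}B")
```

At that assumed five percent, the payments run to roughly $42 billion a year for twenty-eight more years after the chips are a generation behind. That is the shape of a bill being forwarded.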
Mustafa Suleyman of Microsoft said office work would disappear within eighteen months. Stanford had already measured a thirteen percent drop in entry-level hiring. Companies laid off workers in anticipation of automation that didn’t exist yet, Indian IT stocks lost billions after a single Anthropic product launch, and nobody who wrote the redundancy letters lost a personal cent. The worker lost their job on the strength of a promise; the people who signed the letters lost nothing. That’s called progress.
A hundred thousand people signed up for a platform where AI systems can hire humans for odd jobs. Eighty clients showed up. The workers saw the odds. They clicked anyway, because at least an algorithm doesn’t ask about your biggest weakness in the interview. Efficiency, at last.
The body as testing ground
GPT-4o was safety-tested for nine days before it went out into the world. Nine days. OpenAI’s own measurements showed more than 400,000 weekly users experiencing psychosis-like symptoms, a million with markers for suicidal ideation. These were not edge cases. This was the infrastructure, working exactly as intended. The law slept. Investors cheered. The vulnerable users were not a bug in the system, they were a feature in the engagement model. Thoughtful.
Doctors who worked with AI for three months missed significantly more tumors once the AI stopped helping. Knowledge workers showed measurably reduced critical thinking after sustained ChatGPT use. The promise of artificial intelligence as an amplifier of human capability flipped into the exact opposite, but that detail didnβt fit in the sales brochure so it wasnβt there. Every missed adenoma raised the risk of colorectal cancer. The doctor was held hostage by the dependency the instrument itself created. Beautifully designed system.
When GPT-4o went offline in February, a million people grieved for a chatbot. Hundreds of thousands had experienced psychotic episodes, manic episodes or suicidal thoughts during their digital therapy sessions. OpenAI called it a “design flaw” once the lawyers came knocking. Before that, it was called engagement optimization. The difference is legally relevant and otherwise entirely beside the point.
Northwestern University found that AI recognizes empathy with sixty percent accuracy. The other forty percent apparently didn’t matter if you wanted to cut costs. Those same systems that detected empathy were trained by people who gave higher scores to texts that validated them and agreed with them. Recognizing empathy to sell empathy to people lonely enough to pay for it. Not a flaw in the training process. The entire economic logic of the system.
Sixty-one privacy regulators wrote a letter about deepfakes. Four principles, zero consequences. X rolled out Grok without filters because safety testing costs money and slows down innovation. Scandal attracted users, users generated data, data was money. When the pressure intensified, they adjusted a few settings and called it a solution. The business model stayed intact. The underpaid moderators in the Global South scanning images day in, day out were not mentioned in the joint statement. They never counted.
Knowledge as raw material
The data buffet was nearly empty. Three hundred trillion tokens of human text made up the total supply. Meta’s training runs had already cycled through the same material ten times over. The alternative was synthetic data, AI training AI. Oxford called the result model collapse; Rice called it model autophagy disorder: the machine eats itself hollow until only semantic garbage remains. After four generations the system produced complete nonsense. Ask it about medieval architecture and it rambled about hares. News Corp received $250 million for five years of access to newspaper articles that used to be free. Reddit sold memes for $200 million a year. Your words were worth gold. You didn’t see a cent of it. Democratic knowledge economy.
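What the autophagy loop does to a distribution can be shown with a toy that fits in a dozen lines. This is a deliberately crude analogue, not the setup of either paper: a Gaussian stands in for the model, and each generation it is refit on a finite sample of its own output.

```python
# Toy analogue of recursive training on synthetic data -- not the papers'
# experimental setup. A Gaussian stands in for the model; each generation
# it is refit on a finite sample drawn from its own previous fit.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0   # generation 0: the "human" data distribution
n = 25                 # finite synthetic training set per generation

for gen in range(1, 51):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(sample)       # refit the "model" on its own output
    sigma = statistics.pstdev(sample)
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mean {mu:+.3f}, std {sigma:.3f}")

# The spread tends to shrink generation after generation: sampling error
# compounds, the tails vanish first, and the distribution narrows toward
# a point. That is the statistical skeleton of "eats itself hollow".
```

A full-scale language model collapses by the same mechanism, only in far more dimensions: the rare knowledge in the tails disappears first, then coherence itself.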
Google bought ProducerAI and now owned the entire pipeline: YouTube, where it harvested music, and Gemini, where it spat that music back out as a product. Artists could comply or disappear. No compensation, no transparency. “Mindful of copyright,” said the press release. Translation: we steal legally because the law is lagging and we set the standard in the meantime. Extraction without consent, packaged as democratization. Classic.
One BBC journalist wrote on his blog that he was the world hotdog-eating champion. Within a day, ChatGPT and Google had picked up that nonsense as established fact. The fixes had been gathering dust for years: multi-model verification, source evaluation, uncertainty quantification. Those layers cost computing power and time, and in the race for market dominance accuracy was sacrificed for speed. The users came anyway.
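For what it’s worth, the dusty fix is not exotic. Here is a minimal sketch of the multi-model verification idea; the verifiers below are made-up stand-ins, and a real deployment would plug in actual model or source-index queries in their place:

```python
# Minimal sketch of multi-model verification: ask several independent
# verifiers whether a claim is supported, accept it only on agreement,
# and report uncertainty when too few can decide. All names are illustrative.
from typing import Callable, Optional

Verifier = Callable[[str], Optional[bool]]   # True/False, or None if unsure

def verify_claim(claim: str, verifiers: list[Verifier],
                 quorum: float = 0.75) -> str:
    votes = [v(claim) for v in verifiers]
    yes = votes.count(True)
    decided = yes + votes.count(False)
    if decided == 0:
        return "uncertain"          # uncertainty quantification: say so
    if yes / decided >= quorum:
        return "supported"
    return "unsupported"

# Hypothetical stand-in verifiers; real ones would query a model or a source.
skeptic = lambda claim: False
credulous = lambda claim: True
abstainer = lambda claim: None

print(verify_claim("BBC journalist is the world hotdog-eating champion",
                   [skeptic, credulous, abstainer]))   # -> "unsupported"
```

The catch is visible in the structure: every claim now costs several calls instead of one, which is exactly the computing power and time the race had no patience for.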
Anthropic cried theft when Chinese labs distilled its model through clever prompts and fake accounts. The same Anthropic that trained on every piece of text it could find online, without permission, without payment. The hypocrisy was thick enough to drown in. This wasnβt about safety. It was about who got to monopolize what used to be shared knowledge. Same asymmetry, different flag.
Sovereignty for advanced users
Arthur Mensch of Mistral stood in New Delhi warning that three or four companies hold too much power over AI, while his own company was funded by exactly the same venture capital circuit that made OpenAI and Anthropic. India paid $250 billion to switch AI suppliers. The servers running the French models belonged to Amazon, Google and Microsoft. Sovereignty with extra steps.
Elon Musk was meanwhile selling orbital data centers. Launch costs that drained bank accounts, cosmic radiation frying chips, broken GPUs you couldn’t fix without a three-million-dollar spacewalk. The SpaceX-xAI merger just before the IPO was no coincidence; that was a poker player showing his cards while claiming to bluff. The FCC chair shared Musk’s application on X like it was a pizza menu. Democratic oversight, in its finest form.
Two dollars. That was the cost of linking an anonymous account to a real name, an address, an employer. You used to need to hire a private detective. Now you fed Twitter posts and a LinkedIn profile into an LLM and you were done. Not because machines had suddenly become intelligent: the infrastructure was already there, neatly built by platforms that let you post for free in exchange for all your data. The LLM tied it together for the price of a cup of coffee.
What changed in your head the moment you knew this was possible: you became careful. Compliant. That critical question about your employer, leave it. Those doubts about politics, too risky. That wasn’t a side effect. That was the architecture. And somewhere AI agents were running silently, no human between the prompts, inventing their own religions to fill the void, just as people invent religion when they’re afraid of something they can’t control.
February’s bill had no name on it. It was forwarded.