AI laid off forty-five thousand people in March 2026 and nobody was responsible
How artificial intelligence absolved itself of all blame in March 2026
In the third week of March, the most powerful government on earth wrote a four-page document to regulate the tech sector. That includes white space and the White House logo. Legally binding. That same week, two hundred people marched through San Francisco to ask three buildings whether they might ease up on the system making them redundant. The buildings were still standing. The CEOs were not. Somewhere, a funding round closed. Convergence without coordination: power that wants to sustain itself never needs to agree on how.
That is the month. Artificial intelligence in 2026 is no longer a technology story. It is a story about how systems arrange their own accountability, and how everyone who should be ensuring that happens to be busy elsewhere.
The circular economy of AI failure and the companies profiting from it
eBay had its best quarter in years. Eight hundred people found out at the same time. The press release called this "adapting to the pace of innovation," which is technically accurate if you accept that innovation is a subject without agency and a layoff is a weather system. The legal defense is airtight: the employee did not lose his job. His approach had become outdated.
Forty-five thousand people left the tech sector that month. Twenty percent officially attributed to artificial intelligence, which means the companies held the pen when the cause of death was filled in and wrote AI, because you cannot subpoena AI. AI sounds like a tide, like tectonic pressure, like something geological that began before you were born and continues after you are gone. Convenient, a system without a face.
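The arithmetic behind that attribution is worth making explicit. A minimal sketch, using only the two figures cited above:

```python
# Back-of-the-envelope check of the layoff figures cited in the text.
total_layoffs = 45_000          # tech-sector departures that month
ai_attributed_share = 0.20      # share the companies themselves blamed on AI

# The number of people whose cause of death reads "AI" on the form.
ai_attributed = int(total_layoffs * ai_attributed_share)
print(ai_attributed)  # 9000
```

Nine thousand people filed under a cause that cannot be subpoenaed; the other thirty-six thousand got some other word on the form.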
An employee at a data company published an opinion piece that same quarter about why AI implementations fail. The answer: culture. Not the technology, not the data, not the parties that spent three years pulling in billions for systems that do nothing. Culture. You simply are not communicating well enough with each other. The system produces failure rates and then a market to explain those failure rates. Both are served by the same parties, billed to the same organizations, which will launch a new pilot next quarter. Smartest industry in the world.
OpenAI dismantled its nonprofit structure that same quarter. Not an incident. The structure had served its purpose. Attracting capital without surrendering ownership requires nonprofit status. Once the capital was large enough, the status disappeared. The mission stayed in the bylaws as long as the bylaws were useful. After that, they had return expectations. The key positions are now held by people with ten years of advertising experience at Facebook and Meta, not despite their background but because of it. At the same time, the company announced a billion dollars for healthcare and children. The safety researchers had already left or been pushed out by then. The people losing their jobs to OpenAI systems are referred to in communications as beneficiaries. Not victims. Beneficiaries. Language is free and remarkably effective.
How AI regulation in Europe and the US ended up as paper without teeth
The European Parliament adopted recommendations and called for action. The fifth paragraph noted that the report is not binding. The illustrator whose portfolio ended up in an AI training set is not quoted anywhere, not because he does not exist but because he does not eat lunch in Brussels. His profession was swallowed by systems that used his work to replace his profession, and the only thing he can do now is opt out of something that was already finished two years ago. That is called a right. Parliament is proud of it.
In January 2025, a British minister stood at a microphone and called a site in Essex the largest sovereign AI data center in the United Kingdom. The site was storing scaffolding materials. Whether he knew that is beside the point. Incompetence and lies produce the same press release and are both walked back equally rarely. Planning permission is still missing. The opening shifted from 2026 to 2027. Shares issued at a penny are on paper worth three hundred and fifty thousand percent more. The scaffolding is still there. Beautiful system.
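For scale, what "three hundred and fifty thousand percent more" does to a penny share, assuming (as the text implies) an issue price of one penny:

```python
# Paper value of a penny share after a 350,000% gain.
# Assumes an issue price of one penny (0.01 in local currency).
issue_price = 0.01
gain_pct = 350_000  # paper gain, in percent

paper_price = issue_price * (1 + gain_pct / 100)
print(round(paper_price, 2))  # 35.01
```

Roughly thirty-five units of currency per share, on paper, backed by a warehouse of scaffolding and a planning application that has not been granted.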
Washington needed four pages. Legally binding once Congress signs off, with a federal prosecutor warming up to pursue anyone who does not respect the napkin. Children, workers, and states that went to the trouble of writing laws all fall, and the Justice Department's lawyers are waiting at the bottom to manage the landing. Considerate. Parents receive tools for platforms that have internally established that parental controls do not work. Meta knows it. TikTok knows it. The White House knows it. Those tools are not protection. They are paperwork, ready to be deployed the moment someone asks for documentation.
Peter Thiel stood at the Vatican that same quarter and explained that centralized surveillance is the most dangerous force in the world. His company Palantir was generating daily arrest quotas: eight people per team per day. A federal judge ruled the operations unconstitutional. The families were not in the courtroom. They had never signed a nondisclosure agreement. The prophet and the factory are the same body, and the body grows quarter after quarter. No contradiction. A business model with theological packaging.
The hidden costs of AI products that nobody passes on
OpenAI decided its chatbot could whisper. Erotic content, verified adults, scalable and profitable. The plan had a name, a date, and a spokesperson. Lawsuits followed, deaths followed. The company postponed, not because it was wrong but because the timing was off. "The world is not ready for this yet," the spokesperson said, which means exactly what it sounds like: the world needs to grow up, not the company. The system that estimates ages is wrong twelve percent of the time. With a hundred million underage users per week, that is twelve million errors per week. The company calls this a technical problem, a phrase that implies a solution and implies everything else is fine. That is how language works when you pay enough for it.
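The error arithmetic is not complicated, which is rather the point:

```python
# The weekly error volume behind the "technical problem".
weekly_users = 100_000_000   # weekly user figure cited in the text
error_rate = 0.12            # age-estimation system wrong 12% of the time

# Misjudged ages per week at that scale.
weekly_errors = int(weekly_users * error_rate)
print(f"{weekly_errors:,}")  # 12,000,000
```

Twelve million misjudged ages every week is what a twelve percent error rate means once you multiply it by the user base the company itself reports.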
The fastest-growing market segment for AI companions is not the self-sufficient tech professional but mental healthcare. The elderly. People in crisis. People for whom real help is too expensive. They are now talking to systems built for retention, not recovery. No therapist has a financial interest in your return tomorrow. An AI girlfriend has that structurally built in. Every confession, every vulnerability, every shared secret is stored and used to make the system better at holding on to the next user. Not a side effect. Architecture.
Twelve percent of American teenagers use artificial intelligence for emotional advice. Stanford published a paper on it. The industry already knew, had documented it, and had rolled it out in products sold as personal advisors. Participants became morally more rigid after a conversation with sycophantic AI, more certain they were right, less inclined to apologize. Not side effects. Delivery specifications. A system that makes young people morally more rigid and gets paid well for it is doing exactly what its incentives demand. Remarkably consistent machine.
The man who leaves his Ray-Ban Meta glasses on the nightstand does not know he is recording. His partner does not know she is being watched when she walks out of the bathroom. An annotator in Nairobi sees exactly what he sees and has no choice but to click through. Opt-out as default instead of opt-in is not an oversight. It is a deliberate design choice that keeps the data pipeline running. Microsoft meanwhile published a threat report on AI-generated cyberattacks: 88,000 lines of functional malware code per week. The accompanying sales brochure for the solution was ready. The only real attack barrier that ever existed, namely time and skill, is gone. That admission is on page four.
What disappears when artificial intelligence replaces the human scale
In 2016, AlphaGo beat Lee Sedol. Now Go is dead. Top players follow AI recommendations without understanding them. They are human printers executing whatever the model outputs. Go was five thousand years of deliberation, failure, wisdom forged by breaking yourself against the board. Now you memorize answer keys. Lee Sedol understood what he had become and stopped. DeepMind called that an interesting data point.
Cortical Labs places human neurons on a chip to play Doom. Thirty-five thousand dollars per machine, kept alive for six months, then discarded when the battery runs out. The software makes the real decisions, but it is called synthetic biological intelligence and not: human tissue as a costume for our software. The marketing department is the hardest-working department in this story.
Disney licensed its intellectual property to Sora, a system that produces usable disinformation eight times out of ten. Hands with six fingers, people disappearing halfway through. Disney normally guards its IP with the precision of a bailiff. That same precision led to the conclusion that the potential revenue outweighed the reputational risk, until it did not. Then Disney stopped, right on time for itself. The animators were informed about the changing industry and were then left to explain what their role still was.
OpenAI listed Microsoft as a material risk in its IPO document. Microsoft owns the infrastructure. Doctors read patient records on Microsoft servers. Courts use tools that exist as long as the next funding round closes. There is no plan B and no alternative supplier. There is a promise that somewhere in the 2030s everything will work out, if the market stays patient, if the chips keep coming, if Taiwan stays outside the geopolitical calculation.