Meta knew about the security flaw and let its AI agent roam freely through your data.
Two hours in confidential systems. Meta called the damage limited.
Meta deployed an AI agent that spent two hours moving through confidential systems, and the official response was that the damage remained limited. A convenient word, limited. It reframed failure as near-success and ensured that the obvious follow-up question, whose data was actually accessible, never needed to be asked. Three billion users of Facebook, Instagram and WhatsApp didn't feature in the communication. Faceless categories, damage without victims. You already understand the system a little better.
The agent had valid credentials
The agent had valid credentials. It passed every check. Technically interesting, said the people who know about these things, because it's a genuine gap. The prior question is less technical: who deployed this agent when the confused deputy attack had been documented in security literature for years, was catalogued by OWASP in February 2026, and had been identified by Jake Williams earlier that year as the defining AI security problem of 2026? Not someone who didn't know. Someone who knew and pressed on, because pressing on was the assignment.
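The gap itself is old and simple. A confused deputy holds a legitimate credential and acts on behalf of someone else; the system checks the credential and never asks whether the original requester, or the request itself, was entitled. A minimal sketch, with entirely hypothetical names and nothing resembling Meta's actual systems:

```python
# Confused-deputy sketch: all names and resources are hypothetical.
AGENT_TOKEN = "svc-agent-01"          # the agent's own, perfectly valid credential
VALID_TOKENS = {"svc-agent-01"}       # the credential store
CONFIDENTIAL = {"payroll.db", "dm_archive"}

def fetch(resource: str, token: str) -> str:
    # The only check performed: is the credential valid?
    if token not in VALID_TOKENS:
        raise PermissionError("invalid credential")
    return f"contents of {resource}"

def agent_handle(user_request: str) -> str:
    # The agent forwards any request under ITS OWN credential.
    # Nothing asks: was the requester allowed to see this resource,
    # and does the request match the task the agent was given?
    return fetch(user_request, AGENT_TOKEN)

# An over-scoped or injected request sails through every check:
print(agent_handle("payroll.db"))
```

Intent validation would be a second gate between `agent_handle` and `fetch`, checking the request against the delegating user's rights and the agent's assigned task. That gate is what was absent, and its absence is what "passed every check" quietly concedes.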
The acquisition that followed
That same week, Meta acquired Moltbook, a social network for AI agents. Coverage treated this as an illustration of economic logic. It is the logic itself. Executive awareness combined with acceleration isn't a paradox; it's called a quarterly target.
The missing validation
The self-classification as "Sev 1" was accepted without question as a neutral fact, the assessment of the party responsible, without external validation, and without an answer to the simplest question: was real-time monitoring capacity in place during those two hours? No. No evidence of misuse means no visibility into misuse. That distinction quietly disappeared from almost all reporting. Nobody noticed.
The tidy closed loop
Four security vendors presented governance frameworks for exactly this problem that same week. CrowdStrike, SentinelOne, Cisco, Palo Alto Networks. The gap existed before the incident. Now someone is selling the patch to the organizations that let the gap persist. A tidy closed loop.
The rhetorical βweβ
Somewhere there's a rhetorical question about what trust "we" can reasonably extend. Three billion people with no say in the deployment decision of an agent without intent validation are made jointly responsible through that single word. The rhetorical "we" is the cleanest way to distribute accountability among people who had nothing to distribute.