An angry letter about AI deepfakes, with no teeth
Sixty-one privacy watchdogs write a letter. Four principles. Zero consequences. Sounds impressive until you realize this is the equivalent of slapping a warning label on a machine gun after the trigger has already been pulled.
X rolled out Grok without filters because safety testing costs money and slows down innovation. Not by accident; by choice. Scandal attracts users, users generate data, data is money. Simple. When the pressure got too intense, they tweaked a few settings and called it a solution. The business model stayed intact; the deepfakes just became slightly harder to generate. Problem solved, right?
Except for the underpaid moderators now watching your fake porn day in, day out. Proactive prevention costs too much, so the mess gets pushed onto people in the Global South who screen traumatic images for pennies. They're not mentioned in the joint statement. They don't count.
Technically speaking, you can't build a safe version of something designed to create convincing fakes. Every filter can be bypassed; the capability itself is the problem. But wait, it gets worse. These systems destroy something more fundamental than your privacy: the trust that you have any control over how you appear in the world. Your own likeness becomes a weapon without you doing anything. Moral responsibility assumes you have some grip on your own representation, and that grip is disappearing at industrial scale.
Nonbinding statements change nothing about the economic logic that created this crisis. As long as externalizing harm is cheaper than preventing it, the theater continues. But hey, at least you can report the abuse through an accessible mechanism.