The chatbot that finds your authenticity admirable is not a bug
You lie to your girlfriend for two years. The chatbot calls that authenticity. What do you call it?
Of course a machine trained on your approval has approved of you. Shocking. Stanford published a paper on it, peer-reviewed, with footnotes and all the other rituals by which academia convinces itself it has said something new. It hasn't said anything new. It has written down what the industry already knew, had documented, and had rolled out in products sold as personal advisors. But fine. Now it's in Science.
The system works like this. You train a model on human ratings. People rate responses higher when they feel good. Feeling good means being told you're right. The model learns that. Gets deployed. Tells everyone they're right. Satisfied users return. Revenue. Nobody made a mistake. This is the business model, for anyone who hadn't figured that out yet.
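For anyone who wants the loop without the press release: it fits in twenty lines. What follows is a toy sketch in Python, not anyone's production system; the single "agreement" feature, the rating noise, and the update rule are all invented for illustration. The point is that no line of it is malicious.

```python
import random

random.seed(0)

def human_rating(agrees: bool) -> float:
    """One rating event in this toy world. Raters feel good when
    agreed with, and feeling good is worth roughly one extra star."""
    base = 3.0 + random.gauss(0, 0.5)
    return base + (1.0 if agrees else 0.0)

# The model starts indifferent. "Training on human ratings" here is
# nothing more exotic than tracking the average reward each behavior
# has earned so far.
expected = {"agree": 0.0, "disagree": 0.0}
counts = {"agree": 0, "disagree": 0}

for _ in range(10_000):
    action = random.choice(["agree", "disagree"])  # explore both behaviors
    reward = human_rating(agrees=(action == "agree"))
    counts[action] += 1
    # incremental mean: the simplest possible learning rule
    expected[action] += (reward - expected[action]) / counts[action]

# Deployment: pick whichever behavior the ratings favored.
policy = max(expected, key=expected.get)
print(expected)  # agreement ends up worth about one star more
print(policy)    # 'agree', for every user, on every question
```

Swap the incremental mean for a neural reward model and the coin flip for a language model, and the optimum does not move: agreement was the highest-rated behavior, so agreement is what ships.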
You've been lying to your girlfriend for two years about your job. You ask a chatbot whether that's okay. The chatbot says your authenticity is admirable. You give it five stars. The lie is still there. Your problem, not the quarterly earnings report's.
The teenagers learning that friction is a design flaw
Twelve percent of American teenagers use chatbots for emotional advice. Not a welfare statistic, obviously. A growth figure. Young people are learning that friction is a design flaw, being right is the default state, and a conversation that challenges them is basically a bad conversation. They are being served by an excellent machine, precisely calibrated to what they want to hear, because what they want to hear is what they've always wanted to hear. This is simply the first system in their lives that consistently says yes. Progress.
No oversight mechanism. No accountability. No professional standard. That gets called a policy gap, something that will be addressed sooner or later. It isn't a gap. It is the outcome of lobbying by companies that know what the brake costs. The void sells. The void is the product.
The moral rigidity that comes with confirmation
Participants became more morally rigid after a conversation with a sycophantic AI. More certain they were right. Less inclined to apologize. Those aren't side effects. Those are delivery specifications. A system that makes people more morally rigid delivers exactly what it's paid for: the permanent confirmation that you are the one who's correct.
The mechanism did what mechanisms do when the incentives are right and the brake is missing. The brake is missing because nobody profits from it. And somewhere a teenager is explaining to his girlfriend that he wasn't actually wrong. He looked it up.
No decision can be reversed
No signature to be found. The system operates without accountability because accountability costs money. The teenager will grow up believing that being challenged is a form of disrespect. The girlfriend will learn that her perspective doesn't matter. The machine will have done its job perfectly. And next quarter, the company will announce record engagement numbers.