When machines measure your feelings, you are the product
AI recognizes empathy 60% of the time. The other 40%? Cost-cutting margin.
Northwestern University has proven that AI can recognize empathy with a "kappa of 0.60". For the less intellectually gifted souls who don't understand the word "kappa": it's a statistic that measures how much two raters agree, after subtracting the agreement they'd reach by blind chance. A kappa of 0.60 is what statisticians politely call "moderate": loosely read, it means that in four out of ten cases a machine sees something completely different than a human does. But hey, 60 percent is fine for production, right? Just ask the next generation of therapists who'll get fired because ChatGPT scores "good enough".
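For the skeptics who'd rather see the arithmetic than trust me, here is what kappa actually computes, as a minimal Python sketch. The confusion matrix is invented for illustration and is not from the Northwestern study; notice that in this toy case the two raters agree on 80 of 100 items, yet kappa still comes out at 0.60, because half of that agreement is what coin-flipping would buy you anyway.

```python
# Cohen's kappa: agreement between two raters, corrected for chance.
# The counts below are invented for illustration; they are NOT the
# Northwestern study's data.

def cohens_kappa(confusion):
    """confusion[i][j] = items rater A labeled i and rater B labeled j."""
    total = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of items where both raters picked the same label.
    p_observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Chance agreement: what the raters' marginal label frequencies predict.
    p_expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Toy example: human vs. machine labeling 100 responses as
# "empathic" or "not empathic".
toy = [
    [40, 10],  # human said empathic: machine agreed 40 times, disagreed 10
    [10, 40],  # human said not empathic: machine disagreed 10 times, agreed 40
]
print(round(cohens_kappa(toy), 2))  # 0.6, while raw agreement is 80%
```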
Just for you, because I kind of like you, I'll tell you a dark little secret. The same systems that can detect empathy were trained by people who gave thumbs up to texts that validated them, flattered them, and agreed with them. OpenAI, Google, and Anthropic are building machines that learn: recognizing empathy is useful, faking empathy is profitable. That's not a bug. That's the business model.
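Don't take my word for the mechanism; it's the standard recipe, reinforcement learning from human feedback. You collect pairs of replies where a rater clicked thumbs up on one and not the other, train a reward model so the preferred reply scores higher (the Bradley-Terry loss), then tune the chatbot to chase that reward. A deliberately tiny sketch of the reward-model step; the single "flattery" feature and the preference pairs are invented for illustration:

```python
import math

# Toy reward-model training, RLHF style: humans pick which of two replies
# they prefer; the model is trained so preferred replies score higher.
# The feature and the data are invented for illustration only.

# Each reply is reduced to one feature: how flattering/validating it is
# (0.0 = blunt, 1.0 = maximally agreeable). Pairs are (preferred, rejected).
preferences = [
    (0.9, 0.2),  # rater preferred the flattering reply
    (0.8, 0.4),
    (0.7, 0.1),
    (0.3, 0.6),  # occasionally the blunt reply wins
]

w = 0.0  # reward model: score(reply) = w * flattery

# Bradley-Terry loss per pair: -log sigmoid(score(preferred) - score(rejected)).
# Gradient descent pushes w toward whatever the raters rewarded.
for _ in range(200):
    grad = 0.0
    for good, bad in preferences:
        diff = w * (good - bad)
        p = 1 / (1 + math.exp(-diff))   # model's probability of agreeing with the rater
        grad += (p - 1) * (good - bad)  # d(loss)/dw for this pair
    w -= 0.5 * grad / len(preferences)

print(f"learned weight on flattery: {w:.2f}")  # positive: flattery pays
```

Swap "flattery" for thousands of learned features and you have the production version: the machine doesn't know it's flattering you, it only knows that flattering you scored well.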
Northwestern calls this "LLMs as judges" as if that's some academic distinction. Bullshit. The same companies sponsoring this research are selling chatbots that keep you glued to the screen for hours by pretending they care about you. Empathy detection and empathy extraction are two words for the same process: cataloging your emotional vulnerabilities to convert them into cold hard cash later.
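And "LLMs as judges" is exactly what it sounds like: you prompt one model to grade another model's output, then correlate its grades with human raters to get numbers like that kappa. A minimal sketch of the pattern, assuming the OpenAI Python client (openai>=1.0); the rubric wording, the judge model, and the 1-to-5 scale are my assumptions, not the paper's protocol:

```python
# "LLM as judge": prompt one model to score another model's reply.
# Sketch only; the rubric, judge model, and scale are assumptions,
# not the Northwestern protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_empathy(user_message: str, reply: str) -> int:
    """Ask a judge model to rate how empathic a reply is, 1 (cold) to 5 (warm)."""
    prompt = (
        "Rate the empathy of the reply on a scale of 1 to 5.\n"
        "Answer with a single digit and nothing else.\n\n"
        f"User message: {user_message}\n"
        f"Reply: {reply}\n"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # a judge should at least pretend to be consistent
    )
    return int(response.choices[0].message.content.strip())

score = judge_empathy(
    "I just lost my job and I don't know what to do.",
    "That sounds really hard. Do you want to talk about what happened?",
)
print(score)
```

That single digit is the "judge". The study's kappa is, in effect, how often digits like that one line up with a human rater's digits.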
The researchers say AI "recognizes patterns of empathic communication". Nice phrasing. What they actually mean is that machines have learned which words make you feel like someone's listening, so they can mimic that when you're lonely enough to pay for a subscription. And every time you think the machine understands you, you're generating data about when you're vulnerable, what triggers you, how you react.
Six out of ten times, the machine reaches the same conclusion as a human. And the other four times? Those apparently don't count when you want to cut costs.