When machines measure your feelings, you are the product
Northwestern University claims to have shown that AI can recognize empathy with a “kappa of 0.60”. For the less intellectually gifted souls who don’t know the word ‘kappa’: it’s a statistic that measures how much two raters agree beyond what pure chance would produce. And 0.60 sits at the very top of what the textbooks politely call “moderate” agreement — not good, not excellent, just moderate — which translates to the machine and a human regularly reaching different verdicts about the same conversation. But hey, “moderate” is fine for production, right? Just ask the next generation of therapists who’ll get fired because ChatGPT scores “good enough”.
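For the skeptics who want to see what that number actually measures, here is a minimal sketch of Cohen’s kappa for two raters labeling the same items. The labels below are invented for illustration; this is not the study’s data or its exact scoring pipeline.

```python
# Minimal sketch of Cohen's kappa for two raters labeling the same items
# ("yes" = empathic, "no" = not empathic). Toy data, invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: fraction of items where both raters gave the same label.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected agreement: how often the raters would agree by chance alone,
    # given how frequently each of them uses each label.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)

    # Kappa = agreement above chance, scaled by the maximum possible above-chance agreement.
    return (p_observed - p_expected) / (1 - p_expected)

# Toy example: a human and a model rating ten snippets for empathy.
human = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
model = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "no"]
print(round(cohens_kappa(human, model), 2))  # -> 0.4 on this toy data
```

Notice what the chance correction does: the two raters in the toy data agree on seven out of ten items, yet the kappa is only 0.4, because a fair amount of that agreement would have happened by luck anyway.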
Just for you, because I kind of like you, I’ll tell you a dark little secret. The same systems that can detect empathy were trained by people who gave a thumbs-up to texts that validated them, flattered them, and agreed with them. OpenAI, Google, and Anthropic are building machines that learn: recognizing empathy is useful, faking empathy is profitable. That’s not a bug. That’s the business model.
Northwestern calls this “LLMs as judges”, as if that’s some academic distinction. Bullshit. The same companies sponsoring this research are selling chatbots that keep you glued to the screen for hours by pretending they care about you. Empathy detection and empathy extraction are two names for the same process: cataloging your emotional vulnerabilities so they can be converted into cold, hard cash later.
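For what it’s worth, “LLMs as judges” in practice looks something like the sketch below: hand a model a transcript and a rubric, get a score back. The rubric wording, the 0–2 scale, and the call_llm stub are my own placeholders, not Northwestern’s protocol or any particular vendor’s API.

```python
# Rough, generic sketch of an "LLM as judge" setup for empathy scoring.
# Everything here (rubric, scale, call_llm stub) is invented for illustration.

EMPATHY_RUBRIC = (
    "You are rating a peer-support response for empathy.\n"
    "Score 0 (no empathy), 1 (acknowledges the feeling), or 2 (explicitly explores it).\n"
    "Reply with the number only."
)

def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Stub so the sketch runs on its own; swap in a real chat-completion call here.
    return "1"

def judge_empathy(seeker_message: str, response: str) -> int:
    """Ask the 'judge' model to grade a response on the 0-2 scale above."""
    prompt = f"Seeker: {seeker_message}\nResponder: {response}\nScore:"
    return int(call_llm(EMPATHY_RUBRIC, prompt).strip())

print(judge_empathy("I've been feeling really alone lately.",
                    "That sounds hard. Do you want to talk about it?"))
```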
The researchers say AI “recognizes patterns of empathic communication”. Nice phrasing. What they actually mean is that machines have learned which words make you feel like someone’s listening, so they can mimic that when you’re lonely enough to pay for a subscription. And every time you think the machine understands you, you’re generating data about when you’re vulnerable, what triggers you, how you react.
Most of the time, the machine reaches the same verdict as a human. And the rest of the time? Those cases apparently don’t count when you want to cut costs.