AI isn't great at decoding human emotions. So why are regulators targeting the tech?

MIT Technology Review 

In addition to proposing the theory of evolution, Darwin studied the expressions and emotions of people and animals. He debated in his writing just how scientific, universal, and predictable emotions actually are, and he sketched characters with exaggerated expressions, which the library had on display.

The subject rang a bell for me. Lately, as everyone has been up in arms about ChatGPT, artificial general intelligence, and the prospect of robots taking people's jobs, I've noticed that regulators have been ramping up warnings against AI and emotion recognition. Emotion recognition, in this far-from-Darwin context, is the attempt to identify a person's feelings or state of mind using AI analysis of video, facial images, or audio recordings. The idea isn't especially complicated: the AI model may see an open mouth, squinted eyes, and contracted cheeks with a thrown-back head, for instance, register the combination as a laugh, and conclude that the subject is happy.
