Why is it hard for AI to detect human bias?

#artificialintelligence

AI bias is in the news, and it's a hard problem to solve. When AI engages with humans, how does it know what humans really mean? In other words, why is it hard for AI to detect human bias? It's because humans do not say what they really mean, owing to factors such as cognitive dissonance. Cognitive dissonance refers to a situation involving conflicting attitudes, beliefs or behaviours. It produces a feeling of mental discomfort, leading to an alteration in one of the attitudes, beliefs or behaviours to reduce the discomfort and restore balance.


Don't look now: why you should be worried about machines reading your emotions

The Guardian

Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis the US Transportation Security Administration (TSA) put to the test in 2003, when it began testing a new surveillance program called Screening of Passengers by Observation Techniques, or Spot for short. While developing the program, the agency consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train "behavior detection officers" to scan faces for signs of deception.


Why faces don't always tell the truth about feelings

#artificialintelligence

Human faces pop up on a screen, hundreds of them, one after another. Some have their eyes stretched wide, others show lips clenched. Some have eyes squeezed shut, cheeks lifted and mouths agape. For each one, you must answer this simple question: is this the face of someone having an orgasm or experiencing sudden pain? Psychologist Rachael Jack and her colleagues recruited 80 people to take this test as part of a study in 2018.


A computational model implementing subjectivity with the 'Room Theory'. The case of detecting Emotion from Text

arXiv.org Machine Learning

This work introduces a new method for taking subjectivity and general context dependency into account in text analysis, using the detection of emotions conveyed in text as an example. The proposed method handles subjectivity through a computational version of the Framework Theory by Marvin Minsky (1974), leveraging the Word2Vec approach to text vectorization by Mikolov et al. (2013), which generates distributed representations of words based on the contexts in which they appear. Our approach is based on three components: (1) a framework, or "room", representing the point of view; (2) a benchmark representing the criteria for the analysis, in this case the emotion classification from a study of human emotions by Robert Plutchik (1980); and (3) the document to be analyzed. By using a similarity measure between words, we can extract the relative relevance of the elements in the benchmark, the intensities of emotions in our case study, for the document to be analyzed. Our method thus provides a measure that takes into account the point of view of the entity reading the document. It could be applied in all cases where evaluating subjectivity is relevant to understanding the relative value or meaning of a text. Subjectivity need not be limited to human reactions; the method could also be used to interpret a text from the perspective of a given domain ("room"). To evaluate our method, we used a test case in the political domain.
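The core computation the abstract describes, scoring a document against an emotion benchmark through word similarities inside a "room", can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the toy vectors, the averaging of cosine similarities, and the function names are invented here; in the paper the "room" would be a Word2Vec model trained on a domain-specific corpus, and the benchmark is Plutchik's set of primary emotions.

```python
import numpy as np

# Plutchik's eight primary emotions, used as the benchmark ("criteria for the analysis").
PLUTCHIK_EMOTIONS = ["joy", "trust", "fear", "surprise",
                     "sadness", "disgust", "anger", "anticipation"]

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def emotion_profile(document_tokens, room_vectors, benchmark=PLUTCHIK_EMOTIONS):
    """
    Score a document against the emotion benchmark from the point of view of a "room".

    room_vectors: mapping word -> vector, produced by a model (e.g. Word2Vec) trained
                  on the corpus that defines the point of view.
    Returns relative emotion intensities that sum to 1.
    """
    scores = {}
    for emotion in benchmark:
        if emotion not in room_vectors:
            continue
        sims = [cosine(room_vectors[emotion], room_vectors[tok])
                for tok in document_tokens if tok in room_vectors]
        scores[emotion] = max(float(np.mean(sims)), 0.0) if sims else 0.0
    total = sum(scores.values()) or 1.0
    return {emotion: score / total for emotion, score in scores.items()}

# Toy usage: random vectors stand in for a trained "room"; a political-domain model
# trained on real text would give meaningful similarities.
rng = np.random.default_rng(0)
vocab = PLUTCHIK_EMOTIONS + ["vote", "crisis", "victory", "protest"]
toy_room = {word: rng.normal(size=50) for word in vocab}
print(emotion_profile(["victory", "vote"], toy_room))
```

Swapping in a different `room_vectors` mapping, trained on a different corpus, changes the resulting intensities for the same document, which is the sense in which the measure is subjective.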


Designing For The Internet Of Emotional Things – Smashing Magazine

#artificialintelligence

More and more of our experience online is personalized. Search engines, news outlets and social media sites have become quite smart at giving us what we want. Perhaps Ali, one of the hundreds of people I've interviewed about our emotional attachment to technology, put it best: "Netflix's recommendations have become so right for me that even though I know it's an algorithm, it feels like a friend." Personalization algorithms can shape what you discover, where you focus attention, and even who you interact with online. When these algorithms work well, they can feel like a friend. At the same time, personalization doesn't feel all that personal. There can be an uncomfortable disconnect when we see an ad that doesn't match our expectations. When personalization tracks too closely to interests that we've expressed, it can seem creepy. Personalization can create a filter bubble by showing us more of what we've clicked on before, rather than exposing us to new people or ideas.
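The filter-bubble mechanism mentioned above can be seen in even the simplest click-driven ranker. The sketch below is purely illustrative and not any real platform's algorithm: the item names, topics, and the counting rule are assumptions made up for the example; it only shows how ranking by past clicks keeps promoting more of the same topic.

```python
from collections import Counter

def rank_items(candidates, click_history):
    """
    Naive personalization: rank candidate (title, topic) pairs by how often the
    user has already clicked items with the same topic. Because past clicks keep
    boosting the same topic, unfamiliar topics rarely surface -- a filter bubble.
    """
    topic_counts = Counter(topic for _, topic in click_history)
    return sorted(candidates, key=lambda item: topic_counts[item[1]], reverse=True)

# Hypothetical click history dominated by one topic, and a fresh batch of candidates.
history = [("Cat video 1", "cats"), ("Cat video 2", "cats"), ("Budget news", "politics")]
candidates = [("Cat video 3", "cats"), ("Election recap", "politics"), ("Pottery basics", "crafts")]
print(rank_items(candidates, history))  # "cats" floats to the top; "crafts" stays buried
```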