3 Ways AI Is Getting More Emotional

#artificialintelligence

In January 2018, Annette Zimmermann, vice president of research at Gartner, proclaimed: "By 2022, your personal device will know more about your emotional state than your own family." Just two months later, a landmark study from Ohio State University claimed that its algorithm was now better at detecting emotions than people are. AI systems and devices will soon recognize, interpret, process, and simulate human emotions. With companies like Affectiva, Beyond Verbal, and Sensay providing plug-and-play sentiment analysis software, the affective computing market is estimated to grow to $41 billion by 2022, as firms like Amazon, Google, Facebook, and Apple race to decode their users' emotions. Emotional inputs will create a shift from data-driven, IQ-heavy interactions to deep, EQ-guided experiences, giving brands the opportunity to connect with customers on a much deeper, more personal level.
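
For a sense of what the most basic form of "plug-and-play" sentiment analysis involves under the hood, here is a minimal lexicon-based sketch in Python. The word list and weights are made-up illustrations, not any vendor's model (and commercial tools like Affectiva and Beyond Verbal work on faces and voice rather than raw text):

```python
# Toy lexicon-based sentiment scorer; the lexicon and weights are
# illustrative assumptions, not a real product's model.
SENTIMENT_LEXICON = {
    "love": 1.0, "great": 0.8, "happy": 0.7,
    "hate": -1.0, "awful": -0.9, "sad": -0.6,
}

def sentiment_score(text: str) -> float:
    """Average lexicon weight over recognized words; 0.0 if none match."""
    hits = [SENTIMENT_LEXICON[w] for w in text.lower().split() if w in SENTIMENT_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love this phone but the battery is awful"))  # ~0.05
```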



How Facebook's brain-machine interface measures up

#artificialintelligence

Somewhat unceremoniously, Facebook this week provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco backed by Facebook Reality Labs -- Facebook's Pittsburgh-based division devoted to augmented reality and virtual reality R&D -- described a prototype system capable of reading and decoding study subjects' brain activity while they speak. It's impressive no matter how you slice it: The researchers managed to make out full, spoken words and phrases in real time. Study participants (who were prepping for epilepsy surgery) had a patch of electrodes placed on the surface of their brains, and the recordings relied on electrocorticography (ECoG) -- the direct measurement of electrical potentials associated with activity from the cerebral cortex -- to derive rich insights. A set of machine learning algorithms equipped with phonological speech models learned to decode specific speech sounds from the data and to distinguish between questions and responses.
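
As a rough illustration of the final decoding step described above -- a classifier mapping ECoG-derived features to utterance labels -- here is a minimal sketch using synthetic data and an off-the-shelf linear model. The feature shapes, the injected signal, and the two-class question/response framing are illustrative assumptions, not the UCSF team's actual pipeline:

```python
# Minimal sketch (not the UCSF pipeline): classify utterance type from
# ECoG-like features with a plain linear model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_electrodes, n_timebins = 200, 64, 50
# Stand-in for high-gamma power per electrode per time bin; real ECoG
# features would come from band-pass filtering and envelope extraction.
X = rng.normal(size=(n_trials, n_electrodes * n_timebins))
y = rng.integers(0, 2, size=n_trials)  # 0 = question heard, 1 = response spoken
X[y == 1, :100] += 0.3  # inject a weak class-dependent signal so the toy task is learnable

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```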


Brain-computer interfaces are developing faster than the policy debate around them

#artificialintelligence

A few days ago, Facebook disentangled itself from a nettlesome investigation by the Federal Trade Commission into how the company violated users' privacy. And then, with that matter now squarely behind it, Facebook on Tuesday stepped forward to share some information about its effort to read our minds. Two years after the company announced its mind-reading initiative, Facebook has an update to share. The company sponsored an experiment conducted by researchers at the University of California, San Francisco in which they built an interface for decoding spoken dialogue from brain signals. The results were published today in Nature Communications.


Salient Contour Extraction by Temporal Binding in a Cortically-based Network

Neural Information Processing Systems

It has been suggested that long-range intrinsic connections in striate cortex may play a role in contour extraction (Gilbert et al., 1996). A number of recent physiological and psychophysical studies have examined the possible role of long-range connections in the modulation of contrast detection thresholds (Polat and Sagi, 1993, 1994; Kapadia et al., 1995; Kovacs and Julesz, 1994) and various pre-attentive detection tasks (Kovacs and Julesz, 1993; Field et al., 1993). We have developed a network architecture based on the anatomical connectivity of striate cortex, as well as the temporal dynamics of neuronal processing, that is able to reproduce the observed experimental results. The network has been tested on real images and has applications in terms of identifying salient contours in automatic image processing systems.

1 INTRODUCTION

Vision is an active process, and one of the earliest, preattentive actions in visual processing is the identification of the salient contours in a scene. We propose that this process depends upon two properties of striate cortex: the pattern of horizontal connections between orientation columns, and temporal synchronization of cell responses. In particular, we propose that perceptual salience is directly related to the degree of cell synchronization. We present results of network simulations that account for recent physiological and psychophysical "pop-out" experiments, and which successfully extract salient contours from real images.
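
To make the idea of salience-by-synchronization concrete, here is a toy Kuramoto-style simulation in which phase oscillators stand in for orientation-tuned cells, strong coupling stands in for the long-range horizontal connections along a contour, and salience is read out as phase coherence. The group sizes, coupling values, and two-group setup are illustrative assumptions rather than the paper's cortical model:

```python
# Toy sketch of salience-by-synchronization: oscillators belonging to a
# "contour" group are strongly coupled and synchronize; "background"
# oscillators stay less coherent. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 40                       # 20 "contour" cells + 20 "background" cells
K = np.full((n, n), 0.02)    # weak baseline coupling
K[:20, :20] = 0.5            # strong coupling among collinear contour cells
omega = rng.normal(1.0, 0.05, n)       # intrinsic frequencies
theta = rng.uniform(0, 2 * np.pi, n)   # initial phases

dt = 0.01
for _ in range(2000):        # Kuramoto-style phase update
    dtheta = omega + (K * np.sin(theta[None, :] - theta[:, None])).mean(axis=1)
    theta = theta + dt * dtheta

def coherence(phases):
    """Order parameter: 1 = perfectly synchronized, ~0 = desynchronized."""
    return np.abs(np.exp(1j * phases).mean())

print(f"contour salience:    {coherence(theta[:20]):.2f}")
print(f"background salience: {coherence(theta[20:]):.2f}")
```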