Deep Neural Networks Help to Explain Living Brains

#artificialintelligence

In the winter of 2011, Daniel Yamins, a postdoctoral researcher in computational neuroscience at the Massachusetts Institute of Technology, would at times toil past midnight on his machine vision project. He was painstakingly designing a system that could recognize objects in pictures, regardless of variations in size, position and other properties -- something that humans do with ease. The system was a deep neural network, a type of computational device inspired by the neurological wiring of living brains. "I remember very distinctly the time when we found a neural network that actually solved the task," he said. It was 2 a.m., a tad too early to wake up his adviser, James DiCarlo, or other colleagues, so an excited Yamins took a walk in the cold Cambridge air. "I was really pumped," he said. It would have counted as a noteworthy accomplishment in artificial intelligence alone, one of many that would make neural networks the darlings of AI technology over the next few years.


Neural pathway crucial to successful rapid object recognition in primates

#artificialintelligence

MIT researchers have identified a brain pathway critical in enabling primates to effortlessly identify objects in their field of vision. The findings enrich existing models of the neural circuitry involved in visual perception and help to further unravel the computational code for solving object recognition in the primate brain. Led by Kohitij Kar, a postdoc at the McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, the study looked at an area called the ventrolateral prefrontal cortex (vlPFC), which sends feedback signals to the inferior temporal (IT) cortex via a network of neurons. The main goal of this study was to test whether the back-and-forth information processing of this circuitry -- that is, this recurrent neural network -- is essential to rapid object identification in primates. The current study, published in Neuron and available via open access, is a follow-up to prior work published by Kar and James DiCarlo, the Peter de Florez Professor of Neuroscience, the head of MIT's Department of Brain and Cognitive Sciences, and an investigator in the McGovern Institute and the Center for Brains, Minds, and Machines.
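A minimal toy sketch of the kind of recurrent loop being described -- a feedforward sweep from IT to vlPFC followed by feedback that refines the IT representation -- is shown below. The layer sizes, random weights, and number of recurrent steps are illustrative assumptions, not the study's model.

```python
import numpy as np

# Toy two-area recurrent loop (IT <-> vlPFC). All sizes and weights are
# arbitrary assumptions for illustration only.
rng = np.random.default_rng(0)
n_input, n_it, n_vlpfc = 64, 32, 16

W_in = rng.normal(scale=0.1, size=(n_it, n_input))   # visual input -> IT
W_ff = rng.normal(scale=0.1, size=(n_vlpfc, n_it))   # IT -> vlPFC (feedforward)
W_fb = rng.normal(scale=0.1, size=(n_it, n_vlpfc))   # vlPFC -> IT (feedback)

def recurrent_recognition(image_features, steps=5):
    """Run a few feedforward/feedback cycles and return the final IT state."""
    it = np.tanh(W_in @ image_features)
    for _ in range(steps):
        vlpfc = np.tanh(W_ff @ it)                            # feedforward sweep
        it = np.tanh(W_in @ image_features + W_fb @ vlpfc)    # feedback refines IT
    return it

it_state = recurrent_recognition(rng.normal(size=n_input))
print(it_state.shape)  # (32,)
```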


Neural Network Filters Weak and Strong External Stimuli to Help Brain Make "Yes or No" Decisions

#artificialintelligence

A University of Michigan-led research team has uncovered a neural network that enables Drosophila melanogaster fruit flies to convert external stimuli of varying intensities into a "yes or no" decision about when to act. The research, described in Current Biology, helps to decode the biological mechanism that the fruit fly nervous system uses to convert a gradient of sensory information into a binary behavioral response. The findings offer up new insights that may be relevant to how such decisions work in other species, and could possibly even be applied to help artificial intelligence machines learn to categorize information. Senior study author Bing Ye, PhD, a faculty member at the University of Michigan Life Science Institute (LSI), believes the mechanism uncovered could have far-reaching applications. "There is a dominant idea in our field that these decisions are made by the accumulation of evidence, which takes time," Ye said.
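As a purely illustrative sketch of what mapping a graded stimulus onto a binary response can look like computationally, the snippet below squashes a stimulus intensity through a steep sigmoid and thresholds it into a yes/no action signal. The gain and threshold values are arbitrary assumptions, not the fly circuit reported in the study.

```python
import math

def act_or_not(stimulus_intensity, threshold=0.5, gain=20.0):
    """Convert a graded stimulus intensity into a binary act/don't-act decision."""
    p_act = 1.0 / (1.0 + math.exp(-gain * (stimulus_intensity - threshold)))
    return p_act > 0.5  # binary "yes or no"

for intensity in (0.1, 0.45, 0.55, 0.9):
    print(intensity, act_or_not(intensity))
```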


A New Model of the Brain's Real-Life Neural Networks - Neuroscience News

#artificialintelligence

Summary: A new computational model predicts how information deep inside the brain could flow from one network to another, and how neural network clusters can self-optimize over time. Researchers at the Cyber-Physical Systems Group at the USC Viterbi School of Engineering, in conjunction with the University of Illinois at Urbana-Champaign, have developed a new model of how information deep in the brain could flow from one network to another and how these neuronal network clusters self-optimize over time. Their work, chronicled in the paper "Network Science Characteristics of Brain-Derived Neuronal Cultures Deciphered From Quantitative Phase Imaging Data," is believed to be the first study to observe this self-optimization phenomenon in in vitro neuronal networks, and counters existing models. Their findings could open new research directions for biologically inspired artificial intelligence and for brain cancer detection and diagnosis, and may contribute to or inspire new Parkinson's treatment strategies. The team examined the structure and evolution of neuronal networks in the brains of mice and rats in order to identify the connectivity patterns.


Why It's Notoriously Difficult to Compare AI and Human Perception

#artificialintelligence

Science fiction is becoming reality as increasingly intelligent machines are gradually emerging -- ones that not only specialize in things like chess, but that can also carry out higher-level reasoning, or even answer deep philosophical questions. For the past few decades, experts have been collectively bending their efforts toward the creation of such a human-like artificial intelligence, or a so-called "strong" or artificial general intelligence (AGI), which can learn to perform a wide range of tasks as easily as a human might. But while current AI development may take some inspiration from the neuroscience of the human brain, is it actually appropriate to compare the way AI processes information with the way humans do it? The answer to that question depends on how experiments are set up, and how AI models are structured and trained, according to new research from a team of German researchers from the University of Tübingen and other research institutes. The team's study suggests that because of the differences between the way AI and humans arrive at such decisions, any generalizations from such a comparison may not be completely reliable, especially if machines are used to automate critical tasks.


Artificial intelligence diagnoses Alzheimer's with more than 95% accuracy

#artificialintelligence

An artificial intelligence (AI) algorithm has produced another significant breakthrough, using attention mechanisms and a convolutional neural network to accurately identify tell-tale signs of Alzheimer's. The AI tool, developed at the Stevens Institute of Technology, is said to be able to explain its conclusions, enabling human experts to check the validity of diagnoses that are reported to be more than 95% accurate. AI has made huge strides in the medical sector, and this latest news is further evidence that the pace at which the technology is moving shows no signs of slowing any time soon. The algorithm is trained on texts composed by both healthy subjects and known Alzheimer's sufferers, learning to identify subtle linguistic patterns that were previously overlooked. The team of researchers then converted each sentence into a unique numerical sequence, or vector, representing a specific point in a 512-dimensional space.
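A minimal sketch of that embedding step, assuming an off-the-shelf 512-dimensional sentence encoder such as Google's Universal Sentence Encoder (the article does not name the specific model used):

```python
import tensorflow_hub as hub

# Load a pretrained sentence encoder that maps text to 512-dimensional vectors.
# Which encoder the Stevens team actually used is an assumption here.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "I went to the store yesterday.",
    "The, um, the thing, I forget what it is called.",
]
vectors = embed(sentences)  # shape: (2, 512), one point in 512-d space per sentence
print(vectors.shape)
```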


Visual Methods for Sign Language Recognition: A Modality-Based Review

arXiv.org Artificial Intelligence

Sign language visual recognition from continuous multi-modal streams is still one of the most challenging fields. Recent advances in human action recognition are exploiting the ascension of GPU-based learning from massive data, and are getting closer to human-like performance. They thus open the way to interactive services for the deaf and hearing-impaired communities, a population that is expected to grow considerably in the years to come. This paper aims at reviewing the human action recognition literature with sign language visual understanding as its scope. The methods analyzed will be mainly organized according to the different types of unimodal inputs exploited, their relative multi-modal combinations and pipeline steps. In each section, we will detail and compare the related datasets and approaches, then distinguish the still-open contribution paths suitable for the creation of sign language related services. Special attention will be paid to the approaches and commercial solutions handling facial expressions and continuous signing.


The Use of AI for Thermal Emotion Recognition: A Review of Problems and Limitations in Standard Design and Data

arXiv.org Artificial Intelligence

With the increased attention on thermal imagery for Covid-19 screening, the public sector may believe there are new opportunities to exploit thermal as a modality for computer vision and AI. Thermal physiology research has been ongoing since the late nineties. This research lies at the intersections of medicine, psychology, machine learning, optics, and affective computing. We will review the known factors of thermal vs. RGB imaging for facial emotion recognition. But we also propose that thermal imagery may provide a semi-anonymous modality for computer vision, compared to RGB, which has been plagued by misuse in facial recognition. However, the transition to adopting thermal imagery as a source for any human-centered AI task is not easy and relies on the availability of high-fidelity data sources across multiple demographics, along with thorough validation. This paper takes the reader through a short review of machine learning in thermal FER and the limitations of collecting and developing thermal FER data for AI training. Our motivation is to provide an introductory overview of recent advances in thermal FER and to stimulate conversation about the limitations in current datasets.


Top 15 Data Science Experts of the World in 2020

#artificialintelligence

To learn the best, you must learn from the finest. Geoffrey Hinton is called the Godfather of Deep Learning in the field of data science. Mr. Hinton is best known for his work on neural networks and artificial intelligence. A Ph.D. in artificial intelligence, he is credited for his exemplary work on neural nets. Jeff Hammerbacher, who helped coin the term "Data Science", developed methods and techniques for capturing, storing and analysing large amounts of data.


Entropy, Computing and Rationality

arXiv.org Artificial Intelligence

Making decisions freely presupposes that there is some indeterminacy in the environment and in the decision making engine. The former is reflected in the behavioral changes due to communicating: few changes indicate rigid environments; productive changes manifest a moderate indeterminacy; but a large communicating effort with few productive changes characterizes a chaotic environment. Hence, communicating, effective decision making and productive behavioral changes are related. The entropy measures the indeterminacy of the environment, and there is an entropy range in which communicating supports effective decision making. This conjecture is referred to here as the Potential Productivity of Decisions. The computing engine that is causal to decision making should also have some indeterminacy. However, computations performed by standard Turing Machines are predetermined. To overcome this limitation, an entropic mode of computing, called here Relational-Indeterminate, is presented. Its implementation in a table format has been used to model an associative memory. The present theory and experiment suggest the Entropy Trade-off: there is an entropy range in which computing is effective, but if the entropy is too low, computations are too rigid, and if it is too high, computations are unfeasible. The entropy trade-off of computing engines corresponds to the potential productivity of decisions of the environment. The theory is framed within an Interaction-Oriented Cognitive Architecture. Memory, perception, action and thought involve a level of indeterminacy, and decision making may be free to that degree. The overall theory supports an ecological view of rationality. The entropy of the brain has been measured in neuroscience studies, and the present theory supports the view that the brain is an entropic machine. The paper concludes with a number of predictions that may be tested empirically.
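A minimal sketch of the table-format idea, under the assumption that a relational-indeterminate memory relates each cue to a set of admissible values rather than to a single one, with Shannon entropy measuring how indeterminate recall is. The class and operation names below are illustrative, not the paper's construction.

```python
import math
import random
from collections import defaultdict

class RelationalMemory:
    """Toy relational (non-functional) associative memory stored as a table."""

    def __init__(self):
        self.table = defaultdict(set)          # cue -> set of associated values

    def register(self, cue, value):
        self.table[cue].add(value)

    def retrieve(self, cue):
        values = self.table.get(cue)
        if not values:
            return None                        # unknown cue is rejected
        return random.choice(sorted(values))   # indeterminate recall

    def entropy(self, cue):
        """Entropy (bits) of the values related to a cue, assumed equiprobable."""
        n = len(self.table.get(cue, ()))
        return math.log2(n) if n else 0.0

mem = RelationalMemory()
for form in ("seven", "7", "VII"):
    mem.register("digit-7", form)

print(mem.retrieve("digit-7"))   # any one of the registered forms
print(mem.entropy("digit-7"))    # log2(3) ~ 1.585 bits of indeterminacy
```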