Decoding the human brain

#artificialintelligence

CHENNAI: Google DeepMind's AlphaGo, an artificial intelligence programme built with deep neural networks and machine learning techniques, hit global headlines last year when it beat South Korean Go grandmaster Lee Sedol to win the series 4-1. What is less widely known is that AlphaGo consumed a whopping 30,000 watts of power to complete the task, while the human brain runs on around 20 watts: a gap of roughly 1,500-fold. What gives the human brain such efficiency has so far proven elusive to replicate in computers. Not surprisingly, man's most defining organ is also the least understood: although an adult human brain, weighing about 1.4 kg, is made up of close to 100 billion neurons, scientists still do not know how many different kinds of human neurons exist.


Companion-Based Ambient Robust Intelligence (CARING)

AAAI Conferences

We present a Companion-based Ambient Robust INtelliGence (CARING) system for communicating with, and supporting, clients with traumatic brain injury (TBI) or amyotrophic lateral sclerosis (ALS). A central component of this system is an artificial companion, combined with a range of elements for ambient intelligence. The companion acts as a personalized intermediary for multi-party communication between the client, the environment (e.g. a Smart Home), caregivers, and health professionals. CARING is built from tightly coupled systems drawing on natural language processing, speech recognition and adaptation, deep language understanding, and constraint-based knowledge representation and reasoning. A major innovation of the system is its ability to adapt to and accommodate the different interfaces associated with different client capabilities and needs: the system will support, through proxy interfaces, the different interaction requirements of clients (e.g., brain-computer interfaces) at different stages of ALS progression and with different types of TBI impairment. Ultimately, this technology is expected to improve clients' quality of life through conversation with a computer.
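
As a rough illustration of the intermediary role described above, here is a minimal Python sketch of a companion that adapts its output channel to a client's current capabilities. All class, method, and channel names here are hypothetical illustrations, not the actual CARING implementation:

    from dataclasses import dataclass, field

    @dataclass
    class Client:
        name: str
        # Channels the client can currently use; as ALS progresses this
        # might shrink from {"speech", "touch"} to just {"bci"}.
        interfaces: set = field(default_factory=lambda: {"speech"})

    class Companion:
        """Hypothetical intermediary between a client, caregivers, and a
        smart home, choosing a channel the client can actually use."""

        def __init__(self, client: Client):
            self.client = client
            self.caregivers = []

        def deliver_to_client(self, message: str) -> str:
            # Prefer speech, fall back to a BCI display, then plain text.
            if "speech" in self.client.interfaces:
                return f"[spoken] {message}"
            if "bci" in self.client.interfaces:
                return f"[BCI display] {message}"
            return f"[text] {message}"

        def relay_from_client(self, utterance: str) -> None:
            # Forward the client's request to every registered caregiver.
            for caregiver in self.caregivers:
                print(f"to {caregiver}: {self.client.name} says: {utterance}")

    client = Client("A.", interfaces={"bci"})
    companion = Companion(client)
    companion.caregivers.append("nurse@clinic")
    print(companion.deliver_to_client("Medication reminder: 2 pm"))
    companion.relay_from_client("Please raise the room temperature")

The point of this kind of design is that caregivers and the smart home talk only to the companion, which absorbs changes in the client's abilities without the other parties needing to know the details.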


Artificial (Emotional) Intelligence

Communications of the ACM

Anyone who has been frustrated asking questions of Siri or Alexa, and then annoyed at the digital assistant's tone-deaf responses, knows how dumb these supposedly intelligent assistants are, at least when it comes to emotional intelligence. "Even your dog knows when you're getting frustrated with it," says Rosalind Picard, director of Affective Computing Research at the Massachusetts Institute of Technology (MIT) Media Lab. "Siri doesn't yet have the intelligence of a dog," she says. Yet developing that kind of intelligence, in particular the ability to recognize human emotions and then respond appropriately, is essential to the true success of digital assistants and the many other artificial intelligences (AIs) we interact with every day. Whether we're giving voice commands to a GPS navigator, trying to get help from an automated phone support line, or working with a robot or chatbot, we need them to really understand us if we're to take these AIs seriously.


Handling Representation Changes by Autistic Reasoning

AAAI Conferences

We identify patterns of autistic reasoning under conditions that require a change in the representation of domain knowledge. The formalism of nonmonotonic default logic is used to simulate autistic decision-making while learning how to adjust an action to an environment that forces a new representation structure. Our main finding is that while autistic reasoners can process single default rules, they have a characteristic difficulty in cases involving nontrivial representation changes, where multiple default rules conflict. We evaluate our hypothesis that the skill of representation adjustment can be advanced by learning default reasoning patterns via a set of exercises.
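
To make the single-rule versus conflicting-rules distinction concrete, here is a minimal Python sketch of reasoning with normal defaults (where the justification coincides with the conclusion), using the textbook birds-and-penguins example rather than the paper's own domain or formalism:

    def neg(lit: str) -> str:
        """Negate a literal encoded as a string with a 'not_' prefix."""
        return lit[4:] if lit.startswith("not_") else "not_" + lit

    def extension(facts: set, rules: list) -> set:
        """Greedily fire normal defaults: a rule (prereq, conclusion)
        fires when its prerequisite holds and its conclusion is still
        consistent with everything concluded so far."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for prereq, conclusion in rules:
                if (prereq in known and conclusion not in known
                        and neg(conclusion) not in known):
                    known.add(conclusion)
                    changed = True
        return known

    birds_fly = ("bird", "flies")             # birds normally fly
    penguins_dont = ("penguin", "not_flies")  # penguins normally do not

    # A single applicable default fires cleanly.
    print(extension({"bird"}, [birds_fly]))
    # -> {'bird', 'flies'}

    # With conflicting defaults, the conclusion depends on firing order,
    # i.e. the theory has two extensions.
    print(extension({"bird", "penguin"}, [birds_fly, penguins_dont]))
    # -> {'bird', 'penguin', 'flies'}
    print(extension({"bird", "penguin"}, [penguins_dont, birds_fly]))
    # -> {'bird', 'penguin', 'not_flies'}

Running the sketch shows one answer containing flies and another containing not_flies depending on which default fires first; this order-dependence under a changed representation (a penguin is also a bird) is the kind of conflict the paper associates with characteristic difficulty.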


New 'moonshot challenge' at Harvard aims to make giant leap in AI

AITopics Original Links

Humanity has big hopes for artificial intelligence, but in reality machines have a long way to go to catch up with the human brain. Enter Harvard University, which has just won a $28 million grant to change all that. The grant aims to help scientists figure out why mammalian brains are so good at learning, and then design better computers accordingly. Harvard researchers will record activity in the brain's visual cortex in "unprecedented detail," map its connections and then reverse-engineer the data to inspire better computer algorithms for learning. "This is a moonshot challenge, akin to the Human Genome Project in scope," said project leader David Cox, assistant professor of molecular and cellular biology and computer science at Harvard.