How close are we to AI-automated healthcare?

#artificialintelligence

We have seen incredible progress in machine learning and artificial intelligence (AI) over the past few years, especially through the application of deep learning algorithms. AI systems will get even better as more data is collected, so faster data gathering and better data integration should lead to smarter and more useful AI systems. Recently I described a new class of system that I believe will take shape, one that leverages AI and combines it with workflow automation to improve how care is delivered; I termed this "Intelligent Clinical Decision Automation." This AI-powered automation will consume vast amounts of data and automate entire processes or workflows, learning and adapting as it goes. Some clinicians and others may be concerned that this sort of automation removes the "gut instinct" of the experienced professional from the mix, but in fact it is exactly that kind of thinking and reasoning, including unconscious reasoning, that this approach embodies.


Decoding the human brain

#artificialintelligence

CHENNAI: Google DeepMind's AlphaGo, an artificial intelligence programme developed using deep neural networks and machine learning techniques, hit global headlines last year when it beat South Korean Go grandmaster Lee Sedol to win the series 4-1. However, not many know that AlphaGo drew a whopping 30,000 watts of power to complete the task, while the human brain runs on around 20 watts! What gives the human brain such efficiency has so far proven elusive to replicate in computers. Not surprisingly, man's most defining organ is also the least understood. Although an adult human brain, weighing about 1.4 kg, is made up of close to 100 billion neurons, scientists do not know how many different kinds of human neurons exist.
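
To put those figures in perspective, here is a quick back-of-the-envelope comparison using only the wattages reported above (a toy calculation, not a rigorous energy accounting):

```python
# Rough power comparison using the figures reported in the article.
alphago_watts = 30_000  # reported power draw of the AlphaGo system
brain_watts = 20        # typical power consumption of a human brain

ratio = alphago_watts / brain_watts
print(f"AlphaGo drew roughly {ratio:,.0f}x the power of a human brain")
# prints: AlphaGo drew roughly 1,500x the power of a human brain
```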


New Theory of Intelligence May Disrupt AI and Neuroscience

#artificialintelligence

Recent advances in artificial intelligence, namely in deep learning, have borrowed concepts from the human brain. The architecture of most deep learning models is based on layers of processing: an artificial neural network inspired by the neurons of the biological brain. Yet neuroscientists do not agree on exactly what intelligence is or how it is formed in the human brain; the phenomenon remains unexplained. Jeff Hawkins, technologist, scientist, and co-founder of Numenta, presented an innovative framework for understanding how the human neocortex operates, called "The Thousand Brains Theory of Intelligence," at the Human Brain Project Summit in Maastricht, the Netherlands, in October 2018. The neocortex is the part of the human brain involved in higher-order functions such as conscious thought, spatial reasoning, language, generation of motor commands, and sensory perception.
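
To make the "layers of processing" idea concrete, here is a minimal feed-forward sketch in plain numpy; the layer sizes and ReLU activation are arbitrary illustrative choices, not anything specific to Hawkins' theory:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A minimal feed-forward "layers of processing" sketch: each layer applies
# a linear map followed by a nonlinearity, loosely analogous to a stage of
# neural processing. Sizes here are arbitrary, for illustration only.
rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 10]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = relu(x @ W)     # hidden layers: linear map + nonlinearity
    return x @ weights[-1]  # final linear readout

output = forward(rng.standard_normal(64))
print(output.shape)  # (10,)
```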


Google AI mimics human 'navigation' brain cells -- and takes shortcuts

#artificialintelligence

If you have to walk a different route to the shops, it's normally not too much of a stretch to consult your "inner satnav" and chart a new course. That's because the human brain has a range of built-in mechanisms that help you find your way. But the underlying brain computation that goes into even simple navigation, such as planning the most direct route between points A and B, remains pretty murky. A team from Google DeepMind and University College London in the United Kingdom has trained a form of artificial intelligence to traverse a virtual environment from one point to another. The computer program, described in the journal Nature today, developed "neurons" similar to "grid cells", the brain cells found in mammals that bestow navigation skills.
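
The underlying skill here is path integration: tracking where you are by accumulating your own movements. Below is a minimal dead-reckoning sketch in numpy; the random-walk setup is an illustrative assumption, not the paper's actual training environment:

```python
import numpy as np

# Minimal path-integration (dead reckoning) demo: estimate position by
# summing self-motion (velocity) inputs over time. This toy setup is
# illustrative only, not the DeepMind agent's virtual environment.
rng = np.random.default_rng(42)
n_steps = 1000
velocities = rng.normal(scale=0.1, size=(n_steps, 2))  # (vx, vy) per step

true_path = np.cumsum(velocities, axis=0)

# An agent integrating slightly noisy self-motion estimates drifts over
# time, which is why brains are thought to correct the estimate with
# landmarks and grid-cell-like representations.
noisy_velocities = velocities + rng.normal(scale=0.01, size=velocities.shape)
estimated_path = np.cumsum(noisy_velocities, axis=0)

drift = np.linalg.norm(true_path[-1] - estimated_path[-1])
print(f"positional drift after {n_steps} steps: {drift:.2f}")
```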


Correlations strike back (again): the case of associative memory retrieval

Neural Information Processing Systems

It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate.
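
For context, the classic auto-associative setup behind this abstract is a Hopfield-style network: Hebbian (activity-dependent) learning stores patterns in the synaptic weights, and recall iterates simple threshold dynamics that implicitly treat those weights as independent. The sketch below shows the standard textbook dynamics, not the correlation-aware dynamics the paper derives:

```python
import numpy as np

# Minimal Hopfield-style auto-associative memory. Hebbian storage induces
# statistical dependencies between weights; the standard recall dynamics
# below ignore those dependencies (the paper derives dynamics that do not).
rng = np.random.default_rng(0)
n_units, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian learning rule, with self-connections zeroed out.
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)

def recall(cue, n_iters=20):
    """Standard recall: repeatedly apply sign(W @ x)."""
    x = cue.astype(float)
    for _ in range(n_iters):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return x

# Cue with a corrupted version of the first stored pattern.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=15, replace=False)
cue[flip] *= -1

retrieved = recall(cue)
overlap = (retrieved @ patterns[0]) / n_units
print(f"overlap with stored pattern: {overlap:.2f}")  # ~1.0 means recovered
```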