Across Time and Space: Machine Learning Reveals Paths to Dementia

#artificialintelligence

Researchers face a challenge in understanding how the brain changes over the long course of Alzheimer's disease. It isn't possible to track neurodegeneration continuously in individual people for up to 30 years, so scientists instead collect snapshots from different people at all stages of the disease. Now, using advanced computational approaches and a massive trove of MRI brain volume data, scientists have stitched together a series of these snapshots. In doing so, they identified disease subtypes with distinct progression patterns in people with Alzheimer's disease or with mutations that cause frontotemporal dementia (FTD) and amyotrophic lateral sclerosis (ALS). They dubbed their method SuStaIn, for Subtype and Stage Inference.
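
To make the idea concrete, here is a deliberately simplified Python sketch, not the published SuStaIn code: each cross-sectional snapshot is assigned a stage (how far abnormality has spread across biomarkers) and a subtype (which pattern of spread it matches). All thresholds, group sizes, and orderings below are illustrative assumptions.

```python
# Toy illustration (not the actual SuStaIn algorithm): stage each subject
# by how many biomarkers have crossed an abnormality threshold, then
# cluster the binary abnormality patterns to recover subtypes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_subjects, n_biomarkers = 300, 5
orderings = [np.array([0, 1, 2, 3, 4]),   # hypothetical subtype A ordering
             np.array([4, 3, 2, 1, 0])]   # hypothetical subtype B (reversed)

# Simulate one cross-sectional snapshot per subject.
z = np.zeros((n_subjects, n_biomarkers))
for i in range(n_subjects):
    order = orderings[rng.integers(2)]
    stage = rng.integers(n_biomarkers + 1)             # unknown true stage
    z[i, order[:stage]] = rng.normal(3.0, 0.5, stage)  # abnormal markers
    z[i] += rng.normal(0.0, 0.3, n_biomarkers)         # measurement noise

abnormal = z > 1.5                                  # threshold in z-score units
stages = abnormal.sum(axis=1)                       # inferred stage per subject
subtypes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(abnormal)

print("stage histogram:", np.bincount(stages, minlength=n_biomarkers + 1))
print("subtype sizes:", np.bincount(subtypes))
```

The real method infers subtype orderings and stages jointly and probabilistically rather than with a hard threshold and k-means, but the toy version shows why purely cross-sectional snapshots can still reveal distinct progression paths.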


I'm worried Artificial Intelligence could make us stupid

#artificialintelligence

Once upon a time, if I wanted to find my way somewhere unfamiliar, I would have pulled out a map and plotted my route. These days I just put the destination into my smartphone and let it make all the decisions. Is this a simple, practical thing to do, or, by relying on increasingly smart phones, are we allowing them to make us, day by day, a little bit dumber? I've spent the last few days at an international conference on artificial intelligence pondering just this question. We were discussing, among other things, the effect that the rise of machine intelligence is having on our brains.


Scientists grow functioning human neural networks in 3-D from stem cells

#artificialintelligence

A team of Tufts University-led researchers has developed three-dimensional (3-D) human tissue culture models for the central nervous system that mimic structural and functional features of the brain and demonstrate neural activity sustained over a period of many months. With the ability to populate a 3-D matrix of silk protein and collagen with cells from patients with Alzheimer's disease, Parkinson's disease, and other conditions, the tissue models allow for the exploration of cell interactions, disease progression, and response to treatment. The development and characterization of the models are reported today in ACS Biomaterials Science & Engineering, a journal of the American Chemical Society. The new 3-D brain tissue models overcome a key challenge of previous models: the availability of human source neurons. Neurological tissues are rarely removed from healthy patients and are usually available only post-mortem from diseased patients.


AI could spot Alzheimer's in MRI scans up to a decade before symptoms show

#artificialintelligence

Artificial intelligence can be trained to spot structural changes in the brain linked to Alzheimer's disease nearly 10 years before doctors can diagnose it from symptoms, researchers claim. According to New Scientist, a team at the University of Bari in Italy has developed a machine learning algorithm that can spot alterations in how different regions of the brain are connected – alterations that could be early signs of the disease. Their algorithm was trained using MRI scans from 67 patients, 38 of them from people affected by the disease and 29 from healthy controls. The scans came from the Alzheimer's Disease Neuroimaging Initiative database at the University of Southern California in Los Angeles. The AI was trained to distinguish diseased from healthy brains, then tested for accuracy on a second set of 148 scans – 52 of which were healthy, 48 had Alzheimer's, and the remaining 48 showed a mild cognitive impairment known to develop into Alzheimer's within 10 years.
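
The article describes a standard supervised-learning workflow: extract connectivity features from each scan, train a classifier on labeled examples, then measure accuracy on held-out scans. The sketch below mirrors that workflow with the study's group sizes, but with stand-in random features and a generic linear SVM; it is an assumption-laden illustration, not the Bari team's pipeline, and it omits the mild-cognitive-impairment scans the study also evaluated.

```python
# Generic sketch of the pipeline described above: train a classifier on
# per-subject connectivity features derived from MRI, then evaluate on a
# held-out set. Feature extraction is stubbed out with random data; in
# practice each row would hold region-to-region connectivity strengths.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_regions = 90                                 # e.g. an AAL-style parcellation
n_features = n_regions * (n_regions - 1) // 2  # upper-triangle connectivity values

def fake_connectivity(n, shift):
    """Stand-in for real MRI-derived connectivity vectors."""
    return rng.normal(shift, 1.0, size=(n, n_features))

# Mirror the study's training split: 38 Alzheimer's scans, 29 healthy.
X_train = np.vstack([fake_connectivity(38, 0.1), fake_connectivity(29, -0.1)])
y_train = np.array([1] * 38 + [0] * 29)        # 1 = Alzheimer's, 0 = healthy
# Held-out test scans (48 Alzheimer's, 52 healthy; MCI scans omitted here).
X_test = np.vstack([fake_connectivity(48, 0.1), fake_connectivity(52, -0.1)])
y_test = np.array([1] * 48 + [0] * 52)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```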


Is Humanity Overrated?

#artificialintelligence

As the compassion and perspective of the human mind clash with the zeros and ones of artificial intelligence, we find ourselves questioning some of the most basic aspects of clinical care and even of humanity itself. We stand at an inflection point in human history and marvel at the brightness of the looming singularity. Well, at least some of us do. Others, perhaps grounded in a more practical world, look at technology as an extension of humanity and frame the advances of artificial intelligence less as AI and more as IA -- intelligence augmented. I would bet the consensus favors the latter.


Before artificial intelligence we need to understand human intelligence

#artificialintelligence

In the global race to build artificial intelligence, it was a missed opportunity. Jeff Hawkins, a Silicon Valley veteran who has spent the last decade exploring the mysteries of the human brain, arranged a meeting with DeepMind, the world's leading AI lab. Scientists at DeepMind, which is owned by Google's parent company, Alphabet, want to build machines that can do anything the brain can do. Hawkins runs a small company with a single goal: figure out how the brain works, then reverse engineer it. The meeting, which had been set for April at DeepMind's offices in London, never happened.


Electrical properties of dendrites help explain our brain's unique computing power

MIT News

Neurons in the human brain receive electrical signals from thousands of other cells, and long neural extensions called dendrites play a critical role in integrating all of that information so the cells can respond appropriately. Using hard-to-obtain samples of human brain tissue, MIT neuroscientists have now discovered that human dendrites have different electrical properties from those of other species. Their studies reveal that electrical signals weaken more as they travel along human dendrites than along those of other species, resulting in a higher degree of electrical compartmentalization, meaning that small sections of dendrite can behave independently from the rest of the neuron. These differences may contribute to the enhanced computing power of the human brain, the researchers say. "It's not just that humans are smart because we have more neurons and a larger cortex. From the bottom up, neurons behave differently," says Mark Harnett, the Fred and Carole Middleton Career Development Assistant Professor of Brain and Cognitive Sciences.
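
The "weakening" the researchers describe can be pictured with textbook passive cable theory: in a uniform passive dendrite, a steady-state voltage decays exponentially with distance, V(x) = V0 · exp(-x/λ), where λ is the membrane length constant. The short sketch below uses hypothetical length constants, not the study's measurements, to show how a smaller λ produces stronger attenuation and hence more electrical compartmentalization.

```python
# Illustrative sketch of passive cable attenuation (standard cable theory,
# with made-up numbers): a steady voltage in an infinite passive cable
# decays as V(x) = V0 * exp(-x / lam), where lam is the length constant.
# A shorter length constant means signals fade faster, so distal dendritic
# sections become electrically compartmentalized.
import math

def attenuation(distance_um: float, lam_um: float) -> float:
    """Fraction of the input voltage remaining at `distance_um`."""
    return math.exp(-distance_um / lam_um)

# Hypothetical length constants chosen only to illustrate the contrast.
for label, lam in [("weak compartmentalization", 600.0),
                   ("strong compartmentalization", 300.0)]:
    print(f"{label}: {attenuation(500.0, lam):.2f} of signal left at 500 um")
```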


Why Doesn't Ancient Fiction Talk About Feelings? - Issue 65: In Plain Sight

Nautilus

Reading medieval literature, it's hard not to be impressed with how much the characters get done--as when we read about King Harold doing battle in one of the Sagas of the Icelanders, written in about 1230. The first sentence bristles with purposeful action: "King Harold proclaimed a general levy, and gathered a fleet, summoning his forces far and wide through the land." By the end of the third paragraph, the king has launched his fleet against a rebel army, fought numerous battles involving "much slaughter in either host," bound up the wounds of his men, dispensed rewards to the loyal, and "was supreme over all Norway." What the saga doesn't tell us is how Harold felt about any of this, whether his drive to conquer was fueled by a tyrannical father's barely concealed contempt, or whether his legacy ultimately surpassed or fell short of his deepest hopes. By contrast, in David Foster Wallace's short story "Forever Overhead," the 13-year-old protagonist takes 12 pages to walk across the deck of a public swimming pool, wait in line at the high diving board, climb the ladder, and prepare to jump.


Boeing creates unit to focus on super-computing that mimics the brain, hack-proof communications

The Japan Times

CHICAGO – Boeing Co. is creating a new unit to focus on technology that's seemingly straight out of science fiction, including super-fast computing that mimics the synapses of the human brain and hack-proof communications links based on applied quantum physics. So-called neuromorphic processing and quantum communications, two of the futuristic technologies Boeing wants to explore, may seem an odd fit for the world's largest plane-maker. But such concepts increasingly form the core of aerospace innovation, like the networks that may one day manage millions of airborne drones, said Greg Hyslop, Boeing's chief technology officer. The technology being developed around advanced computing and sensors is going to have a "profound impact" on Boeing, Hyslop said in an interview Wednesday. "We thought it's time to do this."


Brain-inspired algorithm helps AI systems multitask and remember

#artificialintelligence

Behind most of today's artificial intelligence technologies, from self-driving cars to facial recognition and virtual assistants, lie artificial neural networks. Though based loosely on the way neurons communicate in the brain, these "deep learning" systems remain incapable of many basic functions that would be essential for primates and other organisms. However, a new study from University of Chicago neuroscientists found that adapting a well-known brain mechanism can dramatically improve the ability of artificial neural networks to learn multiple tasks and avoid the persistent AI challenge of "catastrophic forgetting." The study, published in Proceedings of the National Academy of Sciences, provides a unique example of how neuroscience research can inform new computer science strategies, and, conversely, how AI technology can help scientists better understand the human brain. When combined with previously reported methods for stabilizing synaptic connections in artificial neural networks, the new algorithm allowed a single artificial neural network to learn and perform hundreds of tasks with only minimal loss of accuracy, potentially enabling more powerful and efficient AI technologies.
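
One plausible reading of such a brain-inspired mechanism is context-dependent gating: for each task, only a small, fixed subset of hidden units is allowed to be active, so different tasks occupy largely non-overlapping sub-networks and interfere less with each other's weights. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' published code, and it omits the synaptic-stabilization methods the study combines it with.

```python
# Minimal sketch of context-dependent gating: for each task, a fixed
# random binary mask silences most hidden units, so tasks use mostly
# disjoint sub-networks and overwrite each other's weights less.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out, n_tasks = 10, 100, 2, 5
keep_fraction = 0.2   # fraction of hidden units left active per task

W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_out))

# One fixed binary gate per task, drawn once and reused for its lifetime.
gates = (rng.random((n_tasks, n_hidden)) < keep_fraction).astype(float)

def forward(x: np.ndarray, task_id: int) -> np.ndarray:
    """Forward pass with the task's gate applied to the hidden layer."""
    h = np.maximum(0.0, x @ W1) * gates[task_id]   # ReLU, then gate
    return h @ W2

x = rng.normal(size=n_in)
print("task 0 output:", forward(x, 0))
print("task 1 output:", forward(x, 1))
print("hidden-unit overlap between tasks 0 and 1:",
      int((gates[0] * gates[1]).sum()), "of", n_hidden)
```

With a 20% keep fraction, two tasks share only about 4% of hidden units in expectation, which is the intuition behind why gradient updates for one task disturb another task's solution far less.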