New research in Scientific Reports from Washington University shows that treating brain activity as a network, rather than as a set of individual electroencephalography (EEG) readings, enables more accurate real-time identification of epileptic seizures. The study, which combines machine learning with systems theory, was led by author Walter Bomela. "Our technique allows us to get raw data, process it and extract a feature that's more informative for the machine learning model to use," Bomela stated in a news release. "The major advantage of our approach is to fuse signals from 23 electrodes to one parameter that can be efficiently processed with much less computing resources." As the researchers explain, an EEG reveals epileptic seizures as irregular brain activity, in the form of spikes and waves in the measured electrical output.
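The paper's actual network parameter comes from its systems-theoretic model, which the article does not detail. As an illustrative stand-in only, the sketch below fuses a simulated 23-channel EEG window into a single network-level number (mean absolute pairwise correlation) that a lightweight classifier could threshold; all signals and values here are invented.

```python
import numpy as np

def network_feature(eeg):
    """Fuse a multi-channel EEG window into one network-level parameter.

    eeg: array of shape (n_channels, n_samples).
    Returns the mean absolute pairwise correlation across channels,
    a simple proxy for network synchrony (not the paper's parameter).
    """
    corr = np.corrcoef(eeg)                  # (23, 23) correlation matrix
    iu = np.triu_indices_from(corr, k=1)     # upper triangle, no diagonal
    return float(np.mean(np.abs(corr[iu])))

rng = np.random.default_rng(0)
n_ch, n_s = 23, 512

# Baseline: 23 independent noise channels, so low synchrony.
baseline = rng.standard_normal((n_ch, n_s))

# Seizure-like: one shared oscillation drives every channel, so high synchrony.
t = np.arange(n_s) / 256.0
shared = np.sin(2 * np.pi * 4 * t)           # 4 Hz spike-and-wave stand-in
seizure = 0.3 * rng.standard_normal((n_ch, n_s)) + shared

print(network_feature(baseline) < network_feature(seizure))  # True
```

The point of the fusion is the shape of the data: 23 channels collapse to one scalar per window, which is far cheaper for a real-time model to consume than the raw multi-channel stream.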
By now, it's almost old news that artificial intelligence (AI) will have a transformative role in medicine. Algorithms have the potential to work tirelessly, at faster rates and now with potentially greater accuracy than clinicians. In 2016, it was predicted that 'machine learning will displace much of the work of radiologists and anatomical pathologists'. In the same year, a University of Toronto professor controversially announced that 'we should stop training radiologists now'. But is it really the beginning of the end for some medical specialties?
The Helmholtz International BigBrain Analytics and Learning Laboratory (HIBALL) is a collaboration between McGill University and Forschungszentrum Jülich to develop next-generation high-resolution human brain models using cutting-edge machine learning and deep learning methods and high-performance computing. HIBALL is based on the high-resolution BigBrain model first published by the Jülich and McGill teams in 2013. Over the next five years, the lab will be funded with a total of up to six million euros by the German Helmholtz Association, Forschungszentrum Jülich, and Healthy Brains, Healthy Lives at McGill University. In 2003, when Jülich neuroscientist Katrin Amunts and her Canadian colleague Alan Evans began scanning 7,404 histological sections of a human brain, it was completely unclear whether it would ever be possible to reconstruct this brain on the computer in three dimensions. At the time, the technology to cope with such a huge amount of data simply did not exist.
A new mass discovered in the CNS is a common reason for referral to a neurosurgeon. CNS masses are typically discovered on MRI or computed tomography (CT) scans after a patient presents with new neurologic symptoms. Presenting symptoms depend on the location of the tumor and can include headaches, seizures, difficulty expressing or comprehending language, weakness affecting extremities, sensory changes, bowel or bladder dysfunction, gait and balance changes, vision changes, hearing loss and endocrine dysfunction. A mass in the CNS has a broad differential diagnosis, including tumor, infection, inflammatory or demyelinating process, infarct, hemorrhage, vascular malformation and radiation treatment effect. The most likely diagnoses can be narrowed based on patient demographics, medical history, imaging characteristics and adjunctive laboratory studies. However, accurate histopathologic interpretation of tissue obtained at the time of surgery is frequently required to make a diagnosis and guide intraoperative decision making. Over half of CNS tumors in adults are metastases from systemic cancer originating elsewhere in the body. An estimated 9.6% of adults with lung cancer, melanoma, breast cancer, renal cell carcinoma and colorectal cancer have brain metastases.
In what is somehow the cutest science story of the new year so far, scientists at the University of Washington have announced a new artificial intelligence system for decoding mouse squeaks. Dubbed DeepSqueak, the software program can analyze rodent vocalizations and then pattern-match the audio to behaviors observed in laboratory settings. As such, the software can be used to partially decode the language of mice and other rodents. The researchers hope the technology will be helpful in a broad range of medical and psychological studies. Published this week in the journal Neuropsychopharmacology, the study is based on a novel use of sonogram technology, which transforms an audio signal into an image or series of graphs.
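A sonogram is essentially a short-time Fourier transform: slice the audio into overlapping windows and FFT each one, producing a time-frequency image that a vision-style detector can scan. The sketch below builds one for a toy synthetic "squeak" (a 60 to 80 kHz chirp, invented for illustration; DeepSqueak's own pipeline and parameters are not described in the article).

```python
import numpy as np

def sonogram(signal, fs, win=1024, hop=512):
    """Short-time FFT: window the audio, FFT each slice, and return
    the frequency axis plus a (frames x freqs) power image."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(win, d=1 / fs)
    return freqs, power

fs = 250_000  # mouse ultrasonic calls sit far above human hearing
t = np.arange(0, 0.1, 1 / fs)

# Toy squeak: a chirp sweeping from 60 kHz up to 80 kHz.
f0, f1 = 60_000, 80_000
squeak = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1])))

freqs, power = sonogram(squeak, fs)

# The loudest bin in each time slice should track the upward sweep.
peak_freqs = freqs[np.argmax(power, axis=1)]
print(peak_freqs.min() >= 55_000 and peak_freqs.max() <= 85_000)  # True
```

Once the call is an image, standard image-recognition machinery can find and classify it, which is the trick that makes the audio problem tractable.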
Thanks to open banking, fintech early adopters likely already have accounts that round up transactions to boost savings, or that connect to third-party tools for loan applications, budget management and more. But a new wave of fintech startups is proving there's much more that can be done with open banking, the two-year-old mandate from UK regulators requiring banks to let their customers easily share their data with third parties such as apps. "Open banking offers people the chance to get personalised, tailored support to help them manage their money by allowing regulated companies to securely analyse their bank data," says Lubaina Manji, senior programme manager at Nesta Challenges, one of the organisations behind the Open Up 2020 Challenge, alongside the Open Banking Implementation Entity (OBIE). "It's enabled the creation of new services and tools to help people with every aspect of money management – from budgeting to investing, and much, much more, all in a safe and secure way." Some of the innovations from finalists in the Open Up 2020 Challenge have surprised with their ingenuity and customer focus, she says, citing Sustainably's round-up tool for automated charity donations and Kalgera's neuroscience-informed AI for spotting fraud that targets people with dementia, two projects that highlight the purpose-driven idea behind open banking and its aim of getting financial support to those who need it most.
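The round-up mechanic mentioned above is simple to state precisely: round each card transaction up to the next whole pound and set the spare change aside. A minimal sketch, illustrative only and not any particular app's implementation:

```python
from decimal import Decimal

def round_up_savings(transactions):
    """Sum the spare change from rounding each transaction up to the
    next whole pound, the mechanic behind round-up savings tools."""
    total = Decimal("0")
    for amount in transactions:
        amt = Decimal(str(amount))
        # Distance up to the next pound; exact-pound amounts add nothing.
        spare = (Decimal("1") - (amt % Decimal("1"))) % Decimal("1")
        total += spare
    return total

print(round_up_savings(["2.40", "3.99", "5.00"]))  # 0.61
```

Using `Decimal` rather than floats matters for money: 2.40 plus 3.99 stays exact instead of accumulating binary rounding error.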
UNIGE scientists have developed a neuro-computational model that helps explain how the brain identifies syllables in natural speech. The model uses the equivalent of the neuronal oscillations produced by brain activity to process the continuous sound flow of connected speech. It operates according to a theory known as predictive coding, whereby the brain optimizes perception by constantly trying to predict sensory signals based on candidate hypotheses (syllables, in this model).
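Predictive coding as described can be caricatured in a few lines: hold a belief over candidate syllables, predict the incoming signal from that belief, and adjust the belief to shrink the prediction error. The syllable templates and update rule below are invented for illustration and are far simpler than the UNIGE model's coupled neuronal oscillations.

```python
import numpy as np

# Hypothetical syllable templates: each candidate predicts a short
# spectral pattern (numbers invented for illustration).
templates = {
    "ba": np.array([1.0, 0.2, 0.1, 0.0]),
    "da": np.array([0.1, 1.0, 0.3, 0.0]),
    "ga": np.array([0.0, 0.2, 1.0, 0.4]),
}

def recognize(signal, templates, steps=20, lr=0.5):
    """Predictive-coding sketch: predict the input as a belief-weighted
    mix of templates, then nudge the belief to reduce prediction error."""
    names = list(templates)
    T = np.stack([templates[n] for n in names])   # (n_syllables, n_features)
    belief = np.full(len(names), 1.0 / len(names))
    for _ in range(steps):
        prediction = belief @ T                   # top-down prediction
        error = signal - prediction               # bottom-up prediction error
        belief += lr * (T @ error)                # move belief to cut the error
        belief = np.clip(belief, 1e-9, None)
        belief /= belief.sum()                    # keep it a probability vector
    return names[int(np.argmax(belief))], belief

syllable, belief = recognize(np.array([0.1, 0.95, 0.3, 0.05]), templates)
print(syllable)  # prints "da"
```

The loop captures the theory's core claim: perception settles on whichever hypothesis makes the sensory signal least surprising, rather than decoding the signal from scratch.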
As a health reporter, I've not only built a career on the idea that knowledge is power; I also apply it to my own life. Knowing the science behind the best healthy foods informs not only what I write about but also what I eat. And if there's a most effective way to do a crunch, I want to know that, too. But for me, this way of thinking has always stopped at Alzheimer's disease, a progressive mental deterioration that can occur in middle or old age, due to generalized degeneration of the brain, and affects an estimated 5.8 million Americans. In my mind, Alzheimer's was a chronic, progressive disease with no cure.
In the summer of 2009, the Israeli neuroscientist Henry Markram walked onto the TED stage in Oxford, England, and made an immodest proposal: within a decade, he and his colleagues would build a complete simulation of the human brain inside a supercomputer. For years they had already been mapping the cells of the neocortex, the supposed seat of thought and perception. "It's a bit like going and cataloging a piece of the rainforest," Markram explained. "How many trees does it have? What shapes are the trees?" His team would now assemble a virtual silicon rainforest from which, they hoped, artificial intelligence would evolve organically.