
How AI can help predict Alzheimer's disease progression

#artificialintelligence

Paul De Sousa, head of life sciences at Massive Analytic and former researcher at Edinburgh University, writes about a study using artificial precognition AI to analyse the results of protein biomarker tests associated with Alzheimer's disease progression. Accounting for over 30 million Disability Adjusted Life Years worldwide, Alzheimer's disease (AD) is a global societal challenge and a threat to healthcare systems around the world. A long history of failed AD drug trials has highlighted the need for early detection and diagnosis, so that patients and clinicians can implement the life adjustments or medical interventions best suited to altering the course of the disease and personalising the care of those at risk. Biomarkers are measurable indicators of the biological conditions of health, on which disease prognosis and diagnosis are founded. In AD, a range of diagnostic procedures can detect these biomarkers, including cerebrospinal fluid (CSF) tests and PET scans for markers of amyloid-β and tau, which can accurately detect AD pathology; however, their cost and invasive nature preclude the broad accessibility required for early detection.


Machine learning of EEG spectra classifies unconsciousness during GABAergic anesthesia - PubMed

#artificialintelligence

Drug- and patient-specific electroencephalographic (EEG) signatures of anesthesia-induced unconsciousness have been identified previously. We applied machine learning approaches to construct classification models for real-time tracking of unconscious state during anesthesia-induced unconsciousness. We used cross-validation to select and train the best-performing models, using 33,159 2 s segments of EEG data recorded from 7 healthy volunteers who received increasing infusions of propofol while responding to stimuli to directly assess unconsciousness. Cross-validated models of unconsciousness performed very well when tested on 13,929 2 s EEG segments from 3 left-out volunteers collected under the same conditions (median volunteer AUCs 0.99-0.99). The models showed strong generalization when tested on a cohort of 27 surgical patients receiving solely propofol, collected in a separate clinical dataset under different circumstances and using different hardware (median patient AUCs 0.95-0.98).
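The model-selection step described above can be sketched in miniature. This is a hedged illustration, not the study's actual pipeline: the "band-power" features and class separation below are synthetic stand-ins for the 2 s EEG segments, chosen only to show cross-validated AUC scoring.

```python
# Toy sketch of cross-validated classification of conscious vs. unconscious
# EEG segments. All data here are synthetic; only the workflow (k-fold
# cross-validation scored by AUC) mirrors the study's model selection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_segments, n_bands = 600, 8                 # hypothetical spectral features
X_conscious = rng.normal(0.0, 1.0, (n_segments // 2, n_bands))
X_unconscious = rng.normal(0.8, 1.0, (n_segments // 2, n_bands))  # shifted spectra
X = np.vstack([X_conscious, X_unconscious])
y = np.array([0] * (n_segments // 2) + [1] * (n_segments // 2))

# 5-fold cross-validated AUC, analogous to selecting the best-performing model
clf = LogisticRegression(max_iter=1000)
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(aucs.mean())
```

A real pipeline would extract spectral features from raw EEG and hold out whole volunteers (as the study did with its 3 left-out subjects) rather than shuffling segments across subjects.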


Submit Abstract - Pathology Utilitarian Conference

#artificialintelligence

Do you have the latest findings on Pathology or Digital Pathology? Submit your papers today to enrol and participate in the Pathology Utilitarian Conference & Digital Pathology Meetings.


Artificial intelligence to monitor water quality more effectively

#artificialintelligence

Environmental protection agencies and industry bodies currently monitor the 'trophic state' of water -- its biological productivity -- as an indicator of ecosystem health. An overabundance of microscopic algae, or phytoplankton, is known as eutrophication and can develop into harmful algal blooms (HABs), an indicator of pollution that poses risks to human and animal health. HABs are estimated to cost the Scottish shellfish industry £1.4 million per year, and a single HAB event in Norway killed eight million salmon in 2019, with a direct value of over £74 million. Lead author Mortimer Werther, a PhD Researcher in Biological and Environmental Sciences at Stirling's Faculty of Natural Sciences, said: "Currently, satellite-mounted sensors, such as the Ocean and Land Colour Instrument (OLCI), measure phytoplankton concentrations using an optical pigment called chlorophyll-a. However, retrieving chlorophyll-a across the diverse nature of global waters is methodologically challenging. We have developed a method that bypasses the chlorophyll-a retrieval and enables us to estimate water health status directly from the signal measured at the remote sensor." Eutrophication and hypereutrophication are often caused by excessive nutrient input, for example from agricultural practices, waste discharge, or food and energy production. In impacted waters, HABs are common, and cyanobacteria may produce cyanotoxins that affect human and animal health. In many locations, these blooms are of concern to the finfish and shellfish aquaculture industries. Mr Werther said: "To understand the impact of climate change on freshwater aquatic environments such as lakes, many of which serve as drinking water resources, it is essential that we monitor and assess key environmental indicators, such as trophic status, on a global scale with high spatial and temporal frequency."
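The idea of skipping the chlorophyll-a retrieval and classifying trophic state straight from the measured signal can be sketched as follows. This is not the Stirling team's method; the band count, class labels, and spectra below are all hypothetical, illustrating only the direct signal-to-class mapping.

```python
# Hedged sketch: map remote-sensing reflectance spectra directly to a
# trophic-state class, with no intermediate chlorophyll-a estimate.
# All spectra and class structure here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
classes = ["oligotrophic", "mesotrophic", "eutrophic"]
n_per_class, n_bands = 100, 11           # e.g. 11 visible/NIR sensor bands

# Each class gets a slightly different synthetic spectral shape
X = np.vstack([rng.normal(loc=i * 0.5, scale=1.0, size=(n_per_class, n_bands))
               for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), n_per_class)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = model.predict(X[:1])              # classify one spectrum directly
print(classes[pred[0]])
```

The design point is that the classifier never produces a pigment concentration; the spectrum itself is the input and the trophic class is the output, which sidesteps the retrieval step the article calls methodologically challenging.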


Machine learning model generates realistic seismic waveforms

#artificialintelligence

LOS ALAMOS, N.M., April 22, 2021--A new machine-learning model that generates realistic seismic waveforms will reduce manual labor and improve earthquake detection, according to a study published recently in JGR Solid Earth. "To verify the efficacy of our generative model, we applied it to seismic field data collected in Oklahoma," said Youzuo Lin, a computational scientist in Los Alamos National Laboratory's Geophysics group and principal investigator of the project. "Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms." Quickly and accurately detecting earthquakes can be a challenging task. Visual detection done by people has long been considered the gold standard, but requires intensive manual labor that scales poorly to large data sets.
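The augmentation idea behind the study can be illustrated with a toy example. This is not the Los Alamos generative model: the "generated" traces below come from a hand-written recipe, standing in for synthetic waveforms used to enlarge a detector's training set.

```python
# Toy illustration of augmenting earthquake-detection training data with
# generated waveforms. The decaying oscillation is a crude, invented
# stand-in for a seismic event trace; all numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 100)

def event(n):
    # decaying oscillation plus noise, a crude stand-in for a quake arrival
    return np.sin(40 * t) * np.exp(-3 * t) + rng.normal(0, 0.3, (n, t.size))

def noise(n):
    # background-noise traces with no event
    return rng.normal(0, 0.3, (n, t.size))

# Small "real" set (20 + 20 traces) enlarged with "generated" traces (80 + 80)
X = np.vstack([event(20), noise(20), event(80), noise(80)])
y = np.array([1] * 20 + [0] * 20 + [1] * 80 + [0] * 80)

detector = LogisticRegression(max_iter=1000).fit(X, y)
print(detector.score(X, y))
```

In the study the synthetic waveforms come from a learned generative model rather than a fixed recipe, which is what lets them capture the variability of real field data.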


Yale Study Shows Limitations of Applying Artificial Intelligence to Registry Databases

#artificialintelligence

Artificial intelligence will play a pivotal role in the future of health care, medical experts say, but so far, the industry has been unable to fully leverage this tool. A Yale study has illuminated the limitations of these analytics when applied to traditional medical databases -- suggesting that the key to unlocking their value may be in the way datasets are prepared. Machine learning techniques are well-suited for processing complex, high-dimensional data or identifying nonlinear patterns, which provide researchers and clinicians with a framework to generate new insights. But the study suggests that achieving the potential of artificial intelligence will require improving the data quality of electronic health records (EHR). "Our study found that advanced methods that have revolutionized predictions outside healthcare did not meaningfully improve prediction of mortality in a large national registry. These registries that rely on manually abstracted data within a restricted number of fields may, therefore, not be capturing many patient features that have implications for their outcomes," said Rohan Khera, MD, MS, the first author of the new study published in JAMA Cardiology.


Alzheimer's has four distinct types, scientists find using machine learning

#artificialintelligence

When it comes to neurodegenerative diseases, Alzheimer's disease is considered one of the worst. It causes dementia--an irreversible decline in thinking, memory, and the ability to perform simple everyday tasks--and accounts for around 60 to 70 percent of dementia cases. Now, researchers have suggested that the disease can be divided into four distinct subtypes, potentially opening the door to individualized treatment for sufferers. In an international study, scientists showed that tau--proteins found in neurons that are associated with neurodegenerative conditions--spreads through the brain in four distinct patterns, leading to varied symptoms and outcomes among affected individuals. The authors leveraged machine learning (ML) to distinguish between the different subtypes.
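Using ML to separate distinct spreading patterns can be sketched as a clustering problem. This is a hedged toy, not the study's actual model: the "regional tau burden" vectors and the number of regions below are invented, and only the idea of recovering four subtypes from the data is kept.

```python
# Toy sketch: cluster synthetic regional tau-burden vectors into four
# subtypes. All values are made up; only the subtype-discovery idea is
# analogous to the study.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_regions = 6                                 # hypothetical brain regions
centers = rng.normal(0, 4, (4, n_regions))    # four latent spreading patterns
X = np.vstack([c + rng.normal(0, 0.5, (50, n_regions)) for c in centers])

subtypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(len(set(subtypes)))
```

In practice, work like this typically fits the number of subtypes to the data rather than fixing it at four in advance, and operates on tau-PET imaging rather than ready-made feature vectors.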


Impact of Upstream Medical Image Processing on the Downstream Performance of a Head CT Triage Neural Network

#artificialintelligence

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. We developed a convolutional neural network (CNN) to triage head CTs (HCTs) and investigated the impact of upstream medical image processing on the CNN's performance. We retrospectively collected 9,776 HCTs (2001-2014) and trained a CNN to triage them as normal or abnormal.


How Machine Learning Will Eliminate Failures In Clinical Trials

#artificialintelligence

One of the hottest topics in drug development is the use of AI in clinical trials. AI is defined as a science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI's full potential will not be realized for at least another decade, but machine learning (a component of AI) is a technology that is available now. Still, judging by the amount of content and webinars I see on this topic, it seems many in the industry are still trying to understand how it will be used and why it may be a game changer when it comes to trial conduct. If you have a small amount of data from two variables, it is easy to plug that information into a spreadsheet, graph it, and look at the relationship between the variables.
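The two-variable case the author describes really is that simple, which is the point of the contrast with ML. As a hedged illustration with made-up numbers, a straight-line fit recovers the relationship directly:

```python
# Illustration of the two-variable case: with only two variables you can
# graph the data and fit a line. The dose/response values are invented.
import numpy as np

dose = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # hypothetical variable 1
response = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # hypothetical variable 2

slope, intercept = np.polyfit(dose, response, 1)  # straight-line relationship
print(round(slope, 2))
```

Machine learning earns its keep where this breaks down: hundreds of interacting trial variables that no spreadsheet graph can display at once.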


Ethics of AI: Benefits and risks of artificial intelligence

ZDNet

In 1949, at the dawn of the computer age, the French philosopher Gabriel Marcel warned of the danger of naively applying technology to solve life's problems. Life, Marcel wrote in Being and Having, cannot be fixed the way you fix a flat tire. Any fix, any technique, is itself a product of that same problematic world, and is therefore problematic, and compromised. Marcel's admonition is often summarized in a single memorable phrase: "Life is not a problem to be solved, but a mystery to be lived." Despite that warning, seventy years later, artificial intelligence is the most powerful expression yet of humans' urge to solve or improve upon human life with computers. But what are these computer systems? As Marcel would have urged, one must ask where they come from, whether they embody the very problems they would purport to solve.

Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life. That questioning is made all the more urgent because of scale. AI systems are reaching tremendous size in terms of the compute power they require, and the data they consume. And their prevalence in society, both in the scale of their deployment and the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras. At the same time, increasing scale means many aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners.

Ethical concerns range from the esoteric, such as who is the author of an AI-created work of art; to the very real and very disturbing matter of surveillance in the hands of military authorities who can use the tools with impunity to capture and kill their fellow citizens. Somewhere in the questioning is a sliver of hope that with the right guidance, AI can help solve some of the world's biggest problems.
The same technology that may propel bias can reveal bias in hiring decisions. The same technology that is a power hog can potentially contribute answers to slow or even reverse global warming. The risks of AI at the present moment arguably outweigh the benefits, but the potential benefits are large and worth pursuing. As Margaret Mitchell, formerly co-lead of Ethical AI at Google, has elegantly encapsulated, the key question is, "what could AI do to bring about a better society?" Mitchell's question would be interesting on any given day, but it comes within a context that has added urgency to the discussion. Mitchell's words come from a letter she wrote and posted on Google Drive following the departure of her co-lead, Timnit Gebru, in December.