If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
As we know by now, Alexa can play a song, order a pizza or do a quick online search. But now it can do something much more valuable: save your life. According to the results of a new proof-of-concept study, Alexa can accurately identify a specific pattern of breathing known as agonal breathing, or gasping for air, that develops in the setting of an impending cardiac arrest, when your heart stops beating. The research was published yesterday in npj Digital Medicine. The implications of this novel form of contactless AI monitoring to detect cardiac arrest are broad, and it offers the unique possibility of dispatching an ambulance to a victim who may be alone at home.
Washington: Scientists have developed a new artificial intelligence (AI) system to monitor people for cardiac arrest while they are asleep, without touching them. People experiencing cardiac arrest will suddenly become unresponsive and either stop breathing or gasp for air, a sign known as agonal breathing, said researchers at the University of Washington (UW) in the US. A new skill for a smart speaker -- like Google Home and Amazon Alexa -- or smartphone lets the device detect the gasping sound of agonal breathing and call for help. Immediate cardiopulmonary resuscitation (CPR) can double or triple someone's chance of survival, but that requires a bystander to be present. CPR is an emergency procedure that combines chest compressions, often with artificial ventilation, in an effort to manually preserve intact brain function.
In an effort to tackle in-home cardiac arrest, University of Washington researchers have devised a novel contactless system that uses smartphones or voice-based personal assistants to identify telltale breathing patterns that accompany an attack. The proof-of-concept strategy, described in an npj Digital Medicine paper published this morning, involved a supervised machine learning model called a support-vector machine that was trained for use in the bedroom, a controlled environment in which the majority of in-home cardiac arrests occur. "Sometimes reported as 'gasping' breaths, agonal respirations may hold potential as an audible diagnostic biomarker, particularly in unwitnessed cardiac arrests that occur in a private residence, the location of [two-thirds] of all [out-of-hospital cardiac arrests]," the researchers wrote. "The widespread adoption of smartphones and smart speakers (projected to be in 75% of US households by 2020) presents a unique opportunity to identify this audible biomarker and connect unwitnessed cardiac arrest victims to emergency medical services (EMS) or others who can administer cardiopulmonary resuscitation." Cross-validation analysis of the trained classifier yielded an overall sensitivity and specificity of 97.24% and 99.51%, respectively.
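The sensitivity and specificity figures reported for the classifier come straight from its confusion matrix. As a minimal sketch of how those two metrics are computed (the counts below are hypothetical, chosen only for illustration, and are not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for illustration only (not the study's data):
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=995, fp=5)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")
# → sensitivity=97.00%, specificity=99.50%
```

High specificity matters as much as sensitivity here: a smart speaker listening every night would produce unacceptable numbers of false emergency calls even at a small false-positive rate.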
One of the many remarkable things about artificial intelligence is that while we tend to think of it as something that will have a big effect in the not-too-distant future, it is already transforming people's lives in profound and powerful ways today. In factories and warehouses, AI is improving workplace safety by scanning thousands of videos to detect potential risks. In the U.S., researchers are exploring how AI can help public health organizations around the world prevent the spread of deadly diseases like Ebola, Chikungunya, and Zika by detecting the presence of pathogens in the environment and stopping transmission to humans before outbreaks can begin. I believe this is the true promise and challenge of AI – using these new technologies to create a healthier and safer world for everyone. Now that AI has given computers the ability to recognize words and images, discover patterns in complex systems and reason and learn much like people do, it is enabling our devices to behave more naturally and more responsively.
A New Jersey woman is alive because her Apple Watch alerted her to an elevated heart rate. It turned out she had fluid around her heart from a viral infection. Medical alert systems have been around for some time. Often, they're wearable devices that can detect when you fall, and alert emergency personnel if it senses you aren't responding. But what happens if you aren't wearing a device, or if you aren't experiencing any triggering signs or symptoms of a medical emergency at all?
The research was led by Justin Chan, a PhD student in the department of computer science and engineering. Almost 500,000 Americans die each year from cardiac arrest, the researchers wrote in the journal npj Digital Medicine. And the condition kills 100,000 Britons annually, according to Arrhythmia Alliance. Study author Dr Jacob Sunshine, assistant professor of anesthesiology and pain medicine, said: 'Cardiac arrests are a very common way for people to die, and right now many of them can go unwitnessed. 'Part of what makes this technology so compelling is that it could help us catch more patients in time for them to be treated.'
Machine learning, a branch of artificial intelligence, was more accurate than human medical professionals in predicting myocardial infarction (MI) or death among patients suspected of having coronary artery disease (CAD), according to an abstract presented at the 2019 International Conference on Nuclear Cardiology and Cardiac CT, held May 12-14 in Lisbon, Portugal. Physicians routinely make treatment decisions using risk scores, which are based on a few variables and are typically only moderately accurate for individual patients. Machine learning can use repetition and adjustment to exploit large quantities of data and identify complex patterns that may go unnoticed by humans. "Humans have a very hard time thinking further than three dimensions (a cube) or four dimensions (a cube through time)," said the study's lead researcher, Luis Eduardo Juarez-Orozco, MD, PhD, in a statement.
Large vessel occlusion (LVO) plays an important role in the diagnosis of acute ischemic stroke. Identifying LVO early, on admission, can significantly lower the probability of severe effects from stroke, or even save patients' lives. In this paper, we utilize both structural and imaging data from all recorded acute ischemic stroke patients in Hong Kong. A total of 300 patients (200 training, 100 testing) are used in this study. We established three hierarchical models based on demographic data, clinical data and features obtained from computerized tomography (CT) scans. The first two models are based only on demographic and clinical data; the third additionally utilizes CT imaging features obtained from a deep learning model. The optimal cutoff is determined at the maximal Youden index based on 10-fold cross-validation. With both clinical and imaging features, the Level-3 model achieved the best performance on the testing data. The sensitivity, specificity, Youden index, accuracy and area under the curve (AUC) are 0.930, 0.684, 0.614, 0.790 and 0.850, respectively.
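The cutoff-selection rule used in the abstract maximizes Youden's index, J = sensitivity + specificity − 1, over candidate thresholds. A minimal sketch of that rule (the thresholds and operating points below are hypothetical, not the paper's values, though the winning J happens to match the reported 0.614):

```python
def best_cutoff(thresholds, sensitivities, specificities):
    """Pick the threshold maximizing Youden's J = sens + spec - 1."""
    js = [se + sp - 1 for se, sp in zip(sensitivities, specificities)]
    i = max(range(len(js)), key=js.__getitem__)
    return thresholds[i], js[i]

# Hypothetical operating points for illustration:
thr, j = best_cutoff(
    thresholds=[0.2, 0.4, 0.6],
    sensitivities=[0.98, 0.93, 0.80],
    specificities=[0.50, 0.684, 0.80],
)
print(thr, round(j, 3))  # → 0.4 0.614
```

In practice the (sensitivity, specificity) pairs come from an ROC curve computed on validation folds; J rewards thresholds that balance the two error types equally.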
This paper presents an innovative and generic deep learning approach to monitoring heart conditions from ECG signals. We focus on both the detection and classification of abnormal heartbeats, known as arrhythmia. We emphasize generalization throughout the construction of a deep learning model that turns out to be effective for new, unseen patients. The novelty of our approach lies in the use of topological data analysis as the basis of our multichannel architecture, to diminish the bias due to individual differences. We show that our architecture matches the performance of state-of-the-art methods in arrhythmia detection and classification.
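Beat-level pipelines like this one typically start by locating individual heartbeats in the raw signal before any per-beat features (topological or otherwise) are extracted. As a much simpler illustration of that preprocessing step (this naive detector is an assumption for illustration, not the paper's method), R-peaks can be found as local maxima above a threshold:

```python
def detect_r_peaks(signal, threshold):
    """Naive R-peak detector: local maxima above a fixed threshold."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]):
            peaks.append(i)
    return peaks

# Synthetic toy "ECG": flat baseline with two sharp spikes.
ecg = [0.0] * 20
ecg[5], ecg[15] = 1.2, 1.1
print(detect_r_peaks(ecg, threshold=0.5))  # → [5, 15]
```

Real detectors (e.g., Pan-Tompkins-style filtering) are far more robust to noise and baseline wander; the point here is only that beats, once segmented around such peaks, become the per-patient samples a classifier must generalize across.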
Recurrent neural networks (RNNs) are commonly applied to clinical time-series data with the goal of learning patient risk stratification models. Their effectiveness is due, in part, to their use of parameter sharing over time (i.e., cells are repeated, hence the name recurrent). We hypothesize, however, that this trait also contributes to the increased difficulty such models have with learning relationships that change over time. Conditional shift, i.e., changes in the relationship between the input X and the output y, arises if the risk factors for the event of interest change over the course of a patient admission. While in theory RNNs, and gated RNNs (e.g., LSTMs) in particular, should be capable of learning time-varying relationships, when training data are limited, such models often fail to accurately capture these dynamics. We illustrate the advantages and disadvantages of complete weight sharing (RNNs) by comparing an LSTM with shared parameters to a sequential architecture with time-varying parameters on three clinically-relevant prediction tasks: acute respiratory failure (ARF), shock, and in-hospital mortality. In experiments using synthetic data, we demonstrate how weight sharing in LSTMs leads to worse performance in the presence of conditional shift. To improve upon the dichotomy between complete weight sharing and no weight sharing, we propose a novel RNN formulation based on a mixture model in which we relax weight sharing over time. The proposed method outperforms standard LSTMs and other state-of-the-art baselines across all tasks. In settings with limited data, relaxed weight sharing can lead to improved patient risk stratification performance.
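The idea of relaxing weight sharing via a mixture can be sketched in a few lines. In this toy version (an assumption for illustration, not the paper's exact formulation), the recurrent weight matrix at each time step is a convex combination of K candidate matrices with time-varying mixture coefficients, so the input-output relationship can drift over an admission while parameters remain shared through the K components:

```python
import math

def matvec(W, x):
    """Plain matrix-vector product on nested lists."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def mixture_rnn_step(h, x, Wh_list, Wx, alphas):
    """One RNN step where Wh is a convex mixture of K candidate matrices.

    alphas: K mixture coefficients for this time step (summing to 1).
    A standard weight-shared RNN is the special case where alphas
    are constant across time.
    """
    K, n = len(Wh_list), len(h)
    # Mix the K recurrent weight matrices for this time step.
    Wh = [[sum(alphas[k] * Wh_list[k][i][j] for k in range(K))
           for j in range(n)] for i in range(n)]
    pre = [a + b for a, b in zip(matvec(Wh, h), matvec(Wx, x))]
    return [math.tanh(p) for p in pre]

# Toy example: 2-d hidden state, 1-d input, K=2 mixture components.
Wh_list = [[[0.5, 0.0], [0.0, 0.5]],   # component 0: weak recurrence
           [[1.0, 0.2], [0.2, 1.0]]]   # component 1: strong recurrence
Wx = [[1.0], [0.5]]
h = [0.0, 0.0]
# Shift mixture weight from component 0 to component 1 over time,
# modeling a risk relationship that changes during the stay.
for alphas in [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]:
    h = mixture_rnn_step(h, [1.0], Wh_list, Wx, alphas)
print([round(v, 3) for v in h])
```

In a trained model the mixture coefficients would themselves be learned (e.g., as a function of time or of the hidden state) rather than fixed as above; the point is that the model interpolates between complete sharing and fully time-varying weights.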