In my work as a journalist I am lucky enough to meet some brilliant people and learn about exciting advances in technology - along with a few duds. But every now and then I come across something that resonates in a deeply personal way. So it was in October 2018, when I visited a company called Medopad, based high up in London's Millbank Tower. This medical technology firm was working with the Chinese tech giant Tencent on a project to use artificial intelligence to diagnose Parkinson's Disease. This degenerative condition affects something like 10 million people worldwide.
The last half decade has ushered in the era of humans interacting with technology through speech, with Amazon's Alexa, Apple's Siri, and Google's AI rapidly becoming ubiquitous elements of the human experience. But while the migration from typing to voice has brought great convenience for some people (and improved safety, in the case of people using technology while driving), it has not delivered on its potential for those who might otherwise stand to benefit the most from it: people with disabilities. For people with Down syndrome, for example, voice-based control of technology offers the promise of increased independence, and even of some new, potentially life-saving products. Yet for this particular group, today's voice-recognizing AIs pose serious problems as a result of a combination of three factors. To address this issue, and as a step toward ensuring that people whose health conditions cause AIs to misunderstand them can use modern technology, Google is partnering with the Canadian Down Syndrome Society. Through an effort called Project Understood, Google hopes to obtain recordings of people with Down syndrome reading simple phrases, and to use those recordings to help train its AI to understand the speech patterns common to people with Down syndrome. This effort is an extension of Google's Project Euphonia, which seeks to improve computers' ability to understand diverse speech patterns, including impaired speech. Earlier this year, Euphonia began an effort to train AIs to recognize communication from people with ALS, the neurodegenerative condition commonly known as Lou Gehrig's disease.
Today at AWS re:Invent in Las Vegas, NFL commissioner Roger Goodell joined AWS CEO Andy Jassy on stage to announce a new partnership to use machine learning to help reduce head injuries in professional football. "We're excited to announce a new strategic partnership together, which is going to combine cloud computing, machine learning and data science to work on transforming player health and safety," Jassy said. NFL football is a fast and violent sport involving large men. Injuries are a part of the game, but the NFL is hoping to reduce head injuries in particular, a huge problem for the sport. A 2017 study found that 110 out of 111 deceased NFL players examined had chronic traumatic encephalopathy (CTE).
Dr. Ansgar Koene is Global AI Ethics and Regulatory Leader at EY, where he supports the AI Lab's policy activities on Trusted AI. He is also a Senior Research Fellow at the RCUK-funded Horizon Digital Economy Research Institute (University of Nottingham), where he contributes to the policy impact activities of the institute and leads the policy-related stakeholder engagement activities of the ReEnTrust project. As part of this work, Ansgar has provided evidence to twelve UK parliamentary inquiries, co-authored a report on Bias in Algorithmic Decision-Making for the Centre for Data Ethics and Innovation, and was lead author of a Science and Technology Options Assessment report on a Governance Framework for Algorithmic Accountability and Transparency for the European Parliament. Ansgar chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group, is the Bias Focus Group leader for the IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), and is a trustee for the 5Rights Foundation for the Rights of Young People Online. Ansgar has a multi-disciplinary research background, having worked and published on topics ranging from policy and governance of algorithmic systems (AI), data privacy, AI ethics, AI standards, bio-inspired robotics, and AI and computational neuroscience to experimental human behaviour/perception studies.
Epilepsy occurs when the localized electrical activity of neurons becomes imbalanced. One of the most suitable methods for diagnosing and monitoring the condition is the analysis of electroencephalographic (EEG) signals. Although there is a wide range of alternatives for characterizing and classifying EEG signals for epilepsy analysis, many key aspects related to accuracy and physiological interpretation remain open issues. This paper presents an exploratory study to identify the most suitable of the frequently used methods for characterizing and classifying epileptic seizures. To this end, a comparative study is carried out on several subsets of features using four representative classifiers: Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM).
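To illustrate the kind of feature-subset comparison the abstract describes, the sketch below runs a simple K-Nearest Neighbor classifier on synthetic two-class data standing in for EEG-derived features. All data, feature names, and accuracies here are invented for illustration; the paper's actual features, datasets, and classifier settings are not reproduced.

```python
import math
import random

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, tx), ty) for tx, ty in zip(train_X, train_y))
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

random.seed(0)

# Toy two-class data imitating EEG-derived features: class 1 ("seizure")
# has higher mean values in the first two features; the third is pure noise.
def sample(label, n):
    base = 1.0 if label else 0.0
    return [([random.gauss(base, 0.5),
              random.gauss(base, 0.5),
              random.gauss(0.0, 0.5)], label) for _ in range(n)]

data = sample(0, 40) + sample(1, 40)
random.shuffle(data)
train, test = data[:60], data[60:]

acc = {}
for name, idx in [("informative subset", [0, 1]), ("noise-only subset", [2])]:
    tX = [[x[i] for i in idx] for x, _ in train]
    ty = [y for _, y in train]
    hits = sum(knn_predict(tX, ty, [x[i] for i in idx]) == y for x, y in test)
    acc[name] = hits / len(test)
    print(f"{name}: accuracy = {acc[name]:.2f}")
```

The point of such a comparison, as in the paper, is that the choice of feature subset can matter as much as the choice of classifier: the informative subset should score well above the noise-only one.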
Imagine having to choose from over 14,000 different treatment scenarios to decide which drugs might be best for a child or a loved one affected by epilepsy. This is what many families face, according to the experts at Stanford and doc.ai, who have announced a new type of clinical trial using artificial intelligence (AI). The project's goal is to make the process more scientific, using population data, and less prone to lengthy individual trial and error. Researchers are analyzing medications, side effects, genomic information, environmental exposures, activity, and even physical traits. This type of work produces vast amounts of information and requires so much processing power that it can only be performed by the latest AI systems.
Owkin, which is developing federated learning and AI technologies to advance medical research, has announced a collaboration with technology company NVIDIA and King's College London (KCL) to deliver federated learning in the healthcare and life sciences sector. It will initially connect four of London's teaching hospitals before expanding throughout the UK, and will offer AI services with the aim of accelerating research and improving clinical practice in a wide range of therapeutic areas, including cancer, heart failure and neurodegenerative disease. Owkin's co-founder and chief scientific officer, Gilles Wainrib, said: "This partnership brings together the best players in life science & healthcare, machine learning and data centre infrastructure. NVIDIA's platforms create the ideal and flexible footprint for hospitals to invest in machine learning. King's College London has assembled the engineering, medical and data science talent, the high-quality patient data, and the governance framework in the AI4VBH Centre that will show the world the future of healthcare analytics and the power of machine learning. Together we will be enabling the formation of a decentralised dataset that will generate enormous value for research and clinical practice." Owkin hopes to demonstrate that a federated learning architecture is safer for patients and statistically equivalent to the traditional pooled model for analysis.
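The core idea behind federated learning, as described above, is that each hospital's data stays local and only model updates are shared. A minimal sketch of federated averaging with invented toy data (three simulated "hospitals" fitting a shared linear model; this is not Owkin's or NVIDIA's actual implementation):

```python
import random

random.seed(1)

# All sites share the same underlying relationship: y = 2*x + 1 (+ noise).
def make_site(n):
    xs = [random.uniform(0, 10) for _ in range(n)]
    return [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in xs]

sites = [make_site(n) for n in (30, 50, 20)]  # three "hospitals"

def local_step(w, b, data, lr=0.01, epochs=20):
    """Run a few epochs of gradient descent on one site's private data."""
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += 2 * err * x / len(data)
            gb += 2 * err / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

# Federated averaging: sites never share raw data, only updated parameters,
# which the coordinator averages weighted by site size.
w, b = 0.0, 0.0
total = sum(len(s) for s in sites)
for _ in range(50):
    updates = [local_step(w, b, s) for s in sites]
    w = sum(len(s) * uw for s, (uw, ub) in zip(sites, updates)) / total
    b = sum(len(s) * ub for s, (uw, ub) in zip(sites, updates)) / total

print(f"federated fit: y ~= {w:.2f}*x + {b:.2f}")
```

Because the fitted coefficients approach the same values a model trained on the pooled data would find, this toy case mirrors the claim that a federated architecture can be statistically equivalent to the traditional pooled model.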
This week, we discuss what to do about bias in algorithms; Russia's limits in the Middle East; learning from other countries' experiences with fentanyl; what protests could mean for democracy in the Middle East; how cities can help U.S. diplomacy; and helping U.S. Army special operations forces assess their missions. Earlier this month, a controversy about gender bias in the Apple Card algorithm lit up social media; an outraged tech executive posted about how his credit line was 20 times higher than his wife's, even though the two share all assets. According to RAND's Osonde Osoba, problems like this may become more common as artificial intelligence is used in more kinds of decisionmaking. It's not always possible to pinpoint how a complex algorithm led to a bad outcome, he says. But there are ways for companies to audit algorithms for sexist, racist, or otherwise biased behavior.
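One common first step in the kind of audit Osoba describes is a disparate-impact check: compare a model's favorable-outcome rates across groups. A minimal sketch with invented decisions (the 0.8 threshold is the widely used "four-fifths rule" from U.S. employment guidelines; none of this reflects the actual Apple Card model):

```python
# Hypothetical credit-line decisions: (group, received_high_limit)
decisions = [
    ("men",   True), ("men",   True), ("men",   True), ("men",   False),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

def favorable_rate(decisions, group):
    """Share of applicants in `group` who received the favorable outcome."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_m = favorable_rate(decisions, "men")
rate_w = favorable_rate(decisions, "women")
ratio = min(rate_m, rate_w) / max(rate_m, rate_w)

print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule" threshold
    print("flag for review: outcome rates differ substantially by group")
```

Audits of this kind only flag disparities; explaining why a complex algorithm produced them, as the excerpt notes, is a harder problem.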
Traumatic brain injury (TBI) is a significant global cause of mortality and morbidity, with an increasing incidence, especially in low- and middle-income countries. The most severe TBIs are treated in intensive care units (ICU), but despite proper, high-quality care, about one in three patients dies. Patients who suffer severe TBI are unconscious, which makes it challenging to accurately monitor their condition during intensive care. In the ICU, many tens of variables are continuously monitored, and a single variable, such as intracranial pressure, may yield hundreds of thousands of data points per day.
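The data volumes mentioned above follow directly from the sampling rates: continuous monitoring multiplies samples per second by 86,400 seconds per day. A quick back-of-the-envelope sketch (the rates below are illustrative, not a specific ICU's configuration; a signal sampled at just 5 Hz already lands in the "hundreds of thousands" range per day):

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Illustrative sampling rates for continuously monitored ICU signals.
signals_hz = {
    "numeric trend (1 Hz)": 1,
    "intracranial pressure (5 Hz)": 5,
    "arterial waveform (125 Hz)": 125,
}

samples_per_day = {name: hz * SECONDS_PER_DAY
                   for name, hz in signals_hz.items()}
for name, n in samples_per_day.items():
    print(f"{name}: {n:,} samples per day")
```

Multiplied across many tens of variables and days of ICU stay, per-patient series quickly reach millions of points, which is what makes manual review impractical.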