Collaborating Authors


Facebook and NYU trained an AI to estimate COVID outcomes


COVID-19 has infected more than 23 million Americans and killed 386,000 of them since the global pandemic began last March. Complicating the public health response is the fact that we still know so little about how the virus operates -- such as why some patients remain asymptomatic while it ravages others. Effectively allocating resources like ICU beds and ventilators becomes a Sisyphean task when doctors can only guess who might recover and who might need to be intubated within the next 96 hours. However, a trio of new machine learning algorithms developed by Facebook's AI division (FAIR) in cooperation with NYU Langone Health can help predict patient outcomes up to four days in advance using just a patient's chest X-rays. The models can, respectively, predict patient deterioration based on either a single X-ray or a sequence, as well as determine how much supplemental oxygen the patient will likely need.

AI-Powered Text From This Program Could Fool the Government


In October 2019, Idaho proposed changing its Medicaid program. The state needed approval from the federal government, which solicited public feedback. But half of the comments came not from concerned citizens or even internet trolls: they were generated by artificial intelligence. And a study found that people could not distinguish the real comments from the fake ones.

Sparse encoding for more-interpretable feature-selecting representations in probabilistic matrix factorization Machine Learning

Dimensionality reduction methods for count data are critical to a wide range of applications in medical informatics and other fields where model interpretability is paramount. For such data, hierarchical Poisson matrix factorization (HPF) and other sparse probabilistic non-negative matrix factorization (NMF) methods are considered to be interpretable generative models. They consist of sparse transformations for decoding their learned representations into predictions. However, sparsity in representation decoding does not necessarily imply sparsity in the encoding of representations from the original data features. HPF is often incorrectly interpreted in the literature as if it possesses encoder sparsity. The distinction between decoder sparsity and encoder sparsity is subtle but important. Due to the lack of encoder sparsity, HPF does not possess the column-clustering property of classical NMF -- the factor loading matrix does not sufficiently define how each factor is formed from the original features. We address this deficiency by self-consistently enforcing encoder sparsity, using a generalized additive model (GAM), thereby allowing one to relate each representation coordinate to a subset of the original data features. In doing so, the method also gains the ability to perform feature selection. We demonstrate our method on simulated data and give an example of how encoder sparsity is of practical use in a concrete application of representing inpatient comorbidities in Medicare patients.
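The decoder/encoder distinction above can be seen in a few lines of NumPy. This is a minimal illustration, not the paper's HPF or GAM method: the toy loading matrix and its sizes are invented, and a least-squares encoder (the pseudo-inverse of the loadings) stands in for whatever inference procedure a real model would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse "decoder": each of 3 factors loads on only a couple of 6 features.
W = np.zeros((3, 6))
W[0, [0, 1]] = [1.0, 0.5]
W[1, [2, 3]] = [0.8, 1.2]
W[2, [4, 5]] = [0.6, 0.9]
W[1, 0] = 0.3   # one overlapping loading couples factors 0 and 1

# Non-negative representations and the data they generate: X = H @ W.
H = rng.gamma(2.0, 1.0, size=(100, 3))
X = H @ W

# The implied *encoder* -- the least-squares map from features back to
# factors -- is the pseudo-inverse of W, and its support is wider than W's:
# every feature touched by either coupled factor shows up in both rows.
E = np.linalg.pinv(W).T            # shape (3, 6): factor x feature
decoder_nnz = np.count_nonzero(np.abs(W) > 1e-10)
encoder_nnz = np.count_nonzero(np.abs(E) > 1e-10)
print(decoder_nnz, encoder_nnz)    # the decoder stays sparse; the encoder does not
```

So a sparse factor loading matrix alone does not tell you which original features each representation coordinate is built from, which is exactly the gap the enforced encoder sparsity is meant to close.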

Knowledge Graph and Machine Learning: 3 Key Business Needs, One Platform


Connect internal and external datasets and pipelines with a distributed graph database. UnitedHealth Group is connecting 200 sources to deliver a real-time customer 360, improving quality of care for 50 million members and delivering call-center efficiencies. Xandr (part of AT&T) is connecting multiple data pipelines to build an identity graph for entity resolution to power its next-generation AdTech platform.

How AI-controlled sensors could save lives in 'smart' hospitals and homes


"We have the ability to build technologies into the physical spaces where health care is delivered to help cut the rate of fatal errors that occur today due to the sheer volume of patients and the complexity of their care," said Arnold Milstein, a professor of medicine and director of Stanford's Clinical Excellence Research Center (CERC). Milstein, along with computer science professor Fei-Fei Li and graduate student Albert Haque, co-authored a Nature paper that reviews the field of "ambient intelligence" in health care -- an interdisciplinary effort to create smart hospital rooms equipped with AI systems that can do a range of things to improve outcomes. For example, sensors and AI can immediately alert clinicians and patient visitors when they fail to sanitize their hands before entering a hospital room. AI tools can be built into smart homes, where technology could unobtrusively monitor the frail elderly for behavioral clues of impending health crises and prompt in-home caregivers, remotely located clinicians and patients themselves to make timely, life-saving interventions.

Learning how to approve updates to machine learning algorithms in non-stationary settings Machine Learning

Machine learning algorithms in healthcare have the potential to continually learn from real-world data generated during healthcare delivery and adapt to dataset shifts. As such, the FDA is looking to design policies that can autonomously approve modifications to machine learning algorithms while maintaining or improving the safety and effectiveness of the deployed models. However, selecting a fixed approval strategy, a priori, can be difficult because its performance depends on the stationarity of the data and the quality of the proposed modifications. To this end, we investigate a learning-to-approve approach (L2A) that uses accumulating monitoring data to learn how to approve modifications. L2A defines a family of strategies that vary in their "optimism" (more optimistic policies have faster approval rates) and searches over this family using an exponentially weighted average forecaster. To control the cumulative risk of the deployed model, we give L2A the option to abstain from making a prediction and incur some fixed abstention cost instead. We derive bounds on the average risk of the model deployed by L2A, assuming the distributional shifts are smooth. In simulation studies and empirical analyses, L2A tailors the level of optimism to each problem setting: it learns to abstain when performance drops are common and to approve beneficial modifications quickly when the distribution is stable.
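A minimal sketch of the exponentially weighted average forecaster idea, not the paper's L2A algorithm: the threshold "strategies", the hindsight rule used to score experts, the simulated loss stream, and the learning rate are all invented for illustration, and the abstention mechanism is reduced to a simple approve/hold-back vote.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical family of strategies: each expert approves an update iff the
# monitored loss is under its threshold (higher threshold = more optimistic).
thresholds = np.array([0.05, 0.10, 0.20, 0.40])
eta = 0.5                          # learning rate for the weight updates
weights = np.ones(len(thresholds))

T = 200
monitored_loss = np.concatenate([
    rng.normal(0.08, 0.02, T // 2),   # stable period
    rng.normal(0.30, 0.05, T // 2),   # distribution shift degrades the model
])

approvals = []
for t in range(T):
    votes = (monitored_loss[t] < thresholds).astype(float)
    # Weighted-majority decision of the forecaster.
    approve = (weights @ votes) / weights.sum() > 0.5
    approvals.append(approve)
    # Score each expert against hindsight: here, "should have approved"
    # iff the realized loss stayed below 0.15 (an invented safety bar).
    correct = float(monitored_loss[t] < 0.15)
    expert_loss = np.abs(votes - correct)
    # Exponentially weighted update: wrong experts lose weight.
    weights *= np.exp(-eta * expert_loss)

# Approvals in the stable half vs. after the shift.
print(sum(approvals[:T // 2]), sum(approvals[T // 2:]))
```

The forecaster approves freely while the data is stationary, then shifts weight toward pessimistic experts once the monitored loss jumps, mirroring the tailored-optimism behavior the abstract describes.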

How Explainable AI (XAI) for Health Care Helps Build User Trust -- Even During Life-and-Death…


Picture this: You're using an AI model when it recommends a course of action that doesn't seem to make sense. However, because the model can't explain itself, you've got no insight into the reasoning behind the recommendation. Your only options are to trust it or not -- but without any context. It's a frustrating yet familiar experience for many who work with artificial intelligence (AI) systems, which in many cases function as so-called "black boxes" that sometimes can't even be explained by their own creators. For some applications, black box-style AI systems are completely suitable (or even preferred by those who would rather not explain their proprietary AI).

How AI and machine learning can solve the problem of medical fraud


Shiraaz Joosub, Healthcare Sales Executive, T-Systems South Africa

Medical malpractice litigation costs South Africa millions of rands every year and drives up the cost of healthcare. While some claims of medical negligence have merit, the unfortunate reality is that there has been a spike in fraud in this area since 2017. Over the years, this has cost government billions and has had a devastating effect on the country's public healthcare sector. Now, in the wake of the COVID-19 pandemic, it is more important than ever to stop medical fraud in its tracks and reduce medical malpractice. Fortunately, data analytics, artificial intelligence (AI) and machine learning can assist greatly in developing a digital audit trail to protect healthcare providers against fraudulent medical malpractice claims.

A cost beyond billions

In 2018, the Special Investigations Unit (SIU) began investigating medical fraud in the Eastern Cape and Gauteng, after a spike of R8.4 billion in medical negligence claims.

RPA - 10 Powerful Examples in Enterprise - Algorithm-X Lab


More and more enterprises are turning to a promising technology called RPA (robotic process automation) to become more productive and efficient. Successful implementation also helps to cut costs and reduce error rates. RPA can automate mundane and predictable tasks and processes, leaving employees free to focus on high-value work. Other companies see RPA as the next step before fully adopting intelligent automation technology such as machine learning and artificial intelligence. RPA is one of the fastest-growing sectors in the field of enterprise technology: in 2018, RPA software soared in value to $864 million, a growth of over 63%. In the course of this article, we clearly explain exactly what RPA is and how it works. To aid our understanding, we will also explore the potential benefits and disadvantages of the technology. Finally, we will highlight some of the most powerful and exciting ways in which it is already transforming enterprises in a range of industries. Robotic Process Automation, or RPA for short, is a way of automating structured, repetitive, or rules-based tasks and processes. It has a number of different applications. Its tools can capture data, retrieve information, communicate with other digital systems and process transactions. Implementation can help to prevent human error, particularly in long, repetitive tasks, and can also reduce labor costs. A report by Deloitte revealed that one large commercial bank deployed 85 RPA software bots to tackle 13 processes, handling 1.5 million requests in a year.
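The "structured, rules-based" flavor of work RPA targets can be sketched in a few lines. This is a toy illustration, not any RPA product's API: the record fields, rules, and thresholds are all hypothetical.

```python
import csv
import io

# Hypothetical screening rules for a claims-processing bot: each rule names
# a problem and a predicate over one record (a dict of CSV fields).
RULES = [
    ("missing_id", lambda r: not r["claim_id"]),
    ("bad_amount", lambda r: float(r["amount"]) <= 0),
    ("over_limit", lambda r: float(r["amount"]) > 10_000),
]

def process(stream):
    """Auto-approve clean records; queue rule violations for a human."""
    approved, flagged = [], []
    for row in csv.DictReader(stream):
        hits = [name for name, rule in RULES if rule(row)]
        (flagged if hits else approved).append((row["claim_id"], hits))
    return approved, flagged

sample = io.StringIO("claim_id,amount\nA1,250.00\nA2,-5\nA3,12000\n")
approved, flagged = process(sample)
print(approved)   # [('A1', [])]
print(flagged)    # [('A2', ['bad_amount']), ('A3', ['over_limit'])]
```

Real RPA tools wrap the same approve-or-escalate pattern around screen scraping, form filling, and transaction APIs rather than a CSV file.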

Mount Sinai puts machine learning to work for quality and safety


Robbie Freeman, vice president of clinical innovation at New York's Mount Sinai Health System, began his career working at the bedside, so he has an intimate appreciation of the real-world value of patient safety projects – and of the importance of ensuring key data is gathered and made actionable with optimal workflows. In an earlier, pre-digital age, many of the flow sheets and assessments collected during a nursing assessment, or other clinical information entered into the chart, might not have been "used or even necessarily looked at," he said. But in recent years, "they've become very valuable in the world of predictive analytics. There's a lot of information in those flow sheets that we can tap into for these models."