AI-Powered Text From This Program Could Fool the Government

WIRED

In October 2019, Idaho proposed changing its Medicaid program. The state needed approval from the federal government, which solicited public feedback via Medicaid.gov. But half of the comments submitted came not from concerned citizens or even internet trolls: they were generated by artificial intelligence. A study found that people could not distinguish the real comments from the fake ones.


Sparse encoding for more-interpretable feature-selecting representations in probabilistic matrix factorization

arXiv.org Machine Learning

Dimensionality reduction methods for count data are critical to a wide range of applications in medical informatics and other fields where model interpretability is paramount. For such data, hierarchical Poisson matrix factorization (HPF) and other sparse probabilistic non-negative matrix factorization (NMF) methods are considered to be interpretable generative models. They consist of sparse transformations for decoding their learned representations into predictions. However, sparsity in representation decoding does not necessarily imply sparsity in the encoding of representations from the original data features. HPF is often incorrectly interpreted in the literature as if it possesses encoder sparsity. The distinction between decoder sparsity and encoder sparsity is subtle but important. Due to the lack of encoder sparsity, HPF does not possess the column-clustering property of classical NMF -- the factor loading matrix does not sufficiently define how each factor is formed from the original features. We address this deficiency by self-consistently enforcing encoder sparsity, using a generalized additive model (GAM), thereby allowing one to relate each representation coordinate to a subset of the original data features. In doing so, the method also gains the ability to perform feature selection. We demonstrate our method on simulated data and give an example of how encoder sparsity is of practical use in a concrete application of representing inpatient comorbidities in Medicare patients.
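The abstract's central point, that decoder sparsity does not imply encoder sparsity, can be seen in a toy example. The sketch below is purely illustrative and is not the paper's HPF/GAM method: it builds a non-negative loading matrix whose columns are sparse (a sparse decoder) and shows that the corresponding least-squares encoder, the pseudo-inverse, is nonetheless dense, so every original feature contributes to every representation coordinate.

```python
import numpy as np

# Hypothetical sparse non-negative decoder: each of 3 factors loads on
# only a few of the 6 features (mostly zero entries -> decoder sparsity).
W = np.array([
    [2.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 3.0, 0.0],
    [0.0, 1.0, 0.5],
    [0.0, 0.0, 2.0],
    [0.5, 0.0, 1.0],
])  # shape (features=6, factors=3)

# The least-squares encoder maps data x to representations z = E @ x.
# It is the pseudo-inverse of W, which mixes the factors together.
E = np.linalg.pinv(W)  # shape (3, 6)

decoder_zero_frac = np.mean(W == 0)            # fraction of exact zeros in W
encoder_zero_frac = np.mean(np.isclose(E, 0))  # fraction of (near-)zeros in E
print(f"decoder zero fraction: {decoder_zero_frac:.2f}")  # sparse
print(f"encoder zero fraction: {encoder_zero_frac:.2f}")  # dense
```

Here the decoder is more than half zeros while the encoder has no zero entries at all, which is the deficiency the paper's GAM-based encoder sparsity is designed to fix.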


Could GPT-3 Change The Way Future AI Models Are Developed and Deployed ?

#artificialintelligence

Much has been said about GPT-3 already. Traditionally, we start with data for a problem and develop the model based on that data. The model is specific to the problem. If you want to train a model to predict traffic patterns in New York, you build a model of New York traffic patterns. If you want to model air pollution in New York, that's a different model. With GPT-3, you start with the model instead of the data.


Learning how to approve updates to machine learning algorithms in non-stationary settings

arXiv.org Machine Learning

Machine learning algorithms in healthcare have the potential to continually learn from real-world data generated during healthcare delivery and adapt to dataset shifts. As such, the FDA is looking to design policies that can autonomously approve modifications to machine learning algorithms while maintaining or improving the safety and effectiveness of the deployed models. However, selecting a fixed approval strategy, a priori, can be difficult because its performance depends on the stationarity of the data and the quality of the proposed modifications. To this end, we investigate a learning-to-approve approach (L2A) that uses accumulating monitoring data to learn how to approve modifications. L2A defines a family of strategies that vary in their "optimism" (more optimistic policies have faster approval rates) and searches over this family using an exponentially weighted average forecaster. To control the cumulative risk of the deployed model, we give L2A the option to abstain from making a prediction and incur some fixed abstention cost instead. We derive bounds on the average risk of the model deployed by L2A, assuming the distributional shifts are smooth. In simulation studies and empirical analyses, L2A tailors the level of optimism for each problem setting: it learns to abstain when performance drops are common and to approve beneficial modifications quickly when the distribution is stable.
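The exponentially weighted average forecaster the abstract mentions is a standard online-learning primitive. The sketch below is a generic illustration of that primitive, not the paper's L2A implementation; the three "experts" standing in for approval policies of varying optimism, the losses, and the learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
eta = 0.5                        # learning rate (illustrative choice)
n_experts = 3                    # e.g. pessimistic / neutral / optimistic policies
weights = np.ones(n_experts) / n_experts

for t in range(100):
    # Each "expert" (approval policy) incurs a loss in [0, 1] this round.
    # Expert 2 is made systematically better, standing in for the policy
    # whose level of optimism matches a stable environment.
    losses = rng.uniform(0, 1, n_experts)
    losses[2] *= 0.3
    # Multiplicative-weights update: exponentially downweight lossy experts,
    # then renormalize so the weights remain a probability distribution.
    weights *= np.exp(-eta * losses)
    weights /= weights.sum()

print("highest-weight expert:", int(np.argmax(weights)))
```

After enough rounds the weight mass concentrates on the consistently low-loss expert, which is how searching over a family of strategies with this forecaster lets the level of optimism adapt to the problem setting.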


'Smarter AI can help fight bias in healthcare'

#artificialintelligence

Leading researchers discussed which requirements AI algorithms must meet to fight bias in healthcare during the 'Artificial Intelligence and Implications for Health Equity: Will AI Improve Equity or Increase Disparities?' session, held on 1 December. The speakers were: Ziad Obermeyer, associate professor of health policy and management at the Berkeley School of Public Health, CA; Luke Oakden-Rayner, director of medical imaging research at the Royal Adelaide Hospital, Australia; Constance Lehman, professor of radiology at Harvard Medical School, director of breast imaging, and co-director of the Avon Comprehensive Breast Evaluation Center at Massachusetts General Hospital; and Regina Barzilay, professor in the department of electrical engineering and computer science and member of the Computer Science and AI Lab at the Massachusetts Institute of Technology. The discussion was moderated by Judy Wawira Gichoya, assistant professor in the Department of Radiology at Emory University School of Medicine, Atlanta. Artificial intelligence (AI) may unintentionally intensify inequities that already exist in modern healthcare, and understanding those biases may help defeat them. Social determinants partly cause poor healthcare outcomes, and it is crucial to raise awareness about inequity in access to healthcare, as Prof Sam Shah, founder and director of the Faculty of Future Health in London, explained in a keynote during the HIMSS & Health 2.0 European Digital event.


Amazon is cozying up in all corners of the healthcare ecosystem--AI is its next frontier

#artificialintelligence

Amazon Web Services (AWS) launched Amazon HealthLake--a new HIPAA-eligible platform that lets healthcare organizations seamlessly store, transform, and analyze data in the cloud. The platform standardizes unstructured clinical data (like clinical notes or imaging info) in a way that makes it easily accessible and unlocks meaningful insights--an otherwise complex and error-prone process. For example, Amazon HealthLake can match patients to clinical trials, analyze population health trends, improve clinical decision-making, and optimize hospital operations. Amazon already has links in different parts of the healthcare ecosystem--now that it's taking on healthcare AI, smaller players like Nuance and Notable Health should be worried. Amazon has inroads in everything from pharmacy to care delivery: Amazon Pharmacy was built upon its partnerships with payers like Blue Cross Blue Shield and Horizon Healthcare Services, Amazon Care was expanded to all Amazon employees in Washington state this September, and it launched its Amazon Halo wearable in August.


RPA - 10 Powerful Examples in Enterprise - Algorithm-X Lab

#artificialintelligence

More and more enterprises are turning to a promising technology called RPA (robotic process automation) to become more productive and efficient. Successful implementation also helps to cut costs and reduce error rates. RPA can automate mundane and predictable tasks and processes, leaving employees to focus more on high-value work. Other companies see RPA as the next step before fully adopting intelligent automation technology such as machine learning and artificial intelligence. RPA is one of the fastest-growing sectors in the field of enterprise technology. In 2018, RPA software soared in value to $864 million, a growth of over 63%. In the course of this article, we clearly explain exactly what RPA is and how it works. To aid our understanding, we will also explore the potential benefits and disadvantages of this technology. Finally, we will highlight some of the most powerful and exciting ways in which it is already transforming enterprises in a range of industries. Robotic Process Automation, or RPA for short, is a way of automating structured, repetitive, or rules-based tasks and processes. It has a number of different applications. Its tools can capture data, retrieve information, communicate with other digital systems, and process transactions. Implementation can help to prevent human error, particularly in long, repetitive tasks. It can also reduce labor costs. A report by Deloitte revealed that one large commercial bank implemented RPA across 85 software bots. These were used to tackle 13 processes, handling 1.5 million requests in a year.


Mount Sinai puts machine learning to work for quality and safety

#artificialintelligence

Robbie Freeman, vice president of clinical innovation at New York's Mount Sinai Health System, began his career working at the bedside, so he has an intimate appreciation of the real-world value of patient safety projects – and of the importance of ensuring key data is gathered and made actionable with optimal workflows. "I'm a registered nurse, and I think working with patients and spending a lot of time on data entry is what kind of led us to this real focus on clinical workflows and delivering additional value," said Freeman, speaking Wednesday at the HIMSS Machine Learning & AI for Healthcare Digital Summit about some of Mount Sinai's recent automation initiatives. In an earlier, pre-digital age, many of the flow sheets and assessments collected during a nursing assessment, or other clinical information entered into the chart, might not have been "used or even necessarily looked at," he said. But in recent years, "they've become very valuable in the world of predictive analytics. There's a lot of information in those flow sheets that we can tap into for these models."


UVA Artificial Intelligence Project Among 7 Finalists for $1 Million Prize

#artificialintelligence

A UVA Health data science team is one of seven finalists in a national competition to improve healthcare with the help of artificial intelligence. UVA's proposal was selected as a finalist from among more than 300 applicants in the first-ever Centers for Medicare & Medicaid Services (CMS) Artificial Intelligence Health Outcomes Challenge. UVA's project predicts which patients are at risk for adverse outcomes and then suggests a personalized plan to ensure appropriate healthcare delivery and avoid unnecessary hospitalizations. CMS selected the seven finalists after reviewing the accuracy of their artificial intelligence models and evaluating how well healthcare providers could use visual displays created by each project team to improve outcomes and patient care. Each team of finalists received $60,000 and will compete for a grand prize of up to $1 million.