Using AI to Improve Chronic Disease Outcomes

#artificialintelligence

This article reports the results of a multi-year, pragmatic clinical trial in a real-world, community-based primary care setting. What started as a quality-improvement project evolved to include the development and deployment of Artificial Intelligence (AI) decision support to guide medication choices when treating hypertension (HTN). Results show that primary care physicians significantly improved HTN outcomes compared with the national average. All patients with a hypertension diagnosis were tracked across three years, including the COVID pandemic period. Of the 13,441 HTN patients, 94% had a blood pressure "at goal" (i.e., less than 140/90) as of their last clinician visit. The most recent published study of US blood pressure control, conducted prior to the pandemic, reported a control rate of 44%. Because the use of AI in primary care is novel as of this writing, the concept is often unfamiliar to practicing clinicians and medical group leaders.


In simulation of how water freezes, artificial intelligence breaks the ice

#artificialintelligence

A team based at Princeton University has accurately simulated the initial steps of ice formation by applying artificial intelligence (AI) to solve the equations that govern the quantum behavior of individual atoms and molecules. The resulting simulation describes how water molecules transition into solid ice with quantum accuracy. This level of accuracy, once thought unreachable due to the amount of computing power it would require, became possible when the researchers incorporated deep neural networks, a form of artificial intelligence, into their methods. The study was published in the journal Proceedings of the National Academy of Sciences. "In a sense, this is like a dream come true," said Roberto Car, Princeton's Ralph W. Dornte *31 Professor in Chemistry, who co-pioneered the approach of simulating molecular behaviors based on the underlying quantum laws more than 35 years ago.
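
As a loose illustration of the neural-network shortcut involved, the sketch below fits a small network to a cheap stand-in for an expensive reference calculation: a Lennard-Jones pair potential plays the role of the quantum-mechanical energies, and the trained network then predicts energies at new geometries. Everything here (the potential, the network size, the units) is an assumption for illustration; the actual work trains deep networks on quantum-mechanical data for water.

```python
# A caricature of the "neural network potential" idea: learn a cheap
# surrogate for an expensive reference energy calculation, then query
# the surrogate instead. A Lennard-Jones pair potential stands in for
# the quantum-mechanical reference here.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
r = rng.uniform(0.9, 3.0, 2000)         # interatomic distances (reduced units)
energy = 4 * (r**-12 - r**-6)           # Lennard-Jones "reference" energies

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(r.reshape(-1, 1), energy)     # learn the distance -> energy mapping

test_r = np.array([[1.0], [1.5], [2.5]])
reference = 4 * (test_r**-12 - test_r**-6)
print(np.c_[reference, model.predict(test_r).reshape(-1, 1)])  # reference vs. learned
```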


Biologists train AI to generate medicines and vaccines

#artificialintelligence

Scientists have developed artificial intelligence software that can create proteins that may be useful as vaccines, cancer treatments, or even tools for pulling carbon pollution out of the air. This research, reported today in the journal Science, was led by the University of Washington School of Medicine and Harvard University. The article is titled "Scaffolding protein functional sites using deep learning." "The proteins we find in nature are amazing molecules, but designed proteins can do so much more," said senior author David Baker, an HHMI Investigator and professor of biochemistry at UW Medicine. "In this work, we show that machine learning can be used to design proteins with a wide variety of functions." For decades, scientists have used computers to try to engineer proteins.


Origin of the 'Black Beauty' meteorite is revealed

Daily Mail - Science & tech

Scientists have revealed more about the origins of the famous 'Black Beauty' meteorite, also known as NWA 7034. The researchers used AI to analyse thousands of high-resolution planetary images of the Martian surface from a range of Mars missions. They found Black Beauty was ejected into space when an asteroid struck the planet's surface and created the six-mile-wide Karratha Crater 5-10 million years ago. Black Beauty, which weighs just 11 ounces (320 grams), led to the creation of a new class of meteorite when it was discovered in 2011 in the Western Sahara Desert.


Cooperation and Learning Dynamics under Wealth Inequality and Diversity in Individual Risk

Journal of Artificial Intelligence Research

We examine how wealth inequality and diversity in the perception of risk of a collective disaster impact cooperation levels in the context of a public goods game with uncertain and non-linear returns. In this game, individuals face a collective-risk dilemma where they may contribute or not to a common pool to reduce their chances of future losses. We draw our conclusions based on social simulations with populations of independent reinforcement learners with diverse levels of risk and wealth. We find that both wealth inequality and diversity in risk assessment can hinder cooperation and augment collective losses. Additionally, wealth inequality further exacerbates long-term inequality, causing rich agents to become richer and poor agents to become poorer. On the other hand, diversity in risk only amplifies inequality when combined with bias in group assortment--i.e., high probability that agents from the same risk class play together. Our results also suggest that taking wealth inequality into account can help to design effective policies aimed at fostering cooperation in large groups, a configuration in which collective action is harder to achieve. Finally, we characterize the circumstances under which risk perception alignment is crucial and those under which reducing wealth inequality constitutes a deciding factor for collective welfare.
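
As a rough sketch of the kind of simulation described, the code below runs a collective-risk public goods game among independent reinforcement learners with heterogeneous wealth and perceived risk. The group size, threshold, payoffs, population size, and learning rule are illustrative assumptions, not the paper's actual configuration.

```python
# Collective-risk dilemma with independent learners: if too few group
# members contribute, each member suffers a loss with its own
# *perceived* probability. Agents learn from payoffs alone.
import random

GROUP_SIZE, THRESHOLD = 6, 3   # players per game; contributors needed to avert loss
COST, LOSS = 0.1, 0.5          # contribution cost and disaster loss (fractions of wealth)
ALPHA, EPSILON = 0.1, 0.05     # learning rate and exploration rate
ROUNDS = 20000

# Heterogeneous population: rich/poor wealth, low/high perceived risk (assumed values).
agents = [{"wealth": random.choice([1.0, 4.0]),
           "risk": random.choice([0.3, 0.9]),
           "q": [0.0, 0.0]}    # Q-values for actions: 0 = defect, 1 = contribute
          for _ in range(60)]

def act(agent):
    if random.random() < EPSILON:              # epsilon-greedy exploration
        return random.randint(0, 1)
    return 0 if agent["q"][0] >= agent["q"][1] else 1

for _ in range(ROUNDS):
    group = random.sample(agents, GROUP_SIZE)
    actions = [act(a) for a in group]
    success = sum(actions) >= THRESHOLD        # enough contributors avert the disaster
    for agent, a in zip(group, actions):
        payoff = -COST * agent["wealth"] * a
        if not success and random.random() < agent["risk"]:
            payoff -= LOSS * agent["wealth"]   # loss scales with the agent's wealth
        # Stateless (bandit-style) Q-update for each independent learner.
        agent["q"][a] += ALPHA * (payoff - agent["q"][a])

coop = sum(a["q"][1] > a["q"][0] for a in agents) / len(agents)
print(f"fraction of agents preferring to contribute: {coop:.2f}")
```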


Can AI Identify Patients With Long COVID?

#artificialintelligence

According to the U.S. Centers for Disease Control and Prevention (CDC), long COVID refers to the long-term effects some people experience after infection with SARS-CoV-2, the virus responsible for COVID-19 (coronavirus disease 2019). A new study published in The Lancet Digital Health applies machine learning, a form of artificial intelligence (AI), to identify patients with long COVID from electronic health record data with high accuracy. "Patients identified by our models as potentially having long COVID can be interpreted as patients warranting care at a specialty clinic for long COVID, which is an essential proxy for long COVID diagnosis as its definition continues to evolve," the researchers concluded. "We also achieve the urgent goal of identifying potential long COVID in patients for clinical trials." Globally there have been over 510 million confirmed cases of COVID-19 and more than 6.2 million deaths, according to April 2022 statistics from Johns Hopkins University.
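
For readers unfamiliar with this kind of pipeline, the sketch below shows, in broad strokes, how a classifier can be trained on tabular EHR-style features to flag potential long COVID patients. The features, synthetic labels, and model choice are illustrative assumptions; the Lancet study's actual cohort definitions, feature set, and models differ.

```python
# Toy EHR-style classification: train a gradient-boosted model on
# synthetic tabular patient features and evaluate with AUROC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Hypothetical EHR-derived features: age, post-acute clinic visits,
# symptom duration in days, new diagnoses after infection.
X = np.column_stack([
    rng.normal(50, 15, n),       # age
    rng.poisson(3, n),           # post-acute clinic visits
    rng.exponential(30, n),      # symptom duration (days)
    rng.poisson(1, n),           # new diagnoses post-infection
])
# Synthetic label: visit-heavy, long-symptom patients are more likely positive.
logit = 0.4 * X[:, 1] + 0.02 * X[:, 2] - 2.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```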


A Sensor Sniffs for Cancer, Using Artificial Intelligence

#artificialintelligence

Researchers at Memorial Sloan Kettering Cancer Center (MSK) have developed a sensor that can be trained to sniff for cancer, with the help of artificial intelligence. Although the training doesn't work the same way one trains a police dog to sniff for explosives or drugs, the sensor has some similarity to how the nose works. The nose can detect more than a trillion different scents, even though it has just a few hundred types of olfactory receptors. The pattern of which odor molecules bind to which receptors creates a kind of molecular signature that the brain uses to recognize a scent. Like the nose, the cancer detection technology uses an array of multiple sensors to detect a molecular signature of the disease.
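
Conceptually, this resembles a simple pattern-recognition pipeline: each sample yields a response vector across an array of partially selective sensors, and a classifier learns which response patterns signal disease. The following sketch illustrates the idea on synthetic data; the sensor count, noise model, and classifier are assumptions, not details of the MSK device.

```python
# "Electronic nose" toy model: two classes of samples produce distinct
# mean response patterns across a sensor array; a linear classifier
# learns to separate the noisy patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N_SENSORS, N_SAMPLES = 16, 400

# Two latent "molecular signatures": characteristic mean responses of
# the array to healthy vs. cancer-associated volatiles.
healthy_sig = rng.random(N_SENSORS)
cancer_sig = healthy_sig + rng.normal(0, 0.3, N_SENSORS)

y = rng.integers(0, 2, N_SAMPLES)                       # 0 = healthy, 1 = cancer
X = np.where(y[:, None] == 1, cancer_sig, healthy_sig)  # class-specific pattern
X = X + rng.normal(0, 0.15, X.shape)                    # per-sensor measurement noise

clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```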


Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches

Journal of Artificial Intelligence Research

This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society's most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench "bias," are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI's long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.


Multi-Agent Advisor Q-Learning

Journal of Artificial Intelligence Research

In the last decade, there have been significant advances in multi-agent reinforcement learning (MARL), but numerous challenges, such as high sample complexity and slow convergence to stable policies, must still be overcome before widespread deployment is possible. However, many real-world environments already deploy sub-optimal or heuristic approaches for generating policies. An interesting question that arises is how best to use such approaches as advisors to help improve reinforcement learning in multi-agent domains. In this paper, we provide a principled framework for incorporating action recommendations from online suboptimal advisors in multi-agent settings. We describe the problem of ADvising Multiple Intelligent Reinforcement Agents (ADMIRAL) in nonrestrictive general-sum stochastic game environments and present two novel Q-learning-based algorithms: ADMIRAL - Decision Making (ADMIRAL-DM) and ADMIRAL - Advisor Evaluation (ADMIRAL-AE), which allow us to improve learning by appropriately incorporating advice from an advisor (ADMIRAL-DM) and to evaluate the effectiveness of an advisor (ADMIRAL-AE). We analyze the algorithms theoretically and provide fixed-point guarantees regarding their learning in general-sum stochastic games. Furthermore, extensive experiments illustrate that these algorithms can be used in a variety of environments, have performances that compare favourably to other related baselines, can scale to large state-action spaces, and are robust to poor advice from advisors.
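
To convey the core advising mechanism, here is a deliberately simplified, single-agent sketch in which a Q-learner follows a heuristic advisor with a probability that decays over time, falling back on its own Q-values otherwise. The environment, advisor policy, and decay schedule are illustrative assumptions; ADMIRAL-DM itself operates over joint actions in general-sum stochastic games.

```python
# Advisor-guided Q-learning caricature: early on the agent mostly follows
# the advisor; as reliance decays, its own learned Q-values take over.
import random

N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA = 0.1, 0.95

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def advisor(state):
    # Hypothetical suboptimal heuristic: always recommends action 0.
    return 0

def step(state, action):
    # Hypothetical environment: action 3 is best in every state.
    reward = 1.0 if action == 3 else random.uniform(0, 0.3)
    return random.randrange(N_STATES), reward

state = 0
for t in range(1, 50001):
    follow_advisor = random.random() < 1.0 / (1.0 + 0.001 * t)  # decaying reliance
    if follow_advisor:
        action = advisor(state)
    elif random.random() < 0.05:                                # epsilon-greedy
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    target = reward + GAMMA * max(Q[next_state])                # standard Q-update
    Q[state][action] += ALPHA * (target - Q[state][action])
    state = next_state

print("greedy action in state 0:", max(range(N_ACTIONS), key=lambda a: Q[0][a]))
```

Despite the poor advice, the agent eventually learns the better action once its reliance on the advisor has decayed, which is the robustness property the paper's experiments examine at multi-agent scale.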


Devaluing Stocks With Adversarially Crafted Retweets

#artificialintelligence

A joint research collaboration between US universities and IBM has formulated a proof-of-concept adversarial attack that is theoretically capable of causing stock market losses, simply by changing one word in a retweet of a Twitter post. In one experiment, the researchers were able to hobble the Stocknet prediction model with two methods: a manipulation attack and a concatenation attack. The attack surface exists because a growing number of automated and machine learning stock prediction systems rely on organic social media as a predictor of performance, and manipulating this 'in-the-wild' data is a process that can, potentially, be reliably formulated. Besides Twitter, systems of this nature ingest data from Reddit, StockTwits, and Yahoo News, among others. The difference between Twitter and the other sources is that retweets are editable, even if the original tweets are not.
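
The manipulation attack can be pictured as a greedy search over single-word substitutions: try each replacement, keep the one that most lowers the model's predicted score. The sketch below uses a toy bag-of-words scorer as a stand-in for a real prediction model such as Stocknet; the vocabulary, weights, and candidate list are invented for illustration.

```python
# Greedy one-word substitution attack against a toy sentiment scorer.
WEIGHTS = {"strong": 0.9, "beats": 0.8, "growth": 0.7,
           "weak": -0.9, "misses": -0.8, "decline": -0.7}

def score_model(tokens):
    # Toy surrogate for a stock-prediction model: sum of per-word weights.
    return sum(WEIGHTS.get(t.lower(), 0.0) for t in tokens)

CANDIDATES = ["weak", "misses", "decline"]   # attacker's substitution vocabulary

def attack(tweet):
    tokens = tweet.split()
    base = score_model(tokens)
    best = (0.0, tokens)                     # (score drop, perturbed tokens)
    for i in range(len(tokens)):             # try replacing each position...
        for word in CANDIDATES:              # ...with each candidate word
            perturbed = tokens[:i] + [word] + tokens[i + 1:]
            drop = base - score_model(perturbed)
            if drop > best[0]:
                best = (drop, perturbed)
    return " ".join(best[1]), best[0]

retweet = "RT: strong quarter as company beats forecasts with record growth"
adv, drop = attack(retweet)
print(adv, f"(score drop {drop:.1f})")
```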