Cooperation and Learning Dynamics under Wealth Inequality and Diversity in Individual Risk

Journal of Artificial Intelligence Research

We examine how wealth inequality and diversity in the perception of the risk of a collective disaster affect cooperation levels in the context of a public goods game with uncertain and non-linear returns. In this game, individuals face a collective-risk dilemma in which they may contribute or not to a common pool to reduce their chances of future losses. We draw our conclusions from social simulations with populations of independent reinforcement learners with diverse levels of risk and wealth. We find that both wealth inequality and diversity in risk assessment can hinder cooperation and augment collective losses. Additionally, wealth inequality further exacerbates long-term inequality, causing rich agents to become richer and poor agents to become poorer. On the other hand, diversity in risk only amplifies inequality when combined with bias in group assortment, that is, a high probability that agents from the same risk class play together. Our results also suggest that taking wealth inequality into account can help design effective policies aimed at fostering cooperation in large groups, a configuration in which collective action is harder to achieve. Finally, we characterize the circumstances under which aligning risk perception is crucial and those under which reducing wealth inequality is the deciding factor for collective welfare.
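
The setup the abstract describes, a threshold public goods game played by independent reinforcement learners with heterogeneous wealth and risk perception, is concrete enough to sketch. Below is a minimal illustrative sketch, not the authors' code; the class structure, the stateless Q-learning rule, and every parameter name and value are assumptions.

```python
import random

N_AGENTS = 50           # population size (assumption)
GROUP_SIZE = 5          # agents per game
CONTRIB_FRACTION = 0.1  # fraction of wealth a cooperator contributes
THRESHOLD = 0.25        # fraction of group wealth needed to avert disaster
ALPHA, EPS = 0.1, 0.05  # learning rate and exploration rate

class Agent:
    def __init__(self, wealth, risk):
        self.wealth = wealth   # heterogeneous endowment
        self.risk = risk       # perceived probability of loss on failure
        self.q = [0.0, 0.0]    # Q-values: 0 = defect, 1 = contribute

    def act(self):
        if random.random() < EPS:
            return random.randrange(2)
        return max((0, 1), key=lambda a: self.q[a])

    def learn(self, action, reward):
        # stateless Q-learning update (independent learners, no next state)
        self.q[action] += ALPHA * (reward - self.q[action])

agents = [Agent(wealth=random.choice([1.0, 4.0]),   # rich vs. poor class
                risk=random.choice([0.2, 0.8]))     # low vs. high risk class
          for _ in range(N_AGENTS)]

for episode in range(10_000):
    group = random.sample(agents, GROUP_SIZE)
    actions = [a.act() for a in group]
    pot = sum(CONTRIB_FRACTION * a.wealth
              for a, act in zip(group, actions) if act == 1)
    target = THRESHOLD * sum(a.wealth for a in group)
    for a, act in zip(group, actions):
        cost = CONTRIB_FRACTION * a.wealth if act == 1 else 0.0
        # if the pot misses the target, each agent risks losing its wealth
        loss = a.wealth if (pot < target and random.random() < a.risk) else 0.0
        a.learn(act, -(cost + loss))
```

Biased group assortment, as studied in the paper, would amount to sampling groups preferentially from one risk class rather than uniformly, as done here.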


Can AI Identify Patients With Long COVID?

#artificialintelligence

Long COVID refers to the condition in which people experience long-term effects from infection with SARS-CoV-2, the virus responsible for COVID-19 (coronavirus disease 2019), according to the U.S. Centers for Disease Control and Prevention (CDC). A new study published in The Lancet Digital Health applies artificial intelligence (AI) machine learning to identify patients with long COVID from electronic health records with high accuracy. "Patients identified by our models as potentially having long COVID can be interpreted as patients warranting care at a specialty clinic for long COVID, which is an essential proxy for long COVID diagnosis as its definition continues to evolve," the researchers concluded. "We also achieve the urgent goal of identifying potential long COVID in patients for clinical trials." Globally, there have been more than 510 million confirmed cases of COVID-19 and more than 6.2 million deaths, according to April 2022 statistics from Johns Hopkins University.
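
The article does not detail the study's model, so the snippet below is a generic, hypothetical stand-in rather than the study's pipeline: the feature matrix and labels are random placeholders, and using specialty-clinic attendance as the label follows the proxy the researchers describe.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder EHR features: one row per patient, columns derived from
# demographics, diagnosis codes, medications, and utilization (assumption).
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, 1000)  # 1 = attended a long-COVID clinic (proxy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# rank held-out patients by predicted probability of needing specialty care
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```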


A Sensor Sniffs for Cancer, Using Artificial Intelligence

#artificialintelligence

Researchers at Memorial Sloan Kettering Cancer Center (MSK) have developed a sensor that can be trained to sniff for cancer with the help of artificial intelligence. Although the sensor isn't trained the way a police dog is trained to sniff for explosives or drugs, it works somewhat like the nose. The nose can detect more than a trillion different scents, even though it has just a few hundred types of olfactory receptors. The pattern of which odor molecules bind to which receptors creates a kind of molecular signature that the brain uses to recognize a scent. Like the nose, the cancer detection technology uses an array of sensors to detect a molecular signature of the disease.
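
The pattern-over-receptors idea maps naturally onto standard pattern classification. The sketch below is purely illustrative, with synthetic data standing in for sensor readings: each sample is the joint response of a hypothetical 16-sensor array, and the classifier learns the signature from the pattern as a whole rather than from any single sensor.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic response patterns from a hypothetical 16-sensor array:
# no individual sensor separates the classes, the joint pattern does.
healthy = rng.normal(0.0, 1.0, size=(100, 16))
cancer = rng.normal(0.5, 1.0, size=(100, 16))   # shifted response pattern
X = np.vstack([healthy, cancer])
y = np.array([0] * 100 + [1] * 100)

# classify a new sample by its similarity to known signature patterns
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
new_sample = rng.normal(0.5, 1.0, size=(1, 16))
print("predicted class:", clf.predict(new_sample)[0])  # 1 = cancer-like
```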


Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches

Journal of Artificial Intelligence Research

This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society's most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench "bias," are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI's long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.


Multi-Agent Advisor Q-Learning

Journal of Artificial Intelligence Research

In the last decade, there have been significant advances in multi-agent reinforcement learning (MARL), but numerous challenges, such as high sample complexity and slow convergence to stable policies, must still be overcome before widespread deployment is possible. However, many real-world environments already, in practice, deploy sub-optimal or heuristic approaches for generating policies. An interesting question that arises is how best to use such approaches as advisors to help improve reinforcement learning in multi-agent domains. In this paper, we provide a principled framework for incorporating action recommendations from online sub-optimal advisors in multi-agent settings. We describe the problem of ADvising Multiple Intelligent Reinforcement Agents (ADMIRAL) in nonrestrictive general-sum stochastic game environments and present two novel Q-learning-based algorithms: ADMIRAL - Decision Making (ADMIRAL-DM) and ADMIRAL - Advisor Evaluation (ADMIRAL-AE), which allow us to improve learning by appropriately incorporating advice from an advisor (ADMIRAL-DM) and to evaluate the effectiveness of an advisor (ADMIRAL-AE). We analyze the algorithms theoretically and provide fixed-point guarantees regarding their learning in general-sum stochastic games. Furthermore, extensive experiments illustrate that these algorithms can be used in a variety of environments, perform favourably compared with related baselines, scale to large state-action spaces, and are robust to poor advice from advisors.
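
To make the advising idea concrete, here is a minimal sketch in the spirit of ADMIRAL-DM, not the paper's algorithm: the multi-agent, general-sum machinery is collapsed to a single learner, the advisor is a hypothetical stand-in, and reliance on its advice decays over time. The paper's ADMIRAL-AE variant, which evaluates the advisor itself, is not shown.

```python
import random
from collections import defaultdict

ACTIONS = range(4)        # action set (assumption)
ALPHA, GAMMA = 0.1, 0.95  # learning rate and discount factor

Q = defaultdict(lambda: [0.0] * len(ACTIONS))

def advisor(state):
    """Stand-in for a heuristic advisor policy (assumption)."""
    return hash(state) % len(ACTIONS)

def choose_action(state, t):
    # follow the advisor's recommendation with decaying probability,
    # otherwise act greedily on the agent's own Q-estimates
    follow_prob = 1.0 / (1.0 + 0.01 * t)
    if random.random() < follow_prob:
        return advisor(state)
    return max(ACTIONS, key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    # standard Q-learning backup; advice only shapes exploration above
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])
```

Decaying the advisor's influence is what lets the learner eventually outgrow poor advice, which is the robustness property the experiments test.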


Devaluing Stocks With Adversarially Crafted Retweets

#artificialintelligence

A joint research collaboration between US universities and IBM has formulated a proof-of-concept adversarial attack that is theoretically capable of causing stock market losses simply by changing one word in a retweet of a Twitter post. In one experiment, the researchers were able to hobble the Stocknet prediction model with two methods: a manipulation attack and a concatenation attack. The attack surface exists because a growing number of automated and machine learning stock prediction systems rely on organic social media as predictors of performance, and because manipulating this 'in-the-wild' data is a process that can, potentially, be reliably formulated. Besides Twitter, systems of this nature ingest data from Reddit, StockTwits, and Yahoo News, among others. The difference between Twitter and the other sources is that retweets are editable, even if the original tweets are not.
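
A one-word substitution attack of the kind described can be sketched as a greedy search over candidate replacements. Everything below is hypothetical: predict_move stands in for a score from a model such as Stocknet, and the candidate table is an assumption; real attacks search substitutions under semantic-similarity constraints.

```python
# Hypothetical substitution candidates; a real attack would draw these
# from embeddings or a synonym lexicon under similarity constraints.
CANDIDATES = {"rise": ["climb", "drift", "slip"],
              "strong": ["solid", "mixed", "soft"]}

def attack(tweet, model):
    """Find the one-word edit that most lowers the predicted movement."""
    base = model.predict_move(tweet)   # hypothetical: predicted price move
    best, best_drop = tweet, 0.0
    for word, subs in CANDIDATES.items():
        if word not in tweet.split():
            continue
        for sub in subs:
            perturbed = tweet.replace(word, sub, 1)  # edit one word only
            drop = base - model.predict_move(perturbed)
            if drop > best_drop:
                best, best_drop = perturbed, drop
    return best  # the retweet text that most devalues the prediction
```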


AI helps scientists design novel plastic-eating enzyme

#artificialintelligence

In brief A synthetic enzyme designed using machine-learning software can break down waste plastics in 24 hours, according to research published in Nature. Scientists at the University of Texas at Austin studied the natural structure of PETase, an enzyme known to degrade the polymer chains in polyethylene terephthalate (PET). Next, they trained a model to generate mutations of the enzyme that work fast at low temperatures, let the software loose, and picked from the output a variant they named FAST-PETase to synthesize. FAST stands for functional, active, stable, and tolerant. FAST-PETase, we're told, can break down plastic in as little as 24 hours at temperatures between 30 and 50 degrees Celsius.
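
The article describes a generate-score-select workflow. The sketch below is purely illustrative and not the published method: the single-point mutation proposer and the scoring function are hypothetical stand-ins for the learned model.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def propose_mutations(sequence, n=100):
    """Yield candidate single-point mutants of an enzyme sequence."""
    for _ in range(n):
        i = random.randrange(len(sequence))
        yield sequence[:i] + random.choice(AMINO_ACIDS) + sequence[i + 1:]

def select_variant(wild_type, score):
    # keep the mutant the learned model scores highest for activity/stability
    return max(propose_mutations(wild_type), key=score)

# demo with a dummy scorer standing in for the learned model (assumption)
best = select_variant("MNFPRASRLMQAAVL", score=lambda seq: seq.count("A"))
print(best)
```

In practice the selected candidates would still be synthesized and assayed, as the UT Austin team did before naming FAST-PETase.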


A Data-Driven Exploration of the Race between Human Labor and Machines in the 21st Century

Communications of the ACM

Anxiety about automation is prevalent in this era of rapid technological advances, especially in artificial intelligence (AI), machine learning (ML), and robotics. Accordingly, how human labor competes, or cooperates, with machines in performing a range of tasks (what we term "the race between human labor and machines") has attracted a great deal of attention among the public, policymakers, and researchers.14,15,18 While there have been persistent concerns about new technology and automation replacing human tasks at least since the Industrial Revolution,8 recent technological advances in executing sophisticated and complex tasks--enabled by a combinatorial innovation of new techniques and algorithms, advances in computational power, and exponential increases in data--differentiate the 21st century from previous ones.14 For instance, recent advances in autonomous self-driving cars demonstrate how a wide range of human tasks once considered least susceptible to automation may no longer be safe from computerization. Another case in point is human competition against machines, such as IBM's Watson on the TV game show "Jeopardy!" Both cases imply that some tasks, such as pattern recognition and information processing, are being rapidly computerized. Furthermore, recent studies suggest that robotics also plays a role in automating manual tasks and decreasing the employment of low-wage workers.3,22


Solving The Challenges Of Robotic Pizza-Making

#artificialintelligence

For a robot, working with a deformable object like dough is tricky because the shape of dough can change in many ways that are difficult to represent with an equation. Plus, creating a new shape out of that dough requires multiple steps and the use of different tools. It is especially difficult for a robot to learn a manipulation task with a long sequence of steps, where there are many possible choices, since learning often occurs through trial and error. Researchers at MIT, Carnegie Mellon University, and the University of California at San Diego have come up with a better way. They created a framework for a robotic manipulation system that uses a two-stage learning process, which could enable a robot to perform complex dough-manipulation tasks over a long timeframe.
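
The article doesn't spell out the two stages, so the sketch below is a schematic reading rather than the published system: one plausible structure has stage one learn short-horizon, per-tool skills, and stage two sequence those skills toward a goal shape. Every class, function, and number here is an assumption.

```python
def shape_distance(state, goal):
    """Stand-in metric between dough states (assumption)."""
    return abs(state - goal)

class Skill:
    """Stage 1: a short-horizon policy learned for one tool."""
    def __init__(self, tool, effect):
        self.tool, self.effect = tool, effect
    def apply(self, state):
        return state + self.effect   # placeholder learned dynamics

def plan(state, goal, skills, horizon=5):
    """Stage 2: greedily choose which tool skill to apply next."""
    sequence = []
    for _ in range(horizon):
        best = min(skills, key=lambda s: shape_distance(s.apply(state), goal))
        state = best.apply(state)
        sequence.append(best.tool)
    return sequence

# toy demo: dough state abstracted to a single number for illustration
tools = [Skill("roller", +2.0), Skill("cutter", -1.0), Skill("gripper", +0.5)]
print(plan(state=0.0, goal=7.0, skills=tools))
```

Splitting learning this way keeps the trial-and-error inside short skills, so the long multi-tool sequence is composed rather than learned end to end.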


Agent-Based Modeling for Predicting Pedestrian Trajectories Around an Autonomous Vehicle

Journal of Artificial Intelligence Research

This paper addresses modeling and simulating pedestrian trajectories when interacting with an autonomous vehicle in a shared space. Most pedestrian–vehicle interaction models are not suitable for predicting individual trajectories. Data-driven models yield accurate predictions but lack generalizability to new scenarios, usually do not run in real time, and produce results that are poorly explainable. Current expert models do not deal with the diversity of possible pedestrian interactions with the vehicle in a shared space and lack microscopic validation. We propose an expert pedestrian model that combines the social force model with a new decision model for anticipating pedestrian–vehicle interactions. The proposed model integrates different observed pedestrian behaviors, as well as the behaviors of social groups of pedestrians, in diverse interaction scenarios with a car. We calibrate the model by fitting the parameter values on a training set. We validate the model and evaluate its predictive potential through qualitative and quantitative comparisons with ground-truth trajectories. The proposed model reproduces observed behaviors that have not been replicated by the social force model and outperforms the social force model at predicting pedestrian behavior around the vehicle on the dataset used. The model generates explainable, real-time trajectory predictions. Additional evaluation on a new dataset shows that the model generalizes well to new scenarios and can be applied to embedded prediction on an autonomous vehicle.
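
The social force model the paper builds on is a standard formulation (Helbing and Molnár, 1995): each pedestrian accelerates toward a desired velocity while being repelled by nearby pedestrians and obstacles. The sketch below shows only that base term with illustrative parameter values; the paper's added decision model for anticipating the vehicle is not represented.

```python
import numpy as np

TAU = 0.5        # relaxation time toward desired velocity, seconds
A, B = 2.0, 0.3  # repulsion strength and range (illustrative values)

def social_force(pos, vel, desired_vel, neighbors):
    """Acceleration on one pedestrian; positions/velocities are 2D arrays."""
    # driving term: relax toward the pedestrian's desired velocity
    force = (desired_vel - vel) / TAU
    # repulsive terms: exponential push-away from each nearby pedestrian
    for other_pos in neighbors:
        diff = pos - other_pos
        dist = np.linalg.norm(diff)
        if dist > 1e-6:
            force += A * np.exp(-dist / B) * diff / dist
    return force  # integrate over time to update velocity and position

# toy usage: one pedestrian heading right, one neighbor just ahead
accel = social_force(pos=np.array([0.0, 0.0]),
                     vel=np.array([1.0, 0.0]),
                     desired_vel=np.array([1.3, 0.0]),
                     neighbors=[np.array([0.5, 0.1])])
print(accel)
```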