
A Customized SAT-based Solver for Graph Coloring

Brand, Timo, Faber, Daniel, Held, Stephan, Mutzel, Petra

arXiv.org Artificial Intelligence

We introduce ZykovColor, a novel SAT-based algorithm for the graph coloring problem, built on an encoding that mimics the Zykov tree. Our method is based on an approach of Hébrard and Katsirelos (2020) that employs a propagator to enforce transitivity constraints, incorporate lower bounds for search tree pruning, and enable inferred propagations. We leverage the recently introduced IPASIR-UP interface for CaDiCaL to implement these techniques with a SAT solver. Furthermore, we propose new features that take advantage of the underlying SAT solver. These include modifying the integrated decision strategy with vertex domination hints and using an incremental bottom-up search that allows learned clauses to be reused across calls. Additionally, we integrate a more effective clique computation and an algorithm for computing the fractional chromatic number to improve the lower bounds used for pruning during the search. We validate the effectiveness of each new feature through an experimental analysis. ZykovColor outperforms other state-of-the-art graph coloring implementations on the DIMACS benchmark set. Further experiments on random Erdős-Rényi graphs show that our new approach matches or outperforms state-of-the-art SAT-based methods for both very sparse and highly dense graphs. We give an additional configuration of ZykovColor that dominates other SAT-based methods on the Erdős-Rényi graphs.
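The abstract does not spell out the Zykov encoding itself, but the underlying idea is the classical Zykov recursion: for any non-adjacent pair (u, v), the chromatic number is the minimum over merging u and v (same color) and adding the edge uv (different colors), bottoming out at complete graphs. A minimal, unoptimized sketch of that recursion (no SAT solver, propagator, or clique bounds, all of which the paper adds):

```python
def zykov_chromatic(adj):
    """Exact chromatic number via the Zykov branching rule.

    adj: dict mapping each vertex to the set of its neighbors.
    Exponential-time illustration only; the paper's solver replaces this
    naive recursion with a SAT encoding plus pruning via lower bounds.
    """
    verts = sorted(adj)
    # Find any non-adjacent pair to branch on.
    pair = None
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            if v not in adj[u]:
                pair = (u, v)
                break
        if pair:
            break
    if pair is None:
        # Complete graph: every vertex needs its own color.
        return len(verts)
    u, v = pair
    # Branch 1: merge v into u (u and v receive the same color).
    merged = {w: {n for n in nbrs if n != v}
              for w, nbrs in adj.items() if w != v}
    for n in adj[v]:
        if n != u:
            merged[u].add(n)
            merged[n].add(u)
    # Branch 2: separate u and v (force different colors by adding edge uv).
    sep = {w: set(nbrs) for w, nbrs in adj.items()}
    sep[u].add(v)
    sep[v].add(u)
    return min(zykov_chromatic(merged), zykov_chromatic(sep))
```

Merging shrinks the vertex set and separating adds an edge, so every branch eventually reaches a complete graph, which terminates the recursion.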


Beyond Predictions: A Participatory Framework for Multi-Stakeholder Decision-Making

Vineis, Vittoria, Perelli, Giuseppe, Tolomei, Gabriele

arXiv.org Artificial Intelligence

Conventional decision-support systems, primarily based on supervised learning, focus on outcome prediction models to recommend actions. However, they often fail to account for the complexities of multi-actor environments, where diverse and potentially conflicting stakeholder preferences must be balanced. In this paper, we propose a novel participatory framework that redefines decision-making as a multi-stakeholder optimization problem, capturing each actor's preferences through context-dependent reward functions. Our framework leverages $k$-fold cross-validation to fine-tune user-provided outcome prediction models and evaluate decision strategies, including compromise functions mediating stakeholder trade-offs. We introduce a synthetic scoring mechanism that exploits user-defined preferences across multiple metrics to rank decision-making strategies and identify the optimal decision-maker. The selected decision-maker can then be used to generate actionable recommendations for new data. We validate our framework using two real-world use cases, demonstrating its ability to deliver recommendations that effectively balance multiple metrics, achieving results that are often beyond the scope of purely prediction-based methods. Ablation studies demonstrate that our framework, with its modular, model-agnostic, and inherently transparent design, integrates seamlessly with various predictive models, reward structures, evaluation metrics, and sample sizes, making it particularly suited for complex, high-stakes decision-making contexts.
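The abstract describes ranking decision strategies by a synthetic score built from user-defined preferences over multiple metrics. The paper's actual scoring mechanism is not given here; as a hypothetical illustration, a weighted aggregate over per-metric evaluations might look like:

```python
def rank_strategies(scores, weights):
    """Rank decision strategies by a weighted sum of metric values.

    scores:  {strategy_name: {metric_name: value}} -- per-strategy
             evaluations (e.g., averaged over k-fold cross-validation).
    weights: {metric_name: weight} -- user-defined preferences.
    Assumes higher is better for every metric (an assumption of this
    sketch, not a claim about the paper's mechanism).
    Returns strategy names, best first.
    """
    def total(strategy):
        return sum(weights[m] * v for m, v in scores[strategy].items())
    return sorted(scores, key=total, reverse=True)
```

The top-ranked strategy would then play the role of the selected decision-maker used to generate recommendations for new data.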


Leveraging automatic strategy discovery to teach people how to select better projects

Heindrich, Lovis, Lieder, Falk

arXiv.org Artificial Intelligence

The decisions of individuals and organizations are often suboptimal because normative decision strategies are too demanding in the real world. Recent work suggests that some errors can be prevented by leveraging artificial intelligence to discover and teach prescriptive decision strategies that take people's constraints into account. So far, this line of research has been limited to simplified decision problems. This article is the first to extend this approach to a real-world decision problem, namely project selection. We develop a computational method (MGPS) that automatically discovers project selection strategies that are optimized for real people and develop an intelligent tutor that teaches the discovered strategies. We evaluated MGPS on a computational benchmark and tested the intelligent tutor in a training experiment with two control conditions. MGPS outperformed a state-of-the-art method and was more computationally efficient. Moreover, the intelligent tutor significantly improved people's decision strategies. Our results indicate that our method can improve human decision-making in naturalistic settings similar to real-world project selection, a first step towards applying strategy discovery to the real world.


Ensemble Reinforcement Learning: A Survey

Song, Yanjie, Suganthan, P. N., Pedrycz, Witold, Ou, Junwei, He, Yongming, Chen, Yingwu, Wu, Yutong

arXiv.org Artificial Intelligence

Reinforcement Learning (RL) has emerged as a highly effective technique for addressing various scientific and applied problems. Despite its success, certain complex tasks remain difficult to address with a single model and algorithm alone. In response, ensemble reinforcement learning (ERL), a promising approach that combines the benefits of both RL and ensemble learning (EL), has gained widespread popularity. ERL leverages multiple models or training algorithms to comprehensively explore the problem space and possesses strong generalization capabilities. In this study, we present a comprehensive survey on ERL to provide readers with an overview of recent advances and challenges in the field. Firstly, we provide an introduction to the background and motivation for ERL. Secondly, we conduct a detailed analysis of strategies such as model selection and combination that have been successfully implemented in ERL. Subsequently, we explore the applications of ERL, summarize the datasets, and analyze the algorithms employed. Finally, we outline several open questions and discuss future research directions for ERL. By offering guidance for future scientific research and engineering applications, this survey contributes to the advancement of ERL.


V2V-based Collision-avoidance Decision Strategy for Autonomous Vehicles Interacting with Fully Occluded Pedestrians at Midblock on Multilane Roadways

Zou, Fengjiao, Deng, Hsien-Wen, Iunn, Tsing-Un, Ogle, Jennifer Harper, Jin, Weimin

arXiv.org Artificial Intelligence

ABSTRACT

Pedestrian occlusion is challenging for autonomous vehicles (AVs) at midblock locations on multilane roadways because an AV cannot detect crossing pedestrians that are fully occluded by downstream vehicles in adjacent lanes. This paper tests the capability of vehicle-to-vehicle (V2V) communication between an AV and its downstream vehicles to share midblock pedestrian crossing information. The researchers developed a V2V-based collision-avoidance decision strategy and compared it to a base scenario (i.e., a decision strategy without V2V). Simulation results showed that in the base scenario, the near-zero time-to-collision (TTC) left no time for the AV to take appropriate action and resulted in hard braking followed by collisions. The V2V-based collision-avoidance decision strategy, by contrast, allowed a proportional braking approach that increased the TTC and let the pedestrian cross safely. In conclusion, the V2V-based collision-avoidance decision strategy offers greater safety benefits for an AV interacting with fully occluded pedestrians at midblock locations on multilane roadways.

Key Words: Autonomous vehicle (AV); Fully occluded pedestrian; Collision-avoidance decisions; Time-to-collision (TTC); Vehicle-to-vehicle (V2V) communication

INTRODUCTION

One of the safety challenges for autonomous vehicles (AVs) in the absence of connectivity is occluded pedestrians, because AVs cannot detect occluded pedestrians in time to take evasive action (Shetty et al., 2021).
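The abstract hinges on time-to-collision and a proportional braking response, though it does not give the formulas. As a hypothetical sketch (the function names, target TTC, and braking heuristic below are illustrative assumptions, not the paper's controller):

```python
def time_to_collision(gap_m, av_speed_mps, ped_speed_mps=0.0):
    """TTC between an AV and a crossing pedestrian.

    gap_m: distance from the AV to the pedestrian's projected crossing
    point. Returns infinity when the closing speed is non-positive
    (no collision course).
    """
    closing = av_speed_mps - ped_speed_mps
    if closing <= 0:
        return float("inf")
    return gap_m / closing


def proportional_brake(gap_m, speed_mps, ttc_target_s=3.0, max_decel=8.0):
    """Deceleration (m/s^2) chosen in proportion to the TTC shortfall.

    A crude stand-in for the paper's proportional braking approach:
    if TTC is already at or above the target, do nothing; otherwise
    slow toward the speed that would restore the target TTC, capped
    at a maximum comfortable deceleration.
    """
    ttc = time_to_collision(gap_m, speed_mps)
    if ttc >= ttc_target_s:
        return 0.0
    v_needed = gap_m / ttc_target_s          # speed that yields the target TTC
    decel = (speed_mps - v_needed) / ttc_target_s  # spread braking over the target window
    return min(decel, max_decel)
```

The contrast in the abstract maps directly onto this sketch: without V2V, the gap is discovered only when it is near zero, so TTC is near zero and no feasible deceleration avoids the collision; with V2V, the gap is known early, TTC is large, and a gentle proportional response suffices.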


Boosting human decision-making with AI-generated decision aids

Becker, Frederic, Skirzyński, Julian, van Opheusden, Bas, Lieder, Falk

arXiv.org Artificial Intelligence

Human decision-making is plagued by many systematic errors. Many of these errors can be avoided by providing decision aids that guide decision-makers to attend to the important information and integrate it according to a rational decision strategy. Designing such decision aids used to be a tedious manual process. Advances in cognitive science might make it possible to automate this process in the future. We recently introduced machine learning methods for discovering optimal strategies for human decision-making automatically and an automatic method for explaining those strategies to people. Decision aids constructed by this method were able to improve human decision-making. However, following the descriptions generated by this method is very tedious. We hypothesized that this problem can be overcome by conveying the automatically discovered decision strategy as a series of natural language instructions for how to reach a decision. Experiment 1 showed that people do indeed understand such procedural instructions more easily than the decision aids generated by our previous method. Encouraged by this finding, we developed an algorithm for translating the output of our previous method into procedural instructions. We applied the improved method to automatically generate decision aids for a naturalistic planning task (i.e., planning a road trip) and a naturalistic decision task (i.e., choosing a mortgage). Experiment 2 showed that these automatically generated decision-aids significantly improved people's performance in planning a road trip and choosing a mortgage. These findings suggest that AI-powered boosting might have potential for improving human decision-making in the real world.


Moving from AI awareness to meaningful implementation

#artificialintelligence

While most executives at financial institutions agree that artificial intelligence (AI) is important to their organization's success, few have fully implemented AI projects. In a recent Cognizant survey of 230 financial services executives, three-quarters said AI is extremely or very important to the success of their organizations. However, only 61% of those were aware of an AI project at their company. Even more telling, only 29% were aware of a project that had been fully implemented. Clearly, AI is quickly becoming a competitive requirement, creating the risk that those who are not implementing or updating AI capabilities will fall behind.


Beating humans in a penny-matching game by leveraging cognitive hierarchy theory and Bayesian learning

Tian, Ran, Li, Nan, Kolmanovsky, Ilya, Girard, Anouck

arXiv.org Artificial Intelligence

It is a longstanding goal of artificial intelligence (AI) to be superior to human beings in decision making. Games are suitable for testing AI capabilities of making good decisions in non-numerical tasks. In this paper, we develop a new AI algorithm to play the penny-matching game considered in Shannon's "mind-reading machine" (1953) against human players. In particular, we exploit cognitive hierarchy theory and Bayesian learning techniques to continually evolve a model for predicting human player decisions, and let the AI player make decisions according to the model predictions to pursue the best chance of winning. Experimental results show that our AI algorithm beats 27 out of 30 volunteer human players.

INTRODUCTION

Developing artificial intelligence (AI) to beat humans in strategic games has been drawing the attention of researchers for decades [1]-[10].
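The abstract only names the ingredients (a continually updated predictive model of the human, and best-response play against its predictions). As a deliberately simplified, hypothetical sketch of that loop, the following models just one behavioral feature, whether the human tends to repeat or switch their last choice, with Laplace-smoothed counts; the paper's cognitive-hierarchy and Bayesian machinery is far richer:

```python
import random


class PennyMatcherAI:
    """Toy opponent model for penny matching (choices are 0 or 1).

    Tracks how often the human repeats vs. switches their previous
    choice and predicts accordingly. All names and the matching
    convention here are illustrative assumptions, not the paper's model.
    """

    def __init__(self):
        self.repeat = 1   # Laplace prior pseudo-counts
        self.switch = 1
        self.last_human = None

    def predict_human(self):
        """Most likely next human choice under the repeat/switch model."""
        if self.last_human is None:
            return random.choice([0, 1])  # no evidence yet
        p_repeat = self.repeat / (self.repeat + self.switch)
        return self.last_human if p_repeat >= 0.5 else 1 - self.last_human

    def move(self):
        # Convention for this sketch: the AI wins by matching the human,
        # so it plays its prediction of the human's next choice.
        return self.predict_human()

    def observe(self, human_choice):
        """Bayesian-style count update after seeing the human's choice."""
        if self.last_human is not None:
            if human_choice == self.last_human:
                self.repeat += 1
            else:
                self.switch += 1
        self.last_human = human_choice
```

Against a human who streaks (repeats), the repeat count grows and the AI locks onto the streak; against a habitual alternator, the switch count dominates instead.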


Four Steps For Defining An Accurate Digital Decision With AI

#artificialintelligence

Attempts to use artificial intelligence (AI) technology in industry settings often fail to identify "how" AI will help a user make a better decision. For example, I have seen recommender systems developed without considering how the users' actions can improve future recommendations. If users can't see the value of their interactions, they may choose not to use the system. To address this challenge, I created a four-step process to help improve the success of AI applications. These steps can help users define the parts of their AI solution through brainstorming, discussion and iteration.


Pitt Researcher Uses Video Games to Unlock New Levels of AI

#artificialintelligence

The University of Pittsburgh's Daniel Jiang designs algorithms that learn decision strategies in complex and uncertain environments, and tests them in the simulated environments of a genre of video games called Multiplayer Online Battle Arena (MOBA). MOBAs involve players controlling one of several "hero" characters in order to destroy opponents' bases while protecting their own. A successful algorithm for training a gameplay artificial intelligence system must overcome several challenges, such as real-time decision making and long decision horizons. Jiang's team designed the algorithm to evaluate 41 pieces of information and output one of 22 different actions; the most successful player used the Monte Carlo tree search method to generate data, which was fed into a neural network.