Individual utilities of life satisfaction reveal inequality aversion unrelated to political alignment

Cooper, Crispin, Fredrich, Ana, Reggiani, Tommaso, Poortinga, Wouter

arXiv.org Artificial Intelligence

How should well-being be prioritised in society, and what trade-offs are people willing to make between fairness and personal well-being? We investigate these questions using a stated preference experiment with a nationally representative UK sample (n = 300), in which participants evaluated life satisfaction outcomes for both themselves and others under conditions of uncertainty. Individual-level utility functions were estimated using an Expected Utility Maximisation (EUM) framework and tested for sensitivity to the overweighting of small probabilities, as characterised by Cumulative Prospect Theory (CPT). A majority of participants displayed concave (risk-averse) utility curves and showed stronger aversion to inequality in societal life satisfaction outcomes than to personal risk. These preferences were unrelated to political alignment, suggesting a shared normative stance on fairness in well-being that cuts across ideological boundaries. The results challenge the use of average life satisfaction as a policy metric, and support the development of nonlinear utility-based alternatives that more accurately reflect collective human values. Implications for public policy, well-being measurement, and the design of value-aligned AI systems are discussed.
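The concavity finding can be illustrated with a toy expected-utility calculation. The power utility form and the 0-10 life-satisfaction scale below are illustrative assumptions, not the utility functions the paper estimates:

```python
import math

def u(x, rho=0.5):
    # Concave (risk-averse) power utility; rho < 1 gives curvature.
    return x ** rho

# A 50/50 gamble over life-satisfaction scores of 2 and 8 (0-10 scale).
outcomes, probs = [2.0, 8.0], [0.5, 0.5]
expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))
utility_of_mean = u(sum(p * x for p, x in zip(probs, outcomes)))

# For concave u, Jensen's inequality gives EU < u(E[x]):
assert expected_utility < utility_of_mean

# The certainty equivalent (u^-1 of EU) falls below the mean of 5:
# a risk-averse respondent accepts a sure 4.5 over the gamble.
certainty_equivalent = expected_utility ** (1 / 0.5)
assert certainty_equivalent < 5.0
```

The gap between the certainty equivalent and the mean is one way such experiments quantify how far a respondent's curve departs from the linear utility implicit in averaging life satisfaction.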


Meet the Guys Betting Big on AI Gambling Agents

WIRED

When Carson Szeder turned five dollars into more than a thousand by betting on an NFL game last year, he knew he was onto something major. "Definitely my biggest win," he says. He hadn't scored because he was especially deft at football analytics--or because he was particularly lucky. Instead, he says he used an AI program to help him decide how to gamble. Since a federal ban on sports betting was struck down in the United States seven years ago, gambling on the internet has exploded in popularity.


Inside the Biden Administration's Gamble to Freeze China's AI Future

WIRED

Alan Estevez was sitting at his dining room table wearing a t-shirt when Secretary of Commerce Gina Raimondo called on Zoom to ask if he wanted to be the Biden administration's top export control official. "You're going to have to sell me on this," Estevez recalls telling her. It was 2021, and the outspoken New Jersey native thought he had finally left public service behind. After more than three decades at the Pentagon, he had moved into consulting. He wasn't sure if he was ready to go back.


Formal Power Series Representations in Probability and Expected Utility Theory

Pedersen, Arthur Paul, Alexander, Samuel Allen

arXiv.org Artificial Intelligence

We advance a general theory of coherent preference that surrenders restrictions embodied in orthodox doctrine. This theory enjoys the property that any preference system admits extension to a complete system of preferences, provided it satisfies a certain coherence requirement analogous to the one de Finetti advanced for his foundations of probability. Unlike de Finetti's theory, the one we set forth requires neither transitivity nor Archimedeanness nor boundedness nor continuity of preference. This theory also enjoys the property that any complete preference system meeting the standard of coherence can be represented by utility in an ordered field extension of the reals. Representability by utility is a corollary of this paper's central result, which at once extends Hölder's Theorem and strengthens Hahn's Embedding Theorem.
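A minimal sketch of what utility "in an ordered field extension of the reals" can look like: truncated formal power series in an infinitesimal, compared lexicographically. The tuple encoding below is an illustrative simplification, not the paper's construction:

```python
# Utilities as truncated formal power series in an infinitesimal eps,
# encoded as coefficient tuples (a0, a1) meaning a0 + a1*eps, where
# eps > 0 is smaller than every positive real. Python's tuple
# comparison is exactly the lexicographic order this induces.
def leq(u, v):
    return u <= v

# A gamble worth 1 + eps strictly beats one worth exactly 1 ...
assert leq((1, 0), (1, 1)) and not leq((1, 1), (1, 0))

# ... yet no finite multiple of eps reaches 1: the Archimedean
# property fails, which real-valued utility cannot express.
assert all(not leq((1, 0), (0, n)) for n in (1, 10, 10**6))
```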


Steering Risk Preferences in Large Language Models by Aligning Behavioral and Neural Representations

Zhu, Jian-Qiao, Yan, Haijiang, Griffiths, Thomas L.

arXiv.org Artificial Intelligence

Changing the behavior of large language models (LLMs) can be as straightforward as editing the Transformer's residual streams using appropriately constructed "steering vectors." These modifications to internal neural activations, a form of representation engineering, offer an effective and targeted means of influencing model behavior without retraining or fine-tuning the model. But how can such steering vectors be systematically identified? We propose a principled approach for uncovering steering vectors by aligning latent representations elicited through behavioral methods (specifically, Markov chain Monte Carlo with LLMs) with their neural counterparts. To evaluate this approach, we focus on extracting latent risk preferences from LLMs and steering their risk-related outputs using the aligned representations as steering vectors. We show that the resulting steering vectors successfully and reliably modulate LLM outputs in line with the targeted behavior.
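The basic mechanics of a steering vector, adding a fixed direction to residual-stream activations, can be sketched on a toy array. The random "hidden states" and steering direction below are stand-ins; in the paper the vector is derived by aligning behaviorally elicited (MCMC) representations with neural ones:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16

# Toy residual stream: hidden states for 4 token positions.
hidden = rng.normal(size=(4, d_model))

# Hypothetical unit-norm steering vector (random here for illustration).
steer = rng.normal(size=d_model)
steer /= np.linalg.norm(steer)

alpha = 2.0                       # steering strength
steered = hidden + alpha * steer  # added at every token position

# The edit shifts each position's projection onto the steering
# direction by exactly alpha, leaving orthogonal components intact.
proj_before = hidden @ steer
proj_after = steered @ steer
assert np.allclose(proj_after - proj_before, alpha)
```

In practice the same addition is applied inside the forward pass (e.g., via a hook on a Transformer layer's residual stream) rather than to a standalone array.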


Japan's Rapidus starts test production in AI chipmaking gamble

The Japan Times

Japan's state-backed chip venture Rapidus began test production of next-generation chips on Tuesday, an early but key step in the country's efforts to make its own artificial intelligence components. The 2-year-old company is gearing up to mass produce semiconductors using 2-nanometer processes in 2027, which on paper would match Taiwan Semiconductor Manufacturing Co. in terms of chipmaking prowess. Japan has to date earmarked ¥1.72 trillion ($11.5 billion) to support the startup, part of a yearslong push to regain some of the tech leadership it's ceded to the U.S., Taiwan and South Korea. "It was extraordinarily difficult to develop 2 nm technology and the knowhow for mass production," and more experimentation lies ahead, Chief Executive Officer Atsuyoshi Koike, who is 72, said at a news conference. "We will take things step by step to lower error rates and secure customer trust."


Learning to Represent Individual Differences for Choice Decision Making

Chen, Yan-Ying, Weng, Yue, Filipowicz, Alexandre, Iliev, Rumen, Chen, Francine, Hakimi, Shabnam, Zhang, Yanxia, Lee, Matthew, Lyons, Kent, Wu, Charlene

arXiv.org Artificial Intelligence

Human decision making can be challenging to predict because decisions are affected by a number of complex factors. Adding to this complexity, decision-making processes can differ considerably between individuals, and methods aimed at predicting human decisions need to take individual differences into account. Behavioral science offers methods by which to measure individual differences (e.g., questionnaires, behavioral models), but these are often narrowed down to low dimensions and not tailored to specific prediction tasks. This paper investigates the use of representation learning to measure individual differences from behavioral experiment data. Representation learning offers a flexible approach to create individual embeddings from data that are both structured (e.g., demographic information) and unstructured (e.g., free text), where the flexibility provides more options for individual difference measures for personalization, e.g., free text responses may allow for open-ended questions that are less privacy-sensitive. In the current paper we use representation learning to characterize individual differences in human performance on an economic decision-making task. We demonstrate that models using representation learning to capture individual differences consistently improve decision predictions over models without representation learning, and even outperform well-known theory-based behavioral models used in these environments. Our results suggest that representation learning offers a useful and flexible tool to capture individual differences.


Function-Coherent Gambles

Wheeler, Gregory

arXiv.org Artificial Intelligence

The desirable gambles framework provides a foundational approach to imprecise probability theory but relies heavily on linear utility assumptions. This paper introduces function-coherent gambles, a generalization that accommodates non-linear utility while preserving essential rationality properties. We establish core axioms for function-coherence and prove a representation theorem that characterizes acceptable gambles through continuous linear functionals. The framework is then applied to analyze various forms of discounting in intertemporal choice, including hyperbolic, quasi-hyperbolic, scale-dependent, and state-dependent discounting. We demonstrate how these alternatives to constant-rate exponential discounting can be integrated within the function-coherent framework. This unified treatment provides theoretical foundations for modeling sophisticated patterns of time preference within the desirability paradigm, bridging a gap between normative theory and observed behavior in intertemporal decision-making under genuine uncertainty.


Function-Coherent Gambles with Non-Additive Sequential Dynamics

Wheeler, Gregory

arXiv.org Artificial Intelligence

The desirable gambles framework provides a rigorous foundation for imprecise probability theory but relies heavily on linear utility via its coherence axioms. In our related work, we introduced function-coherent gambles to accommodate non-linear utility. However, when repeated gambles are played over time -- especially in intertemporal choice where rewards compound multiplicatively -- the standard additive combination axiom fails to capture the appropriate long-run evaluation. In this paper we extend the framework by relaxing the additive combination axiom and introducing a nonlinear combination operator that effectively aggregates repeated gambles in the log-domain. This operator preserves the time-average (geometric) growth rate and addresses the ergodicity problem. We prove the key algebraic properties of the operator, discuss its impact on coherence, risk assessment, and representation, and provide a series of illustrative examples. Our approach bridges the gap between expectation values and time averages and unifies normative theory with empirically observed non-stationary reward dynamics.
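The ergodicity problem the abstract refers to can be shown with a standard multiplicative coin-flip example. The log-domain combination below is a minimal sketch of the idea, not the paper's operator, and the payoff numbers are invented:

```python
import math

def combine(x, y):
    # Log-domain combination of repeated multiplicative gambles:
    # exp(log x + log y) = x * y, i.e., compounding, not adding.
    return math.exp(math.log(x) + math.log(y))

# Each round multiplies wealth by 1.5 (heads) or 0.6 (tails), 50/50.
up, down = 1.5, 0.6

# The per-round expected value exceeds 1 (additive view looks good) ...
assert 0.5 * up + 0.5 * down > 1.0

# ... but the time-average (geometric) growth rate is negative:
# one head and one tail leave wealth at 0.9x, so repeated play shrinks it.
per_round_growth = math.sqrt(combine(up, down))
assert per_round_growth < 1.0
```

This is why an additive combination axiom misjudges repeated compounding gambles: the ensemble expectation grows while almost every individual trajectory decays.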


Benchmarking the rationality of AI decision making using the transitivity axiom

Song, Kiwon, Jennings, James M. III, Davis-Stober, Clintin P.

arXiv.org Artificial Intelligence

Fundamental choice axioms, such as transitivity of preference, provide testable conditions for determining whether human decision making is rational, i.e., consistent with a utility representation. Recent work has demonstrated that AI systems trained on human data can exhibit similar reasoning biases as humans and that AI can, in turn, bias human judgments through AI recommendation systems. We evaluate the rationality of AI responses via a series of choice experiments designed to evaluate transitivity of preference in humans. We considered ten versions of Meta's Llama 2 and 3 LLM models. We applied Bayesian model selection to evaluate whether these AI-generated choices violated two prominent models of transitivity. We found that the Llama 2 and 3 models generally satisfied transitivity, but when violations did occur, occurred only in the Chat/Instruct versions of the LLMs. We argue that rationality axioms, such as transitivity of preference, can be useful for evaluating and benchmarking the quality of AI-generated responses and provide a foundation for understanding computational rationality in AI systems more generally.
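A deterministic sketch of one transitivity test, weak stochastic transitivity over pairwise choice proportions, is below. The paper itself applies Bayesian model selection to probabilistic transitivity models, and the choice proportions here are invented for illustration:

```python
from itertools import combinations, permutations

def weak_stochastic_transitivity(p):
    # p maps ordered pairs (x, y) to the proportion of trials in which
    # x was chosen over y. WST holds iff some linear order of the items
    # has every pairwise choice proportion at or above 0.5.
    items = sorted({i for pair in p for i in pair})

    def pr(x, y):
        return p[(x, y)] if (x, y) in p else 1 - p[(y, x)]

    return any(
        all(pr(a, b) >= 0.5 for a, b in combinations(order, 2))
        for order in permutations(items)
    )

# An intransitive cycle: A beats B, B beats C, yet C beats A.
cyclic = {("A", "B"): 0.7, ("B", "C"): 0.65, ("A", "C"): 0.2}
assert not weak_stochastic_transitivity(cyclic)
```

Running such a check over an LLM's repeated pairwise choices gives a simple first-pass rationality benchmark before fitting the full probabilistic models.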