COOPERA: Continual Open-Ended Human-Robot Assistance

Ma, Chenyang, Lu, Kai, Desai, Ruta, Puig, Xavier, Markham, Andrew, Trigoni, Niki

arXiv.org Artificial Intelligence

To understand and collaborate with humans, robots must account for individual human traits, habits, and activities over time. However, most robotic assistants lack these abilities, as they primarily focus on predefined tasks in structured environments and lack a human model to learn from. This work introduces COOPERA, a novel framework for COntinual, OPen-Ended human-Robot Assistance, where simulated humans, driven by psychological traits and long-term intentions, interact with robots in complex environments. By integrating continuous human feedback, our framework, for the first time, enables the study of long-term, open-ended human-robot collaboration (HRC) in different collaborative tasks across various time-scales. Within COOPERA, we introduce a benchmark and an approach to personalize the robot's collaborative actions by learning human traits and context-dependent intents. Experiments validate the extent to which our simulated humans reflect realistic human behaviors and demonstrate the value of inferring and personalizing to human intents for open-ended and long-term HRC. Project Page: https://dannymcy.github.io/coopera/


Reverse-Engineering the Reader

Kiegeland, Samuel, Wilcox, Ethan Gotlieb, Amini, Afra, Reich, David Robert, Cotterell, Ryan

arXiv.org Artificial Intelligence

Numerous previous studies have sought to determine to what extent language models, pretrained on natural language text, can serve as useful models of human cognition. In this paper, we are interested in the opposite question: whether we can directly optimize a language model to be a useful cognitive model by aligning it to human psychometric data. To achieve this, we introduce a novel alignment technique in which we fine-tune a language model to implicitly optimize the parameters of a linear regressor that directly predicts humans' reading times of in-context linguistic units, e.g., phonemes, morphemes, or words, using surprisal estimates derived from the language model. Using words as a test case, we evaluate our technique across multiple model sizes and datasets and find that it improves language models' psychometric predictive power. However, we find an inverse relationship between psychometric power and a model's performance on downstream NLP tasks as well as its perplexity on held-out test data. While this latter trend has been observed before (Oh et al., 2022; Shain et al., 2024), we are the first to induce it by manipulating a model's alignment to psychometric data.
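The core regression step described in this abstract — predicting human reading times from a language model's surprisal estimates — can be sketched as a simple ordinary-least-squares fit. The surprisal values and reading times below are illustrative placeholders, not data from the paper, and the paper's actual method fine-tunes the language model itself rather than only fitting the regressor:

```python
import numpy as np

# Toy per-word surprisal estimates (bits) and observed reading times (ms).
# Illustrative placeholders only, not drawn from the paper's datasets.
surprisal = np.array([2.1, 5.4, 1.3, 7.8, 3.0, 6.2])
reading_time = np.array([210.0, 295.0, 190.0, 350.0, 240.0, 310.0])

# Design matrix with an intercept column: RT ~ b0 + b1 * surprisal.
X = np.column_stack([np.ones_like(surprisal), surprisal])

# Ordinary least squares fit of the linear regressor.
coef, *_ = np.linalg.lstsq(X, reading_time, rcond=None)
b0, b1 = coef

# Predicted reading times under the fitted linear model.
predicted = X @ coef
print(f"intercept={b0:.1f} ms, slope={b1:.1f} ms/bit")
```

A positive slope here reflects the standard finding that more surprising words take longer to read; in the paper's setup, the language model's weights are then updated so that its surprisal values make this regressor's predictions match human data more closely.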


Senior Machine Learning Engineer (m/d/f) computer vision / NLP - Remote Tech Jobs

#artificialintelligence

Become part of a very successful Medtech startup that has operated for over 5 years and is poised to skyrocket in the field of analyzing human psychology, developing an AI engine that helps psychologists make better diagnoses. Starting in gaming, optimized for human potential and health:…