human-machine teaming
Risks and Opportunities in Human-Machine Teaming in Operationalizing Machine Learning Target Variables
Guo, Mengtian, Gotz, David, Wang, Yue
Predictive modeling has the potential to enhance human decision-making. However, many predictive models fail in practice due to flawed problem formulation in cases where the prediction target is an abstract concept or construct and practitioners need to define an appropriate target variable as a proxy to operationalize the construct of interest. The choice of an appropriate proxy target variable is rarely self-evident in practice, requiring both domain knowledge and iterative data modeling. This process is inherently collaborative, involving both domain experts and data scientists. In this work, we explore how human-machine teaming can support this process by accelerating iterations while preserving human judgment. We study the impact of two human-machine teaming strategies on proxy construction: 1) relevance-first: humans lead the process by selecting relevant proxies; and 2) performance-first: machines lead the process by recommending proxies based on predictive performance. Based on a controlled user study of a proxy construction task (N = 20), we show that the performance-first strategy facilitated faster iterations and decision-making, but also biased users toward well-performing proxies that were misaligned with the application goal. Our study highlights the opportunities and risks of human-machine teaming in operationalizing machine learning target variables, yielding insights for future research to explore the opportunities and mitigate the risks.
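The two strategies contrasted in this abstract can be sketched as two orderings over the same pool of candidate proxies: one driven by expert relevance judgments, one by model performance. This is a minimal illustrative sketch only; the proxy names, ratings, and AUC values are invented, not taken from the study.

```python
# Hypothetical sketch of the two proxy-construction strategies.
# All names and numbers below are illustrative assumptions.

candidate_proxies = {
    # proxy name: (domain-expert relevance rating, cross-validated AUC)
    "30_day_readmission": (0.9, 0.71),
    "follow_up_visit":    (0.7, 0.75),
    "billing_code_count": (0.3, 0.88),
}

def relevance_first(candidates):
    """Humans lead: review proxies in order of judged relevance."""
    return sorted(candidates, key=lambda p: candidates[p][0], reverse=True)

def performance_first(candidates):
    """Machines lead: recommend proxies in order of predictive performance."""
    return sorted(candidates, key=lambda p: candidates[p][1], reverse=True)

print(relevance_first(candidate_proxies))
# ['30_day_readmission', 'follow_up_visit', 'billing_code_count']
print(performance_first(candidate_proxies))
# ['billing_code_count', 'follow_up_visit', '30_day_readmission']
```

Note how the performance-first ordering surfaces the highest-AUC proxy first even though experts rated it least relevant — the anchoring risk the study reports.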
Designs for Enabling Collaboration in Human-Machine Teaming via Interactive and Explainable Systems
Collaborative robots and machine learning-based virtual agents are increasingly entering the human workspace with the aim of increasing productivity and enhancing safety. Despite this, we show in a ubiquitous experimental domain, Overcooked-AI, that state-of-the-art techniques for human-machine teaming (HMT), which rely on imitation or reinforcement learning, are brittle and result in a machine agent that seeks to decouple its actions from the human's and act independently rather than synergistically. To remedy this deficiency, we develop HMT approaches that enable iterative, mixed-initiative team development, allowing end-users to interactively reprogram interpretable AI teammates. Our 50-subject study provides several findings that we summarize into guidelines. While all approaches underperform a simple collaborative heuristic (a critical, negative result for learning-based methods), we find that white-box approaches supported by interactive modification can lead to significant team development, outperforming white-box approaches alone, and that black-box approaches are easier to train and result in better HMT performance, highlighting a tradeoff between explainability and interactivity versus ease-of-training.
Advancing Human-Machine Teaming: Concepts, Challenges, and Applications
Chen, Dian, Yoon, Han Jun, Wan, Zelin, Alluru, Nithin, Lee, Sang Won, He, Richard, Moore, Terrence J., Nelson, Frederica F., Yoon, Sunghyun, Lim, Hyuk, Kim, Dan Dongseong, Cho, Jin-Hee
Human-Machine Teaming (HMT) is revolutionizing collaboration across domains such as defense, healthcare, and autonomous systems by integrating AI-driven decision-making, trust calibration, and adaptive teaming. This survey presents a comprehensive taxonomy of HMT, analyzing theoretical models, including reinforcement learning, instance-based learning, and interdependence theory, alongside interdisciplinary methodologies. Unlike prior reviews, we examine team cognition, ethical AI, multi-modal interactions, and real-world evaluation frameworks. Key challenges include explainability, role allocation, and scalable benchmarking. We propose future research in cross-domain adaptation, trust-aware AI, and standardized testbeds. By bridging computational and social sciences, this work lays a foundation for resilient, ethical, and scalable HMT systems.
Human-Machine Teaming for UAVs: An Experimentation Platform
Moujtahid, Laila El, Gottipati, Sai Krishna, Mars, Clodéric, Taylor, Matthew E.
Full automation is often not achievable or desirable in critical systems with high-stakes decisions. Instead, human-AI teams can achieve better results. To research, develop, evaluate, and validate algorithms suited for such teaming, lightweight experimentation platforms that enable interactions between humans and multiple AI agents are necessary. However, there are limited examples of such platforms for defense environments. To address this gap, we present the Cogment human-machine teaming experimentation platform, which implements human-machine teaming (HMT) use cases featuring heterogeneous multi-agent systems that can involve learning AI agents, static AI agents, and humans. It is built on the Cogment platform and has been used for academic research, including work presented at the ALA workshop at AAMAS this year [1]. With this platform, we hope to facilitate further research on human-machine teaming in critical systems and defense environments.
Flexible and Inherently Comprehensible Knowledge Representation for Data-Efficient Learning and Trustworthy Human-Machine Teaming in Manufacturing Environments
Galetić, Vedran, Nottle, Alistair
Trustworthiness of artificially intelligent agents is vital for the acceptance of human-machine teaming in industrial manufacturing environments. Predictable behaviours and explainable (and understandable) rationale allow humans collaborating with (and building) these agents to understand their motivations and therefore validate the decisions that are made. To that aim, we make use of Gärdenfors's cognitively inspired Conceptual Space framework to represent the agent's knowledge using concepts as convex regions in a space spanned by inherently comprehensible quality dimensions. A simple typicality quantification model is built on top of it to determine fuzzy category membership and classify instances interpretably. We apply it to a use case from the manufacturing domain, using objects' physical properties obtained from cobots' onboard sensors and utilisation properties from crowdsourced commonsense knowledge available in public knowledge bases. Such flexible knowledge representation based on property decomposition allows for data-efficient representation learning of typically highly specialist or specific manufacturing artefacts. In such a setting, traditional data-driven (e.g., computer vision-based) classification approaches would struggle due to training data scarcity. This makes an AI agent's acquired knowledge comprehensible to the human collaborator, thus contributing to trustworthiness. We situate our approach within an existing explainability framework specifying explanation desiderata. We provide arguments for our system's applicability and appropriateness for different roles of human agents collaborating with the AI system throughout its design, validation, and operation.
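The typicality-and-membership idea in this abstract can be sketched compactly: concepts are summarized by prototype points on named quality dimensions, typicality decays with distance from the prototype, and fuzzy membership normalizes typicalities across categories. This is a minimal sketch under assumed details — the dimension names, prototype values, exponential-decay form, and decay constant are all illustrative, not the paper's actual model.

```python
import math

# Hypothetical prototypes on comprehensible quality dimensions
# (all names and values are invented for illustration).
prototypes = {
    "bolt":   {"length_mm": 40.0, "diameter_mm": 6.0,  "mass_g": 10.0},
    "washer": {"length_mm": 2.0,  "diameter_mm": 12.0, "mass_g": 1.5},
}

def typicality(instance, prototype, c=0.05):
    """Typicality as exponential decay of similarity with Euclidean
    distance from the category prototype (assumed functional form)."""
    d = math.sqrt(sum((instance[k] - prototype[k]) ** 2 for k in prototype))
    return math.exp(-c * d)

def fuzzy_membership(instance):
    """Normalize typicalities into fuzzy membership degrees."""
    t = {name: typicality(instance, p) for name, p in prototypes.items()}
    total = sum(t.values())
    return {name: v / total for name, v in t.items()}

# A sensed part: each membership degree traces back to distances
# along named quality dimensions, so the classification is interpretable.
part = {"length_mm": 35.0, "diameter_mm": 6.5, "mass_g": 9.0}
degrees = fuzzy_membership(part)
print(max(degrees, key=degrees.get))  # bolt
```

The interpretability claim rests on the dimensions being meaningful to the human collaborator: one can point at which dimension drove a low membership degree, which a black-box classifier cannot offer.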
Thoughts on Human-Machine Teaming, AI and Changing Warfare…(Part I)
This is a 3-part treatment of an occasional paper I am finishing that addresses the future of warfare, developments in AI research, long-term AI concerns (ethics, morality and legality of humans and AI in warfare), strategic shifts and broader philosophical, even existential concerns that may be decades or perhaps a century away.
Do we trust artificial intelligence agents to mediate conflict? Not entirely: New study says we'll listen to virtual agents except when goings get tough
Researchers from USC and the University of Denver created a simulation in which a three-person team was supported by a virtual agent avatar on screen in a mission that was designed to ensure failure and elicit conflict. The study was designed to examine virtual agents as potential mediators for improving team collaboration during conflict. But in the heat of the moment, will we listen to virtual agents? While some of the researchers (Gale Lucas and Jonathan Gratch of the USC Viterbi School of Engineering and the USC Institute for Creative Technologies, who contributed to this study) had previously found that one-on-one human interactions with a virtual agent therapist yielded more confessions, in this study, "Conflict Mediation in Human-Machine Teaming: Using a Virtual Agent to Support Mission Planning and Debriefing," team members were less likely to engage with a male virtual agent named "Chris" when conflict arose. Participating team members did not physically accost the device (as we have seen humans attack robots in viral social media posts), but rather were less engaged and less likely to listen to the virtual agent's input once failure ensued and conflict arose among team members. The study was conducted in a military academy environment in which 27 scenarios were engineered to test how a team that included a virtual agent would react to failure and the ensuing conflict.
How the Rubber Meets the Road in Human-Machine Teaming
Everywhere you turn today, machine learning and artificial intelligence are being hyped as both a menace to and the savior of the human race. This is perhaps especially true in cybersecurity. What these alluring terms usually mean is simply detailed statistical comparison derived from massive data collections. Let's look at the terms themselves. At McAfee we are urging our customers to take a long and comprehensive view of human-machine teaming that looks beyond the current, cool-factor buzz. You can make it real, make it practical, and make it scalable, but what does that look like?