Fitts' List Revisited: An Empirical Study on Function Allocation in a Two-Agent Physical Human-Robot Collaborative Position/Force Task

Mol, Nicky, Prendergast, J. Micah, Abbink, David A., Peternel, Luka

arXiv.org Artificial Intelligence

Abstract--In this letter, we investigate whether classical function allocation--the principle of assigning tasks to either a human or a machine--holds for physical Human-Robot Collaboration, which is important for providing insights for Industry 5.0 to guide how to best augment rather than replace workers. This study empirically tests the applicability of Fitts' List within physical Human-Robot Collaboration by conducting a user study (N=26, within-subject design) to evaluate four distinct allocations of position/force control between human and robot in an abstract blending task. We hypothesize that the allocation in which the human controls position achieves better performance and receives higher user ratings. When allocating position control to the human and force control to the robot, compared to the opposite case, we observed a significant improvement in preventing overblending. This allocation was also rated better in terms of physical demand and overall system acceptance, and participants experienced greater autonomy, more engagement, and less frustration. An interesting insight was that the supervisory role (when the robot controls both position and force) was rated second best in terms of subjective acceptance. Another surprising insight was that when position control was delegated to the robot, participants perceived much lower autonomy than when force control was delegated to the robot. These findings empirically support applying Fitts' principles to static function allocation for physical collaboration, while also revealing important, nuanced user-experience trade-offs, particularly regarding perceived autonomy when delegating position control. Received 7 May 2025; accepted 25 October 2025.


An Active Inference Model of Mouse Point-and-Click Behaviour

Klar, Markus, Stein, Sebastian, Paterson, Fraser, Williamson, John H., Murray-Smith, Roderick

arXiv.org Artificial Intelligence

We explore the use of Active Inference (AIF) as a computational user model for spatial pointing, a key problem in Human-Computer Interaction (HCI). We present an AIF agent with continuous state, action, and observation spaces, performing one-dimensional mouse pointing and clicking. We use a simple underlying dynamic system to model the mouse cursor dynamics with realistic perceptual delay. In contrast to previous optimal feedback control-based models, the agent's actions are selected by minimizing Expected Free Energy, based solely on preference distributions over percepts, such as observing a correct button click. Our results show that the agent produces plausible pointing movements and clicks when the cursor is over the target, with end-point variance similar to that of human users. In contrast to other models of pointing, we incorporate fully probabilistic, predictive delay compensation into the agent. The agent shows distinct behaviour for differing target difficulties without the need to retune system parameters, as is done in other approaches. We discuss the simulation results and emphasize the challenges in identifying the correct configuration of an AIF agent interacting with continuous systems.


Using Fitts' Law to Benchmark Assisted Human-Robot Performance

Pan, Jiahe, Eden, Jonathan, Oetomo, Denny, Johal, Wafa

arXiv.org Artificial Intelligence

Shared control systems aim to combine human and robot abilities to improve task performance. However, achieving optimal performance requires that the robot's level of assistance adjusts to the operator's cognitive workload in response to the task difficulty. Understanding and dynamically adjusting this balance is crucial to maximizing efficiency and user satisfaction. In this paper, we propose a novel benchmarking method for shared control systems based on Fitts' Law to formally parameterize the difficulty level of a target-reaching task. With this, we systematically quantify and model the effect of task difficulty (i.e., target size and distance) and robot autonomy on task performance and on operators' cognitive load and trust levels. Our empirical results (N=24) not only show that both task difficulty and robot autonomy influence task performance, but also that the performance can be modelled using these parameters, which may allow for the generalization of this relationship across more diverse setups. We also found that users' perceived cognitive load and trust were influenced by these factors. Given the challenges in directly measuring cognitive load in real time, our adapted Fitts' model presents a potential alternative approach: estimating cognitive load by determining the difficulty level of the task, under the assumption that greater task difficulty results in higher cognitive load. We hope that these insights and our proposed framework inspire future work to further investigate the generalizability of the method, ultimately enabling the benchmarking and systematic assessment of shared control quality and user impact, which will aid in the development of more effective and adaptable systems.
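The Fitts' Law parameterization of task difficulty that this abstract refers to is conventionally computed with the Shannon formulation of the index of difficulty (ID). A minimal sketch, with illustrative parameter values not taken from the study:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty (ID), in bits.

    distance: center-to-center distance to the target
    width:    target width along the axis of motion
    """
    return math.log2(distance / width + 1)

# A far, small target is harder than a near, large one:
hard = index_of_difficulty(distance=480, width=20)  # log2(25) ~ 4.64 bits
easy = index_of_difficulty(distance=120, width=60)  # log2(3)  ~ 1.58 bits
```

Benchmarking across difficulty levels then amounts to sweeping target distance and width while holding the shared-control condition fixed.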


Visual tracking brain computer interface

Huang, Changxing, Shi, Nanlin, Miao, Yining, Chen, Xiaogang, Wang, Yijun, Gao, Xiaorong

arXiv.org Artificial Intelligence

Brain-computer interfaces (BCIs) offer a way to interact with computers without relying on physical movements. Non-invasive electroencephalography (EEG)-based visual BCIs, known for efficient speed and ease of calibration, face limitations in continuous tasks due to discrete stimulus design and decoding methods. To achieve continuous control, we implemented a novel spatial encoding stimulus paradigm and devised a corresponding projection method to enable continuous modulation of decoded velocity. Subsequently, we conducted experiments involving 17 participants and achieved a Fitts' ITR of 0.55 bps for the fixed tracking task and 0.37 bps for the random tracking task. The proposed BCI with a high Fitts' ITR was then integrated into two applications, including painting and gaming. In conclusion, this study proposed a visual BCI-based control method that goes beyond discrete commands, allowing natural continuous control based on neural activity.
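The Fitts' ITR reported above is, in the standard formulation, the index of difficulty divided by the movement time. A hedged sketch with illustrative numbers (not the study's data):

```python
import math

def fitts_itr(distance: float, width: float, movement_time_s: float) -> float:
    """Fitts' information transfer rate in bits per second:
    the index of difficulty log2(D/W + 1) divided by movement time."""
    id_bits = math.log2(distance / width + 1)
    return id_bits / movement_time_s

# Illustrative: acquiring a target with ID = log2(9) ~ 3.17 bits
# in 2 s gives a throughput of ~1.58 bps.
rate = fitts_itr(distance=240, width=30, movement_time_s=2.0)
```

Under this formulation, the reported 0.55 bps and 0.37 bps correspond to how many bits of target-selection difficulty the interface resolves per second of tracking.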


The Data Paradox: Artificial Intelligence Needs Data; Data Needs AI

#artificialintelligence

Data is the fuel for AI. Artificial intelligence is a data hog; effectively building and deploying AI and machine learning systems require large data sets. "The development of a machine learning algorithm depends on large volumes of data, from which the learning process draws many entities, relationships, and clusters," says Philip Russom of TDWI. "To broaden and enrich the correlations made by the algorithm, machine learning needs data from diverse sources, in diverse formats, about diverse business processes." At the same time, AI itself can be instrumental in identifying and preparing the data needed to increase the value of AI-driven or analytics-driven systems.


New ML, AI features highlight Oracle Analytics Cloud update

#artificialintelligence

New machine learning tools and enhanced natural language processing capabilities are among the latest additions to Oracle Analytics Cloud. Tech giant Oracle is now based in Austin, Texas, after recently moving its headquarters from its longtime base in Redwood City, Calif. Oracle Analytics Cloud is the business intelligence piece of Oracle's analytics platform -- which also includes Oracle Analytics Server and Oracle Fusion Analytics Warehouse -- and is made up of Oracle's traditional BI reporting tools, self-service BI capabilities, data visualization tools and augmented intelligence capabilities. Before an overhaul in June 2019 designed to reduce complexity, Oracle's analytics platform consisted of 18 different products. New machine learning (ML) capabilities in Oracle Analytics Cloud include Machine Learning Explainability, a tool that enables business users to view complete details of how machine learning models calculate predictions, giving insight into influencing factors.


Audio-Visual Communication in a Two Person Gross Manipulation Task

Parikh, Sarangi Patel (United States Naval Academy) | Esposito, Joel (United States Naval Academy) | Searock, Jeremy (United States Naval Academy)

AAAI Conferences

In order to design robots suited to engage in cooperative manipulation tasks with humans, we study human-human teams as they work together to move a heavy object across a room. We are interested in several questions. First, do two-person, gross motion tasks follow the same sinusoidal pattern that one-person, fine motion tasks do? Does performance improve when audio or visual communication is permitted? How does performance correlate with an individual's perception of performance? Non-physiological, or performance-based, studies of human-human cooperation can be divided into two categories: haptic and non-haptic (audio, visual, etc.). The first category involves physical interaction through the object being manipulated via force, pressure, and tactile sensations (Jones and Sarter 2008), (Reed and Peshkin 2008). Most of the non-haptic experiments are virtual setups in which individuals move an object together on a computer screen via two controllers (Basdogan, Ho, and Srinivasan 2000), (Sallnas and Zhai 2003). A survey on the role of communication between people appears in (Whitaker 2003). The novelty of our work is to investigate non-haptic communication in haptic manipulation tasks.