autonomy
This Defense Company Made AI Agents That Blow Things Up
Scout AI is using technology borrowed from the AI industry to power lethal weapons--and recently demonstrated its explosive potential. Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate chores. The big difference is that instead of writing code, answering emails, or buying stuff online, Scout AI's agents are designed to seek and destroy things in the physical world with exploding drones. In a recent demonstration, held at an undisclosed military base in central California, Scout AI's technology was put in charge of a self-driving off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hiding in the area, and then blew it to bits using an explosive charge.
- North America > United States > California (0.55)
- Asia > China (0.05)
- North America > United States > Texas (0.05)
- (7 more...)
- Information Technology (1.00)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.96)
Everyone wants AI sovereignty. No one can truly have it.
The world is too interconnected for nations to go it alone. Governments plan to pour $1.3 trillion into AI infrastructure by 2030 to invest in "sovereign AI," on the premise that countries should control their own AI capabilities. The funds include financing for domestic data centers, locally trained models, independent supply chains, and national talent pipelines. This is a response to real shocks: Covid-era supply chain breakdowns, rising geopolitical tensions, and the war in Ukraine. But the pursuit of absolute autonomy is running into reality.
Shared Autonomy with IDA: Interventional Diffusion Assistance
The rapid development of artificial intelligence (AI) has unearthed the potential to assist humans in controlling advanced technologies. Shared autonomy (SA) facilitates control by combining inputs from a human pilot and an AI copilot. In prior SA studies, the copilot is constantly active in determining the action played at each time step. This limits human autonomy, which may have deleterious effects on performance. In general, the amount of helpful copilot assistance varies greatly depending on the task dynamics.
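The interventional idea described above can be sketched as a simple gating rule: the copilot overrides the pilot only when its action is estimated to be clearly better. The advantage function and threshold below are illustrative assumptions, not the paper's diffusion-based implementation.

```python
def ida_step(pilot_action, copilot_action, advantage_fn, threshold=0.0):
    """Interventional shared autonomy: defer to the human pilot unless the
    copilot's action is estimated to be clearly better.

    advantage_fn(a) estimates the expected task advantage of action a; how it
    is computed (e.g. from a learned value function) is an assumption here.
    """
    gain = advantage_fn(copilot_action) - advantage_fn(pilot_action)
    if gain > threshold:
        return copilot_action, True   # copilot intervenes this step
    return pilot_action, False        # human retains control

# Toy usage: the advantage is negative distance to a goal along one axis.
goal = 5.0
adv = lambda a: -abs(goal - a)
action, intervened = ida_step(pilot_action=2.0, copilot_action=4.5,
                              advantage_fn=adv, threshold=0.5)
```

Because the copilot stays silent whenever the estimated gain is below the threshold, the human keeps full control on steps where assistance would not help, which is exactly the autonomy-preserving property the abstract motivates.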
Mental Models of Autonomy and Sentience Shape Reactions to AI
Pauketat, Janet V. T., Shank, Daniel B., Manoli, Aikaterina, Anthis, Jacy Reese
Narratives about artificial intelligence (AI) entangle autonomy, the capacity to self-govern, with sentience, the capacity to sense and feel. AI agents that perform tasks autonomously and companions that recognize and express emotions may activate mental models of autonomy and sentience, respectively, provoking distinct reactions. To examine this possibility, we conducted three pilot studies (N = 374) and four preregistered vignette experiments describing an AI as autonomous, sentient, both, or neither (N = 2,702). Activating a mental model of sentience increased general mind perception (cognition and emotion) and moral consideration more than autonomy, but autonomy increased perceived threat more than sentience. Sentience also increased perceived autonomy more than vice versa. Based on a within-paper meta-analysis, sentience changed reactions more than autonomy on average. By disentangling different mental models of AI, we can study human-AI interaction with more precision to better navigate the detailed design of anthropomorphized AI and prompting interfaces.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > Missouri (0.04)
- (15 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Media (1.00)
- Information Technology (1.00)
- Government (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.67)
High-Resolution Water Sampling via a Solar-Powered Autonomous Surface Vehicle
Mamani, Misael, Fernandez, Mariel, Luna, Grace, Limachi, Steffani, Apaza, Leonel, Montes-Dávalos, Carolina, Herrera, Marcelo, Salcedo, Edwin
Accurate water quality assessment requires spatially resolved sampling, yet most unmanned surface vehicles (USVs) can collect only a limited number of samples or rely on single-point sensors with poor representativeness. This work presents a solar-powered, fully autonomous USV featuring a novel syringe-based sampling architecture capable of acquiring 72 discrete, contamination-minimized water samples per mission. The vehicle incorporates a ROS 2 autonomy stack with GPS-RTK navigation, LiDAR and stereo-vision obstacle detection, Nav2-based mission planning, and long-range LoRa supervision, enabling dependable execution of sampling routes in unstructured environments. The platform integrates a behavior-tree autonomy architecture adapted from Nav2, enabling mission-level reasoning and perception-aware navigation. A modular 6x12 sampling system, controlled by distributed micro-ROS nodes, provides deterministic actuation, fault isolation, and rapid module replacement, achieving spatial coverage beyond previously reported USV-based samplers. Field trials in Achocalla Lagoon (La Paz, Bolivia) demonstrated 87% waypoint accuracy, stable autonomous navigation, and accurate physicochemical measurements (temperature, pH, conductivity, total dissolved solids) comparable to manually collected references. These results demonstrate that the platform enables reliable high-resolution sampling and autonomous mission execution, providing a scalable solution for aquatic monitoring in remote environments.
- North America > Canada (0.28)
- South America > Bolivia > La Paz Department > Pedro Domingo Murillo Province > La Paz (0.24)
- Asia > Malaysia (0.04)
- (5 more...)
- Water & Waste Management > Water Management > Water Supplies & Services (1.00)
- Government (1.00)
- Energy > Renewable > Solar (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.93)
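The one-syringe-per-station pattern behind the 72-sample missions above can be sketched as follows. The class and method names are hypothetical stand-ins, not the authors' ROS 2/micro-ROS code; the only facts carried over are the 6x12 module layout and single-use actuation.

```python
from dataclasses import dataclass, field

@dataclass
class SamplerBank:
    """Hypothetical stand-in for the 6x12 syringe array (72 samples/mission)."""
    rows: int = 6
    cols: int = 12
    used: set = field(default_factory=set)

    def fire(self, index: int) -> tuple:
        """Actuate one single-use syringe, identified by a flat index 0..71."""
        if not 0 <= index < self.rows * self.cols:
            raise IndexError("no such syringe")
        if index in self.used:
            raise RuntimeError("syringe already used")
        self.used.add(index)
        return divmod(index, self.cols)  # (module row, slot within module)

def run_mission(waypoints, bank):
    """Visit each waypoint in order and fire the next unused syringe there."""
    log = []
    for i, wp in enumerate(waypoints):
        row, col = bank.fire(i)
        log.append((wp, row, col))
    return log

log = run_mission([(0.0, 0.0), (0.0, 5.0), (5.0, 5.0)], SamplerBank())
```

Mapping each waypoint to a dedicated syringe is what makes every sample spatially attributable, which is the property the abstract's "72 discrete, contamination-minimized water samples per mission" depends on.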
Fitts' List Revisited: An Empirical Study on Function Allocation in a Two-Agent Physical Human-Robot Collaborative Position/Force Task
Mol, Nicky, Prendergast, J. Micah, Abbink, David A., Peternel, Luka
Abstract--In this letter, we investigate whether classical function allocation--the principle of assigning tasks to either a human or a machine--holds for physical Human-Robot Collaboration, which is important for providing insights for Industry 5.0 on how best to augment rather than replace workers. This study empirically tests the applicability of Fitts' List within physical Human-Robot Collaboration by conducting a user study (N=26, within-subject design) to evaluate four distinct allocations of position/force control between human and robot in an abstract blending task. We hypothesized that the allocation in which humans control the position would achieve better performance and receive higher user ratings. When allocating position control to the human and force control to the robot, compared to the opposite case, we observed a significant improvement in preventing overblending. This allocation was also rated better in terms of physical demand and overall system acceptance, and participants experienced greater autonomy, more engagement, and less frustration. An interesting insight was that the supervisory role (when the robot controls both position and force) was rated second best in terms of subjective acceptance. Another surprising insight was that when position control was delegated to the robot, participants perceived much lower autonomy than when force control was delegated to the robot. These findings empirically support applying Fitts' principles to static function allocation for physical collaboration, while also revealing nuanced user-experience trade-offs, particularly regarding perceived autonomy when delegating position control. Received 7 May 2025; accepted 25 October 2025.
- Europe > Netherlands > South Holland > Delft (0.05)
- Europe > Germany (0.04)
- North America > United States > Texas > Williamson County > Round Rock (0.04)
- (2 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > Experimental Study (0.64)
- Research Report > New Finding (0.63)
Agentic UAVs: LLM-Driven Autonomy with Integrated Tool-Calling and Cognitive Reasoning
Unmanned Aerial Vehicles (UAVs) are increasingly used in defense, surveillance, and disaster response, yet most systems still operate at SAE Level 2 to 3 autonomy. Their dependence on rule-based control and narrow AI limits adaptability in dynamic and uncertain missions. Current UAV architectures lack context-aware reasoning, autonomous decision-making, and integration with external systems. Importantly, none make use of Large Language Model (LLM) agents with tool-calling for real-time knowledge access. This paper introduces the Agentic UAVs framework, a five-layer architecture consisting of Perception, Reasoning, Action, Integration, and Learning. The framework enhances UAV autonomy through LLM-driven reasoning, database querying, and interaction with third-party systems. A prototype built with ROS 2 and Gazebo combines YOLOv11 for object detection with GPT-4 for reasoning and a locally deployed Gemma 3 model. In simulated search-and-rescue scenarios, agentic UAVs achieved higher detection confidence (0.79 compared to 0.72), improved person detection rates (91% compared to 75%), and a major increase in correct action recommendations (92% compared to 4.5%). These results show that modest computational overhead can enable significantly higher levels of autonomy and system-level integration.
- Asia > Middle East > Saudi Arabia > Riyadh Province > Riyadh (0.04)
- Asia > Middle East > UAE (0.04)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
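The tool-calling pattern the framework above describes — the LLM emits named tool calls that the agent routes to registered functions, such as database queries — can be sketched as a small dispatch loop. The tool names, the lookup stub, and the returned fields are assumptions for illustration, not the paper's GPT-4/Gemma 3 integration.

```python
def query_database(person_id):
    """Hypothetical registry lookup; a real system would query an external DB."""
    return {"person_id": person_id, "status": "missing", "priority": "high"}

def recommend_action(status):
    """Map a queried status to a mission-level recommendation."""
    return "dispatch ground team" if status == "missing" else "continue patrol"

# Tools the LLM is allowed to invoke, keyed by the name it emits.
TOOLS = {"query_database": query_database, "recommend_action": recommend_action}

def agent_step(tool_call):
    """Dispatch one model-issued tool call: {'name': ..., 'args': {...}}."""
    fn = TOOLS.get(tool_call["name"])
    if fn is None:
        raise KeyError(f"unknown tool: {tool_call['name']}")
    return fn(**tool_call["args"])

# A detection event might yield this chained sequence of tool calls:
record = agent_step({"name": "query_database", "args": {"person_id": "P-17"}})
action = agent_step({"name": "recommend_action",
                     "args": {"status": record["status"]}})
```

Feeding one tool's result into the next call is what lets the reasoning layer ground its action recommendations in fresh external data rather than in the perception stream alone.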
Benchmarking LLM Agents for Wealth-Management Workflows
Modern work relies on an assortment of digital collaboration tools, yet routine processes continue to suffer from human error and delay. To address this gap, this dissertation extends TheAgentCompany with a finance-focused environment and investigates whether a general-purpose LLM agent can complete representative wealth-management tasks both accurately and economically. The study introduces synthetic domain data, enriches colleague simulations, and prototypes an automatic task-generation pipeline, aiming to create an evaluation set that meaningfully measures an agent's fitness for assistant-level wealth-management work. We construct a benchmark of 12 task-pairs for wealth-management assistants spanning retrieval, analysis, and synthesis/communication, with explicit acceptance criteria and deterministic graders. We seeded a set of new finance-specific data and introduced a high- vs. low-autonomy variant of every task. The dissertation concludes that agents are limited less by mathematical reasoning than by end-to-end workflow reliability, that they are meaningfully affected by autonomy level, and that flawed evaluation of models has hindered benchmarking.
- North America > United States (0.14)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Workflow (1.00)
- Research Report (0.82)
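The "explicit acceptance criteria and deterministic graders" mentioned above can be sketched as named predicate checks over an agent's structured output. The task, field names, and criteria here are illustrative assumptions, not items from the benchmark itself.

```python
def grade(output: dict, criteria: list) -> dict:
    """Deterministic grader: every acceptance criterion must pass.

    criteria is a list of (name, check) pairs, where check is a predicate
    over the agent's output; no LLM judge or randomness is involved.
    """
    results = {name: bool(check(output)) for name, check in criteria}
    return {"passed": all(results.values()), "detail": results}

# Hypothetical retrieval task: report a client's portfolio value with a source.
criteria = [
    ("cites_source", lambda o: "source" in o),
    ("value_correct", lambda o: abs(o.get("value", 0) - 125000.0) < 0.01),
]
report = grade({"value": 125000.0, "source": "holdings.csv"}, criteria)
```

Because every criterion is a plain predicate, two runs on the same output always grade identically, which is what makes results comparable across the high- and low-autonomy variants of each task.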
Bridging the Gap: Toward Cognitive Autonomy in Artificial Intelligence
Golilarz, Noorbakhsh Amiri, Penchala, Sindhuja, Rahimi, Shahram
Artificial intelligence has advanced rapidly across perception, language, reasoning, and multimodal domains. Yet despite these achievements, modern AI systems remain fundamentally limited in their ability to self-monitor, self-correct, and regulate their behavior autonomously in dynamic contexts. This paper identifies and analyzes seven core deficiencies that constrain contemporary AI models: the absence of intrinsic self-monitoring, lack of meta-cognitive awareness, fixed and non-adaptive learning mechanisms, inability to restructure goals, lack of representational maintenance, insufficient embodied feedback, and the absence of intrinsic agency. Alongside identifying these limitations, we also outline a forward-looking perspective on how AI may evolve beyond them through architectures that mirror neurocognitive principles. We argue that these structural limitations prevent current architectures, including deep learning and transformer-based systems, from achieving robust generalization, lifelong adaptability, and real-world autonomy. Drawing on a comparative analysis of artificial systems and biological cognition [7], and integrating insights from AI research, cognitive science, and neuroscience, we outline how these capabilities are absent in current models and why scaling alone cannot resolve them. We conclude by advocating for a paradigmatic shift toward cognitively grounded AI (cognitive autonomy) capable of self-directed adaptation, dynamic representation management, and intentional, goal-oriented behavior, paired with reformative oversight mechanisms [8] that ensure autonomous systems remain interpretable, governable, and aligned with human values.
- North America > United States > Alabama > Tuscaloosa County > Tuscaloosa (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Germany > Hesse > Darmstadt Region > Frankfurt (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.86)
- Information Technology > Artificial Intelligence > Cognitive Science > Cognitive Architectures (0.68)
Human-Robot Collaboration for the Remote Control of Mobile Humanoid Robots with Torso-Arm Coordination
Boguslavskii, Nikita, Genua, Lorena Maria, Li, Zhi
Abstract -- Recently, humanoid robots have been increasingly deployed in various facilities, including hospitals and assisted living environments, where they are often remotely controlled by human operators. Their kinematic redundancy enhances reachability and manipulability, enabling them to navigate complex, cluttered environments and perform a wide range of tasks. However, this redundancy also presents significant control challenges, particularly in coordinating the movements of the robot's macro-micro structure (torso and arms). We therefore propose several human-robot collaborative (HRC) methods for coordinating the torso and arm of remotely controlled mobile humanoid robots, aiming to balance autonomy and human input to enhance system efficiency and task execution. The proposed methods include human-initiated approaches, in which users manually control torso movements, and robot-initiated approaches, which autonomously coordinate torso and arm based on factors such as reachability, task goal, or inferred human intent. We conducted a user study with N=17 participants to compare the proposed approaches in terms of task performance, manipulability, and energy efficiency, and analyzed which methods participants preferred. HRC control lets human input and robot autonomy complement each other, improving overall robotic manipulation performance.
- Research Report > New Finding (0.49)
- Research Report > Experimental Study (0.49)