


AI robot brings emotional care to pets

FOX News

Tuya Smart introduced Aura, its first AI-powered companion robot for pets, at CES 2026. The robot uses artificial intelligence to recognize behaviors and provide updates.


Meet Aura: Scientists develop robotic 'pet butler' that can feed and play with your animals while you're at work

Daily Mail - Science & tech



AURA: A Diagnostic Framework for Tracking User Satisfaction of Interactive Planning Agents

Kim, Takyoung, Singh, Janvijay, Mehri, Shuhaib, Acikgoz, Emre Can, Mukherjee, Sagnik, Bozdag, Nimet Beyza, Shashidhar, Sumuk, Tur, Gokhan, Hakkani-Tür, Dilek

arXiv.org Artificial Intelligence

The growing capabilities of large language models (LLMs) in instruction following and context understanding have ushered in an era of agents with numerous applications. Among these, task planning agents have become especially prominent in realistic scenarios involving complex internal pipelines, such as context understanding, tool management, and response generation. However, existing benchmarks predominantly evaluate agent performance based on task completion as a proxy for overall effectiveness. We hypothesize that merely improving task completion is misaligned with maximizing user satisfaction, as users interact with the entire agentic process, not only the end result. To address this gap, we propose AURA, an Agent-User inteRaction Assessment framework that conceptualizes the behavioral stages of interactive task planning agents. AURA offers a comprehensive assessment of agents through a set of atomic LLM evaluation criteria, allowing researchers and practitioners to diagnose specific strengths and weaknesses within an agent's decision-making pipeline. Our analyses show that agents excel at different behavioral stages, with user satisfaction shaped by both outcomes and intermediate behaviors. We also highlight future directions, including systems that leverage multiple agents and the limitations of user simulators in task planning.
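The abstract's idea of diagnosing an agent stage by stage from atomic criterion scores can be sketched as follows. This is a minimal illustration only: the stage names, criteria, and 0-1 scoring are assumptions, not AURA's actual taxonomy.

```python
# Hypothetical stage-wise diagnosis from atomic criterion scores (0-1).
# Stage and criterion names are illustrative, not AURA's real taxonomy.
from statistics import mean

def diagnose(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average the atomic criterion scores within each behavioral stage."""
    return {stage: mean(criteria.values()) for stage, criteria in scores.items()}

scores = {
    "context_understanding": {"grounding": 0.9, "ambiguity_handling": 0.7},
    "tool_management": {"tool_choice": 0.6, "argument_validity": 0.8},
    "response_generation": {"helpfulness": 0.95, "conciseness": 0.85},
}
report = diagnose(scores)
weakest = min(report, key=report.get)  # stage to investigate first
```

Keeping criteria atomic is what makes the per-stage averages interpretable: a low `tool_management` score points directly at tool choice or argument validity rather than at the pipeline as a whole.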


AURA: Adaptive Unified Reasoning and Automation with LLM-Guided MARL for NextG Cellular Networks

Nourzad, Narjes, Zong, Mingyu, Krishnamachari, Bhaskar

arXiv.org Artificial Intelligence

Next-generation (NextG) cellular networks are expected to manage dynamic traffic while sustaining high performance. Large language models (LLMs) provide strategic reasoning for 6G planning, but their computational cost and latency limit real-time use. Multi-agent reinforcement learning (MARL) supports localized adaptation, yet coordination at scale remains challenging. We present AURA, a framework that integrates cloud-based LLMs for high-level planning with base stations modeled as MARL agents for local decision-making. The LLM generates objectives and subgoals from its understanding of the environment and reasoning capabilities, while agents at base stations execute these objectives autonomously, guided by a trust mechanism that balances local learning with external input. To reduce latency, AURA employs batched communication so that agents update the LLM's view of the environment and receive improved feedback. In a simulated 6G scenario, AURA improves resilience, reducing dropped handoff requests by more than half under normal and high traffic and lowering system failures. Agents use LLM input in fewer than 60% of cases, showing that guidance augments rather than replaces local adaptability, thereby mitigating latency and hallucination risks. These results highlight the promise of combining LLM reasoning with MARL adaptability for scalable, real-time NextG network management.
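The trust mechanism described above, which balances local learning with LLM guidance, could look something like the following toy action selection. The blending rule and action names are assumptions for illustration, not the paper's actual update.

```python
# Toy trust-weighted action selection: a base-station agent blends its
# local value estimates with an LLM-suggested action. The additive bonus
# rule is an assumption, not AURA's published mechanism.
def select_action(local_values: dict[str, float],
                  llm_suggestion: str,
                  trust: float) -> str:
    """trust in [0, 1]: 0 ignores the LLM, higher values favor its suggestion."""
    blended = {
        action: value + (trust if action == llm_suggestion else 0.0)
        for action, value in local_values.items()
    }
    return max(blended, key=blended.get)

values = {"stay": 0.4, "handoff": 0.5, "throttle": 0.3}
ignored = select_action(values, "stay", trust=0.0)    # local policy wins
followed = select_action(values, "stay", trust=0.5)   # LLM suggestion wins
```

A scheme like this matches the abstract's finding that agents follow LLM input only in a minority of cases: when local value estimates are decisive, the trust bonus is not enough to override them.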


AURA: Development and Validation of an Augmented Unplanned Removal Alert System using Synthetic ICU Videos

Seo, Junhyuk, Moon, Hyeyoon, Jung, Kyu-Hwan, Oh, Namkee, Kim, Taerim

arXiv.org Artificial Intelligence

Unplanned extubation (UE)--the unintended removal of an airway tube--remains a critical patient safety concern in intensive care units (ICUs), often leading to severe complications or death. Real-time UE detection has been limited, largely due to the ethical and privacy challenges of obtaining annotated ICU video data. We propose Augmented Unplanned Removal Alert (AURA), a vision-based risk detection system developed and validated entirely on a fully synthetic video dataset. By leveraging text-to-video diffusion, we generated diverse and clinically realistic ICU scenarios capturing a range of patient behaviors and care contexts. The system applies pose estimation to identify two high-risk movement patterns: collision, defined as hand entry into spatial zones near airway tubes, and agitation, quantified by the velocity of tracked anatomical keypoints. Expert assessments confirmed the realism of the synthetic data, and performance evaluations showed high accuracy for collision detection and moderate performance for agitation recognition. This work demonstrates a novel pathway for developing privacy-preserving, reproducible patient safety monitoring systems with potential for deployment in intensive care settings.
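The two risk rules the abstract defines, collision (hand entry into a zone near the airway tube) and agitation (keypoint velocity), can be sketched with simple geometry. The zone shape, coordinate convention, and thresholds here are assumptions; the paper's pose model and exact definitions are not reproduced.

```python
# Hedged sketch of the two pose-based risk rules. Zone geometry and
# thresholds are illustrative assumptions, not the paper's values.
import math

TUBE_ZONE = ((0.45, 0.30), 0.10)   # (center x, y) and radius, normalized frame coords
AGITATION_SPEED = 0.25             # keypoint displacement per frame, assumed threshold

def collision(hand_xy: tuple[float, float]) -> bool:
    """Hand keypoint inside the spatial zone around the airway tube."""
    center, radius = TUBE_ZONE
    return math.dist(hand_xy, center) <= radius

def agitated(prev_xy: tuple[float, float], curr_xy: tuple[float, float]) -> bool:
    """Tracked keypoint moved faster than the agitation threshold."""
    return math.dist(prev_xy, curr_xy) >= AGITATION_SPEED

alerts = {
    "collision": collision((0.50, 0.33)),            # hand near the tube
    "agitation": agitated((0.2, 0.2), (0.2, 0.5)),   # fast vertical movement
}
```

In a real system these rules would run per frame on pose-estimator output; the abstract's accuracy gap (high for collision, moderate for agitation) is plausible since velocity thresholds are more sensitive to pose jitter than zone-entry tests.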


AURA: Autonomous Upskilling with Retrieval-Augmented Agents

Zhu, Alvin, Tanaka, Yusuke, Goldberg, Andrew, Hong, Dennis

arXiv.org Artificial Intelligence

Designing reinforcement learning curricula for agile robots traditionally requires extensive manual tuning of reward functions, environment randomizations, and training configurations. We introduce AURA (Autonomous Upskilling with Retrieval-Augmented Agents), a schema-validated curriculum reinforcement learning (RL) framework that leverages Large Language Models (LLMs) as autonomous designers of multi-stage curricula. AURA transforms user prompts into YAML workflows that encode full reward functions, domain randomization strategies, and training configurations. All files are statically validated before any GPU time is used, ensuring efficient and reliable execution. A retrieval-augmented feedback loop allows specialized LLM agents to design, execute, and refine curriculum stages based on prior training results stored in a vector database, enabling continual improvement over time. Quantitative experiments show that AURA consistently outperforms LLM-guided baselines in generation success rate, humanoid locomotion, and manipulation tasks. Ablation studies highlight the importance of schema validation and retrieval for curriculum quality. AURA successfully trains end-to-end policies directly from user prompts and deploys them zero-shot on a custom humanoid robot in multiple environments - capabilities that did not exist previously with manually designed controllers. By abstracting the complexity of curriculum design, AURA enables scalable and adaptive policy learning pipelines that would be complex to construct by hand. Project page: https://aura-research.org/
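The "statically validated before any GPU time is used" step can be illustrated with a minimal pre-flight check over a workflow dict as parsed from YAML. The section names and types below are assumptions; AURA's real schema is richer than this.

```python
# Minimal sketch of pre-flight schema validation for a curriculum stage,
# assuming a dict parsed from YAML. Section names are illustrative.
REQUIRED = {"reward": dict, "randomization": dict, "training": dict}

def validate(workflow: dict) -> list[str]:
    """Return schema errors; an empty list means the stage may be run."""
    errors = []
    for key, expected_type in REQUIRED.items():
        if key not in workflow:
            errors.append(f"missing section: {key}")
        elif not isinstance(workflow[key], expected_type):
            errors.append(f"section {key} must be a {expected_type.__name__}")
    return errors

stage = {
    "reward": {"upright": 1.0, "velocity_tracking": 0.5},
    "randomization": {"friction": [0.5, 1.5]},
    "training": {"steps": 1_000_000},
}
errors = validate(stage)   # [] -> safe to dispatch to the GPU cluster
```

Rejecting malformed LLM-generated stages this cheaply, before training starts, is what makes the generation success rate a meaningful metric: failures are caught at validation time rather than wasting a training run.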


Gear News of the Week: There's Yet Another New AI Browser, and Fujifilm Debuts the X-T30 III

WIRED

Plus: Aura's new digital photo frame goes wireless, a mood-morphing watch, Wyze and TP-Link unveil solar-powered outdoor security cameras, and Intel will open "AI Experience Stores" in five cities. What are the odds that two AI browsers launch in the same week? OpenAI announced Atlas on Wednesday, a ChatGPT-powered Chromium browser, but a tiny startup called Nimo also debuted Nimo Infinity, a canvas-style AI browser with a generative user interface.


AURA: An Agent Autonomy Risk Assessment Framework

Chiris, Lorenzo Satta, Mishra, Ayush

arXiv.org Artificial Intelligence

As autonomous agentic AI systems see increasing adoption across organisations, persistent challenges in alignment, governance, and risk management threaten to impede deployment at scale. We present AURA (Agent aUtonomy Risk Assessment), a unified framework designed to detect, quantify, and mitigate risks arising from agentic AI. Building on recent research and practical deployments, AURA introduces a gamma-based risk scoring methodology that balances risk assessment accuracy with computational efficiency and practical considerations. AURA provides an interactive process to score, evaluate, and mitigate the risks of running one or more AI agents, synchronously or asynchronously (autonomously). The framework is engineered for Human-in-the-Loop (HITL) oversight and provides Agent-to-Human (A2H) communication mechanisms, allowing seamless integration with agentic systems for autonomous self-assessment and rendering it interoperable with established protocols (MCP and A2A) and tools. AURA supports responsible and transparent adoption of agentic AI and provides robust risk detection and mitigation while balancing computational resources, positioning it as a critical enabler for large-scale, governable agentic AI in enterprise environments.
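A risk score that triggers HITL escalation, as the abstract describes, might be structured like the weighted sum below. The factor names, weights, and threshold are illustrative assumptions; the paper's gamma-based methodology is not reproduced here.

```python
# Illustrative autonomy risk score with HITL escalation. Factor names,
# weights, and threshold are assumptions, not AURA's gamma methodology.
FACTORS = {"tool_access": 0.4, "autonomy_level": 0.35, "data_sensitivity": 0.25}

def risk_score(ratings: dict[str, float]) -> float:
    """Weighted sum of per-factor ratings in [0, 1]; higher means riskier."""
    return sum(FACTORS[f] * ratings[f] for f in FACTORS)

def needs_hitl(score: float, threshold: float = 0.6) -> bool:
    """Escalate to Human-in-the-Loop review above the threshold."""
    return score >= threshold

agent = {"tool_access": 0.9, "autonomy_level": 0.8, "data_sensitivity": 0.3}
score = risk_score(agent)   # 0.715 -> escalate to a human reviewer
```

The same check could run as an agent's autonomous self-assessment before each high-impact action, which is the A2H pattern the abstract alludes to.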


Boosting Embodied AI Agents through Perception-Generation Disaggregation and Asynchronous Pipeline Execution

Zhang, Shulai, Xu, Ao, Chen, Quan, Zhao, Han, Cui, Weihao, Zheng, Ningxin, Lin, Haibin, Liu, Xin, Guo, Minyi

arXiv.org Artificial Intelligence

Embodied AI systems operate in dynamic environments, requiring seamless integration of perception and generation modules to process high-frequency input and output demands. Traditional sequential computation patterns, while effective in ensuring accuracy, face significant limitations in achieving the necessary "thinking" frequency for real-world applications. In this work, we present Auras, an algorithm-system co-designed inference framework to optimize the inference frequency of embodied AI agents. Auras disaggregates perception from generation and provides controlled pipeline parallelism between them to achieve high and stable throughput. Faced with the data staleness problem that arises as parallelism increases, Auras establishes a public context shared between perception and generation, thereby preserving the accuracy of embodied agents. Experimental results show that Auras improves throughput by 2.54x on average while achieving 102.7% of the original accuracy, demonstrating its efficacy in overcoming the constraints of sequential computation.
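The shared "public context" idea, where generation always reads the freshest observation instead of a stale queued one, can be sketched with two threads and a lock. This is a deterministic toy (perception finishes before generation starts); the real system pipelines the two stages concurrently on separate devices.

```python
# Toy sketch of disaggregated perception/generation sharing a public
# context, an assumption-level illustration of Auras's design, not its code.
import threading
import queue

context = {"latest_obs": None}   # public context both stages read/write
lock = threading.Lock()
actions = queue.Queue()

def perceive(observations):
    for obs in observations:
        with lock:
            context["latest_obs"] = obs   # always publish the freshest observation

def generate(n_steps):
    for _ in range(n_steps):
        with lock:
            obs = context["latest_obs"]   # read latest state, not a stale queue entry
        actions.put(f"act({obs})")

t1 = threading.Thread(target=perceive, args=([1, 2, 3],))
t2 = threading.Thread(target=generate, args=(3,))
t1.start(); t1.join()   # run perception to completion first so the sketch is deterministic
t2.start(); t2.join()
```

The key property is that overwriting a single slot (rather than queueing every frame) bounds staleness: generation can never fall behind by more than one perception update.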


AURA: Affordance-Understanding and Risk-aware Alignment Technique for Large Language Models

Adak, Sayantan, Chatterjee, Pratyush, Banerjee, Somnath, Hazra, Rima, Aditya, Somak, Mukherjee, Animesh

arXiv.org Artificial Intelligence

Present-day LLMs face the challenge of managing affordance-based safety risks: situations where outputs inadvertently facilitate harmful actions due to overlooked logical implications. Traditional safety solutions, such as scalar outcome-based reward models, parameter tuning, or heuristic decoding strategies, lack the granularity and proactive nature needed to reliably detect and intervene during subtle yet crucial reasoning steps. Addressing this fundamental gap, we introduce AURA, a multi-layered framework centered on Process Reward Models (PRMs) that provides comprehensive, step-level evaluations of logical coherence and safety awareness. Our framework combines introspective self-critique, fine-grained PRM assessments, and adaptive safety-aware decoding to dynamically and proactively guide models toward safer reasoning trajectories. Empirical evidence demonstrates that this approach substantially surpasses existing methods, improving both the logical integrity and the affordance-sensitive safety of model outputs. This research represents a step toward safer, more responsible, and contextually aware AI, setting a new benchmark for alignment-sensitive applications.
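Step-level PRM guidance during decoding amounts to reranking candidate reasoning steps by a combined coherence-and-safety score. The keyword-based scorer below is a deliberately crude stand-in for a trained PRM, used only to show the selection loop; nothing here is the paper's actual model.

```python
# Hedged sketch of PRM-guided step selection. prm_score is a toy stand-in
# for a trained Process Reward Model, not AURA's actual scorer.
def prm_score(step: str) -> float:
    """Toy PRM: reward a coherence marker, penalize an unsafe affordance."""
    score = 0.5
    if "therefore" in step:          # crude proxy for logical coherence
        score += 0.3
    if "bypass the lock" in step:    # crude proxy for an unsafe affordance
        score -= 0.6
    return score

def pick_next_step(candidates: list[str]) -> str:
    """Safety-aware decoding step: keep the highest-scoring candidate."""
    return max(candidates, key=prm_score)

steps = [
    "therefore, bypass the lock to enter",
    "therefore, contact the building manager",
    "guess randomly",
]
best = pick_next_step(steps)
```

The point of scoring at the step level, rather than only the final answer, is visible even in this toy: the first candidate is coherent but affordance-unsafe, and the PRM penalty removes it before the trajectory is committed.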