OSIL: Learning Offline Safe Imitation Policies with Safety Inferred from Non-preferred Trajectories
Returaj Burnwal, Nirav Pravinbhai Bhatt, Balaraman Ravindran
This work addresses the problem of offline safe imitation learning (IL), where the goal is to learn safe, reward-maximizing policies from demonstrations that carry no per-timestep safety cost or reward information. In many real-world domains, online learning in the environment can be risky, and specifying accurate safety costs is difficult. However, it is often feasible to collect trajectories that reflect undesirable or unsafe behavior, implicitly conveying what the agent should avoid; we refer to these as non-preferred trajectories. We propose a novel offline safe IL algorithm, OSIL, that infers safety from non-preferred demonstrations. We formulate safe policy learning as a Constrained Markov Decision Process (CMDP). Instead of relying on explicit safety cost and reward annotations, OSIL reformulates the CMDP problem by deriving a lower bound on the reward-maximizing objective and learning a cost model that estimates the likelihood of non-preferred behavior. Our approach allows agents to learn safe, reward-maximizing behavior entirely from offline demonstrations. We empirically demonstrate that our approach learns safer policies that satisfy cost constraints without degrading reward performance, outperforming several baselines.
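The core idea above — a cost model that estimates how likely a (state, action) pair is to come from non-preferred behavior — can be sketched as a simple binary classifier. Everything below (the toy features, the 4-dimensional encoding, and the logistic-regression cost model) is an illustrative assumption for the sketch, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: expert-like transitions vs. non-preferred (unsafe)
# transitions. The feature vectors stand in for (state, action) encodings.
expert = rng.normal(loc=+1.0, size=(200, 4))
non_preferred = rng.normal(loc=-1.0, size=(200, 4))

X = np.vstack([expert, non_preferred])
y = np.concatenate([np.zeros(200), np.ones(200)])  # label 1 = non-preferred

# Logistic-regression cost model: c(s, a) ~ P(non-preferred | s, a),
# trained by plain gradient descent on the cross-entropy loss.
w = np.zeros(4)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

def cost(sa):
    """Estimated likelihood that a (state, action) encoding is non-preferred."""
    return 1.0 / (1.0 + np.exp(-(sa @ w + b)))

# Unsafe-looking transitions should receive higher cost than expert-like ones.
print(cost(np.full(4, -1.0)) > cost(np.full(4, +1.0)))  # prints True
```

In a CMDP setting, such a learned cost would then enter the constraint term of the policy objective (e.g. via a Lagrangian), penalizing policies whose expected cost exceeds a budget.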
ACM SIGAI Autonomous Agents Award 2026 open for nominations
Nominations are solicited for the 2026 ACM SIGAI Autonomous Agents Research Award. This award is made for excellence in research in the area of autonomous agents. It is intended to recognize researchers in autonomous agents whose current work is an important influence on the field. The award is an official ACM award, funded by an endowment created by ACM SIGAI from the proceeds of previous Autonomous Agents conferences. The recipient of the award will receive a monetary prize and a certificate, and will be invited to present a plenary talk at the AAMAS 2026 conference.
Enhancing the development of Cherenkov Telescope Array control software with Large Language Models
Dmitriy Kostunin, Elisa Jones, Vladimir Sotnikov, Valery Sotnikov, Sergo Golovachev, Alexandre Strube
We develop AI agents based on instruction-finetuned large language models (LLMs) to assist in the engineering and operation of the Cherenkov Telescope Array Observatory (CT AO) Control and Data Acquisition Software (ACADA). These agents align with project-specific documentation and codebases, understand contextual information, interact with external APIs, and communicate with users in natural language.
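An agent that "interacts with external APIs" typically works by having the LLM emit a structured tool call that the harness dispatches. A minimal sketch of that loop is below; the tool name, the JSON calling convention, and the stub standing in for the LLM are all hypothetical, not part of ACADA:

```python
import json

def get_telescope_status(telescope_id: str) -> dict:
    # Stand-in for a real control-software API call.
    return {"telescope": telescope_id, "state": "ready"}

# Registry mapping tool names the model may emit to callable functions.
TOOLS = {"get_telescope_status": get_telescope_status}

def fake_llm(prompt: str) -> str:
    # A real agent would query an instruction-finetuned LLM here; this stub
    # returns a tool call in the JSON convention assumed for this sketch.
    return json.dumps({"tool": "get_telescope_status",
                       "args": {"telescope_id": "LST-1"}})

def run_agent(user_request: str) -> dict:
    # One step of the agent loop: model output -> parsed call -> dispatched tool.
    call = json.loads(fake_llm(user_request))
    return TOOLS[call["tool"]](**call["args"])

result = run_agent("Is LST-1 ready for observation?")
print(result["state"])  # prints "ready"
```

A production loop would iterate (feeding tool results back to the model) and validate tool arguments before dispatch; the single-step version shows only the routing mechanism.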
FLAD: Federated Learning for LLM-based Autonomous Driving in Vehicle-Edge-Cloud Networks
Tianao Xiang, Mingjian Zhi, Yuanguo Bi, Lin Cai, Yuhao Chen
Large Language Models (LLMs) have impressive data fusion and reasoning capabilities for autonomous driving (AD). However, training LLMs for AD faces significant challenges, including high computation and transmission costs and privacy concerns associated with sensitive driving data. Federated Learning (FL) is a promising way to enable autonomous vehicles (AVs) to collaboratively train models without sharing raw data. We present Federated LLM-based Autonomous Driving (FLAD), an FL framework that leverages distributed multimodal sensory data across AVs in heterogeneous environments. FLAD has three key innovations: (1) a cloud-edge-vehicle collaborative architecture that reduces communication delay and preserves data privacy; (2) intelligent parallelized collaborative training with a communication scheduling mechanism that optimizes training efficiency, leveraging end devices that would otherwise have insufficient resources for model training; and (3) a knowledge distillation method that personalizes the LLM according to heterogeneous edge data. In addition, we prototype FLAD in a testbed with NVIDIA Jetsons, overcoming practical implementation challenges including CPU/GPU memory sharing on resource-constrained devices, dynamic model partitioning, and fault-tolerant training. Extensive experimental evaluation demonstrates that FLAD achieves superior end-to-end AD performance while efficiently utilizing distributed vehicular resources, opening up new possibilities for future collaborative AD model training and knowledge sharing.
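The collaborative-training idea — each vehicle trains on its private data and only model parameters are shared — can be sketched with plain federated averaging (FedAvg) on a toy linear model. FLAD's actual system (LLMs, edge scheduling, distillation) is far richer; the client count, data shapes, and weights below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: each "vehicle" holds private local data generated from the same
# underlying linear model (the true weights are hypothetical).
true_w = np.array([2.0, -1.0, 0.5])

def make_client(n=50):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

clients = [make_client() for _ in range(5)]

def local_step(w, X, y, lr=0.1, epochs=10):
    # Gradient descent on this client's private data only; raw data never
    # leaves the client, only the updated weights do.
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# FedAvg rounds: the server averages the locally trained weights.
w_global = np.zeros(3)
for _ in range(20):
    local_ws = [local_step(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)

print(np.allclose(w_global, true_w, atol=0.1))  # prints True
```

FLAD's cloud-edge-vehicle architecture additionally partitions the model across tiers and schedules communication, but the privacy-preserving aggregation step it builds on has this shape.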