CQM: Curriculum Reinforcement Learning with a Quantized World Model

Neural Information Processing Systems

Recent curriculum Reinforcement Learning (RL) has shown notable progress in solving complex tasks by proposing sequences of surrogate tasks. However, previous approaches often face challenges when generating curriculum goals in a high-dimensional space.



CAOS: Conformal Aggregation of One-Shot Predictors

Waldron, Maja

arXiv.org Machine Learning

One-shot prediction enables rapid adaptation of pretrained foundation models to new tasks using only one labeled example, but lacks principled uncertainty quantification. While conformal prediction provides finite-sample coverage guarantees, standard split conformal methods are inefficient in the one-shot setting due to data splitting and reliance on a single predictor. We propose Conformal Aggregation of One-Shot Predictors (CAOS), a conformal framework that adaptively aggregates multiple one-shot predictors and uses a leave-one-out calibration scheme to fully exploit scarce labeled data. Despite violating classical exchangeability assumptions, we prove that CAOS achieves valid marginal coverage using a monotonicity-based argument. Experiments on one-shot facial landmarking and RAFT text classification tasks show that CAOS produces substantially smaller prediction sets than split conformal baselines while maintaining reliable coverage.
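The leave-one-out calibration idea behind CAOS can be illustrated in a deliberately simplified regression setting. Everything below is an illustrative assumption, not the paper's method: the "one-shot predictor" is a toy model that just memorizes its single label, the aggregator is a plain mean, and the nonconformity score is the absolute residual. The point is the structure: each labeled point is scored against the aggregate of predictors fit on the other points, and the empirical quantile of those scores sets the prediction-set width.

```python
import numpy as np

def one_shot_predictor(x_anchor, y_anchor):
    """Toy 'one-shot' predictor: memorizes its single training label.

    A deliberately crude stand-in for a foundation model adapted from
    one labeled example; it ignores its input entirely.
    """
    return lambda x: float(y_anchor)

def caos_style_interval(X, y, x_new, alpha=0.2):
    """Leave-one-out conformal interval around an aggregated prediction.

    For each labeled point i, aggregate the predictors fit on the *other*
    points and record the absolute residual at i. The (1 - alpha)
    empirical quantile of these residuals (with the usual finite-sample
    (n + 1) correction) gives the interval half-width at x_new.
    """
    n = len(X)
    residuals = []
    for i in range(n):
        preds = [one_shot_predictor(X[j], y[j])(X[i])
                 for j in range(n) if j != i]
        residuals.append(abs(np.mean(preds) - y[i]))
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(residuals, level)
    # At test time, aggregate predictors fit on all n labeled points.
    center = np.mean([one_shot_predictor(X[j], y[j])(x_new) for j in range(n)])
    return center - q, center + q
```

With such a crude base predictor the intervals are wide; the sketch only shows how leave-one-out calibration avoids sacrificing any of the scarce labeled examples to a separate calibration split.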


Landmark-Guided Subgoal Generation in Hierarchical Reinforcement Learning

Neural Information Processing Systems

Goal-conditioned hierarchical reinforcement learning (HRL) has shown promising results for solving complex and long-horizon RL tasks. However, the action space of the high-level policy in goal-conditioned HRL is often large, which results in poor exploration and inefficient training. In this paper, we present HIerarchical reinforcement learning Guided by Landmarks (HIGL), a novel framework for training a high-level policy with a reduced action space guided by landmarks, i.e., promising states to explore. The key component of HIGL is twofold: (a) sampling landmarks that are informative for exploration and (b) encouraging the high-level policy to generate a subgoal towards a selected landmark. For (a), we consider two criteria: coverage of the entire visited state space (i.e., dispersion of states) and novelty of states (i.e., prediction error of a state). For (b), we select the very first landmark on the shortest path in a graph whose nodes are landmarks. Our experiments demonstrate that our framework outperforms prior methods across a variety of control tasks, thanks to efficient exploration guided by landmarks.
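The landmark-selection step in (b) — picking the first landmark on the shortest path through a landmark graph — can be sketched with a plain Dijkstra search. This is a generic illustration, not HIGL's implementation: the graph here is a hypothetical adjacency map of landmark IDs with edge costs, and how HIGL builds edges and costs is not shown.

```python
import heapq

def first_landmark_on_shortest_path(graph, start, goal):
    """Return the first landmark after `start` on the shortest path to `goal`.

    `graph` maps each landmark to a list of (neighbor, edge_cost) pairs.
    Runs Dijkstra from `start`, then walks the predecessor chain back
    from `goal` until reaching the node whose predecessor is `start`.
    Returns None if `goal` is unreachable.
    """
    if goal == start:
        return start
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in prev:
        return None
    node = goal
    while prev[node] != start:
        node = prev[node]
    return node
```

The returned landmark would then serve as the target the high-level policy is encouraged to generate subgoals towards.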


Unsupervised Learning of Object Landmarks via Self-Training Correspondence

Neural Information Processing Systems

This paper addresses the problem of unsupervised discovery of object landmarks. We take a different path compared to that of existing works, based on two novel perspectives: (1) Self-training: starting from generic keypoints, we propose a self-training approach in which the detector improves itself, becoming increasingly tuned to object landmarks.


ALIGN: A Vision-Language Framework for High-Accuracy Accident Location Inference through Geo-Spatial Neural Reasoning

Chowdhury, MD Thamed Bin Zaman, Hossain, Moazzem

arXiv.org Artificial Intelligence

Reliable geospatial information on road accidents is vital for safety analysis and infrastructure planning, yet most low- and middle-income countries continue to face a critical shortage of accurate, location-specific crash data. Existing text-based geocoding tools perform poorly in multilingual and unstructured news environments with incomplete place descriptions and mixed-language text. To address these limitations, this study introduces ALIGN (Accident Location Inference through Geo-Spatial Neural Reasoning) -- a vision-language framework that emulates human spatial reasoning to infer accident location coordinates directly from available textual and map-based cues. ALIGN integrates large language and vision-language model mechanisms within a multi-stage pipeline that performs optical character recognition, linguistic reasoning, and map-level verification through grid-based spatial scanning. The framework systematically evaluates each predicted location against contextual and visual evidence, ensuring interpretable, fine-grained geolocation outcomes without requiring model retraining. Applied to a Bangla-language news data source, ALIGN demonstrates consistent improvements over traditional geoparsing methods, accurately identifying district- and sub-district-level crash sites. Beyond its technical contribution, the framework establishes a high-accuracy foundation for automated crash mapping in data-scarce regions, supporting evidence-driven road-safety policymaking and the broader integration of multimodal artificial intelligence in transportation analytics. 1. Introduction Accurate, fine-grained geospatial data is the bedrock of effective public safety policy, urban planning, and strategic response. For road safety, knowing the precise location of traffic crashes is essential for diagnosing high-risk black spots, deploying emergency services, and evaluating the impact of engineering interventions.
While high-income nations increasingly rely on robust, integrated crash databases and vehicle telematics (Guo, Qian, & Shi, 2022; Szpytko & Nasan Agha, 2020), utilizing advanced methods such as deep learning on multi-vehicle trajectories (Yang et al., 2021), ensemble models integrating connected vehicle data (Yang et al., 2026), and probe vehicle speed contour analysis (Wang et al., 2021), a significant 'geospatial data desert' persists in most Low- and Middle-Income Countries (LMICs) (Mitra & Bhalla, 2023; Chang et al., 2020). This gap is particularly tragic given that these regions bear the overwhelming brunt of global road traffic fatalities. This research focuses on a low-resource country, Bangladesh, a nation that exemplifies this critical data-sparse challenge. The World Bank has estimated that the costs associated with traffic crashes can amount to as much as 5.1% of the country's Gross Domestic Product (World Bank, 2022).


A Hierarchical, Model-Based System for High-Performance Humanoid Soccer

Wang, Quanyou, Zhu, Mingzhang, Hou, Ruochen, Gillespie, Kay, Zhu, Alvin, Wang, Shiqi, Wang, Yicheng, Fernandez, Gaberiel I., Liu, Yeting, Togashi, Colin, Nam, Hyunwoo, Navghare, Aditya, Xu, Alex, Zhu, Taoyuanmin, Ahn, Min Sung, Alvarez, Arturo Flores, Quan, Justin, Hong, Ethan, Hong, Dennis W.

arXiv.org Artificial Intelligence

The development of athletic humanoid robots has gained significant attention as advances in actuation, sensing, and control enable increasingly dynamic, real-world capabilities. RoboCup, an international competition of fully autonomous humanoid robots, provides a uniquely challenging benchmark for such systems, culminating in the long-term goal of competing against human soccer players by 2050. This paper presents the hardware and software innovations underlying our team's victory in the RoboCup 2024 Adult-Sized Humanoid Soccer Competition. On the hardware side, we introduce an adult-sized humanoid platform built with lightweight structural components, high-torque quasi-direct-drive actuators, and a specialized foot design that enables powerful in-gait kicks while preserving locomotion robustness. On the software side, we develop an integrated perception and localization framework that combines stereo vision, object detection, and landmark-based fusion to provide reliable estimates of the ball, goals, teammates, and opponents. A mid-level navigation stack then generates collision-aware, dynamically feasible trajectories, while a centralized behavior manager coordinates high-level decision making, role selection, and kick execution based on the evolving game state. The seamless integration of these subsystems results in fast, precise, and tactically effective gameplay, enabling robust performance under the dynamic and adversarial conditions of real matches. This paper presents the design principles, system architecture, and experimental results that contributed to ARTEMIS's success as the 2024 Adult-Sized Humanoid Soccer champion.


A Formalism for Optimal Search with Dynamic Heuristics (Extended Version)

Christen, Remo, Pommerening, Florian, Büchner, Clemens, Helmert, Malte

arXiv.org Artificial Intelligence

While most heuristics studied in heuristic search depend only on the state, some accumulate information during search and thus also depend on the search history. Various existing approaches use such dynamic heuristics in $\mathrm{A}^*$-like algorithms and appeal to classic results for $\mathrm{A}^*$ to show optimality. However, doing so ignores the complexities of searching with a mutable heuristic. In this paper we formalize the idea of dynamic heuristics and use them in a generic algorithm framework. We study a particular instantiation that models $\mathrm{A}^*$ with dynamic heuristics and show general optimality results. Finally we show how existing approaches from classical planning can be viewed as special cases of this instantiation, making it possible to directly apply our optimality results.
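One classical instance of a heuristic that "accumulates information during search" is pathmax-style repair, where expanding a node tightens the heuristic estimates of its successors. The sketch below is just that one instance, not the paper's general framework: a standard A* over an adjacency map, with a mutable heuristic table updated via h(v) = max(h(v), h(u) - c(u, v)) at expansion time.

```python
import heapq

def astar_dynamic(graph, start, goal, h0):
    """A* search with a heuristic table refined during search.

    `graph` maps nodes to (successor, edge_cost) pairs; `h0` gives initial
    admissible estimates. On expanding u, each successor v receives the
    pathmax-style update h(v) = max(h(v), h(u) - c(u, v)), so estimates can
    only tighten as search proceeds. Returns the optimal cost, or None.
    """
    h = dict(h0)                       # mutable copy: the dynamic heuristic
    g = {start: 0.0}
    pq = [(h.get(start, 0.0), start)]
    closed = set()
    while pq:
        f, u = heapq.heappop(pq)
        if u == goal:
            return g[u]
        if u in closed:
            continue
        closed.add(u)
        for v, c in graph.get(u, []):
            # pathmax-style repair: successors inherit a tightened lower bound
            h[v] = max(h.get(v, 0.0), h.get(u, 0.0) - c)
            ng = g[u] + c
            if ng < g.get(v, float("inf")):
                g[v] = ng
                heapq.heappush(pq, (ng + h[v], v))
    return None
```

The subtlety the paper addresses is visible even here: because h changes mid-search, the usual textbook optimality argument for A* does not apply verbatim and must be re-established for the mutable setting.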


Detection of Intoxicated Individuals from Facial Video Sequences via a Recurrent Fusion Model

Baroutian, Bita, Aghaei, Atefe, Moghaddam, Mohsen Ebrahimi

arXiv.org Artificial Intelligence

Alcohol consumption is a significant public health concern and a major cause of accidents and fatalities worldwide. This study introduces a novel video-based facial sequence analysis approach dedicated to the detection of alcohol intoxication. The method integrates facial landmark analysis via a Graph Attention Network (GAT) with spatiotemporal visual features extracted using a 3D ResNet. These features are dynamically fused with adaptive prioritization to enhance classification performance. Additionally, we introduce a curated dataset comprising 3,542 video segments derived from 202 individuals to support training and evaluation. Our model is compared against two baselines: a custom 3D-CNN and a VGGFace+LSTM architecture. Experimental results show that our approach achieves 95.82% accuracy, 0.977 precision, and 0.97 recall, outperforming prior methods. The findings demonstrate the model's potential for practical deployment in public safety systems for non-invasive, reliable alcohol intoxication detection. Alcohol consumption remains a significant public safety challenge, particularly when it negatively affects cognitive functions, physical coordination, and judgment.
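"Dynamic fusion with adaptive prioritization" of two modality embeddings can be sketched as a small gating operation: a gate looks at both feature vectors and emits one weight per modality before they are summed. Everything below is a toy NumPy illustration under assumed shapes (`W_gate`, `b_gate` are hypothetical gate parameters); the paper's actual fusion module and architecture may differ substantially.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def adaptive_fusion(f_landmark, f_visual, W_gate, b_gate):
    """Adaptively weight two modality embeddings before classification.

    A linear gate reads the concatenated features and emits two weights
    (one per modality) via softmax; the fused vector is the weighted sum
    of the landmark (GAT) and visual (3D-ResNet) embeddings.
    """
    z = np.concatenate([f_landmark, f_visual])
    w = softmax(W_gate @ z + b_gate)     # two weights summing to 1
    return w[0] * f_landmark + w[1] * f_visual
```

In a trained model the gate parameters would be learned end-to-end, letting the network lean on whichever modality is more informative for a given clip.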


In-Context Sync-LoRA for Portrait Video Editing

Polaczek, Sagi, Patashnik, Or, Mahdavi-Amiri, Ali, Cohen-Or, Daniel

arXiv.org Artificial Intelligence

Editing portrait videos is a challenging task that requires flexible yet precise control over a wide range of modifications, such as appearance changes, expression edits, or the addition of objects. The key difficulty lies in preserving the subject's original temporal behavior, demanding that every edited frame remains precisely synchronized with the corresponding source frame. We present Sync-LoRA, a method for editing portrait videos that achieves high-quality visual modifications while maintaining frame-accurate synchronization and identity consistency. Our approach uses an image-to-video diffusion model, where the edit is defined by modifying the first frame and then propagated to the entire sequence. To enable accurate synchronization, we train an in-context LoRA using paired videos that depict identical motion trajectories but differ in appearance. These pairs are automatically generated and curated through a synchronization-based filtering process that selects only the most temporally aligned examples for training. This training setup teaches the model to combine motion cues from the source video with the visual changes introduced in the edited first frame. Trained on a compact, highly curated set of synchronized human portraits, Sync-LoRA generalizes to unseen identities and diverse edits (e.g., modifying appearance, adding objects, or changing backgrounds), robustly handling variations in pose and expression. Our results demonstrate high visual fidelity and strong temporal coherence, achieving a robust balance between edit fidelity and precise motion preservation.