An Efficient Dataset Condensation Plugin and Its Application to Continual Learning

Neural Information Processing Systems

Dataset condensation (DC) distills a large real-world dataset into a small synthetic one, with the goal that a network trained from scratch on the synthetic dataset performs similarly to one trained on the original. State-of-the-art (SOTA) DC methods have achieved satisfactory results through techniques such as accuracy, gradient, training-trajectory, or distribution matching. However, these works all perform matching in the high-dimensional pixel space, overlooking the fact that natural images are usually locally connected and have low intrinsic dimension, which results in low condensation efficiency. In this work, we propose a simple yet efficient dataset condensation plugin that matches the raw and synthetic datasets in a low-dimensional manifold.
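The core idea, matching statistics of the real and synthetic data in a low-dimensional space rather than in pixel space, can be sketched in a few lines. The sketch below is a hypothetical toy, not the paper's method: the data, shapes, fixed random projection `P`, and mean-matching objective are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): 256 "real" images of 64 pixels each,
# condensed into 4 synthetic images. P projects pixels to an 8-dim space.
real = rng.normal(size=(256, 64))
syn = rng.normal(size=(4, 64))
P = rng.normal(size=(64, 8)) / np.sqrt(64)    # fixed random projection

goal = (real @ P).mean(axis=0)                # statistic to match (mean embedding)

lr = 0.5
for _ in range(200):
    diff = (syn @ P).mean(axis=0) - goal      # mismatch measured in the low-dim space
    # Gradient of 0.5 * ||mean(syn @ P) - goal||^2 w.r.t. each synthetic image
    syn -= lr * np.outer(np.ones(len(syn)) / len(syn), diff @ P.T)

final_err = np.linalg.norm((syn @ P).mean(axis=0) - goal)
```

The point of the sketch is the placement of the loss: the synthetic images live in pixel space, but the matching objective is computed after projection, so the optimization only has to close an 8-dimensional gap instead of a 64-dimensional one.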



OpenAGI: When LLM Meets Domain Experts

Neural Information Processing Systems

Human Intelligence (HI) excels at combining basic skills to solve complex tasks. This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI agents, enabling them to harness expert models for complex task-solving on the path toward Artificial General Intelligence (AGI). Large Language Models (LLMs) show promising learning and reasoning abilities, and can effectively use external models, tools, plugins, or APIs to tackle complex problems. In this work, we introduce OpenAGI, an open-source AGI research and development platform designed for solving multi-step, real-world tasks. Specifically, OpenAGI adopts a dual strategy: it integrates standard benchmark tasks for evaluation alongside open-ended tasks, including more expandable models, tools, plugins, or APIs, for creative problem-solving. Tasks are presented as natural language queries to the LLM, which then selects and executes appropriate models. We also propose a Reinforcement Learning from Task Feedback (RLTF) mechanism that uses task results to improve the LLM's task-solving ability, creating a self-improving AI feedback loop. While we acknowledge that AGI is a broad and multifaceted research challenge with no singularly defined solution path, the integration of LLMs with domain-specific expert models, mirroring the blend of general and specialized intelligence in humans, offers a promising approach toward AGI.
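The feedback loop described above (select a solver, observe task results, adjust future selections) can be illustrated at toy scale. The sketch below is not OpenAGI's RLTF implementation; it is a minimal bandit-style policy-gradient loop under assumed names: `true_reward` is a hypothetical per-pipeline success rate, and the softmax scores stand in for the LLM's learned preference over candidate expert pipelines.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: for a fixed task query, choice among three candidate expert
# pipelines is modeled as a softmax over learned scores. Task feedback
# (a noisy success signal) nudges scores toward better pipelines.
true_reward = np.array([0.2, 0.9, 0.5])   # unknown task success rates
scores = np.zeros(3)
lr = 0.2

for _ in range(2000):
    p = np.exp(scores) / np.exp(scores).sum()
    a = rng.choice(3, p=p)                  # pick a pipeline
    r = float(rng.random() < true_reward[a])  # binary task feedback
    grad = -p
    grad[a] += 1.0                          # d log p(a) / d scores
    scores += lr * (r - 0.5) * grad         # REINFORCE with a constant baseline
```

After training, the score of the best pipeline dominates, i.e. the policy has learned from task feedback alone which solver to route the query to.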


Agent-Kernel: A MicroKernel Multi-Agent System Framework for Adaptive Social Simulation Powered by LLMs

Mao, Yuren, Liu, Peigen, Wang, Xinjian, Ding, Rui, Miao, Jing, Zou, Hui, Qi, Mingjie, Luo, Wanxiang, Lai, Longbin, Wang, Kai, Qian, Zhengping, Yang, Peilun, Gao, Yunjun, Zhang, Ying

arXiv.org Artificial Intelligence

Multi-Agent System (MAS) development frameworks serve as the foundational infrastructure for social simulations powered by Large Language Models (LLMs). However, existing frameworks fail to adequately support large-scale simulation development due to inherent limitations in adaptability, configurability, reliability, and code reusability. For example, they cannot simulate a society whose agent population and profiles change over time. To fill this gap, we propose Agent-Kernel, a framework built upon a novel society-centric modular microkernel architecture. It decouples core system functions from simulation logic and separates cognitive processes from physical environments and action execution. Consequently, Agent-Kernel achieves superior adaptability, configurability, reliability, and reusability. We validate the framework's superiority through two distinct applications: a simulation of the Universe 25 (Mouse Utopia) experiment, which demonstrates the handling of rapid population dynamics from birth to death; and a large-scale simulation of campus life at Zhejiang University, successfully coordinating 10,000 heterogeneous agents, including students and faculty.


Pre-Filtering Code Suggestions using Developer Behavioral Telemetry to Optimize LLM-Assisted Programming

Awad, Mohammad Nour Al, Ivanov, Sergey, Tikhonova, Olga

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are increasingly integrated into code editors to provide AI-powered code suggestions. Yet many of these suggestions are ignored, resulting in wasted computation, increased latency, and unnecessary interruptions. We introduce a lightweight pre-filtering model that predicts the likelihood of suggestion acceptance before invoking the LLM, using only real-time developer telemetry such as typing speed, file navigation, and editing activity. Deployed in a production-grade Visual Studio Code plugin over four months of naturalistic use, our approach nearly doubled acceptance rates (18.4% to 34.2%) while suppressing 35% of low-value LLM calls. These findings demonstrate that behavioral signals alone can meaningfully improve both user experience and system efficiency in LLM-assisted programming, highlighting the value of timing-aware, privacy-preserving adaptation mechanisms. The filter operates solely on pre-invocation editor telemetry and never inspects code or prompts. LLMs have rapidly transformed the landscape of software development by enabling intelligent code completions, refactorings, and in-editor conversations. These capabilities are increasingly integrated into modern development environments, particularly through plugins for popular IDEs such as Visual Studio Code. However, despite their power, LLM-driven code suggestions often fail to align with developer intent in real time, leading to low acceptance rates, disrupted workflows, and wasted computational resources [1].
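A pre-filter of this kind is, at its core, a cheap binary classifier over telemetry features with a threshold gating the expensive LLM call. The sketch below is a hypothetical stand-in, not the paper's model: the feature names, the synthetic acceptance labels, and the `should_invoke_llm` helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical telemetry features per editing pause:
# [typing speed, seconds since last file switch, recent edit-burst size]
X = rng.normal(size=(500, 3))
# Synthetic ground truth: slower typing and longer focus on one file
# make an incoming suggestion more likely to be accepted.
logits = -1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2]
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(float)

# Train a tiny logistic regression as the pre-filter.
w = np.zeros(3)
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - p) / len(X)   # gradient ascent on log-likelihood

def should_invoke_llm(features, threshold=0.5):
    """Gate the (expensive) LLM call on predicted acceptance probability."""
    prob = 1 / (1 + np.exp(-(features @ w)))
    return prob >= threshold

suppressed = 1.0 - np.mean([should_invoke_llm(x) for x in X])
```

Because the filter sees only pre-invocation telemetry, it can run locally in the editor process with microsecond latency, and the `threshold` parameter directly trades suppression rate against missed suggestions.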


Evaluating AI-Driven Automated Map Digitization in QGIS

Febrita, Diana

arXiv.org Artificial Intelligence

Map digitization is an important process that converts maps into digital formats suitable for further analysis. It typically requires deep human involvement because of the interpretation and decision-making needed to translate complex features. With the advancement of artificial intelligence, machine learning techniques offer an alternative way to conduct map digitization. Deepness (Deep Neural Remote Sensing) is an AI-driven tool designed and integrated as a plugin for the QGIS application. This research assesses the effectiveness of Deepness in automated digitization: it analyses AI-generated digitization results from Google Earth imagery and compares them with digitized outputs from OpenStreetMap (OSM) to evaluate performance.
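Comparing automated digitization against an OSM reference typically reduces to an overlap score between the two geometries. The sketch below shows one plausible scoring step under stated assumptions: both outputs are taken to be already rasterized to boolean masks on a common grid (the masks here are toy data, not Deepness or OSM output).

```python
import numpy as np

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks on the same grid."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0

# Toy masks: a 4x4 building footprint, with the AI output shifted one
# pixel to the right relative to the OSM reference.
ref = np.zeros((10, 10), dtype=bool)
ref[2:6, 2:6] = True
pred = np.zeros((10, 10), dtype=bool)
pred[2:6, 3:7] = True

score = iou(pred, ref)   # 12 overlapping pixels / 20 in the union = 0.6
```

Aggregating such per-feature IoU scores over a test region gives a single, reproducible measure of how closely the automated digitization tracks the human-digitized reference.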


A ROS2 Interface for Universal Robots Collaborative Manipulators Based on ur_rtde

Saccuti, Alessio, Monica, Riccardo, Aleotti, Jacopo

arXiv.org Artificial Intelligence

The Universal Robots RTDE communication interface is well known in the literature and has been used in several works. In [5] and [6], RTDE was adopted to control UR cobots. In [7], [8], and [9], the RTDE interface was used only for data acquisition. To facilitate the development of external applications for UR cobots, various higher-level software interfaces and drivers based on RTDE have been proposed. In addition to the official software interface by Universal Robots (ur_client_library), a few alternatives have been developed by third parties. One of these is ur_rtde [4] by SDU Robotics, which was used in this work. Another similar interface is python-urx [10], a Python interface for tasks that do not require a high control frequency.


Appendices for PLUGIn: A simple algorithm for inverting generative models with recovery guarantees A Some Results on Gaussian Matrices

Neural Information Processing Systems

Here we state some results on Gaussian matrices, which will be used in the proofs later. The following theorem is the concentration-of-(Gaussian)-measure inequality for Lipschitz functions; we state only a one-sided version, though it is more commonly stated in its two-sided form. The result follows since $\mathbb{E}\|A\| \le \sqrt{m} + \sqrt{n}$ (see, e.g., [31, Section 7.3]). In particular, Bernstein's inequality [31, Section 2.8] holds. First, we establish that $Z(u, v; w)$ has a mixed tail. Next, proceeding by induction on $k$ (i.e., applying (12) with $r = r^{(i-1)}$), the result follows.