Sun, Yu
Benchmarking Multi-Object Grasping
Chen, Tianze, Frumento, Ricardo, Pagnanelli, Giulia, Cei, Gianmarco, Keth, Villa, Gafarov, Shahaddin, Gong, Jian, Ye, Zihe, Baracca, Marco, D'Avella, Salvatore, Bianchi, Matteo, Sun, Yu
In this work, we describe a multi-object grasping benchmark to evaluate the grasping and manipulation capabilities of robotic systems in both pile and surface scenarios. The benchmark introduces three multi-object grasping benchmarking protocols designed to challenge different aspects of robotic manipulation. These protocols are: 1) the Only-Pick-Once protocol, which assesses the robot's ability to efficiently pick multiple objects in a single attempt; 2) the Accurate Pick-Transferring protocol, which evaluates the robot's capacity to selectively grasp and transport a specific number of objects from a cluttered environment; and 3) the Pick-Transferring-All protocol, which challenges the robot to clear an entire scene by sequentially grasping and transferring all available objects. These protocols are intended to be adopted by the broader robotics research community, providing a standardized method to assess and compare robotic systems' performance in multi-object grasping tasks. We establish baselines for these protocols using standard planning and perception algorithms on a Barrett hand, a Robotiq parallel jaw gripper, and the Pisa/IIT SoftHand-2, which is a soft underactuated robotic hand. We discuss the results in relation to human performance in similar tasks as well. The authors are from the Robot Perception and Action Lab (RPAL) of the Computer Science and Engineering Department, University of South Florida, Tampa, FL 33620, USA. The authors are with the Research Center "E. The author is with Rutgers University, New Brunswick, NJ 08901, USA. Related work was finished when Zihe Ye was a Master's student in the RPAL lab at USF. The author is with the Department of Excellence in Robotics & AI, Mechanical Intelligence Institute, Scuola Superiore Sant'Anna, Pisa, Italy.
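The three protocols reduce to simple per-trial scoring rules. As a minimal sketch (function and metric names are ours, not from the benchmark), Only-Pick-Once can be summarized by objects picked per single attempt, and Accurate Pick-Transferring by the fraction of trials that transfer exactly the requested count:

```python
from statistics import mean

def score_only_pick_once(picked_counts):
    """Only-Pick-Once: each trial is one grasp attempt from a pile;
    report how many objects were secured per attempt."""
    return {"avg_picked": mean(picked_counts),
            "max_picked": max(picked_counts)}

def score_accurate_pick_transfer(target, transferred_counts):
    """Accurate Pick-Transferring: a trial succeeds only if exactly
    `target` objects are transferred."""
    successes = sum(1 for c in transferred_counts if c == target)
    return successes / len(transferred_counts)
```

A Pick-Transferring-All score would similarly track attempts needed to clear the scene; the exact metrics reported by the benchmark may differ from this sketch.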
Audio-Enhanced Vision-Language Modeling with Latent Space Broadening for High Quality Data Expansion
Sun, Yu, Li, Yin, Sun, Ruixiao, Liu, Chunhui, Zhou, Fangming, Jin, Ze, Wang, Linjie, Shen, Xiang, Hao, Zhuolin, Xiong, Hongyu
Transformer-based multimodal models are widely used in industrial-scale recommendation, search, and advertising systems for content understanding and relevance ranking. Enhancing labeled training data quality and cross-modal fusion significantly improves model performance, influencing key metrics such as quality view rates and ad revenue. High-quality annotations are crucial for advancing content modeling, yet traditional statistical-based active learning (AL) methods face limitations: they struggle to detect overconfident misclassifications and are less effective in distinguishing semantically similar items in deep neural networks. Additionally, audio information plays an increasing role, especially in short-video platforms, yet most pre-trained multimodal architectures primarily focus on text and images. While training from scratch across all three modalities is possible, it sacrifices the benefits of leveraging existing pre-trained visual-language (VL) and audio models. To address these challenges, we propose kNN-based Latent Space Broadening (LSB) to enhance AL efficiency and Vision-Language Modeling with Audio Enhancement (VLMAE), a mid-fusion approach integrating audio into VL models. This system has been deployed in production, leading to significant business gains.
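The core idea of kNN-based LSB can be sketched in a few lines (this interface is our assumption, not the paper's implementation): starting from items already flagged by an uncertainty-based AL criterion, add their nearest neighbors in the model's latent space to the labeling pool, so semantically similar items that the uncertainty score misses are still surfaced:

```python
import math

def knn_latent_space_broadening(embeddings, seed_indices, k=2):
    """Broaden an AL selection: for each uncertainty-selected seed,
    also select its k nearest neighbors in latent space."""
    selected = set(seed_indices)
    for s in seed_indices:
        ranked = sorted((i for i in range(len(embeddings)) if i != s),
                        key=lambda i: math.dist(embeddings[s], embeddings[i]))
        selected.update(ranked[:k])
    return sorted(selected)
```

In practice the neighbor search would use an approximate index over model embeddings rather than the brute-force sort shown here.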
InverseBench: Benchmarking Plug-and-Play Diffusion Priors for Inverse Problems in Physical Sciences
Zheng, Hongkai, Chu, Wenda, Zhang, Bingliang, Wu, Zihui, Wang, Austin, Feng, Berthy T., Zou, Caifeng, Sun, Yu, Kovachki, Nikola, Ross, Zachary E., Bouman, Katherine L., Yue, Yisong
Plug-and-play diffusion priors (PnPDP) have emerged as a promising research direction for solving inverse problems. However, current studies primarily focus on natural image restoration, leaving the performance of these algorithms in scientific inverse problems largely unexplored. To address this gap, we introduce \textsc{InverseBench}, a framework that evaluates diffusion models across five distinct scientific inverse problems. These problems present unique structural challenges that differ from existing benchmarks, arising from critical scientific applications such as optical tomography, medical imaging, black hole imaging, seismology, and fluid dynamics. With \textsc{InverseBench}, we benchmark 14 inverse problem algorithms that use plug-and-play diffusion priors against strong, domain-specific baselines, offering valuable new insights into the strengths and weaknesses of existing algorithms. To facilitate further research and development, we open-source the codebase, along with datasets and pre-trained models, at https://devzhk.github.io/InverseBench/.
STAR: A Foundation Model-driven Framework for Robust Task Planning and Failure Recovery in Robotic Systems
Sakib, Md Sadman, Sun, Yu
Modern robotic systems, deployed across domains from industrial automation to domestic assistance, face a critical challenge: executing tasks with precision and adaptability in dynamic, unpredictable environments. To address this, we propose STAR (Smart Task Adaptation and Recovery), a novel framework that synergizes Foundation Models (FMs) with dynamically expanding Knowledge Graphs (KGs) to enable resilient task planning and autonomous failure recovery. While FMs offer remarkable generalization and contextual reasoning, their limitations, including computational inefficiency, hallucinations, and output inconsistencies, hinder reliable deployment. STAR mitigates these issues by embedding learned knowledge into structured, reusable KGs, which streamline information retrieval, reduce redundant FM computations, and provide precise, scenario-specific insights. The framework leverages FM-driven reasoning to diagnose failures, generate context-aware recovery strategies, and execute corrective actions without human intervention or system restarts. Unlike conventional approaches that rely on rigid protocols, STAR dynamically expands its KG with experiential knowledge, ensuring continuous adaptation to novel scenarios. To evaluate the effectiveness of this approach, we developed a comprehensive dataset that includes various robotic tasks and failure scenarios. Through extensive experimentation, STAR demonstrated an 86% task planning accuracy and a 78% recovery success rate, showing significant improvements over baseline methods. The framework's ability to continuously learn from experience while maintaining structured knowledge representation makes it particularly suitable for long-term deployment in real-world applications.
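The retrieve-then-reason loop described above can be sketched as follows (class and method names are illustrative, not from the paper, and a dict stands in for the knowledge graph): stored recovery strategies are reused when the same failure recurs, and the expensive FM call runs only for novel cases, with its answer written back to expand the KG:

```python
class RecoveryPlanner:
    """Minimal sketch of a KG-cached failure-recovery loop."""

    def __init__(self, fm_reason):
        self.kg = {}                # experiential knowledge graph (stub)
        self.fm_reason = fm_reason  # expensive foundation-model call

    def recover(self, task, failure):
        key = (task, failure)
        if key in self.kg:          # reuse a stored strategy, skip the FM
            return self.kg[key]
        strategy = self.fm_reason(task, failure)
        self.kg[key] = strategy     # expand the KG with new experience
        return strategy
```

The real system diagnoses failures and validates FM outputs before storing them; this sketch only shows why the KG reduces redundant FM computation.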
Large-Scale AI in Telecom: Charting the Roadmap for Innovation, Scalability, and Enhanced Digital Experiences
Shahid, Adnan, Kliks, Adrian, Al-Tahmeesschi, Ahmed, Elbakary, Ahmed, Nikou, Alexandros, Maatouk, Ali, Mokh, Ali, Kazemi, Amirreza, De Domenico, Antonio, Karapantelakis, Athanasios, Cheng, Bo, Yang, Bo, Wang, Bohao, Fischione, Carlo, Zhang, Chao, Issaid, Chaouki Ben, Yuen, Chau, Peng, Chenghui, Huang, Chongwen, Chaccour, Christina, Thomas, Christo Kurisummoottil, Sharma, Dheeraj, Kalogiros, Dimitris, Niyato, Dusit, De Poorter, Eli, Mhanna, Elissa, Strinati, Emilio Calvanese, Bader, Faouzi, Abdeldayem, Fathi, Wang, Fei, Zhu, Fenghao, Fontanesi, Gianluca, Geraci, Giovanni, Zhou, Haibo, Purmehdi, Hakimeh, Ahmadi, Hamed, Zou, Hang, Du, Hongyang, Lee, Hoon, Yang, Howard H., Poli, Iacopo, Carron, Igor, Chatzistefanidis, Ilias, Lee, Inkyu, Pitsiorlas, Ioannis, Fontaine, Jaron, Wu, Jiajun, Zeng, Jie, Li, Jinan, Karam, Jinane, Gemayel, Johny, Deng, Juan, Frison, Julien, Huang, Kaibin, Qiu, Kehai, Ball, Keith, Wang, Kezhi, Guo, Kun, Tassiulas, Leandros, Gwenole, Lecorve, Yue, Liexiang, Bariah, Lina, Powell, Louis, Dryjanski, Marcin, Galdon, Maria Amparo Canaveras, Kountouris, Marios, Hafeez, Maryam, Elkael, Maxime, Bennis, Mehdi, Boudjelli, Mehdi, Dai, Meiling, Debbah, Merouane, Polese, Michele, Assaad, Mohamad, Benzaghta, Mohamed, Refai, Mohammad Al, Djerrab, Moussab, Syed, Mubeen, Amir, Muhammad, Yan, Na, Alkaabi, Najla, Li, Nan, Sehad, Nassim, Nikaein, Navid, Hashash, Omar, Sroka, Pawel, Yang, Qianqian, Zhao, Qiyang, Silab, Rasoul Nikbakht, Ying, Rex, Morabito, Roberto, Li, Rongpeng, Madi, Ryad, Ayoubi, Salah Eddine El, D'Oro, Salvatore, Lasaulce, Samson, Shalmashi, Serveh, Liu, Sige, Cherrared, Sihem, Chetty, Swarna Bindu, Dutta, Swastika, Zaidi, Syed A. R., Chen, Tianjiao, Murphy, Timothy, Melodia, Tommaso, Quek, Tony Q. 
S., Ram, Vishnu, Saad, Walid, Hamidouche, Wassim, Chen, Weilong, Liu, Xiaoou, Yu, Xiaoxue, Wang, Xijun, Shang, Xingyu, Wang, Xinquan, Cao, Xuelin, Su, Yang, Liang, Yanping, Deng, Yansha, Yang, Yifan, Cui, Yingping, Sun, Yu, Chen, Yuxuan, Pointurier, Yvan, Nehme, Zeinab, Nezami, Zeinab, Yang, Zhaohui, Zhang, Zhaoyang, Liu, Zhe, Yang, Zhenyu, Han, Zhu, Zhou, Zhuang, Chen, Zihan, Chen, Zirui, Shuai, Zitao
The rise of generative artificial intelligence (AI) as a novel frontier that uniquely merges advanced levels of intelligence with revolutionary user experiences is redefining the AI landscape for future cellular networks. In particular, the transition towards 6G systems has introduced a myriad of challenges inherent to their AI-native network design, requiring innovative solutions to enable real-time network orchestration, intelligent decision-making, and adaptive dynamic configurations. Meanwhile, the envisioned user experiences for 6G are growing increasingly complex, exceeding the capabilities offered by vintage wireless technologies and conventional AI solutions to satisfy their advanced demands. With its disruptive impact evident across diverse fields, generative AI possesses immense potential to tackle these challenges, leveraging its exceptional capabilities to manage complex tasks, operate autonomously, and adapt seamlessly to scenarios beyond its training domain. Remarkably, generative AI provides a transformative opportunity for telecom and cellular networks to bridge this defined gap in 6G systems, thereby shifting towards a new era with cutting-edge AI innovations across the different system and user levels.
CritiQ: Mining Data Quality Criteria from Human Preferences
Guo, Honglin, Lv, Kai, Guo, Qipeng, Liang, Tianyi, Xi, Zhiheng, Song, Demin, Zhang, Qiuyinzhe, Sun, Yu, Chen, Kai, Qiu, Xipeng, Gui, Tao
Language models heavily depend on high-quality data for optimal performance. Existing approaches rely on manually designed heuristics, the perplexity of existing models, training classifiers, or careful prompt engineering, which require significant expert experience and human annotation effort while introducing biases. We introduce CritiQ, a novel data selection method that automatically mines criteria from human preferences for data quality with only $\sim$30 human-annotated pairs and performs efficient data selection. The main component, CritiQ Flow, employs a manager agent to evolve quality criteria and worker agents to make pairwise judgments. We build a knowledge base that extracts quality criteria from previous work to boost CritiQ Flow. Compared to perplexity- and classifier-based methods, verbal criteria are more interpretable and possess reusable value. After deriving the criteria, we train the CritiQ Scorer to give quality scores and perform efficient data selection. We demonstrate the effectiveness of our method in the code, math, and logic domains, achieving high accuracy on human-annotated test sets. To validate the quality of the selected data, we continually train Llama 3.1 models and observe improved performance on downstream tasks compared to uniform sampling. Ablation studies validate the benefits of the knowledge base and the reflection process. We analyze how criteria evolve and the effectiveness of majority voting.
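The pairwise-judgment-with-majority-voting step can be illustrated with a toy sketch (the function name and the treatment of criteria as scoring functions are our assumptions; in CritiQ the criteria are verbal and judged by worker agents): each criterion votes for the higher-quality document in a pair, and the majority decides:

```python
def judge_pair(doc_a, doc_b, criteria):
    """criteria: list of scoring functions doc -> float. Each criterion
    votes for the document it scores higher; the majority wins (ties
    default to A)."""
    votes_a = sum(1 for c in criteria if c(doc_a) > c(doc_b))
    votes_b = sum(1 for c in criteria if c(doc_b) > c(doc_a))
    return "A" if votes_a >= votes_b else "B"
```

Aggregating such judgments over the $\sim$30 annotated pairs is what lets the manager agent measure how well a candidate criterion set matches human preference.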
Inner Thinking Transformer: Leveraging Dynamic Depth Scaling to Foster Adaptive Internal Thinking
Chen, Yilong, Shang, Junyuan, Zhang, Zhenyu, Xie, Yanxi, Sheng, Jiawei, Liu, Tingwen, Wang, Shuohuan, Sun, Yu, Wu, Hua, Wang, Haifeng
Large language models (LLMs) face inherent performance bottlenecks under parameter constraints, particularly in processing critical tokens that demand complex reasoning. Empirical analysis reveals challenging tokens induce abrupt gradient spikes across layers, exposing architectural stress points in standard Transformers. Building on this insight, we propose Inner Thinking Transformer (ITT), which reimagines layer computations as implicit thinking steps. ITT dynamically allocates computation through Adaptive Token Routing, iteratively refines representations via Residual Thinking Connections, and distinguishes reasoning phases using Thinking Step Encoding. ITT enables deeper processing of critical tokens without parameter expansion. Evaluations across 162M-466M parameter models show ITT achieves 96.5\% performance of a 466M Transformer using only 162M parameters, reduces training data by 43.2\%, and outperforms Transformer/Loop variants in 11 benchmarks. By enabling elastic computation allocation during inference, ITT balances performance and efficiency through architecture-aware optimization of implicit thinking pathways.
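The interplay of Adaptive Token Routing and Residual Thinking Connections can be reduced to a toy loop (this is our simplification over scalar "tokens", not the paper's architecture): the same layer is reapplied as implicit thinking steps with a residual connection, and a router decides per token whether it needs another step:

```python
def inner_thinking(h, layer_fn, step_budget, route_fn):
    """Apply `layer_fn` as repeated residual 'thinking steps'; `route_fn`
    gates which tokens receive further computation at each step."""
    for step in range(step_budget):
        h = [hi + layer_fn(hi, step) if route_fn(hi) else hi
             for hi in h]
    return h
```

Easy tokens exit immediately while hard tokens accumulate refinement, which is how depth scales per token without adding parameters.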
BeamLoRA: Beam-Constraint Low-Rank Adaptation
Gu, Naibin, Zhang, Zhenyu, Liu, Xiyu, Fu, Peng, Lin, Zheng, Wang, Shuohuan, Sun, Yu, Wu, Hua, Wang, Weiping, Wang, Haifeng
Due to the demand for efficient fine-tuning of large language models, Low-Rank Adaptation (LoRA) has been widely adopted as one of the most effective parameter-efficient fine-tuning methods. Nevertheless, while LoRA improves efficiency, there remains room for improvement in accuracy. Herein, we adopt a novel perspective to assess the characteristics of LoRA ranks. The results reveal that different ranks within the LoRA modules not only exhibit varying levels of importance but also evolve dynamically throughout the fine-tuning process, which may limit the performance of LoRA. Based on these findings, we propose BeamLoRA, which conceptualizes each LoRA module as a beam where each rank naturally corresponds to a potential sub-solution, and the fine-tuning process becomes a search for the optimal sub-solution combination. BeamLoRA dynamically eliminates underperforming sub-solutions while expanding the parameter space for promising ones, enhancing performance with a fixed rank. Extensive experiments across three base models and 12 datasets spanning math reasoning, code generation, and commonsense reasoning demonstrate that BeamLoRA consistently enhances the performance of LoRA, surpassing the other baseline methods.
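The prune-and-expand step over ranks can be sketched as follows (a simplified stand-in for BeamLoRA's actual selection rule; the importance scores would come from the fine-tuning dynamics): keep the top-scoring fraction of ranks as surviving sub-solutions and free the rest for reallocation:

```python
def prune_and_expand_ranks(importance, keep_frac=0.5):
    """Beam-style rank selection: keep the top `keep_frac` of LoRA ranks
    by importance; return (kept, freed) rank indices, where freed slots
    can be reallocated to promising sub-solutions."""
    order = sorted(range(len(importance)),
                   key=lambda r: importance[r], reverse=True)
    k = max(1, int(len(importance) * keep_frac))
    kept, freed = order[:k], order[k:]
    return sorted(kept), sorted(freed)
```

Because the total rank stays fixed, capacity shifts from underperforming sub-solutions to promising ones rather than growing the adapter.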
Proxy Prompt: Endowing SAM and SAM 2 with Auto-Interactive-Prompt for Medical Segmentation
Wang, Xinyi, Kang, Hongyu, Wei, Peishan, Li, Shuai, Sun, Yu, Lam, Sai Kit, Zheng, Yongping
In this paper, we aim to address the unmet demand for automated prompting and enhanced human-model interactions of SAM and SAM 2 for the sake of promoting their widespread clinical adoption. Specifically, we propose Proxy Prompt (PP), auto-generated by leveraging non-target data with a pre-annotated mask. We devise a novel 3-step context-selection strategy for adaptively selecting the most representative contextual information from non-target data via vision mamba and selective maps, empowering the guiding capability of non-target image-mask pairs for segmentation on target image/video data. To reinforce human-model interactions in PP, we further propose a contextual colorization module via dual-reverse cross-attention to enhance interactions between target features and contextual embeddings while amplifying distinctive features of user-defined object(s). Via extensive evaluations, our method achieves state-of-the-art performance on four public datasets and yields results comparable with fully-trained models, even when trained with only 16 image masks.
Curiosity-Driven Reinforcement Learning from Human Feedback
Sun, Haoran, Chai, Yekun, Wang, Shuohuan, Sun, Yu, Wu, Hua, Wang, Haifeng
Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human preferences, but often at the cost of reduced output diversity. This trade-off between diversity and alignment quality remains a significant challenge. Drawing inspiration from curiosity-driven exploration in reinforcement learning, we introduce curiosity-driven RLHF (CD-RLHF), a framework that incorporates intrinsic rewards for novel states, alongside traditional sparse extrinsic rewards, to optimize both output diversity and alignment quality. We demonstrate the effectiveness of CD-RLHF through extensive experiments on a range of tasks, including text summarization and instruction following. Our approach achieves significant gains in diversity on multiple diversity-oriented metrics while maintaining alignment with human preferences comparable to standard RLHF. We make our code publicly available at https://github.com/ernie-research/CD-RLHF.
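The reward shaping behind CD-RLHF can be illustrated with a count-based novelty proxy (the paper's intrinsic reward may be computed differently; this class and its decay rule are our assumptions): novel states earn an intrinsic bonus on top of the sparse extrinsic reward, and the bonus decays as a state is revisited:

```python
from collections import Counter

class CuriosityReward:
    """Combine extrinsic reward with a count-based intrinsic bonus:
    r_total = r_extrinsic + beta / sqrt(visit_count(state))."""

    def __init__(self, beta=0.1):
        self.counts = Counter()
        self.beta = beta

    def __call__(self, state, extrinsic):
        self.counts[state] += 1
        intrinsic = 1.0 / self.counts[state] ** 0.5
        return extrinsic + self.beta * intrinsic
```

The decaying bonus pushes the policy toward unexplored outputs early on, which is the mechanism credited with preserving diversity while the extrinsic preference reward maintains alignment.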