Reviewer 1: Unclear about the evaluation for outer iterations; Does the number of aggregated tasks affect

Neural Information Processing Systems

Yes, the total complexity is proportional to the number of aggregated tasks. Add experiments to compare ANIL and MAML w.r.t. the size B of samples: Why is the sample size in the inner loop not taken into the analysis, as Fallah et al. [4] does: This setting has also been considered in Rajeswaran et al. [24] and Ji et al. [13]. Reviewer 2: Dependence on κ. iMAML depends on κ, in contrast to poly(κ) in this work: Add an experiment to verify the tightness: Great point! We will definitely add such an experiment in the revision. We will clarify it in the revision.



07211688a0869d995947a8fb11b215d6-AuthorFeedback.pdf

Neural Information Processing Systems

We thank all the anonymous reviewers for their constructive feedback. We address each comment as follows. R1-Q2: Just using the predicted mask to concat. R1-Q3: Refine the predicted mask with CRF. SEAM shows that CRF (vs. CONTA) is only effective in the first round, i.


0561bc7ecba98e39ca7994f93311ba23-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their thoughtful feedback. "Researchers working on pairwise comparisons and preference learning should find this paper to be interesting and…" Furthermore, we note that we plan to make our code available as soon as the review period concludes. In our derivation, we pose the problem in a noiseless environment only for simplicity. For similar reasons, we also did not compare our method against algorithms utilizing different models of preference. As with any recommender system, practical considerations are important.


HalluClean: A Unified Framework to Combat Hallucinations in LLMs

Zhao, Yaxin, Zhang, Yu

arXiv.org Artificial Intelligence

Large language models (LLMs) have achieved impressive performance across a wide range of natural language processing tasks, yet they often produce hallucinated content that undermines factual reliability. To address this challenge, we introduce HalluClean, a lightweight and task-agnostic framework for detecting and correcting hallucinations in LLM-generated text. HalluClean adopts a reasoning-enhanced paradigm, explicitly decomposing the process into planning, execution, and revision stages to identify and refine unsupported claims. It employs minimal task-routing prompts to enable zero-shot generalization across diverse domains, without relying on external knowledge sources or supervised detectors. We conduct extensive evaluations on five representative tasks: question answering, dialogue, summarization, math word problems, and contradiction detection. Experimental results show that HalluClean significantly improves factual consistency and outperforms competitive baselines, demonstrating its potential to enhance the trustworthiness of LLM outputs in real-world applications.
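The plan/execute/revise decomposition described in the abstract can be sketched as a three-call pipeline over a generic LLM interface. This is a minimal illustration, not HalluClean's actual implementation: the function name `halluclean_sketch`, the prompt wording, and the `stub_llm` stand-in are all assumptions for demonstration purposes.

```python
from typing import Callable


def halluclean_sketch(text: str, llm: Callable[[str], str]) -> str:
    """Hypothetical plan -> execute -> revise loop over an LLM callable.

    Each stage is one prompt to the model; prompt templates here are
    illustrative placeholders, not the paper's actual prompts.
    """
    # Planning: identify which claims in the text need verification.
    plan = llm(f"PLAN: list the claims in this text that need checking:\n{text}")
    # Execution: check each planned claim for support.
    verdicts = llm(f"EXECUTE: assess the support for each claim:\n{plan}")
    # Revision: rewrite the text, refining or removing unsupported claims.
    revised = llm(
        f"REVISE: rewrite the text using these findings:\n{text}\n{verdicts}"
    )
    return revised


# Usage with a trivial stub LLM that just echoes which stage it ran,
# so the control flow can be exercised without any model API.
def stub_llm(prompt: str) -> str:
    stage = prompt.split(":", 1)[0]
    return f"[{stage} output]"


print(halluclean_sketch("The Eiffel Tower is in Berlin.", stub_llm))
# → [REVISE output]
```

In a real deployment the `llm` callable would wrap an actual model API, and the zero-shot task-routing the abstract mentions would select different prompt templates per task type (QA, dialogue, summarization, etc.).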