Reviewer 1: Unclear about the evaluation for outer iterations; does the number of aggregated tasks affect it? Yes, the total complexity is proportional to the number of aggregated tasks. Add experiments to compare ANIL and MAML w.r.t. the sample size B; and why is the inner-loop sample size not taken into the analysis, as Fallah et al. [4] does? This setting has also been considered in Rajeswaran et al. [24] and Ji et al. [13].

Reviewer 2: Dependence on κ: iMAML depends on κ, in contrast to the poly(κ) dependence of this work; add an experiment to verify the tightness. Great point! We will definitely add such an experiment in the revision. We will clarify it in the revision.
0561bc7ecba98e39ca7994f93311ba23-AuthorFeedback.pdf
We thank the reviewers for their thoughtful feedback, including the comment that "researchers working on pairwise comparisons and preference learning should find this paper to be interesting and …". Furthermore, we note that we plan to make our code available as soon as the review period concludes. In our derivation, we pose the problem in a noiseless environment only for simplicity. For similar reasons, we also did not compare our method against algorithms utilizing different models of preference. As with any recommender system, practical considerations are important.
HalluClean: A Unified Framework to Combat Hallucinations in LLMs
Large language models (LLMs) have achieved impressive performance across a wide range of natural language processing tasks, yet they often produce hallucinated content that undermines factual reliability. To address this challenge, we introduce HalluClean, a lightweight and task-agnostic framework for detecting and correcting hallucinations in LLM-generated text. HalluClean adopts a reasoning-enhanced paradigm, explicitly decomposing the process into planning, execution, and revision stages to identify and refine unsupported claims. It employs minimal task-routing prompts to enable zero-shot generalization across diverse domains, without relying on external knowledge sources or supervised detectors. We conduct extensive evaluations on five representative tasks: question answering, dialogue, summarization, math word problems, and contradiction detection. Experimental results show that HalluClean significantly improves factual consistency and outperforms competitive baselines, demonstrating its potential to enhance the trustworthiness of LLM outputs in real-world applications.
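The planning, execution, and revision stages described above can be sketched as a simple pipeline. This is an illustrative stand-in, not the authors' implementation: the function names are hypothetical, and a toy word-overlap check substitutes for the LLM calls a real system would make at each stage.

```python
# Hypothetical sketch of a plan -> execute -> revise loop, in the spirit of
# HalluClean's three-stage paradigm. In the real framework each stage would
# be an LLM prompt; here simple string heuristics stand in for those calls.

def plan(text):
    """Planning stage: split the draft into individually checkable claims."""
    return [s.strip() for s in text.split(".") if s.strip()]

def execute(claim, evidence):
    """Execution stage (toy verifier): a claim counts as supported only if
    some single evidence sentence contains every word of the claim."""
    claim_words = set(claim.lower().split())
    return any(claim_words <= set(e.lower().split()) for e in evidence)

def revise(text, evidence):
    """Revision stage: keep supported claims, drop the unsupported ones."""
    kept = [c for c in plan(text) if execute(c, evidence)]
    return ". ".join(kept) + ("." if kept else "")

draft = "Paris is the capital of France. The Eiffel Tower is in Berlin"
evidence = [
    "Paris is the capital city of France",
    "The Eiffel Tower stands in Paris",
]
print(revise(draft, evidence))  # -> "Paris is the capital of France."
```

The stage boundaries mirror the abstract's decomposition: planning isolates claims, execution verifies each one, and revision rewrites the draft using only what survives verification.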