pseudolabel



Lifting Weak Supervision To Structured Prediction

Neural Information Processing Systems

For labels taking values in a finite metric space, we introduce techniques new to weak supervision based on pseudo-Euclidean embeddings and tensor decompositions, providing a nearly-consistent noise rate estimator.





Lifting Weak Supervision To Structured Prediction

Neural Information Processing Systems

Weak supervision (WS) is a rich set of techniques that produce pseudolabels by aggregating easily obtained but potentially noisy label estimates from various sources. WS is theoretically well-understood for binary classification, where simple approaches enable consistent estimation of pseudolabel noise rates. Using this result, it has been shown that downstream models trained on the pseudolabels have generalization guarantees nearly identical to those trained on clean labels. While this is exciting, users often wish to use WS for \emph{structured prediction}, where the output space consists of more than a binary or multi-class label set: e.g.
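The core WS idea described above, producing pseudolabels by aggregating noisy label estimates from multiple sources, can be illustrated with a minimal sketch. This is not the paper's noise-rate estimator; it is just a plain majority-vote aggregator over binary weak labels, with `majority_vote` and the abstention convention being illustrative assumptions.

```python
import numpy as np

def majority_vote(votes):
    """Aggregate noisy weak labels (rows = examples, cols = sources).

    votes: array of {-1, +1} weak labels; 0 means the source abstained.
    Returns a pseudolabel in {-1, +1} per example (ties broken toward +1).
    """
    votes = np.asarray(votes)
    scores = votes.sum(axis=1)        # net vote per example
    return np.where(scores >= 0, 1, -1)

# Three weak sources voting on four examples; source 2 and 3 abstain on example 3.
votes = [[ 1,  1, -1],
         [-1, -1,  1],
         [ 1,  0,  0],
         [-1, -1, -1]]
pseudolabels = majority_vote(votes)
print(pseudolabels)  # aggregated pseudolabels, one per example
```

A downstream model would then be trained on `pseudolabels` in place of clean labels; the point of the paper's theory is bounding how much this substitution costs.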




Unsupervised Hallucination Detection by Inspecting Reasoning Processes

Srey, Ponhvoan, Wu, Xiaobao, Luu, Anh Tuan

arXiv.org Artificial Intelligence

Unsupervised hallucination detection aims to identify hallucinated content generated by large language models (LLMs) without relying on labeled data. While unsupervised methods have gained popularity by eliminating labor-intensive human annotations, they frequently rely on proxy signals unrelated to factual correctness. This misalignment biases detection probes toward superficial or non-truth-related aspects, limiting generalizability across datasets and scenarios. To overcome these limitations, we propose IRIS, an unsupervised hallucination detection framework that leverages internal representations intrinsic to factual correctness. IRIS prompts the LLM to carefully verify the truthfulness of a given statement, and obtains its contextualized embedding as informative features for training. Meanwhile, the uncertainty of each response is treated as a soft pseudolabel for truthfulness. Experimental results demonstrate that IRIS consistently outperforms existing unsupervised methods. Our approach is fully unsupervised, computationally inexpensive, and works well even with little training data, making it suitable for real-time detection.
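The training recipe sketched in the abstract, fit a probe on verification-prompt embeddings using per-response uncertainty as a soft pseudolabel, can be mocked end to end without an LLM. Everything here is an assumption for illustration: the synthetic `embeddings` stand in for contextualized embeddings of a verification prompt, `soft_labels` for response uncertainty, and the probe is a plain logistic regression trained by gradient descent, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for IRIS inputs: embeddings of "verify this statement" prompts,
# and an uncertainty score per response used as a soft truthfulness label.
embeddings = rng.normal(size=(200, 16))
true_w = rng.normal(size=16)
soft_labels = 1.0 / (1.0 + np.exp(-embeddings @ true_w))

# Train a logistic probe against the soft targets (cross-entropy gradient).
w = np.zeros(16)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-embeddings @ w))
    w -= 0.1 * embeddings.T @ (p - soft_labels) / len(embeddings)

# Agreement between the probe's hard decisions and the soft pseudolabels.
p = 1.0 / (1.0 + np.exp(-embeddings @ w))
agreement = float(np.mean((p > 0.5) == (soft_labels > 0.5)))
print(agreement)
```

Because the soft labels here are generated from a linear model of the embeddings, the probe recovers them almost perfectly; with real LLM embeddings the interesting question is how well this agreement transfers to actual factual correctness.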