Hallucination


The Bloomberg Terminal Is Getting an AI Makeover, Like It or Not

WIRED

WIRED spoke with Bloomberg's chief technology officer about the big, chatbot-style changes coming to the iconic platform for traders. For all its famous intractability, the Bloomberg Terminal has long inspired devotion bordering on obsession. Among traders, the ability to chart a path through the software's dizzying scrolls of numbers and text to isolate far-flung information is the mark of a seasoned professional. But as a greater mass of data is fed into the Terminal--not only earnings and asset prices, but weather forecasts, shipping logs, factory locations, consumer spending patterns, private loans, and so on--valuable information is being lost. "It has become more and more untenable," says Shawn Edwards, chief technology officer at Bloomberg.


Generative Score Inference for Multimodal Data

Tian, Xinyu, Shen, Xiaotong

arXiv.org Machine Learning

Accurate uncertainty quantification is crucial for making reliable decisions in various supervised learning scenarios, particularly when dealing with complex, multimodal data such as images and text. Current approaches often face notable limitations, including rigid assumptions and limited generalizability, constraining their effectiveness across diverse supervised learning tasks. To overcome these limitations, we introduce Generative Score Inference (GSI), a flexible inference framework capable of constructing statistically valid and informative prediction and confidence sets across a wide range of multimodal learning problems. GSI utilizes synthetic samples generated by deep generative models to approximate conditional score distributions, facilitating precise uncertainty quantification without imposing restrictive assumptions about the data or tasks. We empirically validate GSI's capabilities through two representative scenarios: hallucination detection in large language models and uncertainty estimation in image captioning. Our method achieves state-of-the-art performance in hallucination detection and robust predictive uncertainty in image captioning, and its performance is positively influenced by the quality of the underlying generative model. These findings underscore the potential of GSI as a versatile inference framework, significantly enhancing uncertainty quantification and trustworthiness in multimodal learning.
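The core idea in the abstract above — drawing synthetic samples from a generative model to approximate a conditional score distribution, then reading prediction sets off its empirical quantiles — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the Gaussian sampler stands in for a deep generative model, and `sample_scores` and `prediction_interval` are hypothetical names chosen for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scores(x, n_samples=500):
    # Hypothetical stand-in for a deep generative model: draw synthetic
    # scores conditioned on input x. Here the conditional score
    # distribution is simply Gaussian noise around x; in GSI it would be
    # approximated from samples produced by a trained generative model.
    return rng.normal(loc=float(x), scale=1.0, size=n_samples)

def prediction_interval(x, alpha=0.1):
    # Build a (1 - alpha) prediction set from the empirical quantiles of
    # the synthetic score distribution -- no parametric assumptions on
    # the data beyond the quality of the sampler.
    scores = sample_scores(x)
    lo, hi = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = prediction_interval(2.0, alpha=0.1)
print(lo < 2.0 < hi)
```

Note how the interval's validity depends entirely on how faithfully the sampler matches the true conditional distribution, which mirrors the abstract's observation that GSI's performance is positively influenced by the quality of the underlying generative model.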


Senior European journalist suspended over AI-generated quotes

The Guardian

Peter Vandermeersch admitted using AI to "wrongly put words into people's mouths". Mediahuis suspends Peter Vandermeersch, who says he "fell into the trap of hallucinations", after an investigation by a newspaper where he was once editor-in-chief. The publisher of the Dutch newspaper De Telegraaf and the Irish Independent has suspended one of its senior journalists after he admitted using AI to "wrongly put words into people's mouths". Peter Vandermeersch, the former head of the Irish operations at Mediahuis, said he "fell into the trap of hallucinations" - the term for AI-generated errors - when using the technology. Vandermeersch, a fellow of "journalism and society" at the European publishing group, has been suspended from his role.



Automated Multi-level Preference for MLLMs

Neural Information Processing Systems

However, is a single comparison between superior and inferior responses sufficient for preference learning in MLLMs? Upon consideration, we find that a multi-level preference framework offers greater benefits for preference learning, primarily due to two main intuitive advantages.
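One intuitive advantage of a multi-level ranking over a single superior/inferior pair is that it expands into many more pairwise training comparisons. The sketch below illustrates that expansion only; the response texts and the function name `multilevel_pairs` are illustrative placeholders, not material from the paper.

```python
from itertools import combinations

# Responses to the same prompt, ranked from best (index 0) to worst.
ranked_responses = [
    "The image shows a dog running on a beach.",   # accurate and complete
    "The image shows a dog.",                      # accurate but incomplete
    "The image shows a cat in a kitchen.",         # hallucinated content
]

def multilevel_pairs(responses):
    """Treat every higher-ranked response as preferred over every
    lower-ranked one, so k levels yield k*(k-1)/2 preference pairs
    instead of the single pair a two-level scheme provides."""
    return [(responses[i], responses[j])
            for i, j in combinations(range(len(responses)), 2)]

pairs = multilevel_pairs(ranked_responses)
print(len(pairs))  # 3 levels -> 3 pairs
```

With three quality levels the model sees three ordered comparisons rather than one, including the finer-grained contrast between a complete answer and an incomplete-but-truthful one.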