OpenAI is bringing ads to ChatGPT

Engadget

Free and Go tier users in the US will start seeing sponsored content soon. OpenAI plans to start testing ads inside ChatGPT in the coming weeks. In a blog post published Friday, the company said adult users in the US on its free and Go tiers (more on the latter in a moment) would start seeing sponsored products and services appear below their conversations with its chatbot. Ads will be clearly labeled and separated from the organic answer, OpenAI said, adding that any sponsored spots would not influence the answers ChatGPT generates.


Ads Are Coming to ChatGPT. Here's How They'll Work

WIRED

OpenAI plans to start testing ads inside ChatGPT in the coming weeks, marking a significant shift for one of the world's most widely used AI products. The company announced Friday that initial ad tests will roll out in the United States before expanding globally. OpenAI says ads will not influence ChatGPT's responses, that it won't sell user data to advertisers, and that all ads will appear in separate, clearly labeled boxes directly below the chatbot's answer.


Simulating Misinformation Propagation in Social Networks using Large Language Models

Maurya, Raj Gaurav, Shukla, Vaibhav, Dandekar, Raj Abhijit, Dandekar, Rajat, Panat, Sreedath

arXiv.org Artificial Intelligence

Misinformation on social media thrives on surprise, emotion, and identity-driven reasoning, often amplified through human cognitive biases. To investigate these mechanisms, we model large language model (LLM) personas as synthetic agents that mimic user-level biases, ideological alignments, and trust heuristics. Within this setup, we introduce an auditor-node framework to simulate and analyze how misinformation evolves as it circulates through networks of such agents. News articles are propagated across networks of persona-conditioned LLM nodes, each rewriting received content. A question-answering-based auditor then measures factual fidelity at every step, offering interpretable, claim-level tracking of misinformation drift. We formalize a misinformation index and a misinformation propagation rate to quantify factual degradation across homogeneous and heterogeneous branches of up to 30 sequential rewrites. Experiments with 21 personas across 10 domains reveal that identity- and ideology-based personas act as misinformation accelerators, especially in politics, marketing, and technology. By contrast, expert-driven personas preserve factual stability. Controlled-random branch simulations further show that once early distortions emerge, heterogeneous persona interactions rapidly escalate misinformation to propaganda-level distortion. Our taxonomy of misinformation severity, spanning factual errors, lies, and propaganda, connects observed drift to established theories in misinformation studies. These findings demonstrate the dual role of LLMs as both proxies for human-like biases and as auditors capable of tracing information fidelity. The proposed framework provides an interpretable, empirically grounded approach for studying, simulating, and mitigating misinformation diffusion in digital ecosystems.
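The auditor-node loop described in the abstract can be sketched in a few lines. This is a toy illustration only: the stub rewriters below stand in for persona-conditioned LLMs, a substring check stands in for the QA-based auditor, and all names are hypothetical rather than the paper's code.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    question: str
    answer: str

def audit(text: str, claims: list[Claim]) -> float:
    """Misinformation index: fraction of source claims no longer
    recoverable from the rewritten text (substring match stands in
    for the paper's question-answering auditor)."""
    lost = sum(1 for c in claims if c.answer.lower() not in text.lower())
    return lost / len(claims)

def propagate(article: str, rewriters, claims: list[Claim]) -> list[float]:
    """Pass the article through a chain of persona rewriters,
    auditing factual fidelity after each hop."""
    trace, text = [], article
    for rewrite in rewriters:
        text = rewrite(text)
        trace.append(audit(text, claims))
    return trace

# Toy personas: one faithful, one that blurs a factual detail.
faithful = lambda t: t
dropper = lambda t: t.replace("in 2023", "recently")

claims = [Claim("When?", "in 2023"), Claim("Who?", "the city council")]
article = "The city council approved the budget in 2023."
print(propagate(article, [faithful, dropper], claims))  # [0.0, 0.5]
```

The per-hop trace is what lets the framework localize where in a branch the drift first appears.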


AVS: A Computational and Hierarchical Storage System for Autonomous Vehicles

Wang, Yuxin, He, Yuankai, Shi, Weisong

arXiv.org Artificial Intelligence

Autonomous vehicles (AVs) are evolving into mobile computing platforms, equipped with powerful processors and diverse sensors that generate massive volumes of heterogeneous data (for example, 14 TB per day). Supporting emerging third-party applications calls for a general-purpose, queryable onboard storage system. Yet today's data loggers and storage stacks in vehicles fail to deliver efficient data storage and retrieval. This paper presents AVS, an Autonomous Vehicle Storage system that co-designs computation with a hierarchical layout: modality-aware reduction and compression, hot-cold tiering with daily archival, and a lightweight metadata layer for indexing. The design is grounded in system-level benchmarks on AV data that cover SSD and HDD filesystems and embedded indexing, and is validated on embedded hardware with real L4 autonomous driving traces. The prototype delivers predictable real-time ingest, fast selective retrieval, and substantial footprint reduction under modest resource budgets. The work also outlines observations and next steps toward more scalable and longer-running deployments, motivating storage as a first-class component in AV stacks.
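The hot-cold tiering with daily archival mentioned above can be sketched as a simple placement policy. The thresholds, modality names, and function below are hypothetical illustrations, not AVS's actual layout or code; the real system also applies modality-aware reduction before placement.

```python
import time
from dataclasses import dataclass

DAY = 86_400  # seconds

@dataclass
class Record:
    modality: str    # e.g. "lidar", "camera", "gnss"
    created: float   # unix timestamp
    size_bytes: int

def place(rec: Record, now: float) -> str:
    """Route a record between tiers: fresh data stays hot (SSD),
    day-old data is archived cold (HDD); bulky modalities are
    archived sooner to protect the hot tier's capacity."""
    age = now - rec.created
    if rec.modality in ("lidar", "camera") and age > DAY / 2:
        return "cold"
    return "hot" if age <= DAY else "cold"

now = time.time()
fresh_cam = Record("camera", now - 3_600, 4_000_000)
old_gnss = Record("gnss", now - 2 * DAY, 2_000)
print(place(fresh_cam, now), place(old_gnss, now))  # hot cold
```

A metadata layer would then index each record's tier and time range so selective retrieval never scans the cold archive unnecessarily.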


OrdMoE: Preference Alignment via Hierarchical Expert Group Ranking in Multimodal Mixture-of-Experts LLMs

Gao, Yuting, Chen, Weihao, Wang, Lan, Xu, Ruihan, Guo, Qingpei

arXiv.org Artificial Intelligence

Preference learning has recently emerged as a pivotal strategy for post-training alignment of Multimodal Large Language Models (MLLMs). However, existing approaches predominantly rely on external human-annotated preference data, which is costly and labor-intensive to collect. In this work, we propose OrdMoE, a novel preference alignment framework that bypasses the reliance on external human preferences entirely by leveraging intrinsic signals within Mixture-of-Experts (MoE) architectures. Specifically, we observe that the router's expert selection scores implicitly encode a quality-aware ranking of responses (i.e., higher-scoring experts consistently generate higher-quality outputs). Building on this insight, OrdMoE constructs an internal preference hierarchy by grouping experts into ranked tiers based on their per-token routing scores and activating each tier separately to produce a sequence of responses with increasing quality. This yields a zero-cost, self-supervised preference ordering over generated responses, which can be directly optimized using standard preference learning objectives. Extensive experiments across multiple multimodal benchmarks demonstrate that OrdMoE significantly enhances both alignment and overall performance of multimodal Mixture-of-Experts LLMs, achieving competitive results without requiring any human-annotated preference data.
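The expert-tiering step at the heart of OrdMoE can be sketched as follows. All shapes, seeds, and names here are illustrative assumptions; the paper operates on a real multimodal MoE's router, and each tier would actually decode a full response.

```python
import random

random.seed(0)
num_experts, seq_len, num_tiers = 8, 16, 4

# Per-token routing scores: scores[t][e] for token t, expert e
# (random numbers stand in for a trained router's outputs).
scores = [[random.random() for _ in range(num_experts)]
          for _ in range(seq_len)]

# Rank experts by mean routing score across tokens, best first.
mean_score = [sum(col) / seq_len for col in zip(*scores)]
order = sorted(range(num_experts), key=lambda e: -mean_score[e])

# Group ranked experts into tiers. Decoding with each tier separately
# yields responses ordered from highest to lowest expected quality --
# a zero-cost preference ranking usable by DPO-style objectives.
tier_size = num_experts // num_tiers
tiers = [order[i:i + tier_size]
         for i in range(0, num_experts, tier_size)]
for rank, tier in enumerate(tiers):
    print(f"tier {rank}: experts {tier}")
```

The key property is that the ordering comes for free from routing statistics the model already computes, so no human annotator ever ranks the responses.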


The Dynamic Articulatory Model DYNARTmo: Dynamic Movement Generation and Speech Gestures

Kröger, Bernd J.

arXiv.org Artificial Intelligence

The neural generation and control of speech utterances is a complex process that is still not fully understood. However, several neurobiologically inspired models have been proposed that describe the hierarchical control concept of utterance generation (e.g., Hickok and Poeppel (2012); Bohland et al. (2010); Kröger et al. (2020); Parrell et al. (2018)). This process begins with the neural activation of the cognitive-linguistic representation of an utterance, followed by a higher-level premotor representation, leading to neuromuscular activation patterns, and finally to the articulatory-acoustic realization of the utterance.


Should You Cancel Xbox Game Pass? Everything to Know on the Price Hikes and New Features

WIRED

Xbox users in the US face price increases up to 50 percent on their monthly gaming subscription, making it a great time to check if you're on the right tier or if you even need to subscribe at all. Like it or loathe it, we live in a subscription economy. Music, movies, meal boxes, and more are no longer things you buy once. Gaming is no exception, and while every major player in the sector has some form of sub for players--from PlayStation Plus and Nintendo Switch Online for consoles to Apple Arcade on phones--none of them offered quite as much for a modest monthly fee as Xbox Game Pass. Depending on the subscription tier, the service gave players access to a significant library of titles and was available on Xbox consoles, PC, or via cloud gaming.


EvalMORAAL: Interpretable Chain-of-Thought and LLM-as-Judge Evaluation for Moral Alignment in Large Language Models

Mohammadi, Hadi, Giachanou, Anastasia, Bagheri, Ayoub

arXiv.org Artificial Intelligence

We present EvalMORAAL, a transparent chain-of-thought (CoT) framework that uses two scoring methods (log-probabilities and direct ratings) plus a model-as-judge peer review to evaluate moral alignment in 20 large language models. We assess models on the World Values Survey (55 countries, 19 topics) and the PEW Global Attitudes Survey (39 countries, 8 topics). With EvalMORAAL, top models align closely with survey responses (Pearson's r approximately 0.90 on WVS). Yet we find a clear regional difference: Western regions average r=0.82 while non-Western regions average r=0.61 (a 0.21 absolute gap), indicating consistent regional bias. Our framework adds three parts: (1) two scoring methods for all models to enable fair comparison, (2) a structured chain-of-thought protocol with self-consistency checks, and (3) a model-as-judge peer review that flags 348 conflicts using a data-driven threshold. Peer agreement relates to survey alignment (WVS r=0.74, PEW r=0.39, both p<.001), supporting automated quality checks. These results show real progress toward culture-aware AI while highlighting open challenges for use across regions.
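The alignment numbers reported above (e.g., Pearson's r ≈ 0.90 on WVS) boil down to correlating model scores with survey responses per topic. A minimal sketch, with toy made-up data and a hand-rolled Pearson function (the paper's pipeline additionally uses log-probability scoring, CoT self-consistency checks, and a model-as-judge stage):

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy values: survey-derived moral acceptability vs. a model's
# direct ratings for five hypothetical topics.
survey = [0.20, 0.50, 0.90, 0.40, 0.70]
model  = [0.25, 0.45, 0.85, 0.50, 0.65]
print(round(pearson(survey, model), 3))
```

Repeating this per country and averaging by region is what exposes the Western vs. non-Western alignment gap the abstract reports.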



Xbox Game Pass price increase angers players

BBC News

Fans have reacted angrily after Microsoft announced price increases to its Xbox Game Pass subscription service. The company announced that the most popular tier of its Netflix-style video games system - available to PC and Xbox players - would rise by more than 50% from £14.99 to £22.99 per month. Reacting on social media, loads of fans said they had cancelled their Game Pass subscriptions, with some reporting the service's cancellation page had crashed due to demand. BBC Newsbeat has asked Microsoft if the outage was linked to a surge in visits. In a blog post detailing the changes to Game Pass, Microsoft said it would offer three tiers - Essential (£10 per month), Premium (£14.99) and Ultimate (£22.99).