 Generative AI


Semi-crowdsourced Clustering with Deep Generative Models

Neural Information Processing Systems

We consider the semi-supervised clustering problem where crowdsourcing provides noisy information about pairwise comparisons on a small subset of data, i.e., whether a sample pair is in the same cluster. We propose a new approach that includes a deep generative model (DGM) to characterize low-level features of the data, and a statistical relational model for noisy pairwise annotations on its subset. The two parts share the latent variables. To make the model automatically trade off between its complexity and fitting the data, we also develop its fully Bayesian variant. The challenge of inference is addressed by fast (natural-gradient) stochastic variational inference algorithms, where we effectively combine variational message passing for the relational part and amortized learning of the DGM under a unified framework. Empirical results on synthetic and real-world datasets show that our model outperforms previous crowdsourced clustering methods.
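To make the relational part of the abstract concrete, here is a minimal illustrative sketch (not the paper's actual model) of a noisy pairwise-annotation likelihood over shared latent codes: two samples whose latent codes lie close together are modeled as likely to share a cluster, and each crowdsourced annotation is assumed correct only with some accuracy `alpha`. All names and the specific noise model are hypothetical choices for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pairwise_loglik(z, pairs, labels, alpha=0.9, bias=1.0):
    """Log-likelihood of noisy same-cluster annotations.

    z      : (n, d) latent codes (e.g. produced by a DGM encoder)
    pairs  : list of (i, j) annotated index pairs
    labels : noisy annotations, 1 = "same cluster", 0 = "different"
    alpha  : assumed annotator accuracy (hypothetical noise model)
    bias   : distance threshold in latent space (hypothetical)
    """
    ll = 0.0
    for (i, j), y in zip(pairs, labels):
        # Probability that i and j truly share a cluster:
        # closer latent codes -> higher probability.
        p_same = sigmoid(bias - np.sum((z[i] - z[j]) ** 2))
        # Marginalize over annotator noise: the annotation is correct
        # with probability alpha and flipped with probability 1 - alpha.
        if y == 1:
            p_obs = alpha * p_same + (1 - alpha) * (1 - p_same)
        else:
            p_obs = alpha * (1 - p_same) + (1 - alpha) * p_same
        ll += np.log(p_obs)
    return ll
```

In a full model along the lines the abstract describes, a term like this would be added to the DGM's evidence lower bound and optimized jointly, so the annotations shape the latent space while the DGM explains the raw features.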


Where OpenAI's technology could show up in Iran

MIT Technology Review

Three places to watch, from the margins of war to the center of combat. It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI's agreement allows for; Sam Altman said the military can't use his company's technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI's other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. OpenAI is not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it's just about money; OpenAI is spending heavily on AI training and is on the hunt for more revenue (from sources including ads).


Encyclopedia Britannica sues OpenAI for copyright and trademark infringement

Engadget

OpenAI has been hit with another lawsuit. The encyclopedia company's suit says ChatGPT cannibalizes traffic to the Britannica and Merriam-Webster websites, and alleges that ChatGPT generates made-up content, or 'hallucinations,' and falsely attributes it to Encyclopedia Britannica. The lawsuit doesn't specify an amount for monetary damages, but Britannica is also seeking an injunction to prevent OpenAI from repeating these alleged misattributions. When reached for comment, a spokesperson for OpenAI told Engadget that ChatGPT helps enhance human creativity, advance scientific discovery and medical research, and enable hundreds of millions of people to improve their daily lives.


OpenAI's adult mode reportedly won't generate pornographic audio, images or video

Engadget

The company's own council on wellbeing and AI appears to be against the feature. OpenAI's forthcoming adult mode will allow users to engage in lewd conversations with ChatGPT, but not use the chatbot to generate explicit images, audio or video. In response to the reporting, an OpenAI spokesperson characterized the upcoming release as capable of producing smut rather than pornography. OpenAI CEO Sam Altman first floated the idea of allowing people to use ChatGPT for erotica, saying the company wanted to treat adult users like adults. OpenAI originally planned to release adult mode at the start of 2026.


Tech companies are teaming up to combat scammers

Engadget

The Online Services Accord Against Scams was signed by major tech companies including Google, Microsoft and OpenAI. A coalition of Big Tech companies is working on a more comprehensive solution to combat online scams. As first reported, Google, Microsoft, LinkedIn, Meta, Amazon, OpenAI, Adobe and Match Group announced the signing of the Online Services Accord Against Scams. The new agreement is meant to put up a united industry-wide front against online fraud and scams, particularly those from sophisticated criminal networks that use multiple platforms. According to the report, the measures will include adding fraud detection tools, introducing new user security features, and requiring more robust verification for financial transactions.


OpenAI reportedly plans to add Sora video generation to ChatGPT

Engadget

The company launched its Sora 2 model in September 2025 alongside a dedicated Sora app. OpenAI plans to add its Sora video generation model directly into ChatGPT, according to reports. The standalone Sora app was seen as a smash hit when it launched alongside Sora 2 in September 2025, but interest in the video generation app has fallen in the time since, as users ran into limits on the amount and kinds of videos they could create. Adding Sora to ChatGPT could give the model a second life, and ideally grow the ChatGPT app's weekly active users from the 900 million OpenAI reported in February to a billion or more. According to the reporting, the standalone Sora app will stick around after the model is integrated, even though the app has fallen out of the App Store's top 100 free apps and only a small number of users reportedly share their videos publicly in the app.


OpenAI's Sora AI video generator is coming to ChatGPT soon

PCWorld

PCWorld reports that OpenAI plans to integrate its Sora video generator directly into ChatGPT, making AI video creation more accessible to users. This integration could lead to changes in ChatGPT's subscription plans and pricing structure due to the high costs of running video-based generative AI.


The Download: how AI is used for military targeting, and the Pentagon's war on Claude

MIT Technology Review

Plus: an ex-DOGE staffer has been accused of stealing social security data. The US military might use generative AI systems to rank targets and recommend which to strike first, according to a Defense Department official. A list of possible targets could first be fed into a generative AI system that the Pentagon is fielding for classified settings. Humans might then ask the system to analyze the information and prioritize the targets. They would then be responsible for checking and evaluating the results and recommendations. OpenAI's ChatGPT and xAI's Grok could soon be at the center of exactly these sorts of high-stakes military decisions.


Top AI ethics and policy issues of 2025 and what to expect in 2026

AIHub

This feature highlights the major AI ethics and policy developments of 2025, a year in which generative and agentic systems became essential in key sectors worldwide, and concludes with a forward-looking perspective on the ethical and policy challenges likely to shape 2026.


A defense official reveals how AI chatbots could be used for targeting decisions

MIT Technology Review

Though the US military's big data initiative Maven has sped up the planning of strikes for years, the comments suggest that generative AI is now adding a new interpretative layer to such deliberations. The US military might use generative AI systems to rank lists of targets and make recommendations, which would be vetted by humans, about which to strike first, according to a Defense Department official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating. A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations.