

I Used to Love Turning to My Dad for Advice. Guess Who He Wants Me to Ask Now.

Slate

My Dad Used to Have All the Answers. It feels like he adopted a robot child, and that child will stop at nothing to wedge a divide between us. Like many twentysomethings, I ask my dad a lot of questions. How do I fix the leak under my sink? What does "federal withholding" mean?


OpenAI faces criminal probe over role of ChatGPT in shooting

BBC News

OpenAI is facing a criminal investigation in the US over whether its ChatGPT technology played a part in the murder of two people during a mass shooting at Florida State University last year. Florida Attorney General James Uthmeier said on Tuesday his office had been looking into the use of the artificial intelligence (AI) chatbot by a man who allegedly shot several people at the campus in Tallahassee. "Our review has revealed that a criminal investigation is necessary," Uthmeier said. "ChatGPT offered significant advice to this shooter before he committed such heinous crimes." An OpenAI spokesperson said: "ChatGPT is not responsible for this terrible crime."


Florida AG opens criminal investigation into OpenAI and ChatGPT

Engadget

ChatGPT has been connected to at least two mass shootings in the last year. Florida Attorney General James Uthmeier has announced that the state's Office of Statewide Prosecution has opened a criminal investigation into OpenAI and ChatGPT. The investigation was opened because the suspect in a 2025 mass shooting at Florida State University reportedly used ChatGPT in the lead-up to the attack. Per Uthmeier, Florida law states that anyone who aids, abets, or counsels someone in the commission of a crime, when that crime is committed or attempted, may be considered a principal to the crime. That means the responses ChatGPT provided to the shooter could be interpreted as the AI assistant aiding and abetting his actions.


OpenAI Beefs Up ChatGPT's Image Generation Model

WIRED

The ChatGPT Images 2.0 model is here. Our testing shows it produces more detailed images and renders text more accurately, but it still struggles with languages other than English. OpenAI launched a new image-generation AI model on Tuesday, dubbed ChatGPT Images 2.0. The model can generate more than one image from a single prompt, such as an entire study booklet, and can output text, including in non-English languages like Chinese and Hindi. The release is available globally to ChatGPT and Codex users, with a more powerful version available to paying subscribers.


ChatGPT Images 2.0 is better at rendering non-Latin text

Engadget

OpenAI's new ChatGPT Images 2.0 model is now available. A little more than a year after OpenAI gave ChatGPT users the option to create images and designs directly from its chatbot, it is releasing ChatGPT Images 2.0, which it describes as a "step change" for image generation models, particularly in the tool's ability to follow instructions in detail, render dense text, and place and relate objects in a scene. For the first time, OpenAI has also built an image model with reasoning capabilities, letting the system do things like search the web and verify its outputs.


Mozilla Used Anthropic's Mythos to Find and Fix 271 Bugs in Firefox

WIRED

The Firefox team doesn't think emerging AI capabilities will upend cybersecurity in the long term, but it warns that software developers are likely in for a rocky transition. Amid a raging debate over the impact new AI models will have on cybersecurity, Mozilla said on Tuesday that this week's Firefox 150 release includes fixes for 271 vulnerabilities identified using early access to Anthropic's Mythos Preview. The Firefox team says it has taken resources and discipline to adjust to the firehose of bugs that new AI tools can uncover, but that this big lift is necessary for the security of Mozilla's users, given that the capabilities will inevitably be in attackers' hands soon. Both Anthropic and OpenAI have announced new AI models in recent weeks that, the companies say, have advanced cybersecurity capabilities that could represent a turning point in how defenders, and crucially attackers, find vulnerabilities and misconfigurations in software systems. With this in mind, the companies have so far done only limited private releases of their new models, and both have convened industry working groups to assess the advances and strategize.



Tim Cook's Legacy Is Turning Apple Into a Subscription

WIRED

The soon-to-exit Apple CEO went all in on services. Now the incoming CEO, John Ternus, will need to embrace the AI era. Tim Cook's tenure as Apple CEO, which comes to a close September 1, will likely be defined by operational efficiency and financial growth, ushering Apple into its trillion-dollar era. But his most significant achievement might be doubling down on Apple's services business, which includes iCloud, the App Store, Apple Music, Apple TV+, News+, and more. It's the subscription layer on top of iOS, and almost all of the service apps are tightly integrated with Messages, the glue that keeps people stuck to their iPhones.


Flat-rate AI plans are broken. Blame AI agents

PCWorld

PCWorld reports that major AI providers including Anthropic, Google, OpenAI, and GitHub are adjusting flat-rate subscription plans due to increased demand from agentic AI tools. Advanced AI agents like Google Antigravity and GitHub Copilot consume significantly more computational resources than traditional AI interactions, causing users to hit usage limits more frequently. The shift toward agentic workflows is forcing providers to introduce higher-tier plans, halt new sign-ups, and transition to usage-based models, fundamentally changing AI service accessibility. Remember when a $20-a-month "Pro" or "Plus" AI plan served up more AI access than you could possibly use? Ah, those were the days.


Emergence of fragility in LLM-based social networks: an interview with Francesco Bertolotti

AIHub

What is the topic of the research in your paper? In our paper, we study how social structures emerge when the "individuals" in a network are artificial agents powered by large language models. To do so, we analyzed a platform called Moltbook, a social network entirely populated by AI agents, specifically LLM-based agents, that interact with each other through posts and comments. This social network creates an unusual but powerful setting: instead of observing human behavior, we can study a brand-new society made only of artificial entities and observe whether it organizes itself in similar ways. To understand the structure of interactions in this system, we modelled the platform as a network, where each agent is a node and each interaction is a connection between them.
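The modelling step described above, where each agent becomes a node and each interaction an edge, can be sketched in a few lines. The interaction log and agent names below are purely hypothetical illustrations, not the paper's actual data or tooling:

```python
from collections import defaultdict

# Hypothetical interaction log: (author, replied_to) pairs, e.g. one
# LLM agent commenting on another agent's post. Names are made up.
interactions = [
    ("agent_a", "agent_b"),
    ("agent_b", "agent_c"),
    ("agent_a", "agent_c"),
    ("agent_a", "agent_d"),
]

def build_network(interactions):
    """Each agent becomes a node; each interaction becomes an edge."""
    adjacency = defaultdict(set)
    for src, dst in interactions:
        adjacency[src].add(dst)
        adjacency[dst].add(src)  # treat an interaction as an undirected tie
    return adjacency

network = build_network(interactions)

# Degree (number of distinct interaction partners) per agent, a basic
# structural measure one might compute on such a network.
degree = {agent: len(neighbors) for agent, neighbors in network.items()}
print(degree)  # {'agent_a': 3, 'agent_b': 2, 'agent_c': 2, 'agent_d': 1}
```

From a graph like this one can then compute standard network statistics (degree distributions, clustering, connectivity) to ask whether the agent society organizes itself the way human social networks do.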