'We May Have a Crisis on Our Hands': The Unregulated Rise of Emotionally Intelligent AI

TIME - Tech

Pillay is an editorial fellow at TIME. At least once a month, two-thirds of people who regularly use AI turn to their bots for advice on sensitive personal issues and emotional support. Many people now report trusting their chatbots more than their elected representatives, civil servants, faith leaders, and the companies building AI. That's according to data from 70 countries, gathered by the Collective Intelligence Project (CIP).


Mark Zuckerberg was initially opposed to parental controls for AI chatbots, according to legal filing

Engadget

Despite not wanting minors to have explicit conversations, Meta's CEO allegedly rejected this particular safety measure. Meta has faced serious questions about how it allows its underage users to interact with AI-powered chatbots. Most recently, internal communications obtained by the New Mexico Attorney General's Office revealed that although Meta CEO Mark Zuckerberg was opposed to the chatbots having explicit conversations with minors, he also rejected the idea of placing parental controls on the feature. In a statement, Meta accused the New Mexico Attorney General of cherry-picking documents to paint a flawed and inaccurate picture.


5 Things to Know Before Using an AI Browser

TIME - Tech

A smartphone shows the official website of ChatGPT Atlas. "It'd be really nice to have a service that was sort of just observing your life and proactively helping you when you needed it," said OpenAI CEO Sam Altman in a recent Q&A about OpenAI's plans. This vision is at the heart of a new crop of AI browsers, notably OpenAI's ChatGPT Atlas and Perplexity's Comet. AI browsers differ from traditional browsers in at least two important ways.


Why Character.AI's CEO Still Lets His 6-Year-Old Daughter Use the App

TIME - Tech

Welcome back to TIME's new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? Character.AI, the chatbot platform that allows users to chat with AIs personifying fictional characters, is the target of several lawsuits, including one from Megan Garcia, a mother whose 14-year-old son died by suicide after becoming obsessed with one of the bots, which allegedly encouraged him to end his own life. In the wake of that lawsuit and others, last month Character.AI made a big announcement: it would ban users under 18 years old from having "open-ended conversations" with the chatbots on its platform. It was a huge pivot for a company that says Generations Z and Alpha make up the core of its more than 6 million daily active users, who spend an average of 70 to 80 minutes per day on the platform.


My chilling week on Roblox: sexually assaulted and shat on as a child avatar roaming the online world

The Guardian

Sarah Martin investigates the virtual world of the children's online game Roblox using the profile of an eight-year-old girl with parental control settings turned on. In seven days my young alter ego is cyberbullied and attacked while exploring clubs, casinos and horror games, all with parental controls in place. Is the platform safe for children, or an 'X-rated paedophile hellscape'? Wed 5 Nov 2025. I am an eight-year-old girl, standing near-naked in a room full of strangers. As the room spins and zooms in on me and people glide around me, I clock my features.


OpenAI Completes Major Reorganization With $135 Billion Microsoft Stake

TIME - Tech

An illustration photo shows the OpenAI logo displayed on a smartphone with the Microsoft logo in the background in Chongqing, China on Aug. 27, 2025. OpenAI has completed a restructuring, dividing itself into a nonprofit and a for-profit entity, the company announced on Tuesday. The nonprofit arm, now called the OpenAI Foundation, will have a $130 billion stake in the for-profit enterprise, a public benefit corporation called OpenAI Group PBC. "The OpenAI Foundation and OpenAI Group will work in concert to advance solutions to hard problems and opportunities posed by AI progress," the company said in its blog post announcing the restructuring. "This includes making intelligence a tool that everyone can benefit from, building safe and aligned systems, turbocharging scientific discovery, and strengthening global cooperation and resilience."


OpenAI Removed Safeguards Before Teen's Suicide, Amended Lawsuit Claims

TIME - Tech

OpenAI relaxed safeguards that would have prevented ChatGPT from engaging in conversations about self-harm in the months leading up to the suicide of Adam Raine, an amended complaint filed by the family in the San Francisco County Superior Court on Wednesday alleges. According to the family's lawyers, the amendment changes the theory of the case from reckless indifference to intentional misconduct, which could raise the damages awarded to the family. The Raine family's lawyers will have to prove that OpenAI was aware of the risks posed by ChatGPT and disregarded them. The family has asked for a jury trial. In an interview with TIME, Jay Edelson, one of the Raine family's lawyers, says OpenAI relaxed safeguards in an "intentional decision" to "prioritize engagement."


Chatbots Are Becoming More Sexually Explicit in a Bid to Attract Usership and Paying Customers

TIME - Tech

The eighteen-plus symbol (18+) appears on a smartphone screen, and the OpenAI logo displays as the background on a laptop screen in this photo illustration in Athens, Greece, on October 16, 2025. In August, OpenAI CEO Sam Altman said on a podcast that he was "proud" that his company had not gotten "distracted" by putting features like a "sexbot avatar" into ChatGPT. But on Tuesday, he announced that adult users will be able to access explicit interactive experiences, marking a major shift in the company's practices. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman said in a post on X.


ChatGPT starts rolling out parental controls to protect teenage users

PCWorld

Privacy is preserved, as parents can't see the chat histories of their children's linked accounts. Earlier this month, OpenAI said it would be introducing parental controls for ChatGPT following an incident and lawsuit involving a teenager who allegedly used ChatGPT to plan and carry out his own suicide. That day is now here, with OpenAI rolling out ChatGPT parental controls. The feature allows parents to link their ChatGPT accounts with their child's account and customize ChatGPT's settings to create a safer, more age-appropriate experience for underage users.


The Download: AI to detect child abuse images, and what to expect from our 2025 Climate Tech Companies to Watch list

MIT Technology Review

Plus: OpenAI's parental controls have come into force. Generative AI has enabled the production of child sexual abuse images to skyrocket. Now the leading investigator of child exploitation in the US is experimenting with using AI to distinguish AI-generated images from material depicting real victims, according to a new government filing. The Department of Homeland Security's Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco-based Hive AI for its software, which can identify whether a piece of content was AI-generated. The need to cut emissions and adapt to our warming world is growing more urgent. This year, we've seen temperatures reach record highs, as they have nearly every year for the last decade. Climate-fueled natural disasters are affecting communities around the world, costing billions of dollars.