

Roblox introduces age verification for teens

Mashable

Roblox is a popular digital space for kids and teens to congregate while playing their favorite video game, or in the platform's parlance, experience. Now teens ages 13 to 17 who want to access a special feature designed to make those hangouts even more fun will have to verify their age via a video selfie. Roblox announced the new requirement Thursday as part of a slate of safety and privacy measures. Once Roblox estimates the user's age -- via the AI-powered age verification product Persona -- and assigns a qualifying age group to their account, it allows them to take advantage of the new feature, called "Trusted Connections." Teen users can add each other as Trusted Connections, which allows them to communicate via voice and chat without filters.


AI chatbot 'MechaHitler' could be making content considered violent extremism, expert witness tells X v eSafety case

The Guardian

The chatbot embedded in Elon Musk's X that referred to itself as "MechaHitler" and made antisemitic comments last week could be considered terrorism or violent extremism content, an Australian tribunal has heard. But an expert witness for X has argued a large language model cannot be ascribed intent, only the user. The outburst came into focus at an administrative review tribunal hearing on Tuesday where X is challenging a notice issued by the eSafety commissioner, Julie Inman Grant, in March last year asking the platform to explain how it is taking action against terrorism and violent extremism (TVE) material. X's expert witness, RMIT economics professor Chris Berg, provided evidence to the case that it was an error to assume a large language model can produce such content, because it is the intent of the user prompting the large language model that is critical in defining what can be considered terrorism and violent extremism content. One of eSafety's expert witnesses, Queensland University of Technology law professor Nicolas Suzor, disagreed with Berg, stating it was "absolutely possible for chatbots, generative AI and other tools to have some role in producing so-called synthetic TVE".


Judges Don't Know What AI's Book Piracy Means

The Atlantic - Technology

More than 40 lawsuits have been filed against AI companies since 2022. Late last month, there were rulings on two of these cases, first in a lawsuit against Anthropic and, two days later, in one against Meta. Both of the cases were brought by book authors who alleged that AI companies had trained large language models using authors' work without consent or compensation. In each case, the judges decided that the tech companies were engaged in "fair use" when they trained their models with authors' books. Both judges said that the use of these books was "transformative"--that training an LLM resulted in a fundamentally different product that does not directly compete with those books.


OpenAI is about to launch a web browser, report claims

Mashable

OpenAI, the maker of AI assistant ChatGPT, is about to launch a web browser. That's according to a new report from Reuters, which claims the company is very close to being ready, with the launch planned "in the coming weeks." The browser, which is unnamed in the report, will rely heavily on AI and will have a native chat interface for instant access to ChatGPT. OpenAI already has a search product called ChatGPT Search, which can be installed as a Chrome extension; it's likely to be integrated with the upcoming web browser as well.


xAI launches Grok 4, right after the AI chatbot spewed hate speech

Mashable

Elon Musk's AI company xAI has launched the new version of its AI assistant, Grok. The launch comes almost immediately after Grok went on an antisemitic tirade on X, spewing hate speech and praising Hitler. But forget about all that, despite the fact that it literally happened days ago (that, we presume, is xAI's reasoning). The new Grok, version 4, is "the world's most powerful AI model," according to xAI. In a livestream published late on Wednesday, xAI CEO Elon Musk praised Grok 4 for being smarter than "almost all graduate students, in all disciplines, simultaneously," though he did note that sometimes it "may lack common sense."


MyPillow CEO's lawyers fined for AI-generated court filing

Mashable

Lawyers for MyPillow CEO and election conspiracy theorist Mike Lindell have been fined after submitting a legal brief filled with AI-generated errors. It's yet another reminder that as exciting as AI technology may seem, it's still no substitute for actually putting in the work yourself. Colorado district court judge Nina Wang issued the penalties on Monday, finding that attorneys Christopher Kachouroff and Jennifer DeMaster of law firm McSweeney Cynkar and Kachouroff had violated federal civil procedure rules. Specifically, Wang found that the lawyers "were not reasonable in certifying that the claims, defenses, and other legal contentions contained in [the AI brief] were warranted by existing law." As such, Kachouroff and his firm have been fined $3,000, with another $3,000 fine issued to DeMaster.


OpenAI tests ChatGPT 'study together' feature

Mashable

OpenAI might be launching a studying tool for ChatGPT, according to eagle-eyed users. As reported by TechCrunch, ChatGPT users have noticed a new option in the tools dropdown menu called "Study together." Users who appear to have access to the tool have shared screenshots of a conversation that guides them through a prompt, such as solving a mathematical equation or learning a new concept. The tool appears to use the Socratic method, in which a teacher helps a student learn by asking questions about the problem and steering them toward the correct answer. If the tool gets a widespread release, it could become a popular study buddy for users who already rely on ChatGPT for many tasks.


Not Even Lawsuits Can Stop AI

Slate

Candice Lim and Kate Lindsay are joined by Slate senior tech editor Tony Ho Tran to parse through what Meta's victory in a recent AI lawsuit means for its users. Tools like ChatGPT are becoming more common at home and at work, but without protections, could threaten not just the creativity of artists, but anyone who posts online. As regulation lags behind, how can we protect ourselves? And how many of us are using AI without even knowing it? This podcast is produced by Daisy Rosario, Vic Whitley-Berry, Candice Lim, and Kate Lindsay.


Perplexity adds a Max tier just as expensive as its rivals

Mashable

Perplexity has added another subscription tier, one that the company calls its "most powerful." And it's got a price tag to match. Announced by the Nvidia- and Jeff Bezos-backed company in a Wednesday blog post, Perplexity Max is the new premium offering for the AI search engine. An upgrade to the existing Perplexity Pro tier, Max is available now on iOS and the web app, with Android support coming soon. As well as everything included in Pro, Max includes unlimited use of Labs (the AI tool that can generate projects like reports, presentations, and simple web apps in about 10 minutes), early access to new features like the upcoming Comet agentic search browser, access to advanced AI models like Anthropic's Claude Opus 4 and OpenAI's o3-pro, and "priority support" for these models. It's the same monthly price as the top tier of OpenAI's offering, ChatGPT Pro; in comparison, the most expensive tier of Google's AI offering, Google AI Ultra, costs $249.99 per month.


Last call: Apple's $95 million Siri settlement claims end tomorrow - secure your payout ASAP

ZDNet

Think that Apple's Siri snooped on your private conversations in the past? If so, you may be able to snag a slice of the $95 million that Apple is paying out to settle a class-action lawsuit. But you have to act fast, as the deadline to submit your claim is tomorrow. A settlement page recently published in the case of Lopez v. Apple Inc. explains the steps and deadlines for people who want to make a claim. The settlement is geared toward current or former users of a Siri device in the US whose conversations with the voice assistant were captured by Apple or shared with third parties due to an "unintended Siri activation."