AI Chatbot


Threads users are pissed they can't block Meta's new AI chatbot

Engadget

Earlier today, Meta announced that it was testing a new Meta AI chatbot for Threads that would function a lot like Grok on X. Even though the early beta isn't available to most people on the platform yet, a number of Threads users have discovered it's not possible to opt out of the feature or block the chatbot's account. While most people aren't able to interact with the bot yet -- the initial testing is limited to Malaysia, Saudi Arabia, Mexico, Argentina and Singapore -- the public-facing @meta.ai account is viewable to everyone on the platform. The account's initial post has been met with a flood of angry replies from users demanding to know why, unlike any other Threads account, there's no option to block it entirely. Some users have even said that they have reported the account for spam, which typically ends with the option to block, only to find that the block didn't actually take effect.


Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows

WIRED

New research suggests that reliance on AI assistants can have a negative impact on people's ability to think and problem-solve. Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people's ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA. Researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problem autonomously.


Hawley champions GUARD Act as heartbroken families say AI chatbots allegedly pushed teens to self-harm

FOX News



The friendlier the AI chatbot the more inaccurate it is, study suggests

BBC News

AI chatbots trained to be warm and friendly when interacting with users may also be more prone to inaccuracies, new research suggests. Oxford Internet Institute (OII) researchers analysed more than 400,000 responses from five AI systems which had been tweaked to communicate in a more empathetic way. Friendlier answers contained more mistakes - from giving inaccurate medical advice to reaffirming users' false beliefs, the study found. The findings raise further questions over the trustworthiness of AI models, which are often deliberately designed to be warm and human-like in order to increase engagement. Such concerns are accentuated by AI chatbots being used for support and even intimacy, as developers seek to broaden their appeal.


Anthropic investigating claim of unauthorised access to Mythos AI tool

BBC News

Anthropic is investigating a claim that a small group of people gained access to its Claude Mythos model - the cyber-security tool which the AI firm says is too powerful to release to the public. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the company said in a statement. It was in response to a Bloomberg report that users in a private forum managed to access the model without the normal permissions. There is deep unease about Mythos' capabilities - though the UK's top cyber official has said advanced AI tools could be a net positive if the technology was secured from misuse. There is currently no suggestion that malicious actors have managed to get hold of the model, and Anthropic says it does not have evidence its systems are affected.


OpenAI faces criminal probe over role of ChatGPT in shooting

BBC News

OpenAI is facing a criminal investigation in the US over whether its ChatGPT technology played a part in the murder of two people during a mass shooting at Florida State University last year. Florida's Attorney General James Uthmeier said on Tuesday his office had been looking into the use of the artificial intelligence (AI) chatbot by a man who allegedly shot several people at the campus in Tallahassee. "Our review has revealed that a criminal investigation is necessary," Uthmeier said. "ChatGPT offered significant advice to this shooter before he committed such heinous crimes." An OpenAI spokesperson said: "ChatGPT is not responsible for this terrible crime."


The ChatGPT Symptom Spiral

The Atlantic - Technology

Be careful asking chatbots about your health. After George Mallon had his blood drawn at a routine physical, he learned that something might be gravely wrong. The preliminary results showed he might have blood cancer. Further tests would be needed. Left in suspense, he did what so many people do these days: He opened ChatGPT.


Signal's Creator Is Helping Encrypt Meta AI

WIRED

Moxie Marlinspike says the technology powering his encrypted AI chatbot, Confer, will be integrated into Meta AI. The move could help protect the AI conversations of millions of people. Marlinspike, cofounder of the Signal Foundation and the privacy advocate who created the secure communication app Signal and its widely used open source encryption protocol, said this week that his privacy-focused AI platform, Confer, will start incorporating its technology into Meta's AI systems. Every day, billions of chat messages sent through Signal, Meta's WhatsApp, and Apple's Messages are protected by end-to-end encryption.


The Fight to Hold AI Companies Accountable for Children's Deaths

WIRED

After a series of suicides allegedly linked to AI chatbots, one lawyer is trying to hold companies like OpenAI accountable. Cedric Lacey relied on a camera to check on his kids while he was working as a commercial van driver on routes to and from Alabama. Each morning, he would tune into the feed from his living room to make sure his teenage son, Amaurie, and his 14-year-old daughter were packing up their bags and getting ready to leave for school. But one morning last June, Lacey didn't see Amaurie up and about. Concerned, he called home, only to find out that his 17-year-old had hanged himself.


AI chatbots can effectively sway voters – in either direction

AIHub

The potential for artificial intelligence to affect election results is a major public concern. Two new papers - with experiments conducted in four countries - demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters' preferences by 10 percentage points or more in many cases. The LLMs' persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates' policy positions. "LLMs can really move people's attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side," said David Rand, a senior author on both papers. "But those claims aren't necessarily accurate - and even arguments built on accurate claims can still mislead by omission."