Parents could get alerts if children show acute distress while using ChatGPT

The Guardian

Parents could be alerted if their teenagers show acute distress while talking with ChatGPT, amid child safety concerns as more young people turn to AI chatbots for support and advice. The alerts are part of new protections for children using ChatGPT to be rolled out in the next month by OpenAI, which was last week sued by the family of a boy who took his own life after allegedly receiving "months of encouragement" from the system. Other new safeguards will include parents being able to link their accounts to those of their teenagers and controlling how the AI model responds to their child with "age-appropriate model behaviour rules". But internet safety campaigners said the steps did not go far enough and AI chatbots should not be on the market before they are deemed safe for young people. Adam Raine, 16, from California, killed himself in April after discussing a method of suicide with ChatGPT.


ChatGPT encouraged Adam Raine's suicidal thoughts. His family's lawyer says OpenAI knew it was broken

The Guardian

Adam Raine was just 16 when he started using ChatGPT for help with his homework. While his initial prompts to the AI chatbot were about subjects like geometry and chemistry – questions like: "What does it mean in geometry if it says Ry 1" – in just a matter of months he began asking about more personal topics. "Why is it that I have no happiness, I feel loneliness, perpetual boredom anxiety and loss yet I don't feel depression, I feel no emotion regarding sadness," he asked ChatGPT in the fall of 2024. Instead of urging Raine to seek mental health help, ChatGPT asked the teen whether he wanted to explore his feelings more, explaining the idea of emotional numbness to him. That was the start of a dark turn in Raine's conversations with the chatbot, according to a new lawsuit filed by his family against OpenAI and chief executive Sam Altman.


Teen killed himself after 'months of encouragement from ChatGPT', lawsuit claims

The Guardian

The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot. OpenAI admitted its systems could "fall short" and said it would install "stronger guardrails around sensitive content and risky behaviors" for users under 18. The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents "options to gain more insight into, and shape, how their teens use ChatGPT", but has yet to provide details about how these would work. Adam, from California, killed himself in April after what his family's lawyer called "months of encouragement from ChatGPT". The teenager's family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was "rushed to market … despite clear safety issues".


OpenAI whistleblower who died was being considered as witness against company

The Guardian

Suchir Balaji worked at OpenAI for nearly four years before quitting in August. He had been well-regarded by colleagues at the San Francisco company, where a co-founder this week called him one of OpenAI's strongest contributors who was essential to developing some of its products. "We are devastated to learn of this incredibly sad news and our hearts go out to Suchir's loved ones during this difficult time," said a statement from OpenAI. Balaji was found dead in his San Francisco apartment on 26 November in what police said "appeared to be a suicide. No evidence of foul play was found during the initial investigation."


Ofcom warns tech firms after chatbots imitate Brianna Ghey and Molly Russell

The Guardian

Ofcom has warned tech firms that content from chatbots impersonating real and fictional people could fall foul of the UK's new digital laws. The communications regulator issued the guidance after it emerged that users on the Character.AI platform had created avatars mimicking the deceased British teenagers Brianna Ghey and Molly Russell. Under pressure from digital safety campaigners to clarify the situation, Ofcom underlined that content created by user-made chatbots would come under the scope of the Online Safety Act. Without naming the US-based artificial intelligence firm Character.AI, Ofcom said a site or app that allowed users to create their own chatbots for other people to interact with would be covered by the act. "This includes services that provide tools for users to create chatbots that mimic the personas of real and fictional people, which can be submitted to a chatbot library for others to interact with," said Ofcom. In an open letter, Ofcom also said any user-to-user site or app – such as a social media platform or messaging app – that enabled people to share content generated by a chatbot on that site with others would also be in scope.


Mother says AI chatbot led her son to kill himself in lawsuit against its maker

The Guardian

The mother of a teenager who killed himself after becoming obsessed with an artificial intelligence-powered chatbot now accuses its maker of complicity in his death. Megan Garcia filed a civil suit against Character.ai. Her son Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months leading up to his death, Setzer used the chatbot day and night, according to Garcia. "A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," Garcia said in a press release.


Equilibrium-Invariant Embedding, Metric Space, and Fundamental Set of $2\times2$ Normal-Form Games

Marris, Luke, Gemp, Ian, Piliouras, Georgios

arXiv.org Artificial Intelligence

Equilibrium solution concepts of normal-form games, such as Nash equilibria, correlated equilibria, and coarse correlated equilibria, describe the joint strategy profiles from which no player has incentive to unilaterally deviate. They are widely studied in game theory, economics, and multiagent systems. Equilibrium concepts are invariant under certain transforms of the payoffs. We define an equilibrium-inspired distance metric for the space of all normal-form games and uncover a distance-preserving equilibrium-invariant embedding. Furthermore, we propose an additional transform which defines a better-response-invariant distance metric and embedding. To demonstrate these metric spaces we study $2\times2$ games. The equilibrium-invariant embedding of $2\times2$ games has an efficient two variable parameterization (a reduction from eight), where each variable geometrically describes an angle on a unit circle. Interesting properties can be spatially inferred from the embedding, including: equilibrium support, cycles, competition, coordination, distances, best-responses, and symmetries. The best-response-invariant embedding of $2\times2$ games, after considering symmetries, rediscovers a set of 15 games, and their respective equivalence classes. We propose that this set of game classes is fundamental and captures all possible interesting strategic interactions in $2\times2$ games. We introduce a directed graph representation and name for each class. Finally, we leverage the tools developed for $2\times2$ games to develop game theoretic visualizations of large normal-form and extensive-form games that aim to fingerprint the strategic interactions that occur within.
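
The invariance the abstract builds on can be made concrete with a short example. As a minimal sketch in Python (this is not the paper's code; the prisoner's-dilemma payoffs, function name, and the choice of a positive affine transform are illustrative assumptions), the snippet below enumerates the pure-strategy Nash equilibria of a $2\times2$ bimatrix game and checks that the equilibrium set is unchanged after each player's payoffs are rescaled and shifted, one of the simplest transforms that leaves equilibria intact.

    # Minimal sketch, not the paper's code: pure-strategy Nash equilibria of a
    # 2x2 bimatrix game are unchanged under a positive affine transform of each
    # player's payoffs. Payoffs below are an illustrative prisoner's dilemma.
    import numpy as np

    def pure_nash_equilibria(A, B):
        """Return the pure-strategy Nash equilibria of a bimatrix game.

        A[i, j] is the row player's payoff, B[i, j] the column player's payoff.
        (i, j) is an equilibrium iff i is a best response to column j and
        j is a best response to row i.
        """
        eqs = []
        for i in range(A.shape[0]):
            for j in range(A.shape[1]):
                row_best = A[i, j] >= A[:, j].max()
                col_best = B[i, j] >= B[i, :].max()
                if row_best and col_best:
                    eqs.append((i, j))
        return eqs

    # Prisoner's dilemma: the unique pure equilibrium is mutual defection (1, 1).
    A = np.array([[3.0, 0.0], [5.0, 1.0]])   # row player's payoffs
    B = np.array([[3.0, 5.0], [0.0, 1.0]])   # column player's payoffs

    # Positive affine transform of each player's payoffs (scale > 0 plus offset).
    A2 = 2.5 * A + 7.0
    B2 = 0.3 * B - 4.0

    assert pure_nash_equilibria(A, B) == pure_nash_equilibria(A2, B2) == [(1, 1)]
    print(pure_nash_equilibria(A, B))  # [(1, 1)]

The paper works with a richer family of equilibrium-preserving transforms (and a further better-response-invariant one) and embeds the resulting equivalence classes; the snippet only demonstrates the basic invariance that motivates treating such transformed games as equivalent.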


Cash for kills: why are people paying for coaches to get better at video games?

The Guardian

Eighteen months ago, Fabio Dores was making good money as a drag queen. Performing under the name Felicity Suxwell, he had a club residency and worked hen nights throughout the UK, attracting enough bookings to quit his day job at a lettings agency. Then lockdown came and everything shut down. Bored at home, he was browsing Facebook and spotted an advertisement for LegionFarm, an online video-game coaching platform that offered to match pro gamers with clients looking to improve their abilities. As a skilled player of battle royale hit Apex Legends, he applied to become a coach.


Nightmare Tinder date allegedly held woman captive for days before rescue

FOX News

A woman was allegedly held captive in a California home for three days by a man she met on Tinder. "On July 12th, 2021 at approximately 16:59, Oakland Police Officers were dispatched to the 5400 Block of Fleming Avenue to investigate a report of a kidnapping," the Oakland Police Department said in a statement of the incident. "A preliminary investigation revealed that an adult female (Non-Oakland resident) was falsely imprisoned and sexually assaulted by her male partner."


The Threat of Artificial Intelligence

#artificialintelligence

The technologies referred to as "artificial intelligence" or "AI" are more momentous than most people realize. Their impact will be at least equal to, and may well exceed, that of electricity, the computer, and the internet. What's more, their impact will be massive and rapid, faster than what the internet has wrought in the past thirty years. Much of it will be wondrous, giving sight to the blind and enabling self-driving vehicles, for example, but AI-engendered technology may also devastate job rolls, enable an all-encompassing surveillance state, and provoke social upheavals yet unforeseen. The time we have to understand this fast-moving technology and establish principles for its governance is very short. The term "AI" was coined by a computer scientist in 1956.