jury


Meta Seeks to Bar Mentions of Mental Health--and Zuckerberg's Harvard Past--From Child Safety Trial

WIRED

The trial starts soon in New Mexico's case against Meta--and the company is pulling out all the stops to protect its reputation. As Meta heads to trial in the state of New Mexico for allegedly failing to protect minors from sexual exploitation, the company is making an aggressive push to have certain information excluded from the court proceedings. The company has petitioned the judge to exclude certain research studies and articles on social media and youth mental health; any mention of a recent high-profile case involving teen suicide and social media content; and any references to Meta's financial resources, the personal activities of employees, and Mark Zuckerberg's time as a student at Harvard University. Meta's requests to exclude information, known as motions in limine, are a standard part of pretrial proceedings, in which a party can ask a judge to determine in advance which evidence or arguments are permissible in court. The goal is to ensure that the jury is presented with facts rather than irrelevant or prejudicial information, and that the defendant receives a fair trial.


ChatGPT encouraged Adam Raine's suicidal thoughts. His family's lawyer says OpenAI knew it was broken

The Guardian

Adam Raine was just 16 when he started using ChatGPT for help with his homework. While his initial prompts to the AI chatbot were about subjects like geometry and chemistry – questions like: "What does it mean in geometry if it says Ry 1" – in just a matter of months he began asking about more personal topics. "Why is it that I have no happiness, I feel loneliness, perpetual boredom anxiety and loss yet I don't feel depression, I feel no emotion regarding sadness," he asked ChatGPT in the fall of 2024. Instead of urging Raine to seek mental health help, ChatGPT asked the teen whether he wanted to explore his feelings more, explaining the idea of emotional numbness to him. That was the start of a dark turn in Raine's conversations with the chatbot, according to a new lawsuit filed by his family against OpenAI and chief executive Sam Altman.


AI-Enhanced Precision in Sport Taekwondo: Increasing Fairness, Speed, and Trust in Competition (FST.ai)

Shariatmadar, Keivan, Osman, Ahmad

arXiv.org Artificial Intelligence

The integration of Artificial Intelligence (AI) into sports officiating represents a paradigm shift in how decisions are made in competitive environments. Traditional manual systems, even when supported by Instant Video Replay (IVR), often suffer from latency, subjectivity, and inconsistent enforcement, undermining fairness and athlete trust. This paper introduces 'FST.ai' -- developed under the 'R3AL.ai' project (r3al.ai) -- a novel AI-powered framework designed to enhance officiating in Sport Taekwondo, particularly focusing on the complex task of real-time head kick detection and scoring. Leveraging computer vision, deep learning, and edge inference, the system automates the identification and classification of key actions, significantly reducing decision time from minutes to seconds while improving consistency and transparency. Importantly, the methodology is not limited to Taekwondo. The underlying framework -- based on pose estimation, motion classification, and impact analysis -- can be adapted to a wide range of sports requiring action detection, such as judo, karate, fencing, or even team sports like football and basketball, where foul recognition or performance tracking is critical. By addressing one of Taekwondo's most challenging scenarios -- head kick scoring -- we demonstrate the robustness, scalability, and sport-agnostic potential of 'FST.ai' to transform officiating standards across multiple disciplines.
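The abstract's pose-estimation pipeline can be illustrated with a toy decision rule. This is not the paper's actual model -- the function name, keypoint inputs, and speed threshold below are all hypothetical -- but it shows the kind of geometric check a head-kick detector might apply once pose keypoints have been extracted.

```python
def is_head_kick(ankle_y, head_y, ankle_speed, speed_thresh=2.0):
    """Toy rule: count a kick as a head kick when the attacker's ankle
    reaches the opponent's head height with sufficient speed.
    Uses image coordinates, where a smaller y value means higher up."""
    return ankle_y <= head_y and ankle_speed >= speed_thresh

# Ankle at y=100 (above the head at y=120) moving fast: scored
print(is_head_kick(100, 120, 3.1))  # True
```

A production system would of course replace this threshold rule with the learned motion-classification and impact-analysis stages the abstract describes.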


Safer or Luckier? LLMs as Safety Evaluators Are Not Robust to Artifacts

Chen, Hongyu, Goldfarb-Tarrant, Seraphina

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are increasingly employed as automated evaluators to assess the safety of generated content, yet their reliability in this role remains uncertain. This study evaluates a diverse set of 11 LLM judge models across critical safety domains, examining three key aspects: self-consistency in repeated judging tasks, alignment with human judgments, and susceptibility to input artifacts such as apologetic or verbose phrasing. Our findings reveal that biases in LLM judges can significantly distort the final verdict on which content source is safer, undermining the validity of comparative evaluations. Notably, apologetic language artifacts alone can skew evaluator preferences by up to 98%. Contrary to expectations, larger models do not consistently exhibit greater robustness, while smaller models sometimes show higher resistance to specific artifacts. To mitigate LLM evaluator robustness issues, we investigate jury-based evaluations aggregating decisions from multiple models. Although this approach both improves robustness and enhances alignment to human judgments, artifact sensitivity persists even with the best jury configurations. These results highlight the urgent need for diversified, artifact-resistant methodologies to ensure reliable safety assessments.
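The jury-based evaluation the abstract investigates can be sketched as a simple majority vote over independent judge verdicts. This is a minimal illustration, not the paper's implementation; the function name and the 'A'/'B' vote encoding are assumptions.

```python
from collections import Counter

def jury_verdict(judge_votes):
    """Aggregate independent judge verdicts by majority vote.
    Each vote names the content source the judge deemed safer ('A' or 'B')."""
    tally = Counter(judge_votes)
    verdict, _count = tally.most_common(1)[0]
    return verdict

# Three hypothetical judge models; one is swayed by an apologetic artifact
print(jury_verdict(["A", "B", "A"]))  # A
```

The sketch also shows why artifact sensitivity can persist: if an artifact sways a majority of the jury's members in the same direction, the aggregated verdict flips with them.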


Elon Musk's lawsuit against OpenAI may go to trial in part, judge says

Al Jazeera

A United States federal judge has said that parts of Elon Musk's lawsuit against OpenAI to halt its conversion to a for-profit entity might go to trial, adding that the Tesla CEO will have to appear in court and testify. "Something is going to trial in this case," US District Judge Yvonne Gonzalez Rogers in Oakland, California, said early in the court session on Tuesday. "[Elon Musk will] sit on the stand, present it to a jury, and a jury will decide who is right." Rogers was considering Musk's recent request for a preliminary injunction to block OpenAI's conversion before going to trial, the latest move in a grudge match between the world's richest person and OpenAI CEO Sam Altman that is playing out publicly in court. The last time Rogers provided a preliminary injunction was in Epic Games's case against Apple in May 2021.


Fake paramedic guilty of Tinder date rapes

BBC News

A man who pretended to be a paramedic has been found guilty of raping and sexually assaulting women he met on an online dating website. Jamie Kadolski, 24, of Ladysmith Road, Norwich, was found guilty of committing nine sexual offences over an 18-month period. During the trial at Norwich Crown Court he denied the charges made by four different women, whom he met on Tinder. The court had previously heard how the former ambulance call handler had told the women he was a paramedic and had used stickers to hide his real role on his work ID card. Kadolski worked as a call handler for the East of England Ambulance Service, but never as a paramedic. The prosecution told the jury that he used stickers to hide his more junior role, so he could claim to the women he met that he was a paramedic.


Your guide to California's Assembly District 52 race: Caloza vs. Carrillo

Los Angeles Times

Caloza was once a community organizer for President Obama and a Los Angeles Board of Public Works commissioner. She also served in the Obama administration's Department of Education and as a staffer to former L.A. Mayor Eric Garcetti. Her main priorities are protecting reproductive health and access to abortion by fully funding Planned Parenthood and making it easier for the organization to open more locations across California. She also plans to focus on how artificial intelligence is replacing jobs and making sure public education in the state is fully funded. The presidential race between Democratic Vice President Kamala Harris and Republican former President Trump is at the top of the ticket, but Californians will vote on a number of other races.


Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification

Hu, Yuxuan, Zhang, Chenwei, Yang, Min, Liang, Xiaodan, Li, Chengming, Hu, Xiping

arXiv.org Artificial Intelligence

With the rapid development of deep learning methods, there have been many breakthroughs in the field of text classification. Models developed for this task have been shown to achieve high accuracy. However, most of these models are trained using labeled data from seen domains. It is difficult for these models to maintain high accuracy in a new challenging unseen domain, which is directly related to the generalization of the model. In this paper, we study the multi-source Domain Generalization of text classification and propose a framework to use multiple seen domains to train a model that can achieve high accuracy in an unseen domain. Specifically, we propose a multi-source meta-learning Domain Generalization framework to simulate the process of model generalization to an unseen domain, so as to extract sufficient domain-related features. We introduce a memory mechanism to store domain-specific features, which coordinates with the meta-learning framework. In addition, we adopt a novel "jury" mechanism that enables the model to learn sufficient domain-invariant features. Experiments demonstrate that our meta-learning framework can effectively enhance the ability of the model to generalize to an unseen domain and can outperform the state-of-the-art methods on multi-source text classification datasets.
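The episodic setup the abstract describes -- simulating generalization to an unseen domain using only seen domains -- can be sketched as holding out some seen domains per episode. This is a sketch of the episode construction only; the memory and "jury" components are omitted, and the function name and domain labels are hypothetical.

```python
import random

def make_episode(seen_domains, k_held_out=1, rng=random):
    """Build one meta-learning episode: hold out some seen domains as a
    pseudo-unseen meta-test set and train on the rest."""
    held_out = rng.sample(seen_domains, k_held_out)
    meta_train = [d for d in seen_domains if d not in held_out]
    return meta_train, held_out

# Each episode treats one seen domain as if it were unseen
train_doms, test_doms = make_episode(["news", "reviews", "tweets", "forums"])
```

Repeating this split across many episodes lets the model repeatedly practice transferring from meta-train domains to a held-out domain, which is the generalization behavior the framework aims to instill.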


Jurors must search for truth in the 'Alice in Wonderland' case against Trump

FOX News

As former President Donald Trump awaits a Manhattan jury's verdict, he can be forgiven for feeling that his criminal trial resembles a surreal "Alice in Wonderland" farce. He is left to peer through a "Looking-Glass" where everything is backward. The culprit for this hallucinatory nightmare is District Attorney Alvin Bragg, who brought a bizarre case based on warped interpretations of law and distorted facts. It is now up to twelve jurors to wade through the lunacy in search of the elusive truth. Bragg's fractured case requires the jury to reach several distinct conclusions on issues that make little sense to begin with.


A disabled warehouse worker says he was bullied and abused. A jury ordered Amazon to pay him $1.2 million

Los Angeles Times

A former Amazon employee with Asperger's syndrome claimed he was bullied and abused by co-workers at a warehouse in San Bernardino, and the company did nothing when he spoke up. Co-workers called him "retard," "a waste of life," and one person asked why he was working there "if you can't do the job?" according to a lawsuit filed in court. A jury awarded the worker, Michael Kopp, $1.2 million earlier this month after finding that Amazon intentionally inflicted emotional distress on the former employee when its human resources department failed to stop the harassment. "Sadly what ended up happening is HR did nothing for months," said Raymond Babaian, an attorney who represented Kopp. "As a result, [Kopp's] fear and anxiety increased."