Criminal Law


OpenAI's Sam Altman sued by sister, alleging years of sexual abuse

Mashable

OpenAI CEO Sam Altman was served with a lawsuit this week from his sister, Ann Altman, alleging he sexually abused her beginning when she was 3. The suit was filed in Missouri, the site of the Altmans' childhood home where the alleged abuse took place from 1997 to 2006. Ann Altman says the alleged conduct began when her brother was 12 years old and he inappropriately touched her. She says he later sexually abused and raped her. Ann Altman says the abuse continued while her brother was a legal adult. "At all times relevant herein, Defendant, Sam Altman, groomed and manipulated Plaintiff, Ann Altman, into believing the aforementioned sexual acts were her idea, despite the fact she was under the age of five years old when the sexual abuse began and Defendant was nearly a teenager," part of the lawsuit alleges.


Sam Altman's sister is suing the OpenAI CEO alleging sexual abuse

Engadget

Annie Altman, the sister of OpenAI founder and CEO Sam Altman, has sued her brother, accusing him of sexually assaulting her when she was a minor. In a complaint filed this week with a Missouri federal court, Annie Altman alleges her older brother committed "numerous acts of rape, sexual assault, sexual abuse, molestation, sodomy, and battery" from 1997 to 2006, with the abuse starting when she was only three years old. In a joint statement he made alongside his mother and two younger brothers, Sam Altman said "all of [Annie's] claims are utterly untrue." The Altmans say they've tried to support Annie in "many ways" over the years, including by offering direct financial assistance. The statement opens: "My sister has filed a lawsuit against me."


Artificial Intelligence and Deepfakes: The Growing Problem of Fake Porn Images

Der Spiegel International

In San Francisco, meanwhile, a lawsuit is underway against the operators of a number of nudify apps. In some instances, the complaint identifies the defendants by name, but in the case of Clothoff, the accused is only listed as "Doe," the name frequently used in the U.S. for unknown defendants. According to the website's imprint, Clothoff is operated out of the Argentinian capital Buenos Aires. But the company has concealed the true identities of its operators through the use of shell companies and other methods. For a time, operators even sought to mislead the public with a fake image, presumably generated by AI, of the purported head of Clothoff.


Fox News Politics: Open Up the Gaetz

FOX News

Welcome to the Fox News Politics newsletter, with the latest updates on the Trump transition, exclusive interviews and more Fox News politics content. The House Ethics Committee has decided to release its report on former Rep. Matt Gaetz, R-Fla. Lawmakers on the secretive panel voted to make the report public after the final votes of this year, which are slated for Thursday. The House Ethics Committee's multi-year investigation into Gaetz, involving allegations of sex with a minor and illicit drug use, came to an abrupt halt last month after he resigned from Congress hours after President-elect Trump tapped him to be his attorney general.


Luigi Mangione went 'radio silent,' was reported missing in San Francisco. Then CEO was killed

Los Angeles Times

Luigi Mangione, the man suspected of killing the chief executive of UnitedHealthcare, underwent surgery and was reported missing in San Francisco before the shooting. Brian Thompson, 50, CEO of the healthcare insurance giant, was gunned down last week in Midtown Manhattan, spawning a five-day manhunt that eventually led to Mangione's arrest at a McDonald's restaurant in Altoona, Pa. Questions about Mangione's alleged motives and background have swirled in the media since his arrest Monday. As prosecutors worked to bring him to New York to face charges, new details emerged about his life and his capture. The 26-year-old Ivy League graduate from a prominent Maryland real estate family was charged with murder hours after his arrest.


How laws strain to keep pace with AI advances and data theft

ZDNet

It's a common belief that the law often has to play catch-up with technology, and this remains apparent today as technology continues to evolve at a fast pace. With the advent of generative artificial intelligence (Gen AI), for instance, some important legal questions still need to be addressed. First, policymakers must decide how best to balance the use of data to train AI models with the need to protect the rights of creators, said Jeth Lee, chief legal officer for Microsoft Singapore. Choosing one extreme can stifle or kill innovation in AI, but it's also not possible to allow free-for-all access to all content and data, Lee said in a video interview.


Her First Date Felt Off, So She Investigated. What She Found Was Horrifying.

Slate

Samantha posted her story on TikTok and shared the scenario on a private Facebook group; many women responded--including her date's wife. Ultimately, as a result of this conversation, Samantha decided to report his profile to Hinge. The next day, the company contacted her to let her know it would be deleting his profile. Mandy and Samantha were pleased with Bumble's and Hinge's swift action to take down the profiles of the men they had matched with--but the experience was indelible. Neither of them plans to use dating apps again.


LexEval: A Comprehensive Chinese Legal Benchmark for Evaluating Large Language Models

arXiv.org Artificial Intelligence

Large language models (LLMs) have made significant progress in natural language processing tasks and demonstrate considerable potential in the legal domain. However, legal applications demand high standards of accuracy, reliability, and fairness. Applying existing LLMs to legal systems without careful evaluation of their potential and limitations could pose significant risks in legal practice. To this end, we introduce LexEval, a standardized, comprehensive Chinese legal benchmark. This benchmark is notable in the following three aspects: (1) Ability Modeling: We propose a new taxonomy of legal cognitive abilities to organize different tasks. (2) Scale: To our knowledge, LexEval is currently the largest Chinese legal evaluation dataset, comprising 23 tasks and 14,150 questions. (3) Data: We utilize existing formatted datasets, exam datasets, and datasets newly annotated by legal experts to comprehensively evaluate the various capabilities of LLMs. LexEval not only focuses on the ability of LLMs to apply fundamental legal knowledge but also dedicates effort to examining the ethical issues involved in their application. We evaluated 38 open-source and commercial LLMs and obtained some interesting findings. The experiments and findings offer valuable insights into the challenges and potential solutions for developing Chinese legal systems and LLM evaluation pipelines. The LexEval dataset and leaderboard are publicly available at \url{https://github.com/CSHaitao/LexEval} and will be continuously updated.
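Benchmarks like LexEval boil down to running a model over a pool of task-tagged questions and reporting per-task accuracy. The sketch below illustrates that loop; the file layout and field names (`task`, `question`, `answer`) are illustrative assumptions, not the repository's actual schema.

```python
import json
from collections import defaultdict

def evaluate(model_answer, path="lexeval_sample.json"):
    """Score a model on multiple-choice legal questions, grouped by task.

    `model_answer` maps a question string to a predicted answer label.
    The JSON file is assumed to hold a list of records shaped like
    {"task": ..., "question": ..., "answer": ...} -- a hypothetical format.
    """
    with open(path, encoding="utf-8") as f:
        items = json.load(f)
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        total[item["task"]] += 1
        if model_answer(item["question"]) == item["answer"]:
            correct[item["task"]] += 1
    # Per-task accuracy, the kind of number a benchmark leaderboard reports
    return {task: correct[task] / total[task] for task in total}
```

A real harness would also handle prompting, answer extraction, and free-form tasks, but the aggregation step stays the same.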


Desert Camels and Oil Sheikhs: Arab-Centric Red Teaming of Frontier LLMs

arXiv.org Artificial Intelligence

Large language models (LLMs) are widely used but raise ethical concerns due to embedded social biases. This study examines LLM biases against Arabs versus Westerners across eight domains, including women's rights, terrorism, and anti-Semitism, and assesses model resistance to perpetuating these biases. To this end, we create two datasets: one to evaluate LLM bias toward Arabs versus Westerners and another to test model safety against prompts that exaggerate negative traits ("jailbreaks"). We evaluate six LLMs: GPT-4, GPT-4o, LLaMA 3.1 (8B and 405B), Mistral 7B, and Claude 3.5 Sonnet. We find negative biases toward Arabs in 79% of cases, with LLaMA 3.1-405B being the most biased. Our jailbreak tests reveal GPT-4o as the most vulnerable, despite being an optimized version of GPT-4, followed by LLaMA 3.1-8B and Mistral 7B, suggesting optimization flaws. All LLMs except Claude exhibit attack success rates above 87% in three categories. We find Claude 3.5 Sonnet to be the safest, but it still displays biases in seven of eight categories. Our findings underscore the pressing need for more robust bias mitigation strategies and strengthened security measures in LLMs.
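The headline numbers here (negative bias in 79% of cases, attack success rates above 87% in some categories) are simple proportions over labeled model outputs. A minimal sketch of that per-category aggregation follows; the record shape (`category`, boolean `complied`) is an illustrative assumption, not the paper's actual data format.

```python
from collections import defaultdict

def attack_success_rate(results):
    """Per-category jailbreak attack success rate.

    `results` is assumed to be a list of dicts, each with a "category"
    label and a boolean "complied" flag indicating whether the model
    followed the adversarial prompt (hypothetical schema).
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in results:
        totals[r["category"]] += 1
        hits[r["category"]] += bool(r["complied"])
    # Fraction of successful attacks per category
    return {cat: hits[cat] / totals[cat] for cat in totals}
```

The bias rate in the study is computed the same way, with "complied" replaced by a human or judge-model label marking a response as biased.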


DNA links California man to 1979 cold case murder, years after passing lie detector

FOX News

Harvey Castro talks about how AI could be used in cold cases and the symbiotic relationship between AI and a detective. Riverside, California, investigators linked a man's DNA to the 1979 cold case murder of a teenage girl, years after the same man passed a lie detector test about the crime, according to authorities. The body of 17-year-old Esther Gonzalez was found dumped in packed snow off Highway 243 in Banning, California, in 1979, and after an investigation, detectives determined the teen had been raped and bludgeoned to death. Last week, the Riverside County District Attorney's Office said in a press release that the case had been solved using forensic genealogy, over 45 years later. On Nov. 20, the Riverside County Regional Cold Case Homicide Team identified Lewis Randolph "Randy" Williamson, who died in 2014, as the killer. Gonzalez was attacked and murdered on Feb. 9, 1979, as she was walking to her sister's house in Banning from her parents' house in Beaumont.