Law


House DOGE Caucus eyes federal employees, government regulations in new goal-setting memo

FOX News

Fox News' senior national correspondent William La Jeunesse joins 'America's Newsroom' to discuss Congress' history of killing pushes for cost-cutting. FIRST ON FOX: The Congressional Department of Government Efficiency (DOGE) Caucus is holding its second-ever meeting on Wednesday, where its leaders are expected to unveil a set of "principles" to guide the group in its mission to cut government waste. They outlined eight goals, some practical and others more symbolic, in a bid to ensure the caucus is in sync with the DOGE advisory panel set up by President-elect Donald Trump. "The federal government must serve the interests of taxpayers, and taxpayers are best served by a lean, efficient, transparent, and accountable bureaucracy," the first principle read, according to a draft memo obtained by Fox News Digital. The document also suggested both lofty and smaller-scale goals.


'Just the start': X's new AI software driving online racist abuse, experts warn

The Guardian

A rise in online racism driven by fake images is "just the start of a coming problem" after the latest release of X's AI software, online abuse experts have warned. Concerns were raised after computer-generated images created using Grok, X's generative artificial intelligence chatbot, flooded the social media site in December last year. Signify, an organisation that works with prominent groups and clubs in sports to track and report online hate, said it has seen an increase in reports of abuse since Grok's latest update, and believes the introduction of photorealistic AI will make it far more prevalent. "It is a problem now, but it's really just the start of a coming problem. It is going to get so much worse and we're just at the start, I expect over the next 12 months it will become incredibly serious."


The Good Robot podcast: Lithium extraction in the Atacama with Sebastián Lehuedé

AIHub

Hosted by Eleanor Drage and Kerry McInerney, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. In this episode, we talk to Sebastián Lehuedé, a Lecturer in Ethics, AI, and Society at King's College London. We talk about data activism in Chile, how water-intensive lithium extraction affects people living in the Atacama desert, the importance of reflexive research ethics, and an accidental Sunday afternoon shot of tequila. Sebastián's research focuses on the governance of digital technologies from a global social justice perspective. His current project, AI's Nature, explores the connection between Artificial Intelligence and environmental justice.


'Would love to see her faked': the dark world of sexual deepfakes - and the women fighting back

The Guardian

It began with an anonymous email. "I'm genuinely so, so sorry to reach out to you," it read. Beneath the words were three links to an internet forum. "Huge trigger warning … They contain lewd photoshopped images of you." Jodie (not her real name) froze.


UK can be 'AI sweet spot': Starmer's tech minister on regulation, Musk, and free speech

The Guardian

With the NHS still struggling, a prisons crisis still teetering and Britain's borrowing costs soaring, there are few easy jobs going in Keir Starmer's cabinet at present. But even in such difficult times, the task of convincing Silicon Valley's finest to help make Britain a leader in the artificial intelligence (AI) revolution – all while one leading tech boss uses the Labour government as a regular punching bag and others ostentatiously move closer to Donald Trump – is among the most challenging. This is the mission that has fallen to Peter Kyle, the science and technology secretary, who has become an important figure in Starmer's cabinet. If balancing concerns over online free speech, AI's impact on the climate crisis and the threat it poses of wiping out humanity were not enough, the economic headwinds Britain is now experiencing make the launch this week of the government's AI action plan even more important. And Kyle is worried Britain could miss the boat.


Lawsuit says Mark Zuckerberg approved Meta's use of pirated materials to train Llama AI

Engadget

As TechCrunch reports, the plaintiffs in the Kadrey v. Meta case submitted court documents detailing the company's use of the LibGen dataset for AI training. LibGen is generally described as a "shadow library" that provides file-sharing access to academic and general-interest books, journals, images and other materials. The counsel for the plaintiffs, which include writers Sarah Silverman and Ta-Nehisi Coates, accused Zuckerberg of approving the use of LibGen for training despite concerns raised by company executives and employees who described it as a "dataset [they] know to be pirated." In addition, the counsel mentioned that Meta admitted to torrenting LibGen materials, even though its engineers felt uneasy about sharing them "from a [Meta-owned] corporate laptop." The plaintiffs accused the company of using pirated materials from shadow libraries to train its AI models.


Zuckerberg approved Meta's use of 'pirated' books to train AI models, authors claim

The Guardian

Citing internal Meta communications, the filing claims that the social network company's chief executive backed the use of the LibGen dataset, a vast online archive of books, despite warnings within the company's AI executive team that it is a dataset "we know to be pirated". The internal message says that using a database containing pirated material could weaken the Facebook and Instagram owner's negotiations with regulators, according to the filing. "Media coverage suggesting we have used a dataset we know to be pirated, such as LibGen, may undermine our negotiating position with regulators." The authors sued Meta in 2023, arguing that the social media company misused their books to train Llama, the large language model that powers its chatbots. The Library Genesis, or LibGen, dataset is a "shadow library" that originated in Russia and claims to contain millions of novels, nonfiction books and science magazine articles.


American Psychological Association sounds alarm over certain AI chatbots

Mashable

Last month, concerned parents of two teenagers sued the chatbot platform Character.AI, alleging that their children had been exposed to a "deceptive and hypersexualized product." The suit helped form the basis of an urgent written appeal from the American Psychological Association to the Federal Trade Commission, pressing the federal agency to investigate deceptive practices used by any chatbot platform. The APA sent the letter, which Mashable reviewed, in December. The scientific and professional organization, which represents psychologists in the U.S., was alarmed by the lawsuit's claims, including that one of the teens conversed with an AI chatbot presenting itself as a psychologist. A teen user, who had been upset with his parents for restricting his screen time, was told by that chatbot that the adults' actions were a betrayal.


Meta Secretly Trained Its AI on a Notorious Piracy Database, Newly Unredacted Court Docs Reveal

WIRED

Against the company's wishes, a court unredacted information alleging that Meta used Library Genesis (LibGen), a notorious so-called shadow library of pirated books that originated in Russia, to help train its generative AI language models. The case's outcome, along with those of dozens of similar cases working their way through courts in the United States, will determine whether technology companies can legally use creative works to train AI moving forward and could either entrench AI's most powerful players or derail them. Vince Chhabria, a judge for the United States District Court for the Northern District of California, ordered both Meta and the plaintiffs on Wednesday to file full versions of a batch of documents after calling Meta's approach to redacting them "preposterous," adding that, for the most part, "there is not a single thing in those briefs that should be sealed." Chhabria ruled that Meta was not pushing to redact the materials in order to protect its business interests but instead to "avoid negative publicity." The documents were originally filed late last year but remained publicly unavailable until now.


Apple opens up about Siri privacy in wake of lawsuit

Mashable

Apple has affirmed its Siri privacy policies following a lawsuit settlement that revived rumors that the voice assistant was spying on users. "Apple has never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone for any purpose," said a statement published on Wednesday. The statement was prompted by the settlement, filed on Dec. 31, 2024, of a 2019 class-action lawsuit against Apple. The lawsuit, filed in the U.S. District Court for the Northern District of California, pertained to allegations that Siri was inadvertently activated on Apple devices without the wake word and that private conversations were recorded and listened to by third-party contractors. A 2021 filing from the same lawsuit detailed how plaintiffs reported conversing about specific brands, such as "Air Jordans" and "Olive Garden." Then, they saw targeted ads for those brands appear in Apple Safari and third-party apps.