How California's New AI Law Protects Whistleblowers

TIME - Tech

Booth is a reporter at TIME. Governor Gavin Newsom speaks at Google about preparing students and workers for the next generation of technology, in San Francisco, California, on August 7, 2025. CEOs of the companies racing to build smarter AI--Google DeepMind, OpenAI, xAI, and Anthropic--have been clear about the stakes.


Paul McCartney and Dua Lipa among artists urging Starmer to rethink AI copyright plans

The Guardian

"We will lose an immense growth opportunity if we give our work away at the behest of a handful of powerful overseas tech companies and with it our future income, the UK's position as a creative powerhouse, and any hope that the technology of daily life will embody the values and laws of the United Kingdom," the letter says. Urging parliamentarians on all sides of the political spectrum and in both houses to support the change, the letter says: "We urge you to vote in support of the UK creative industries. Supporting us supports the creators of the future. Our work is not yours to give away." Spanning the worlds of music, theatre, film, literature, art and media, the more than 400 signatories include Elton John, Kazuo Ishiguro, Annie Lennox, Rachel Whiteread, Jeanette Winterson, the National Theatre and the News Media Association, which represents more than 800 news titles including the Guardian.


OpenAI Wants to Go For-Profit. Experts Say Regulators Should Step In

TIME - Tech

In the latest development in an ongoing struggle over OpenAI's future direction--and potentially the future of artificial intelligence itself--dozens of prominent figures are urging the Attorneys General of California and Delaware to block OpenAI's controversial plan to convert from its unique nonprofit-controlled structure to a for-profit company. In a letter made public April 23, signatories including "AI Godfather" Geoffrey Hinton, Harvard legal professor Lawrence Lessig, and several former OpenAI researchers argue the move represents a fundamental betrayal of OpenAI's founding mission. "The proposed restructuring would eliminate essential safeguards, effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritize shareholder returns," the letter's authors write. It lands as OpenAI faces immense pressure from the other side: failing to implement the restructure by the end of the year could cost the company $20 billion and hamstring future fundraising. OpenAI was founded in 2015 as a non-profit, with its stated mission being to ensure that artificial general intelligence (AGI) "benefits all of humanity" rather than advancing "the private gain of any person."


Don't gift our work to AI billionaires: Mark Haddon, Michael Rosen and other creatives urge government

The Guardian

More than 2,000 people, including leading creative names such as Mark Haddon, Axel Scheffler, Benji Davies and Michael Rosen, have signed a letter published in the Observer today calling on the government to keep the legal safeguards that offer artists and writers the prospect of a sustainable income. John predicted the proposal "would devastate our creative community", while helping "powerful foreign technology companies". The signatories say they understand the government's aim of boosting growth, but describe themselves as "staring in astonishment" at Whitehall's eagerness "to hastily wrap our life's work in attractive paper as a welcome gift to automated competitors". "Imagine asking ChatGPT to generate your child's artwork instead of asking the child. It's a horrible thought, isn't it?" said children's book author and illustrator Ged Adamson.


Keras Sig: Efficient Path Signature Computation on GPU in Keras 3

Genet, Rémi, Inzirillo, Hugo

arXiv.org Artificial Intelligence

In this paper we introduce Keras Sig, a high-performance pythonic library designed to compute path signatures for deep learning applications. Built entirely in Keras 3, Keras Sig leverages seamless integration with the most widely used deep learning backends: PyTorch, JAX and TensorFlow. Inspired by Kidger and Lyons (2021), we propose a novel approach that reshapes signature calculations to leverage GPU parallelism. This adjustment reduces training time by 55% and yields 5- to 10-fold improvements in direct signature computation compared to existing methods, while maintaining similar CPU performance. Relying on high-level tensor operations instead of low-level C++ code, Keras Sig significantly reduces the versioning and compatibility issues commonly encountered in deep learning libraries, while delivering superior or comparable performance across various hardware configurations. We demonstrate through extensive benchmarking that our approach scales efficiently with input sequence length and maintains competitive performance across various signature parameters, though it is bounded by memory constraints for very large signature dimensions.
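To give a flavour of what "signature calculations via high-level tensor operations" means, here is a minimal sketch of a depth-2 path signature computed with vectorized array operations instead of a per-step loop. This is an illustration of the general technique, not Keras Sig's actual API; the function name `signature_level2` and the use of NumPy (rather than Keras 3 backends) are assumptions for the example.

```python
import numpy as np

def signature_level2(path):
    """Truncated (depth-2) signature of a piecewise-linear path.

    path: array of shape (length, channels)
    returns: (S1, S2) with shapes (channels,) and (channels, channels)
    """
    dx = np.diff(path, axis=0)           # segment increments, shape (L-1, d)
    S1 = dx.sum(axis=0)                  # level 1: total displacement
    # prefix[i] = sum of increments strictly before segment i
    prefix = np.cumsum(dx, axis=0) - dx
    # Chen's identity iterated over segments:
    #   S2 = sum_i prefix_i (x) dx_i + 0.5 * sum_i dx_i (x) dx_i
    S2 = (np.einsum('ia,ib->ab', prefix, dx)
          + 0.5 * np.einsum('ia,ib->ab', dx, dx))
    return S1, S2
```

Because every step is a whole-array operation (`diff`, `cumsum`, `einsum`), the same structure maps directly onto GPU kernels when expressed in a tensor framework. A quick sanity check: for a straight-line path, the level-2 term collapses to `0.5 * np.outer(S1, S1)`.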


Thom Yorke and Julianne Moore join thousands of creatives in AI warning

The Guardian

Abba's Björn Ulvaeus, the actor Julianne Moore and the Radiohead singer Thom Yorke are among 10,500 signatories of a statement from the creative industries warning artificial intelligence companies that unlicensed use of their work is a "major, unjust threat" to artists' livelihoods. "The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted," reads the statement. Thousands of creative professionals from the worlds of literature, music, film, theatre and television have given their backing to the statement, with authors including Kazuo Ishiguro, Ann Patchett and Kate Mosse, musicians including the Cure's Robert Smith as well as the composer Max Richter, and actors including Kevin Bacon, Rosario Dawson and F Murray Abraham. The organiser of the letter, the British composer and former AI executive Ed Newton-Rex, said people who make a living from creative work are "very worried" about the situation. "There are three key resources that generative AI companies need to build AI models: people, compute, and data. They spend vast sums on the first two – sometimes a million dollars per engineer, and up to a billion dollars per model. But they expect to take the third – training data – for free," he said.


The US, UK, EU and other major nations have signed a landmark global AI treaty

Engadget

The United States, United Kingdom, European Union, and several other countries have signed an AI safety treaty laid out by the Council of Europe (COE), an international standards and human rights organization. This landmark treaty, known as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature in Vilnius, Lithuania. It is the first legally binding international agreement aimed at ensuring that AI systems align with democratic values. The treaty focuses on three main areas: protecting human rights (including privacy and preventing discrimination), safeguarding democracy, and upholding the rule of law. It also provides a legal framework covering the entire lifecycle of AI systems, promoting innovation and managing potential risks.


OpenAI Is Just Facebook Now

The Atlantic - Technology

Investors led by Microsoft pressured OpenAI to reinstate Altman, which it did within days, alongside vague promises to be more responsible. Then, last month, the company disbanded the internal group tasked with safety research, known as the "superalignment team." Some of the team's most prominent members publicly resigned, including its head, Jan Leike, who posted on X that "over the past years, safety culture and processes have taken a backseat to shiny products." Fortune reported that OpenAI did not provide anywhere near the resources it had initially and publicly promised for safety research. Saunders, who also worked on superalignment, said he resigned when he "lost hope a few months before Jan did."


Former OpenAI, Google and Anthropic workers are asking AI companies for more whistleblower protections

Engadget

A group of current and former employees from leading AI companies like OpenAI, Google DeepMind and Anthropic have signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential risks of AI. "So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," the letter, which was published on Tuesday, says. "Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues." The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risk losing their vested equity in the company. After the report, OpenAI CEO Sam Altman called the provision "genuinely embarrassing" and claimed it has been removed from recent exit documentation, though it's unclear if it remains in force for some employees. The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo.


First companies sign up to AI safety standards on eve of Seoul summit

The Guardian

The first 16 companies have signed up to the voluntary artificial intelligence safety standards introduced at the Bletchley Park summit, Rishi Sunak has said on the eve of the event's follow-up in Seoul. But the standards have faced criticism for lacking teeth, with signatories committing only to voluntarily "work toward" information sharing, "invest" in cybersecurity and "prioritise" research into societal risks. "These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI," Sunak said. "It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology." Included in the 16 is China's Zhipu.ai. The presence of signatories from countries that have been less willing to bind national champions to safety regulation is a benefit of the lighter touch, the government says.