With Letter to Trump, Evangelical Leaders Join the AI Debate

TIME - Tech

Rodriguez, the President of the National Hispanic Christian Leadership Conference, spoke at Trump's first presidential inauguration in 2017. Moore, who is also the founder of the public relations firm Kairos, served on Trump's Evangelical executive board during his first presidential candidacy. The letter is a sign of growing ties between religious and AI safety groups, which share some of the same worries. It was shared with journalists by representatives of the Future of Life Institute--an AI safety organization that campaigns to reduce what it sees as the existential risk posed by advanced AI systems. The world's biggest tech companies now all believe that it is possible to create so-called "artificial general intelligence"--a form of AI that can do any task better than a human expert. Some researchers have even invoked this technology in religious terms--for example, OpenAI's former chief scientist Ilya Sutskever, a mystical figure who famously encouraged colleagues to chant "feel the AGI" at company gatherings.


Pope Leo's Name Carries a Warning About the Rise of AI

TIME - Tech

With his name choice and speech, Leo XIV firmly marks AI as a defining challenge facing our world today. But also embedded in the name is a potential path forward. Leo XIII, during his papacy, laid out a vision for protecting workers against tech-induced consolidation, including minimum wage laws and trade unions. His ideas soon gained influence and were implemented in government policies around the world. While it's still unclear what specific guidance Leo XIV may issue on artificial intelligence, history suggests the implications of his crusade could be profound.


What to Know About the Apple Class Action Lawsuit Settlement--and How You Can File a Claim

TIME - Tech

Apple users--specifically those who use Siri through products such as MacBooks, iPhones, and Apple TVs--may be entitled to file a claim under Apple's $95 million class action settlement over the voice-activated assistant. The settlement stems from a lawsuit filed in 2021 by Californian Fumiko Lopez, who claimed that Apple, via Siri, conducted "unlawful and intentional interception and recording of individuals' confidential communications without their consent and subsequent unauthorized disclosure of those communications." "Apple intentionally, willfully, and knowingly violated consumers' privacy rights, including within the sanctity of consumers' own homes where they have the greatest expectation of privacy," the lawsuit stated. "Plaintiffs and Class Members would not have bought their Siri Devices, or would have paid less for them, if they had known Apple was intercepting, recording, disclosing, and otherwise misusing their conversations without consent or authorization." In 2019, Apple published a statement titled "Improving Siri's privacy protections," in which it said it hadn't "been fully living up" to its "high ideals" and vowed to issue improvements.


Trump is Rewriting How the U.S. Treats AI Chip Exports--and the Stakes Are Enormous

TIME - Tech

Early this year, the Chinese company DeepSeek revealed that it had developed a very powerful model mostly using Nvidia chips obtained before the Biden administration closed an export loophole in 2023, heightening the intensity of the race. Last week, the Trump administration ripped up those rules, with a spokesperson calling them "overly complex, bureaucratic" and saying they "would stymie American innovation." It then switched to a new tack: linking countries' access to AI chips to larger trade negotiations. A negotiation-based approach, the administration argued, could allow for more flexibility from country to country and let Trump secure key business concessions from Middle Eastern partners. Businesses and governments in the Middle East have massive ambitions for AI, aiming to position themselves at the forefront of this emerging technology.


Why This Artist Isn't Afraid of AI's Role in the Future of Art

TIME - Tech

As AI enters the workforce and seeps into all facets of our lives at unprecedented speed, we're told by leaders across industries that if you're not using it, you're falling behind. Yet when AI's use in art enters the conversation, some retreat in discomfort, shunning it as an affront to the very essence of art. The debate continues to divide artists. AI is fundamentally changing the creative process, and its purpose, significance, and influence depend on one's own values--making its trajectory hard to predict, and even harder to confront. Miami-based Panamanian photographer Dahlia Dreszer stands out as an optimist and a believer in AI's powers.


Inside the First Major U.S. Bill Tackling AI Harms--and Deepfake Abuse

TIME - Tech

Here's what the bill aims to achieve, and how it cleared many hurdles en route to becoming law. The Take It Down Act was born out of the suffering--and then activism--of a handful of teenagers. In October 2023, 14-year-old Elliston Berry of Texas and 15-year-old Francesca Mani of New Jersey each learned that classmates had used AI software to fabricate nude images of them and other female classmates. The tools that had been used to humiliate them were relatively new: products of the generative AI boom in which virtually any image could be created with the click of a button. Pornographic and sometimes violent deepfake images of Taylor Swift and others soon spread across the internet.


Exclusive: Trump Pushes Out AI Experts Hired By Biden

TIME - Tech

The Trump administration has laid out its own ambitious goals for recruiting more tech talent. On April 3, Russell Vought, Trump's Director of the Office of Management and Budget, released a 25-page memo for how federal leaders were expected to accelerate the government's use of AI. "Agencies should focus recruitment efforts on individuals that have demonstrated operational experience in designing, deploying, and scaling AI systems in high-impact environments," Vought wrote. Putting that into action will be harder than it needs to be, says Deirdre Mulligan, who directed the National Artificial Intelligence Initiative Office in the Biden White House. "The Trump Administration's actions have not only denuded the government of talent now, but I'm sure that for many folks, they will think twice about whether or not they want to work in government," Mulligan says. "It's really important to have stability, to have people's expertise be treated with the level of respect it ought to be and to have people not be wondering from one day to the next whether they're going to be employed."


OpenAI Wants to Go For-Profit. Experts Say Regulators Should Step In

TIME - Tech

In the latest development in an ongoing struggle over OpenAI's future direction--and potentially the future of artificial intelligence itself--dozens of prominent figures are urging the Attorneys General of California and Delaware to block OpenAI's controversial plan to convert from its unique nonprofit-controlled structure to a for-profit company. In a letter made public April 23, signatories including "AI Godfather" Geoffrey Hinton, Harvard law professor Lawrence Lessig, and several former OpenAI researchers argue the move represents a fundamental betrayal of OpenAI's founding mission. "The proposed restructuring would eliminate essential safeguards, effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritize shareholder returns," the letter's authors write. It lands as OpenAI faces immense pressure from the other side: failing to implement the restructure by the end of the year could cost the company $20 billion and hamstring future fundraising. OpenAI was founded in 2015 as a nonprofit, with its stated mission being to ensure that artificial general intelligence (AGI) "benefits all of humanity" rather than advancing "the private gain of any person."


Exclusive: Every AI Datacenter Is Vulnerable to Chinese Espionage, Report Says

TIME - Tech

The unredacted report was circulated inside the Trump White House in recent weeks, according to its authors. TIME viewed a redacted version ahead of its public release. The White House did not respond to a request for comment. Today's top AI datacenters are vulnerable to both asymmetrical sabotage--where relatively cheap attacks could disable them for months--and exfiltration attacks, in which closely guarded AI models could be stolen or surveilled, the report's authors warn. "You could end up with dozens of datacenter sites that are essentially stranded assets that can't be retrofitted for the level of security that's required," says Edouard Harris, one of the authors of the report.


Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears

TIME - Tech

OpenAI, in an email to TIME on Monday, wrote that its newest models, o3 and o4-mini, were deployed with an array of biological-risk-related safeguards, including blocking harmful outputs. The company wrote that it ran a thousand-hour red-teaming campaign in which 98.7% of unsafe bio-related conversations were successfully flagged and blocked. "We value industry collaboration on advancing safeguards for frontier models, including in sensitive domains like virology," a spokesperson wrote. "We continue to invest in these safeguards as capabilities grow." Inglesby argues that industry self-regulation is not enough, and calls for lawmakers and political leaders to strategize a policy approach to regulating AI's bio risks.