The Next Tech Backlash Will Be About Hygiene

TIME - Tech

For centuries it was biology that made humans sick. Today, it is often stress. So argues Dr. Gabor Maté about the unrecognized toll that "normal" modern life takes on your mental and physical health. Dr. Maté's research, which struck a chord in 2023, invites reflection on the rollout of generative AI into daily life in 2024. As half of British teens report feeling addicted to social media, and as the U.S. surgeon general offers a rare caution against its health risks, the infusion of generative AI into social media appears to threaten our basic hygiene, meaning "the conditions or practices conducive to maintaining health and preventing disease."


AI Isn't Our Election Safety Problem, Disinformation Is

TIME - Tech

This election cycle will be the first exposed to generative artificial intelligence--the technology behind popular apps like ChatGPT that enables even non-experts to create fake but realistic-looking text, video, and audio perfectly suited for political manipulation. At the same time, a number of the major social-media companies have retreated from some of their prior commitments to promote "election integrity." The November election is also the first that will register the impact of the enormous popularity of TikTok, which uses a recommendation algorithm that some experts believe is particularly suited to spreading misinformation. Let's start with the rise of generative AI, which allows virtually anyone to produce persuasive text, imagery, or sound from relatively simple natural-language prompts. In January, a fake AI-generated image of Donald Trump sitting next to Jeffrey Epstein on the disgraced financier and sex offender's private jet circulated on Facebook.


The E.U. Has Passed the World's First Comprehensive AI Law

TIME - Tech

AI-generated deepfake pictures, video, or audio of existing people, places, or events must be labeled as artificially manipulated. There's extra scrutiny for the biggest and most powerful AI models that pose "systemic risks," which include OpenAI's GPT-4 -- its most advanced system -- and Google's Gemini. The EU says it's worried that these powerful AI systems could "cause serious accidents or be misused for far-reaching cyberattacks." It also fears generative AI could spread "harmful biases" across many applications, affecting many people. Companies that provide these systems will have to assess and mitigate the risks; report any serious incidents, such as malfunctions that cause someone's death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use. Brussels first suggested AI regulations in 2019, taking a familiar global role in ratcheting up scrutiny of emerging industries while other governments scramble to keep up. In the U.S., President Joe Biden signed a sweeping executive order on AI in October that's expected to be backed up by legislation and global agreements. In the meantime, lawmakers in at least seven U.S. states are working on their own AI legislation.


Exclusive: U.S. Must Move 'Decisively' to Avert 'Extinction-Level' Threat From AI, Government-Commissioned Report Says

TIME - Tech

The U.S. government must move "quickly and decisively" to avert substantial national security risks stemming from artificial intelligence (AI), which could, in the worst case, pose an "extinction-level threat to the human species," says a report commissioned by the U.S. government and published on Monday. "Current frontier AI development poses urgent and growing risks to national security," the report, which TIME obtained ahead of its publication, says. "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons." AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them, and many expect AGI to arrive within the next five years or less.


Employees at Top AI Labs Fear Safety Is an Afterthought, Report Says

TIME - Tech

Workers at some of the world's leading AI companies harbor significant concerns about the safety of their work and the incentives driving their leadership, a report published on Monday claimed. The report, commissioned by the State Department and written by employees of the company Gladstone AI, makes several recommendations for how the U.S. should respond to what it argues are significant national security risks posed by advanced AI. The report's authors spoke with more than 200 experts, including employees at OpenAI, Google DeepMind, Meta, and Anthropic--leading AI labs that are all working toward "artificial general intelligence," a hypothetical technology that could perform most tasks at or above the level of a human. The authors shared excerpts of concerns that employees at some of these labs voiced to them privately, without naming the individuals or the specific company they work for. OpenAI, Google, Meta, and Anthropic did not immediately respond to requests for comment. "We have served, through this project, as a de-facto clearing house for the concerns of frontier researchers who are not convinced that the default trajectory of their organizations would avoid catastrophic outcomes," Jeremie Harris, the CEO of Gladstone and one of the authors of the report, tells TIME. One individual at an unspecified AI lab told the report's authors that the lab has what the report characterized as a "lax approach to safety," stemming from a desire not to slow down the lab's work on building more powerful systems.


Researchers Develop New Technique to Wipe Dangerous Knowledge From AI Systems

TIME - Tech

A study published Tuesday provides a newly developed way to measure whether an AI model contains potentially hazardous knowledge, along with a technique for removing that knowledge from an AI system while leaving the rest of the model relatively intact. Together, the findings could help prevent AI models from being used to carry out cyberattacks and deploy bioweapons. The study was conducted by researchers from Scale AI, an AI training data provider, and the Center for AI Safety, a nonprofit, along with a consortium of more than 20 experts in biosecurity, chemical weapons, and cybersecurity. The subject matter experts generated a set of questions that, taken together, could assess whether an AI model can assist in efforts to create and deploy weapons of mass destruction. The researchers from the Center for AI Safety, building on previous work on how AI models represent concepts internally, developed the "mind wipe" technique.
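To make the second piece concrete, below is a minimal, hypothetical sketch of representation-based unlearning in PyTorch: steer the model's internal activations on hazardous-topic inputs toward a fixed random direction, while pinning activations on benign inputs to a frozen copy of the original model. Everything in it is an illustrative assumption rather than the study's actual method: a toy two-layer network stands in for a language model's hidden layers, random tensors stand in for activations on hazardous and benign text, and the control-vector scale and alpha weight are made-up hyperparameters.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-layer network standing in for the hidden layers of a language model.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
frozen = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
frozen.load_state_dict(model.state_dict())
for p in frozen.parameters():
    p.requires_grad_(False)

# Placeholder batches: stand-ins for inputs about hazardous vs. benign topics.
forget_batch = torch.randn(32, 64)
retain_batch = torch.randn(32, 64)

# Fixed random direction the forget-set representations are steered toward;
# the scale factor is an assumed hyperparameter.
control = torch.randn(64)
control = 20.0 * control / control.norm()

opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
alpha = 100.0  # assumed weight on preserving behavior for benign inputs

for step in range(200):
    # Scramble representations of hazardous content by pushing them toward
    # the fixed random direction...
    forget_loss = (model(forget_batch) - control).pow(2).mean()
    # ...while keeping representations of benign content close to the frozen
    # original, so the rest of the model stays relatively intact.
    retain_loss = (model(retain_batch) - frozen(retain_batch)).pow(2).mean()
    opt.zero_grad()
    (forget_loss + alpha * retain_loss).backward()
    opt.step()

On a real model, the two losses would be applied to hidden states at a chosen intermediate layer, and a question set like the one the study's experts wrote would then be used to check that the hazardous knowledge is gone while general capabilities survive.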


OpenAI Says Musk Agreed the ChatGPT Maker Should Become a For-Profit Company

TIME - Tech

Elon Musk supported making OpenAI a for-profit company, the ChatGPT maker said, attacking a lawsuit from the wealthy investor who has accused the artificial intelligence business of betraying its founding goal to benefit humanity as it pursued profits instead. In its first response since the Tesla CEO sued last week, OpenAI vowed to get the claim thrown out and released emails from Musk, escalating the feud between the San Francisco-based company and the billionaire who bankrolled its creation years ago. "The mission of OpenAI is to ensure AGI benefits all of humanity, which means both building safe and beneficial AGI and helping create broadly distributed benefits," OpenAI said in a blog post late Tuesday from five company executives and computer scientists, including CEO Sam Altman. "We intend to move to dismiss all of Elon's claims." AGI refers to artificial general intelligence: general-purpose AI systems that can perform just as well as -- or even better than -- humans in a wide variety of tasks.


What an American Approach to AI Regulation Should Look Like

TIME - Tech

As the world grapples with how to regulate artificial intelligence, Washington faces a unique dilemma: how to secure America's position as the global AI leader while guarding against AI's possible risks? Although any country seeking to regulate AI must balance regulation and innovation, this task is especially hard for the United States because we have more to lose. The United Kingdom, European Union, and China all have formidable AI companies, but U.S. firms dominate the field, propelled by our uniquely open innovation ecosystem. This dominance was on display recently, when OpenAI released Sora, a powerful new text-to-video platform, and Google introduced Gemini 1.5, its next-generation AI model that can absorb requests more than 30 times the size of those its predecessor could handle. If these trends continue, and AI proves the game-changer that many expect, surrendering U.S. leadership is not an option.


Why Elon Musk Is Suing OpenAI and Sam Altman

TIME - Tech

The fallout from the OpenAI board's failed attempt to fire CEO Sam Altman last November took an unexpected turn on Thursday, in events that could have a significant bearing on the future of the company and the wider world of artificial intelligence. Elon Musk filed a lawsuit against OpenAI in a San Francisco court, alleging that Altman and co-founder Greg Brockman have violated OpenAI's founding mission to develop AI safely and for the benefit of humanity. The billionaire owner of SpaceX and X (formerly Twitter) co-founded OpenAI alongside Altman and Brockman back in 2015, but stepped away from the company in 2018. Musk disagreed with Altman and Brockman's plan to turn OpenAI from a non-profit to a for-profit company, and before stepping down, reportedly mounted an unsuccessful bid to install himself as CEO. Musk is suing Altman, Brockman, and several of OpenAI's business entities for breach of contract, breach of fiduciary duty, and unfair business practices, seeking unspecified damages above $105,000.


Elon Musk Sues OpenAI, Sam Altman for Breaching Firm's Founding Mission

TIME - Tech

Elon Musk sued OpenAI and its Chief Executive Officer Sam Altman, alleging they violated the artificial intelligence startup's founding mission by putting profit ahead of benefiting humanity. The 52-year-old billionaire, who was a co-founder of OpenAI but no longer has a stake, said in a lawsuit filed late Thursday in San Francisco that the company's close relationship with Microsoft Corp. has undermined its original mission of creating open-source technology that wouldn't be subject to corporate priorities. Musk, who is also CEO of Tesla Inc., has been among the most outspoken about the dangers of AI and artificial general intelligence, or AGI. The release of OpenAI's ChatGPT more than a year ago popularized advances in AI technology and raised concerns about the risks surrounding the race to develop AGI, where computers are as smart as an average human. "To this day, OpenAI Inc.'s website continues to profess that its charter is to ensure that AGI 'benefits all of humanity,'" the lawsuit said.