Collaborating Authors

 TIME - Tech


When AI Automates Relationships

TIME - Tech

As we assess the risks of AI, we are overlooking a crucial threat. Critics commonly highlight three primary hazards--job disruption, bias, and surveillance/privacy. We hear that AI will cause many people to lose their jobs, from dermatologists to truck drivers to marketers. We hear how AI turns historical correlations into predictions that enforce inequality, so that sentencing algorithms predict more recidivism for Black men than for white men. We hear that apps help authorities watch people, such as Amazon tracking which drivers look away from the road.


Exclusive: Renowned Experts Pen Support for California's Landmark AI Safety Bill

TIME - Tech

On August 7, a group of renowned professors co-authored a letter urging key lawmakers to support a California AI bill as it enters the final stages of the state's legislative process. In a letter shared exclusively with TIME, Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig, and Stuart Russell argue that the next generation of AI systems poses "severe risks" if "developed without sufficient care and oversight," and describe the bill as the "bare minimum for effective regulation of this technology." The bill, titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced by Senator Scott Wiener in February of this year. It requires AI companies training large-scale models to conduct rigorous safety testing for potentially dangerous capabilities and to implement comprehensive safety measures to mitigate risks. "There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers," the four experts write.


Hong Kong Testing ChatGPT-Style Tool After OpenAI Took Steps to Block Access

TIME - Tech

Hong Kong's government is testing the city's own ChatGPT-style tool for its employees, with plans to eventually make it available to the public, its innovation minister said after OpenAI took extra steps to block access from the city and other unsupported regions. Secretary for Innovation, Technology and Industry Sun Dong said on a Saturday radio show that his bureau was trying out the artificial intelligence program, whose Chinese name translates to "document assistance application for civil servants," to further improve its capabilities. He plans to have it available for the rest of the government this year. The program was developed by a generative AI research and development center led by the Hong Kong University of Science and Technology in collaboration with several other universities. Sun said the model would provide functions like graphics and video design in the future.


What We Know About the New U.K. Government's Approach to AI

TIME - Tech

When the U.K. hosted the world's first AI Safety Summit last November, Rishi Sunak, the then Prime Minister, said the achievements at the event would "tip the balance in favor of humanity." At the two-day event, held in the cradle of modern computing, Bletchley Park, AI labs committed to share their models with governments before public release, and 29 countries pledged to collaborate on mitigating risks from artificial intelligence. It was part of the Sunak-led Conservative government's effort to position the U.K. as a leader in artificial intelligence governance, which also involved establishing the world's first AI Safety Institute--a government body tasked with evaluating models for potentially dangerous capabilities. While the U.S. and other allied nations subsequently set up their own similar institutes, the U.K. institute boasts 10 times the funding of its American counterpart. Eight months later, on July 5, after a landslide loss to the Labour Party, Sunak left office and the newly elected Prime Minister Keir Starmer began forming his new government.


Republicans' Vow to Repeal Biden's AI Executive Order Has Some Experts Worried

TIME - Tech

On July 8, Republicans adopted a new party platform ahead of a possible second term for former President Donald Trump. Buried among the updated policy positions on abortion, immigration, and crime, the document contains a provision that has some artificial intelligence experts worried: it vows to scrap President Joe Biden's executive order on AI. "We will repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology," the platform reads. Biden's executive order on AI, signed last October, sought to tackle threats the new technology could pose to civil rights, privacy, and national security, while promoting innovation, competition, and the use of AI for public services. It requires developers of the most powerful AI systems to share their safety test results with the U.S. government and calls on federal agencies to develop guidelines for the responsible use of AI in domains such as criminal justice and federal benefits programs. Carl Szabo, vice president of industry group NetChoice, which counts Google, Meta, and Amazon among its members, welcomes the possibility of the executive order's repeal, saying, "It would be good for Americans and innovators."


Microsoft Quits OpenAI Board Seat Amid Antitrust Scrutiny of AI Partnerships

TIME - Tech

Microsoft has relinquished its seat on the board of OpenAI, saying its participation is no longer needed because the ChatGPT maker has improved its governance since being roiled by boardroom chaos last year. In a Tuesday letter, Microsoft confirmed it was resigning, "effective immediately," from its role as an observer on the artificial intelligence company's board. "We appreciate the support shown by OpenAI leadership and the OpenAI board as we made this decision," the letter said. The surprise departure comes amid intensifying scrutiny from antitrust regulators of the powerful AI partnership. Microsoft has reportedly invested $13 billion in OpenAI.


A Driverless Car in China Hit a Pedestrian. Social Media Users Are Siding With the Car

TIME - Tech

A driverless ride-hailing car in China hit a pedestrian, and people on social media are taking the carmaker's side, because the person was reportedly crossing against the light. The operator of the vehicle, Chinese tech giant Baidu, said in a statement to Chinese media that the car began moving when the light turned green and had minor contact with the pedestrian. The person was taken to a hospital, where an examination found no obvious external injuries, Baidu said. The incident on Sunday in the city of Wuhan highlights the challenge that autonomous driving faces in complex situations, the Chinese financial news outlet Yicai said. It quoted an expert saying the technology may have limitations when dealing with unconventional behavior, such as other vehicles or pedestrians violating traffic laws.


Exclusive: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

TIME - Tech

A large majority of American voters are skeptical of the argument that the U.S. should race ahead to build ever more powerful artificial intelligence, unconstrained by domestic regulations, in an effort to compete with China, according to new polling shared exclusively with TIME. The findings indicate that American voters disagree with a common narrative advanced by the tech industry, in which CEOs and lobbyists have repeatedly argued that the U.S. must tread carefully with AI regulation in order not to hand the advantage to their geopolitical rival. And they reveal a startling level of bipartisan consensus on AI policy, with both Republicans and Democrats in support of the government placing some limits on AI development in favor of safety and national security. According to the poll, 75% of Democrats and 75% of Republicans believe that "taking a careful controlled approach" to AI--by preventing the release of tools that terrorists and foreign adversaries could use against the U.S.--is preferable to "moving forward on AI as fast as possible to be the first country to get extremely powerful AI." A majority of voters support more stringent security practices at AI companies and are worried about the risk of China stealing their most powerful models, the poll shows.


'We're Living in a Nightmare:' Inside the Health Crisis of a Texas Bitcoin Town

TIME - Tech

On an evening in December 2023, 43-year-old small business owner Sarah Rosenkranz collapsed in her home in Granbury, Texas, and was rushed to the emergency room. Her heart pounded at 200 beats per minute; her blood pressure spiked into hypertensive crisis; her skull throbbed. "It felt like my head was in a pressure vise being crushed," she says. "That pain was worse than childbirth." Rosenkranz's migraine lasted for five days. Doctors gave her several rounds of IV medication and painkiller shots, but nothing seemed to knock down the pain, she says. This was odd, especially because local doctors were similarly vexed when Indigo, Rosenkranz's 5-year-old daughter, was taken to urgent care earlier that year, screaming that she felt a "red beam behind her eardrums." It didn't occur to Sarah that these symptoms could be linked. But in January 2024, she walked into a town hall in Granbury and found a room full of people worn thin from strange, debilitating illnesses.


Meta Has Been Ordered to Stop Mining Brazilian Personal Data to Train Its AI

TIME - Tech

Brazil's national data protection authority has ordered Meta to halt the use of data originating from the country to train its AI models. Meta's current privacy policy enables the company to use data from its platforms, including Facebook, Instagram, and WhatsApp, to train its artificial intelligence models. However, that practice will no longer be permitted in Brazil after its national data protection authority gave the company five days to change its policy on Tuesday. Brazil said the company will need to confirm it has stopped using the data or face a daily non-compliance fine of 50,000 Brazilian reais (almost $9,000), citing "the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects." Meta said it was "disappointed" with the Brazilian authority's decision, calling it a "step backward for innovation."