TIME - Tech
The Economist Breaking Ranks to Warn of AI's Transformative Power
Technologists tend to predict that the economic impacts of their creations will be unprecedented--and this is especially true when it comes to artificial intelligence. Last year, Elon Musk predicted that continued advances in AI would render human labor obsolete. OpenAI CEO Sam Altman has written that AI will inevitably continue the shift in economic power from labor to capital and create "phenomenal wealth." Jensen Huang, CEO of semiconductor design firm Nvidia, has compared AI's development and deployment to a "new industrial revolution." But while the technologists are bullish on the economic impacts of AI, members of that other technocratic priesthood with profound influence over public life--the economists--are not.
U.K. to Criminalize Creating Sexually Explicit Deepfake Images
The U.K. will criminalize the creation of sexually explicit deepfake images as part of plans to tackle violence against women. People convicted of creating such deepfakes without consent, even if they don't intend to share the images, will face prosecution and an unlimited fine under a new law, the Ministry of Justice said in a statement. Sharing the images could also result in jail. Rapid advances in artificial intelligence have fueled the creation and dissemination of deepfake images and videos. The U.K. has classified violence against women and girls as a national threat, which means the police must prioritize tackling it, and this law is designed to help them clamp down on a practice that is increasingly used to humiliate or distress victims.
The AI That Could Heal a Divided Internet
In the 1990s and early 2000s, technologists made the world a grand promise: new communications technologies would strengthen democracy, undermine authoritarianism, and lead to a new era of human flourishing. But few people today would agree that the internet has lived up to that lofty goal. On social media platforms, content tends to be ranked by how much engagement it receives, and over the last two decades politics, the media, and culture have all been reshaped to meet a single, overriding incentive: posts that provoke an emotional response often rise to the top. Efforts to improve the health of online spaces have long focused on content moderation, the practice of detecting and removing bad content.
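To make that incentive concrete, here is a minimal sketch of engagement-based ranking. The posts and weights are invented for illustration; real platforms use far more signals and tune them continuously. It shows how sorting a feed by an engagement score naturally surfaces the most provocative content.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights for illustration only; shares and comments
    # count for more because they signal stronger reactions.
    return 1.0 * post.likes + 5.0 * post.shares + 3.0 * post.comments

feed = [
    Post("measured policy explainer", likes=120, shares=4, comments=10),
    Post("outrage-provoking hot take", likes=80, shares=60, comments=90),
]

# Sorting by engagement puts the emotionally charged post on top,
# even though the explainer received more likes.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.text)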
The Huge Risks From AI In an Election Year
On the eve of New Hampshire's presidential primary, a flood of robocalls exhorted Democratic voters to sit out the write-in campaign supporting President Joe Biden. An AI-generated voice on the line matched Biden's uncanny cadence and signature catchphrase ("malarkey!"). From that call, to fake creations envisioning a cascade of calamities under Biden's watch, to AI deepfakes of a Slovak candidate for national leadership pondering vote rigging and raising beer prices, AI is making its mark on elections worldwide. Against this backdrop, governments and several tech companies are taking some steps to mitigate risks--European lawmakers just approved a watershed AI law, and as recently as February tech companies signed a pledge at the Munich Security Conference. But much more needs to be done to protect American democracy.
Exclusive: Google Workers Revolt Over $1.2 Billion Contract With Israel
In midtown Manhattan on March 4, Google's managing director for Israel, Barak Regev, was addressing a conference promoting the Israeli tech industry when a member of the audience stood up in protest. "I am a Google Cloud software engineer, and I refuse to build technology that powers genocide, apartheid, or surveillance," shouted the protester, wearing an orange t-shirt emblazoned with a white Google logo. The Google worker, a 23-year-old software engineer named Eddie Hatfield, was booed by the audience and quickly bundled out of the room, a video of the event shows. After a pause, Regev addressed the act of protest. "One of the privileges of working in a company which represents democratic values is giving space for different opinions," he told the crowd.
China Is Using AI to Sow Disinformation and Stoke Discord Across Asia and the U.S., Microsoft Reports
Faking a political endorsement in Taiwan ahead of its crucial January election, sharing memes to amplify outrage over Japan's disposal of nuclear wastewater, and spreading conspiracy theories claiming the U.S. government was behind Hawaii's wildfires and Kentucky's train derailment last year. These are just some of the ways that China's influence operations have ramped up their use of artificial intelligence to sow disinformation and stoke discord worldwide over the last seven months, according to a new report released Friday by Microsoft Threat Intelligence. Microsoft has observed notable trends from state-backed actors, the report said, "that demonstrate not only doubling down on familiar targets, but also attempts to use more sophisticated influence techniques to achieve their goals." In particular, Chinese influence actors "experimented with new media" and "continued to refine AI-generated or AI-enhanced content." Among the operations highlighted in the report was a "notable uptick in content featuring Taiwanese political figures ahead of the January 13 presidential and legislative elections."
Google Considers Charging for AI-Powered Search Results, New Report Says
Google is considering charging for new premium artificial intelligence-powered search features, according to a Financial Times report that cites three people familiar with the matter. Options under consideration include adding certain AI-powered search features to the company's premium subscription services, which already offer its Gemini AI assistant in Gmail and Google Docs, the newspaper reported. Google's core search engine would remain free, and ads would continue to appear even for subscribers. In response to an inquiry about the report, a Google spokesperson tells TIME in an email: "We're not working on or considering an ad-free search experience. As we've done many times before, we'll continue to build new premium capabilities and services to enhance our subscription offerings across Google. We don't have anything to announce right now." "For years, we've been reinventing search to help people access information in the way that's most natural to them," the statement also said.
Side Hustle or Scam? What to Know About Data Annotation Work
On TikTok, Reddit, and elsewhere, posts are popping up from users claiming they're making $20 per hour--or more--completing small tasks in their spare time on sites such as DataAnnotation.tech. As companies have rushed to build AI models, the demand for "data annotation" and "data labeling" work has increased. Workers complete tasks such as writing and coding, and tech companies use the results to develop artificial intelligence systems, which are trained on large numbers of example data points. Some models require all of their input data to be labeled by humans, a technique referred to as "supervised learning." And while "unsupervised learning," in which AI models are fed unlabeled data, is becoming increasingly popular, AI systems trained using unsupervised learning still often require a final step involving data labeled by humans.
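For a concrete picture of what those labels are for, below is a minimal sketch of supervised learning using scikit-learn, with a handful of invented, human-labeled text snippets standing in for annotated data. The model is fit to the human-supplied labels and can then classify text nobody has annotated.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical human-annotated examples: each snippet carries a label
# that a data-annotation worker might have assigned.
texts = [
    "The battery lasts all day and charges quickly.",
    "Screen cracked within a week, very disappointed.",
    "Excellent build quality for the price.",
    "Stopped working after two days.",
]
labels = ["positive", "negative", "positive", "negative"]

# Turn the raw text into numeric features, then fit a classifier
# to the human-supplied labels -- this is the "supervised" step.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(features, labels)

# The trained model can now label text no human has annotated.
print(model.predict(vectorizer.transform(["Works great, highly recommend."])))

In unsupervised learning, by contrast, the labels list would not exist; the model would have to find structure in the raw text on its own.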
U.S., U.K. Announce Partnership to Safety Test AI Models
The U.K. and U.S. governments announced Monday they will work together in safety testing the most powerful artificial intelligence models. An agreement, signed by Michelle Donelan, the U.K. Secretary of State for Science, Innovation and Technology, and U.S. Secretary of Commerce Gina Raimondo, sets out a plan for collaboration between the two governments. "I think of [the agreement] as marking the next chapter in our journey on AI safety, working hand in glove with the United States government," Donelan told TIME in an interview at the British Embassy in Washington, D.C. on Monday. "I see the role of the United States and the U.K. as being the real driving force in what will become a network of institutes eventually." The U.K. and U.S. AI Safety Institutes were established just one day apart, around the inaugural AI Safety Summit hosted by the U.K. government at Bletchley Park in November 2023.
Retired Admiral William McRaven on Why U.S. Leadership Matters
Retired Navy Adm. William McRaven's nearly 40-year career in the U.S. military spanned everything from deployments as a Navy SEAL hunting down high-value targets overseas, to commanding U.S. Special Operations forces in Iraq and Afghanistan, to advising Presidents George W. Bush and Barack Obama. But McRaven is best known for planning and overseeing the 2011 raid that ended with the death of Osama bin Laden. In December of that year, McRaven was named a runner-up for TIME's Person of the Year for his role in the operation. "There is nobody in the U.S. government that thinks we can kill our way to victory, certainly not the special-operations guys," he told TIME in 2011, "but what happens is, by capturing and killing some of these high-value targets, we buy space and time for the rest of the government to work." After retiring from the U.S. military in 2014, McRaven served as chancellor of the University of Texas System and has written several books on leadership.