TIME - Tech
How Digital Technology Can Help the U.N. Achieve Its 2030 Agenda
As world leaders gather in New York City for the United Nations General Assembly, there's a lot to get done, with just six years left to achieve the bold ambitions laid out in the world's 2030 agenda. When world governments agreed to the 2030 plan back in 2015, a decade and a half seemed like plenty of time to achieve the 17 Sustainable Development Goals (SDGs) designed to create a more prosperous, safe and fair global society. While amazing progress has been made, we are in danger of falling short. I believe the U.N.'s goals can be attained through a collaborative commitment to make digital networks available to everybody in the world. Mobility, broadband and the cloud are the infrastructure of 21st century life, and everybody should have access to them.
India Is Emerging as a Key Player in the Global AI Race
As Asia's richest man, Mukesh Ambani, addressed his shareholders during a much-anticipated yearly address last Thursday, he also unveiled "JioBrain," a suite of artificial intelligence (AI) tools and applications that he says will transform a spate of businesses in energy, textiles, telecommunications and more that form his multinational conglomerate, Reliance Industries. "By perfecting JioBrain within Reliance, we will create a powerful AI service platform that we can offer to other enterprises as well," Ambani said during his speech. The Reliance Chairman's latest offering comes as India emerges as a crucial player in the global AI ecosystem, boasting a high-powered IT industry worth $250 billion that serves many of the world's banks, manufacturers and firms. As the world's most populous country, India also has a robust workforce, with nearly 5 million programmers at a time when AI talent is in short supply globally. Analysts predict that India's AI services could be worth $17 billion by 2027, according to a recent report by Nasscom and BCG. Puneet Chandok, the President of Microsoft India & South Asia, points to research that finds India has one of the highest AI adoption rates among knowledge workers, with 92% using generative AI at work--significantly higher than the global average of 75%.
How We Chose the TIME100 Most Influential People in AI 2024
As we were finishing this year's TIME100 AI, I had two conversations, with two very different TIME100 AI honorees, that made clear the stakes of this technological transformation. Sundar Pichai, who joined Google in 2004 and became CEO of the world's fourth most valuable company nine years ago, told me that introducing the company's billions of users to artificial intelligence through Google's products amounts to "one of the biggest improvements we've done in 20 years." Speaking that same day, Meredith Whittaker, a former Google employee and critic of the company who, as the president of Signal, has become one of the world's most influential advocates for privacy, expressed alarm at the dangers posed by the fact that so much of the AI revolution depends on the infrastructure and decisions of only a handful of big players in tech. Our purpose in creating the TIME100 AI is to put leaders like Pichai and Whittaker in dialogue and to open up their views to TIME's readers. That is why we are excited to share with you the second edition of the TIME100 AI.
AI May Not Steal Many Jobs After All. It May Just Make Workers More Efficient
Imagine a customer-service center that speaks your language, no matter what it is. Alorica, a company in Irvine, California, that runs customer-service centers around the world, has introduced an artificial intelligence translation tool that lets its representatives talk with customers who speak 200 different languages and 75 dialects. So an Alorica representative who speaks, say, only Spanish can field a complaint about a balky printer or an incorrect bank statement from a Cantonese speaker in Hong Kong. Alorica wouldn't need to hire a rep who speaks Cantonese. Such is the power of AI.
What Google's Antitrust Defeat Means for AI
Google has officially been named a monopoly. On Aug. 5, a federal judge ruled that the tech giant illegally used its market power to harm rival search engines, marking the first antitrust defeat for a major internet platform in more than 20 years--and thereby calling into question the business practices of Silicon Valley's most powerful companies. Many experts have speculated that the landmark decision will make judges more receptive to antitrust action in other ongoing cases against the Big Tech platforms, especially with regard to the burgeoning AI industry. Today, the AI ecosystem is dominated by many of the same companies that the government is challenging in court, and those companies are using the same tactics to entrench their power in AI markets. Judge Amit Mehta's ruling in the Google case centered on the massive sums of money the company paid firms like Apple and Samsung to make its search engine the default on their smartphones and browsers.
Exclusive: New Research Finds Stark Global Divide in Ownership of Powerful AI Chips
When we think of the "cloud," we often imagine data floating invisibly in the ether. But the reality is far more tangible: the cloud is located in huge buildings called data centers, filled with powerful, energy-hungry computer chips. Those chips, particularly graphics processing units (GPUs), have become a critical piece of infrastructure for the world of AI, as they are required to build and run powerful chatbots like ChatGPT. As the number of things you can do with AI grows, so does the geopolitical importance of high-end chips--and where they are located in the world. The U.S. and China are competing to amass stockpiles, with Washington enacting sanctions aimed at preventing Beijing from buying the most cutting-edge varieties.
California's Draft AI Law Would Protect More than Just People
Few places in the world have more to gain from a flourishing AI industry than California. Few also have more to lose if the public's trust in the industry were suddenly shattered. In May, the California Senate passed SB 1047, a piece of AI safety legislation, by a vote of 32 to 1. The bill aims to ensure the safe development of large-scale AI systems through clear, predictable, common-sense safety standards. It is now slated for a state assembly vote this week and, if signed into law by Governor Gavin Newsom, would represent a significant step in protecting California citizens and the state's burgeoning AI industry from malicious use. Late Monday, Elon Musk shocked many by announcing his support for the bill in a post on X. "This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill," he wrote.
How 'Friendshoring' Made Southeast Asia Pivotal to the AI Revolution
Employees entering Intel's advanced PG8 foundry on the Malaysian island of Penang must take elaborate safety precautions. First, staff don blue shoe coverings, followed by a hairnet, plastic hood, facemask, bunny suit, latex gloves, and eye goggles. Finally, plastic boots are placed over those already-covered shoes, with a special strap tucked into the wearer's socks to "ground" them. For it's not just a stray hair or skin flake that can be deadly to Intel's latest artificial intelligence (AI) semiconductor chips--even the static shock from an unsuspecting pinky can measure 10,000 volts and fry their delicate circuitry. "Static is a unit killer," says Phynthamilkumaran Siea Dass, Intel's director of assembly test manufacturing in Penang, as he leads TIME through interlocked doors into PG8's cleanroom.
Exclusive: Workers at Google DeepMind Push Company to Drop Military Contracts
Earlier this year, nearly 200 workers inside Google DeepMind, the company's AI division, signed a letter calling on the tech giant to drop its contracts with military organizations, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google's own AI rules. The letter is a sign of a growing dispute within Google between at least some workers in its AI division--which has pledged to never work on military technology--and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries, including those of Israel and the United States. The signatures represent some 5% of DeepMind's overall headcount--a small portion, to be sure, but a significant level of worker unease in an industry where top machine learning talent is in high demand. The DeepMind letter, dated May 16 of this year, begins by stating that workers are "concerned by recent reports of Google's contracts with military organizations."
How Will.i.am Is Trying to Reinvent Radio With AI
Will.i.am has been embracing innovative technology for years. Now he is using artificial intelligence in an effort to transform how we listen to the radio. The musician, entrepreneur and tech investor has launched RAiDiO.FYI, a set of interactive radio stations themed around topics like sport, pop culture, and politics. Each station is fundamentally interactive: tune in and you'll be welcomed by name by an AI host "live from the ether," the Black Eyed Peas frontman tells TIME. Hosts talk about their given topic before playing some music.