OpenAI's New Ad Shows 'Reasoning' AI Making Basic Errors

TIME - Tech

OpenAI released its most advanced AI model yet, called o1, for paying users on Thursday. The launch kicked off the company's "12 Days of OpenAI" event--a dozen consecutive releases to celebrate the holiday season. OpenAI has touted o1's "complex reasoning" capabilities, and announced that unlimited access to the model would cost $200 per month. In the video the company released to show off the model's strengths, a user uploads a picture of a wooden birdhouse and asks the model for advice on how to build a similar one. The model "thinks" for a short period and then spits out what on the surface appears to be a comprehensive set of instructions. Close examination reveals the instructions to be almost useless.


What Donald Trump's Win Means For AI

TIME - Tech

When Donald Trump was last President, ChatGPT had not yet been launched. Now, as he prepares to return to the White House after defeating Vice President Kamala Harris in the 2024 election, the artificial intelligence landscape looks quite different. AI systems are advancing so rapidly that some leading executives of AI companies, such as Anthropic CEO Dario Amodei and Elon Musk, the Tesla CEO and a prominent Trump backer, believe AI may become smarter than humans by 2026. Others offer a more general timeframe. In an essay published in September, OpenAI CEO Sam Altman said, "It is possible that we will have superintelligence in a few thousand days," but also noted that "it may take longer."


The Gap Between Open and Closed AI Models Might Be Shrinking. Here's Why That Matters

TIME - Tech

Today's best AI models, like OpenAI's ChatGPT and Anthropic's Claude, come with conditions: their creators control the terms on which they are accessed, to prevent them from being used in harmful ways. This is in contrast with 'open' models, which can be downloaded, modified, and used by anyone for almost any purpose. A new report by the non-profit research organization Epoch AI found that the open models available today are about a year behind the top closed models. "The best open model today is on par with closed models in performance, but with a lag of about one year," says Ben Cottier, lead researcher on the report. Meta's Llama 3.1 405B, an open model released in July, took about 16 months to match the capabilities of the first version of GPT-4.


How AI Is Being Used to Respond to Natural Disasters in Cities

TIME - Tech

The number of people living in urban areas has tripled in the last 50 years, meaning that when a major natural disaster such as an earthquake strikes a city, more lives are in danger. Meanwhile, the strength and frequency of extreme weather events have increased--a trend set to continue as the climate warms. That is spurring efforts around the world to develop a new generation of earthquake monitoring and climate forecasting systems to make detecting and responding to disasters quicker, cheaper, and more accurate than ever. On Nov. 6, at the Barcelona Supercomputing Center in Spain, the Global Initiative on Resilience to Natural Hazards through AI Solutions will meet for the first time. The new United Nations initiative aims to guide governments, organizations, and communities in using AI for disaster management.


Inside the New Nonprofit AI Initiatives Seeking to Aid Teachers and Farmers in Rural Africa

TIME - Tech

Over the past year, rural farmers in Malawi have been seeking advice about their crops and animals from a generative AI chatbot. These farmers ask questions in Chichewa, their native tongue, and the app, Ulangizi, responds in kind, using conversational language based on information taken from the government's agricultural manual. "In the past we could wait for days for agriculture extension workers to come and address whatever problems we had on our farms," Maron Galeta, a Malawian farmer, told Bloomberg. "Just a touch of a button we have all the information we need." The nonprofit behind the app, Opportunity International, hopes to bring similar AI-based solutions to other impoverished communities.


How We Picked the Best Inventions of 2024

TIME - Tech

Every year for over two decades, TIME editors have highlighted the most impactful new products and ideas in TIME's Best Inventions issue. To compile this year's list, we solicited nominations from TIME's editors and correspondents around the world, and through an online application process, paying special attention to growing fields--such as health care, AI, and green energy. We then evaluated each contender on a number of key factors, including originality, efficacy, ambition, and impact. The result is a list of 200 groundbreaking inventions (and 50 special mention inventions)--including the world's largest computer chip, a humanoid robot joining the workforce, and a bioluminescent houseplant--that are changing how we live, work, play, and think about what's possible.


A Robot for Lash Extensions

TIME - Tech

Getting lash extensions can be an uncomfortable process, involving lying on a bed for two hours with tape under your eyes. Chief technology officer Nathan Harding co-founded Luum Lash when he realized the process could be improved with robots. Luum swaps sharp application instruments for soft-tipped plastic tools, uses a safety mechanism to detach instruments from the machine before they can poke a client, and employs machine learning to apply lashes more efficiently and precisely. An appointment that usually takes two to three hours takes one and a half with Luum. Luum lash artists, primarily working from the Lash Lab in Oakland, Calif., can see "up to four times the clients" daily that they could without the robot, says CEO Jo Lawson.


Some Top AI Labs Have 'Very Weak' Risk Management, Study Finds

TIME - Tech

Some of the world's top AI labs suffer from inadequate safety measures--and the worst offender is Elon Musk's xAI, according to a new study. The French nonprofit SaferAI released its first ratings Wednesday evaluating the risk-management practices of top AI companies. Siméon Campos, the founder of SaferAI, says the purpose of the ratings is to develop a clear standard for how AI companies are handling risk as these nascent systems grow in power and usage. AI systems have already shown their ability to anonymously hack websites or help people develop bioweapons. Governments have been slow to put frameworks in place: a California bill to regulate the AI industry was just vetoed by Governor Gavin Newsom.


Gavin Newsom Blocks Contentious AI Safety Bill in California

TIME - Tech

California Governor Gavin Newsom has vetoed what would have become one of the most comprehensive policies governing the safety of artificial intelligence in the U.S. The bill would've been among the first to hold AI developers accountable for any severe harm caused by their technologies. It drew fierce criticism from some prominent Democrats and major tech firms, including ChatGPT creator OpenAI and venture capital firm Andreessen Horowitz, which warned it could stall innovation in the state. Newsom described the legislation as "well-intentioned" but said in a statement that it would've applied "stringent standards to even the most basic functions." Regulation should be based on "empirical evidence and science," he said, pointing to his own executive order on AI and other bills he's signed that regulate the technology around known risks such as deepfakes. The debate around California's SB 1047 bill highlights the challenge that lawmakers around the world are facing in controlling the risks of AI while also supporting the emerging technology.


OpenAI Chief Technology Officer Mira Murati and Two Other Top Execs Leave Company

TIME - Tech

A high-ranking executive at OpenAI who served for a few days as its interim CEO during a period of turmoil last year said she's leaving the artificial intelligence company. Mira Murati, OpenAI's chief technology officer, said in a written statement Wednesday that, after much reflection, she has "made the difficult decision to leave OpenAI." "I'm stepping away because I want to create the time and space to do my own exploration," she said. Two other top executives are also on their way out, CEO Sam Altman announced later Wednesday. The decisions by Murati, OpenAI's Chief Research Officer Bob McGrew, and another research leader, Barret Zoph, were made "independently of each other and amicably," Altman said in a note to employees that he shared on social media.