The European Union today agreed on the details of the AI Act, a far-reaching set of rules for the people building and using artificial intelligence. It's a milestone law that, lawmakers hope, will create a blueprint for the rest of the world. After months of debate about how to regulate companies like OpenAI, lawmakers from the EU's three branches of government--the Parliament, Council and Commission--spent more than 36 hours in total--thrashing out the new legislation between Wednesday afternoon and Friday evening. Lawmakers were under pressure to strike a deal before the EU election campaign starts in the new year. "The EU AI Act is a global first," said European Commission President Ursula von der Leyen on X. "[It is] a unique legal framework for the development of AI you can trust.
Steven Johnson has written 13 books, on topics ranging from a London cholera outbreak to the value of video games. He's been a television presenter and a podcast host. He's a keynote speaker who doesn't have to call himself that in his LinkedIn profile. And for over a year now, he's been a full-time employee of Google, a status that's clear when he badges me into the search giant's Chelsea offices in New York to show me what his team has been creating. It's called NotebookLM, and the easiest way to think of it is as an AI collaborator with access to all your materials that sits on your metaphorical shoulder to guide you through your project.
The history of artificial intelligence has been punctuated by periods of so-called "AI winter," when the technology seemed to meet a dead end and funding dried up. Each one has been accompanied by proclamations that making machines truly intelligent is just too darned hard for humans to figure out. Google's release of Gemini, claimed to be a fundamentally new kind of AI model and the company's most powerful to date, suggests that a new AI winter isn't coming anytime soon. In fact, although the 12 months since ChatGPT launched have been a banner year for AI, there is good reason to think that the current AI boom is only getting started. OpenAI didn't have high expectations when it launched the "low key research preview" called ChatGPT in November 2022.
The biggest fight of the generative AI revolution is headed to the courtroom--and no, it's not about the latest boardroom drama at OpenAI. Book authors, artists, and coders are challenging the practice of teaching AI models to replicate their skills using their own work as a training manual. But as image generators and other tools have proven able to impressively mimic works in their training data, and the scale and value of training data has become clear, creators are increasingly crying foul. At LiveWIRED in San Francisco, the 30th anniversary event for WIRED magazine, two leaders of that nascent resistance sparred with a defender of the rights of AI companies to develop the technology unencumbered. From left to right: WIRED senior writer Kate Knibbs discussed creators' rights and AI with Mike Masnick, Mary Rasenberger, and Matthew Butterick at LiveWIRED in San Francisco,.
OpenAI cofounder Reid Hoffman says the company is better off with Sam Altman restored as CEO, and he was shocked that board members he used to serve alongside would think otherwise. Hoffman, who left OpenAI's board in March after cofounding the competitor Inflection AI, offered his first comments on the recent chaos at OpenAI on stage at WIRED's LiveWIRED 30th anniversary event in San Francisco on Tuesday. "Surprise would be an understatement," he said about his reaction to learning of Altman's firing. After employees and investors revolted, Altman got his job back days later. "We are in a much better place for the world to have Sam as CEO. He's very competent in that," said Hoffman, who with Elon Musk and other wealthy tech luminaries formed the earliest vision for OpenAI when it was founded in 2015.
Google just launched its Gemini AI model. Want to try it out for free? A version of the model, called Gemini Pro, is available inside of the Bard chatbot right now. Also, anyone with a Pixel 8 Pro can use a version of Gemini in their AI-suggested text replies with WhatsApp now and with Gboard in the future. Only a sliver of Gemini is currently available.
Demis Hassabis has never been shy about proclaiming big leaps in artificial intelligence. Most notably, he became famous in 2016 after a bot called AlphaGo taught itself to play the complex and subtle board game Go with superhuman skill and ingenuity. Today, Hassabis says his team at Google has made a bigger step forward--for him, the company, and hopefully the wider field of AI. Gemini, the AI model announced by Google today, he says, opens up an untrodden path in AI that could lead to major new breakthroughs. "As a neuroscientist as well as a computer scientist, I've wanted for years to try and create a kind of new generation of AI models that are inspired by the way we interact and understand the world, through all our senses," Hassabis told WIRED ahead of the announcement today.
Increasing talk of artificial intelligence developing with potentially dangerous speed is hardly slowing things down. A year after OpenAI launched ChatGPT and triggered a new race to develop AI technology, Google today revealed an AI project intended to reestablish the search giant as the world leader in AI. Gemini, a new type of AI model that can work with text, images, and video, could be the most important algorithm in Google's history after PageRank, which vaulted the search engine into the public psyche and created a corporate giant. An initial version of Gemini starts to roll out today inside Google's chatbot Bard for the English language setting. It will be available in more than 170 countries and territories.
The ChatGPT developer's new board of directors and its briefly fired but now-restored CEO, Sam Altman, said last week that they're trying to fix the unusual corporate structure that allowed four board members to trigger a near-death experience for the company. The startup was founded in 2015 as a nonprofit, but it develops AI inside a capped-profit subsidiary answerable to the nonprofit's board, which is charged with ensuring that the technology is "broadly beneficial" to humanity. To stabilize this unusual structure, OpenAI could take pointers from longer-lived companies with a similar arrangement--including introducing a second board to help balance its founding mission with its for-profit pursuit of returns for investors. OpenAI deferred comment for this story to new board chair Bret Taylor. The veteran tech executive told WIRED in a statement that the board is focused on overseeing an independent review of the recent crisis and enhancing governance.
When the board of OpenAI suddenly fired the company's CEO last month, it sparked speculation that board members were rattled by the breakneck pace of progress in artificial intelligence and the possible risks of seeking to commercialize the technology too quickly. Robust Intelligence, a startup founded in 2020 to develop ways to protect AI systems from attack, says that some existing risks need more attention. Working with researchers from Yale University, Robust Intelligence has developed a systematic way to probe large language models (LLMs), including OpenAI's prized GPT-4 asset, using "adversarial" AI models to discover "jailbreak" prompts that cause the language models to misbehave. While the drama at OpenAI was unfolding, the researchers warned OpenAI of the vulnerability. They say they have yet to receive a response.