Natural Language
Chatbots will be able to teach children TWICE as fast as teachers in the next 10 years, says the 'godfather of AI'
Chatbots will be able to teach children more than twice as fast as teachers can within the next decade, the so-called godfather of AI has predicted. Geoffrey Hinton, who won a Nobel Prize for his work on the technology, also claimed AI personal tutors would 'be much more efficient and less boring'. Speaking at Gitex Europe, the British computer scientist said: 'It's not there yet, but it's coming, and so we'll get much better education at many levels.' AI personal tutors are already being trialled in UK schools, with the technology now able to talk directly to the student and adapt lesson plans to their knowledge level. The government has already funnelled millions of pounds into AI education initiatives – though it has claimed the technology will 'absolutely not' replace teachers.
Amazon's latest AI shopping feature produces quick audio product summaries
Amazon is aiming to make shopping just a bit easier. This week, Amazon launched a new generative AI feature that produces short audio summaries detailing everything you need to know about a product. The audio descriptions, which Amazon is calling "Hear the highlights", are crafted from on-page product summaries, reviews, and information from other websites. The summaries are currently available only on a limited number of items and only for US customers, and can be accessed in the Amazon app.
On the Benefits of Public Representations for Private Transfer Learning under Distribution Shift
Public pretraining is a promising approach to improve differentially private model training. However, recent work has noted that many positive research results studying this paradigm only consider in-distribution tasks, and may not apply to settings where there is distribution shift between the pretraining and finetuning data--a scenario that is likely when finetuning on private tasks due to the sensitive nature of the data. In this work, we show empirically across three tasks that even in settings with large distribution shift, where both zero-shot performance from public data and training from scratch with private data give unusably weak results, public features can in fact improve private training accuracy by up to 67% over private training from scratch. We provide a theoretical explanation for this phenomenon, showing that if the public and private data share a low-dimensional representation, public representations can improve the sample complexity of private training even if it is impossible to learn the private task from the public data alone. Altogether, our results provide evidence that public data can indeed make private training practical in realistic settings of extreme distribution shift.
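The mechanism the abstract describes can be illustrated with a toy NumPy sketch (not the paper's actual method or datasets; every name and parameter below is illustrative). Labels for a synthetic "private" task depend only on a low-dimensional representation, which we assume public pretraining has already recovered. A linear head is then trained with clipped, noised gradients in the style of DP-SGD, once on the low-dimensional public features and once from scratch on the raw high-dimensional inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: public and private tasks share a low-dimensional
# representation (the projection P), even though their label functions differ.
d, k, n_priv = 100, 5, 500
P = rng.standard_normal((d, k)) / np.sqrt(d)   # shared representation

# Private task: labels depend only on the k-dim features x @ P.
w_true = rng.standard_normal(k)
X_priv = rng.standard_normal((n_priv, d))
y_priv = np.sign(X_priv @ P @ w_true)

def dp_train_linear_head(Z, y, clip=1.0, noise_mult=1.0, lr=0.1, steps=200):
    """DP-SGD-style training of a linear head on fixed features Z:
    per-example logistic-loss gradients are clipped to `clip`, then Gaussian
    noise with std = noise_mult * clip is added to the summed gradient."""
    n, dim = Z.shape
    w = np.zeros(dim)
    for _ in range(steps):
        margins = y * (Z @ w)
        g = (-y / (1.0 + np.exp(margins)))[:, None] * Z   # per-example grads
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g / np.maximum(1.0, norms / clip)              # clip to `clip`
        noisy_sum = g.sum(axis=0) + rng.normal(0.0, noise_mult * clip, size=dim)
        w -= lr * noisy_sum / n
    return w

# Private training in the k-dim public feature space (the easier problem) ...
w_feat = dp_train_linear_head(X_priv @ P, y_priv)
acc_feat = np.mean(np.sign(X_priv @ P @ w_feat) == y_priv)

# ... versus private training from scratch in the full d-dim input space,
# where the noise is spread across many more coordinates.
w_raw = dp_train_linear_head(X_priv, y_priv)
acc_raw = np.mean(np.sign(X_priv @ w_raw) == y_priv)
print(f"accuracy with public features: {acc_feat:.2f}, from scratch: {acc_raw:.2f}")
```

The point of the sketch is structural, matching the abstract's theory: with a shared representation, the private learner only has to fit a k-dimensional head, so the noise added for privacy is paid in k dimensions rather than d, improving sample complexity even though the public data alone could not solve the private task.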
iPhone design guru and OpenAI chief promise an AI device revolution
Everything over the last 30 years, according to Sir Jony Ive, has led to this moment: a partnership between the iPhone designer and the developer of ChatGPT. Ive has sold his hardware startup, io, to OpenAI and will take on creative and design leadership across the merged businesses. "I have a growing sense that everything I have learned over the last 30 years has led me to this place, to this moment," he says in a video announcing the $6.4bn (£4.8bn) deal. The main aim will be to move on from Ive's signature achievement of designing Apple's most successful product, the iPhone, as well as the iPod, iPad and Apple Watch. The British-born designer has already developed a prototype io device, and one of its users is OpenAI's chief executive, Sam Altman.
AI Is Eating Data Center Power Demand--and It's Only Getting Worse
AI's energy use already represents as much as 20 percent of global data-center power demand, research published Thursday in the journal Joule shows. That demand from AI, the research states, could double by the end of this year, comprising nearly half of total data-center electricity consumption worldwide, excluding the electricity used for bitcoin mining. The new research is published in a commentary by Alex de Vries-Gao, the founder of Digiconomist, a research company that evaluates the environmental impact of technology. De Vries-Gao started Digiconomist in the late 2010s to explore the impact that bitcoin mining, another extremely energy-intensive activity, would have on the environment. Looking at AI, he says, has grown more urgent over the past few years because of the widespread adoption of ChatGPT and other large language models that use massive amounts of energy. According to his research, worldwide AI energy demand is now set to surpass demand from bitcoin mining by the end of this year.
Google made an AI content detector - join the waitlist to try it
Fierce competition among some of the world's biggest tech companies has led to a profusion of AI tools that can generate humanlike prose and uncannily realistic images, audio, and video. While those companies promise productivity gains and an AI-powered creativity revolution, fears have also started to swirl around the possibility of an internet that's so thoroughly muddled by AI-generated content and misinformation that it's impossible to tell the real from the fake. Many leading AI developers have, in response, ramped up their efforts to promote AI transparency and detectability. Most recently, Google announced the launch of its SynthID Detector, a platform that can quickly spot AI-generated content created by one of the company's generative models: Gemini, Imagen, Lyria, and Veo. Originally released in 2023, SynthID is a technology that embeds invisible watermarks -- a kind of digital fingerprint that can be detected by machines but not by the human eye -- into AI-generated images.
Anthropic's latest Claude AI models are here - and you can try one for free today
Since its founding in 2021, Anthropic has quickly become one of the leading AI companies and a worthy competitor to OpenAI, Google, and Microsoft with its Claude models. Building on this momentum, the company held its first developer conference -- Code with Claude -- on Thursday, which showcased what the company has done so far and where it is going next. Anthropic used the event stage to unveil two highly anticipated models, Claude Opus 4 and Claude Sonnet 4. Both offer improvements over their predecessors, including better performance in coding and reasoning. Beyond that, the company launched new features and tools for its models that should improve the user experience. Keep reading to learn more about the new models.
A United Arab Emirates Lab Announces Frontier AI Projects--and a New Outpost in Silicon Valley
A United Arab Emirates (UAE) academic lab today launched an artificial intelligence world model and agent, two large language models (LLMs) and a new research center in Silicon Valley as it ramps up its investment in the cutting-edge field. The UAE's Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) revealed an AI world model called PAN, which can be used to build physically realistic simulations for testing and honing the performance of AI agents. Eric Xing, President and Professor of MBZUAI and a leading AI researcher, revealed the models and lab at the Computer History Museum in Mountain View, California today. The UAE has made big investments in AI in recent years under the guidance of Sheikh Tahnoun bin Zayed al Nahyan, the nation's tech-savvy national security advisor and younger brother of president Mohamed bin Zayed Al Nahyan. Xing says the UAE's new center in Sunnyvale, California, will help the nation tap into the world's most concentrated source of AI knowledge and talent.
DOGE Used Meta AI Model to Review Emails From Federal Workers
Elon Musk's so-called Department of Government Efficiency (DOGE) used artificial intelligence from Meta's Llama model to comb through and analyze emails from federal workers. Materials viewed by WIRED show that DOGE affiliates within the Office of Personnel Management (OPM) tested and used Meta's Llama 2 model to review and classify responses from federal workers to the infamous "Fork in the Road" email that was sent across the government in late January. The email offered deferred resignation to anyone opposed to changes the Trump administration was making to its federal workforce, including an enforced return to office policy, downsizing, and a requirement to be "loyal." To leave their position, recipients merely needed to reply with the word "resign." This email closely mirrored one that Musk sent to Twitter employees shortly after he took over the company in 2022.
Google's New AI Puts Breasts on Minors--And J. D. Vance
Sorry to tell you this, but Google's new AI shopping tool appears eager to give J. D. Vance breasts. This week, at its annual software conference, Google released an AI tool called Try It On, which acts as a virtual dressing room: Upload images of yourself while shopping for clothes online, and Google will show you what you might look like in a selected garment. Curious to play around with the tool, we began uploading images of famous men--Vance, Sam Altman, Abraham Lincoln, Michelangelo's David, Pope Leo XIV--and dressed them in linen shirts and three-piece suits. But when we tested a number of articles designed for women on these famous men, the tool quickly adapted: Whether it was a mesh shirt, a low-cut top, or even just a T-shirt, Google's AI rapidly spun up images of the vice president, the CEO of OpenAI, and the vicar of Christ with breasts. It's not just men: When we uploaded images of women, the tool repeatedly enhanced their décolletage or added breasts that were not visible in the original images.