Senators Want ChatGPT-Level AI to Require a Government License
The US government should create a new body to regulate artificial intelligence--and restrict work on language models like OpenAI's GPT-4 to companies granted licenses to do so. That's the recommendation of a bipartisan duo of senators, Democrat Richard Blumenthal and Republican Josh Hawley, who launched a legislative framework yesterday to serve as a blueprint for future laws and influence other bills before Congress. Under the proposal, developing face recognition and other "high risk" applications of AI would also require a government license. To obtain one, companies would have to test AI models for potential harm before deployment, disclose instances when things go wrong after launch, and allow audits of AI models by an independent third party. The framework also proposes that companies should publicly disclose details of the training data used to create an AI model, and that people harmed by AI get a right to bring the company that created it to court.
'Dr. Google' meets its match: Dr. ChatGPT
As a fourth-year ophthalmology resident at Emory University School of Medicine, Dr. Riley Lyons' biggest responsibilities include triage: When a patient comes in with an eye-related complaint, Lyons must make an immediate assessment of its urgency. He often finds patients have already turned to "Dr. Google." Online, Lyons said, they are likely to find that "any number of terrible things could be going on based on the symptoms that they're experiencing." So, when two of Lyons' fellow ophthalmologists at Emory came to him and suggested evaluating the accuracy of the AI chatbot ChatGPT in diagnosing eye-related complaints, he jumped at the chance. In June, Lyons and his colleagues reported in medRxiv, an online publisher of preliminary health science studies, that ChatGPT compared quite well to human doctors who reviewed the same symptoms -- and performed vastly better than the symptom checker on the popular health website WebMD. And despite the much-publicized "hallucination" problem known to ...
'Robo-Taxi Takeover' Hits Speed Bumps
Self-driving cars are hitting city streets like never before. In August the California Public Utilities Commission (CPUC) granted two companies, Cruise and Waymo, permits to run fleets of driverless robo-taxis 24/7 in San Francisco and to charge passengers fares for those rides. This was just the latest in a series of green lights that have allowed progressively more leeway for autonomous vehicles (AVs) in the city in recent years. Almost immediately, widely publicized accounts emerged of Cruise vehicles behaving erratically. One blocked the road outside a large music festival, another got stuck in wet concrete, and another even collided with a fire truck.
The Download: promising new batteries, and how to regulate AI
The news: One of the leading companies offering alternatives to lithium batteries for the grid has just received a nearly $400 million loan from the US Department of Energy. Eos Energy makes zinc-halide batteries, which the firm hopes could one day be used to store renewable energy at a lower cost than is possible with existing lithium-ion batteries. What they're made of: Eos's batteries are primarily made from zinc, the fourth most produced metal in the world, and use a water-based electrolyte (the liquid that moves charge around in a battery) instead of organic solvent. This makes them more stable than lithium-ion cells, and means they won't catch fire. Why it matters: While the cost of lithium-ion batteries has plummeted over the past decade, there's a growing need for even cheaper options.
How We Chose the TIME100 Most Influential People in AI
What is unique about AI is also what is most feared and celebrated--its ability to match some of our own skills, and then to go further, accomplishing what humans cannot. AI's capacity to model itself on human behavior has become its defining feature. Yet behind every advance in machine learning and large language models are, in fact, people--both the often obscured human labor that makes large language models safer to use, and the individuals who make critical decisions on when and how to best use this technology. Reporting on people and influence is what TIME does best. That led us to the TIME100 AI.
Newsom tells California government to deepen, guide use of AI
The advent of generative AI, which includes chatbots such as OpenAI's ChatGPT and Google's Bard, has triggered concern that the technology could replace jobs, leading governments around the world to scramble to understand AI tools and respond. Prominent AI companies say they welcome regulation but have also lobbied against some approaches, saying strict laws could stifle the tech's development. There are also signs that consumer usage of generative AI tools is slowing, raising questions of how long the boom will last.
Robots Are Already Killing People
The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned--human workers determined that it was not going fast enough. And so 25-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams's head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.
The Download: how Yale University has prepared for ChatGPT, and schools' AI reckoning
Back-to-school season always feels like a reset moment. However, the big topic this time around seems to be the same thing that defined the end of last year: ChatGPT and other large language models. Last winter and spring brought so many headlines about AI in the classroom, with some panicked schools going as far as to ban ChatGPT altogether. Now, with the summer months having offered a bit of time for reflection, some schools seem to be reconsidering their approach. Tate Ryan-Mosley, our senior tech policy reporter, spoke to the associate provost at Yale University to find out why the prestigious school never considered banning ChatGPT--and instead wants to work with it.
US restricts exports of Nvidia AI chips to Middle East
The US has expanded its restrictions on exports of Nvidia artificial intelligence chips beyond China to some countries in the Middle East. Nvidia, which is one of the world's most valuable companies at $1.2tn, said in a regulatory filing this week the curbs affected its A100 and H100 chips, which are used to accelerate machine-learning tasks on major artificial intelligence apps, such as ChatGPT. The firm said the controls would not have an "immediate material impact" on its results. It did not say which countries in the Middle East were affected by these restrictions. Nvidia's rival in the sector, AMD, had also received a letter informing it of similar restrictions, a person familiar with the matter told Reuters.
How to talk to an AI chatbot
ChatGPT doesn't come with an instruction manual. Only a quarter of Americans who have heard of the AI chatbot say they have used it, Pew Research Center reported this week. "The hardest lesson" for new AI chatbot users to learn, says Ethan Mollick, a Wharton professor and chatbot enthusiast, "is that they're really difficult to use." Or at least, to use well. The Washington Post talked with Mollick and other experts about how to get the most out of AI chatbots -- from OpenAI's ChatGPT to Google's Bard and Microsoft's Bing -- and how to avoid common pitfalls.
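A recurring piece of advice from Mollick and others is to give the model explicit context and a role rather than a bare one-line question. For readers who prefer to experiment programmatically, here is a minimal sketch of that idea using the OpenAI Python SDK; the API usage, model name, and prompts below are illustrative assumptions on our part, not taken from the article, and the script expects an OPENAI_API_KEY environment variable.

    # Minimal sketch: giving the chatbot a role and context before asking.
    # Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY
    # environment variable; the model name and prompts are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; substitute one you have access to
        messages=[
            # A system message sets the persona and ground rules up front,
            # which tends to produce more focused answers than a bare question.
            {"role": "system",
             "content": ("You are a careful travel planner. Ask one clarifying "
                         "question if the request is ambiguous; otherwise give "
                         "a short, concrete itinerary.")},
            {"role": "user",
             "content": "Plan a two-day weekend in San Francisco on a modest budget."},
        ],
    )

    print(response.choices[0].message.content)

The same pattern carries over to the web interfaces the Post's experts discuss: state who the assistant should be and what a good answer looks like before posing the question.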