Gov. Gavin Newsom is on the spot. The California Senate passed a bill Monday mandating human drivers behind the wheel of autonomous trucks on state highways for at least the next five years. The Legislature says it's concerned about safety. The governor's office says it's concerned about innovation. It's now up to Newsom to veto the bill or sign it.
The United States Environmental Protection Agency blocked its employees from accessing ChatGPT, while US State Department staff in Guinea used it to draft speeches and social media posts. Maine banned its executive branch employees from using generative artificial intelligence for the rest of the year out of concern for the state's cybersecurity. In nearby Vermont, government workers are using it to learn new programming languages and write internal-facing code, according to Josiah Raiche, the state's director of artificial intelligence. The city of San Jose, California, wrote 23 pages of guidelines on generative AI and requires municipal employees to fill out a form every time they use a tool like ChatGPT, Bard, or Midjourney. Less than an hour's drive north, Alameda County's government has held sessions to educate employees about generative AI's risks--such as its propensity for spitting out convincing but inaccurate information--but doesn't yet see the need for a formal policy.
Authenticating works of art is far from an exact science, but a Madonna and Child painting has sparked a furious row, dubbed "the battle of the AIs", after two separate scientific studies, both using state-of-the-art AI technology, arrived at contradictory conclusions. Months after one study proclaimed that the so-called de Brécy Tondo, currently on display at Bradford Council's Cartwright Hall Art Gallery, is "undoubtedly" by Raphael, another has found that it cannot be by the Renaissance master. In January, research teams from the universities of Nottingham and Bradford announced the findings of a facial recognition analysis that compared the faces in the Tondo with those in Raphael's Sistine Madonna altarpiece, commissioned in 1512. Having used "millions of faces to train an algorithm to recognise and compare facial features", they stated: "The similarity between the madonnas was found to be 97%, while comparison of the child in both paintings produced an 86% similarity."
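The studies' exact methods are not disclosed here, but face-recognition pipelines commonly reduce each face to a numeric embedding vector and then score the pair with cosine similarity, which is one plausible source of a figure like "97%". The sketch below is a minimal, hypothetical illustration of that scoring step; the toy vectors stand in for real face descriptors and are not derived from either painting.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings, standing in for descriptors a face-recognition
# model would extract from each painted face.
face_a = [0.12, 0.80, 0.35, 0.41]
face_b = [0.10, 0.78, 0.40, 0.39]

print(f"similarity: {cosine_similarity(face_a, face_b):.0%}")
```

Real systems use embeddings with hundreds of dimensions produced by a trained network; the comparison step, however, is typically this simple, which is partly why such pipelines can produce confident-looking percentages from very different underlying evidence.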
The US government should create a new body to regulate artificial intelligence--and restrict work on language models like OpenAI's GPT-4 to companies granted licenses to do so. That's the recommendation of a bipartisan duo of senators, Democrat Richard Blumenthal and Republican Josh Hawley, who launched a legislative framework yesterday to serve as a blueprint for future laws and influence other bills before Congress. Under the proposal, developing face recognition and other "high risk" applications of AI would also require a government license. To obtain one, companies would have to test AI models for potential harm before deployment, disclose instances when things go wrong after launch, and allow audits of AI models by an independent third party. The framework also proposes that companies should publicly disclose details of the training data used to create an AI model, and that people harmed by AI get a right to bring the company that created it to court.
As a fourth-year ophthalmology resident at Emory University School of Medicine, Dr. Riley Lyons' biggest responsibilities include triage: when a patient comes in with an eye-related complaint, Lyons must make an immediate assessment of its urgency. He often finds that patients have already turned to the internet first; online, Lyons said, they are likely to find that "any number of terrible things could be going on based on the symptoms that they're experiencing." So, when two of Lyons' fellow ophthalmologists at Emory came to him and suggested evaluating the accuracy of the AI chatbot ChatGPT in diagnosing eye-related complaints, he jumped at the chance. In June, Lyons and his colleagues reported in medRxiv, an online publisher of preliminary health science studies, that ChatGPT compared quite well to human doctors who reviewed the same symptoms -- and performed vastly better than the symptom checker on the popular health website WebMD. And despite the much-publicized "hallucination" problem known to ...
Self-driving cars are hitting city streets like never before. In August the California Public Utilities Commission (CPUC) granted two companies, Cruise and Waymo, permits to run fleets of driverless robotaxis 24/7 in San Francisco and to charge passengers fares for those rides. This was just the latest in a series of green lights that have allowed progressively more leeway for autonomous vehicles (AVs) in the city in recent years. Almost immediately, widely publicized accounts emerged of Cruise vehicles behaving erratically: one blocked the road outside a large music festival, another got stuck in wet concrete, and another even collided with a fire truck.
The news: One of the leading companies offering alternatives to lithium batteries for the grid has just received a nearly $400 million loan from the US Department of Energy. Eos Energy makes zinc-halide batteries, which the firm hopes could one day be used to store renewable energy at a lower cost than is possible with existing lithium-ion batteries. What they're made of: Eos's batteries are primarily made from zinc, the fourth most produced metal in the world, and use a water-based electrolyte (the liquid that moves charge around in a battery) instead of organic solvent. This makes them more stable than lithium-ion cells, and means they won't catch fire. Why it matters: While the cost of lithium-ion batteries has plummeted over the past decade, there's a growing need for even cheaper options.
What is unique about AI is also what is most feared and celebrated--its ability to match some of our own skills, and then to go further, accomplishing what humans cannot. AI's capacity to model itself on human behavior has become its defining feature. Yet behind every advance in machine learning and large language models are, in fact, people--both the often obscured human labor that makes large language models safer to use, and the individuals who make critical decisions on when and how to best use this technology. Reporting on people and influence is what TIME does best. That led us to the TIME100 AI.
The advent of generative AI, which includes chatbots such as OpenAI's ChatGPT and Google's Bard, has triggered concern that the technology could replace jobs, leading governments around the world to scramble to understand AI tools and respond. Prominent AI companies say they welcome regulation but have also lobbied against some approaches, arguing that strict laws could stifle the technology's development. There are also signs that consumer usage of generative AI tools is slowing, raising questions about how long the boom will last.
The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned--human workers determined that it was not going fast enough. And so 25-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams's head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.