Deep neural networks might be able to produce lengthy stretches of coherent text, but they don't understand abstract and concrete concepts in the way that humans do. If a computer gives you all the right answers, does it mean that it is understanding the world as you do? This is a riddle that artificial intelligence scientists have been debating for decades. And discussions of understanding, consciousness, and true intelligence are resurfacing as deep neural networks have spurred impressive advances in language-related tasks. Many scientists believe that deep learning models are just large statistical machines that map inputs to outputs in complex and remarkable ways.
When the US government discovered Adobe's Portable Document Format (PDF) in the 1990s, it decided that it liked it. The format has since entrenched itself so deeply in US government document pipelines that the number of state-issued documents currently in existence is conservatively estimated in the hundreds of millions. Often opaque and lacking metadata, these PDFs (many created by automated systems) are nearly impossible to navigate: if you don't know exactly what you're looking for, you'll probably never find a pertinent document, and if you did know, you probably didn't need the search. Now a new project is using computer vision and other machine learning approaches to turn this almost unapproachable mountain of data into a valuable, explorable resource for researchers, historians, journalists, and scholars.
When Hongzhi Gao was young, he lived with his family in Gansu, a province in north-central China bordering the Tengger Desert. Thinking back to his childhood, he recalls the constant wind of dirt outside their house: during most months of the year, within a minute of stepping outside, sand would fill any empty space and creep into his pockets, boots, and mouth. The monotony of the desert stayed with him for years, and at university he turned that memory into an idea for a machine that could bring plant life to the desert landscape. Efforts to stop desertification (the process by which fertile land becomes desert) have focused primarily on expensive manual solutions. Hongzhi designed a robot that uses deep learning to automate tree planting, from identifying optimal spots to planting tree seedlings to watering them. Despite having no prior experience with AI, Hongzhi, then an undergraduate, used Baidu's deep learning platform PaddlePaddle to stitch together different modules into a robot with better object detection capability than similar machines already on the market.
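The pipeline described above (detect suitable spots, then plant and water at each one) can be sketched as a few stitched-together stages. This is a purely illustrative mock-up: the function names, the suitability grid, and the 0.6 threshold are all placeholders, not Hongzhi's actual PaddlePaddle modules.

```python
# Hypothetical sketch of a plant-and-water pipeline: detection, routing,
# and per-stop actions. All names and values are illustrative.

def detect_planting_spots(terrain_grid, threshold=0.6):
    """Return (row, col) coordinates whose suitability score clears the threshold.

    In the real robot, these scores would come from a deep learning
    detector; here the grid is supplied directly.
    """
    return [(r, c)
            for r, row in enumerate(terrain_grid)
            for c, score in enumerate(row)
            if score >= threshold]

def plan_route(spots):
    """Visit spots in row-major order (a stand-in for real path planning)."""
    return sorted(spots)

def run_pipeline(terrain_grid):
    """Chain the stages: detect -> route -> plant and water at each stop."""
    route = plan_route(detect_planting_spots(terrain_grid))
    return [{"position": p, "planted": True, "watered": True} for p in route]

# A 2x2 toy terrain with two spots above the 0.6 threshold.
grid = [[0.2, 0.7],
        [0.9, 0.1]]
actions = run_pipeline(grid)
```

The value of the modular structure is that each stage can be swapped independently, e.g. replacing the toy detector with a trained object detection model.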
Vince Patton, a new Tesla owner, demonstrates on Dec. 8, 2021, on a closed course in Portland, Ore., how he can play video games on the vehicle's console while driving. DETROIT -- Under pressure from U.S. auto safety regulators, Tesla has agreed to stop allowing video games to be played on center touch screens while its vehicles are moving.
Brit has successfully deployed a machine-learning algorithm, developed by the company's data science team, to expedite the identification of insured property damage in the wake of the tornadoes that ripped through the Midwest Dec. 10-11. The algorithm, used in tandem with the company's access to ultra-high-resolution aerial imagery, assesses those images and associated data, allowing Brit's claims team to identify, triage, and assign response activity even before claims are reported. The technology was previously used by the Brit claims team and its delegated claims adjusters in the wake of Hurricane Ida.
Researchers at the University of Texas have discovered a new way for neural networks to simulate symbolic reasoning. The discovery opens an exciting path toward uniting deep learning and symbolic reasoning AI. In the new approach, each neuron has a specialized function that relates to specific concepts. "It opens the black box of standard deep learning models while also being able to handle more complex problems than what symbolic AI has typically handled," Paul Blazek, University of Texas Southwestern Medical Center researcher and one of the authors of the Nature paper, told VentureBeat. This work complements previous research on neurosymbolic methods such as MIT's Clevrer, which has shown some promise in predicting and explaining counterfactual possibilities more effectively than neural networks.
In November, voters in Bellingham, Washington, passed a ballot measure banning government use of face recognition technology. It added to a streak of such laws that started with San Francisco in 2019 and now number around two dozen. The spread of these bans has inspired hope among campaigners and policy experts of a turn against an artificial intelligence technology that can lead to invasions of privacy or even wrongful arrest. Such feelings got a boost when Facebook unexpectedly announced, on the day of the Bellingham vote, that it would shutter its own face recognition system for identifying people in photos and videos, citing "growing societal concerns." Yet a few months earlier, and about 100 miles from Bellingham, the commission that runs Seattle-Tacoma International Airport passed its own face recognition restrictions. Those rules leave airlines free to use the technology for functions like bag drop and check-in, although the commission promised to provide some oversight and barred the technology's use by port police.
Waymo, a unit of Google parent Alphabet Inc., is one of several companies testing driverless vehicles in the U.S. Automakers are also developing self-driving technology, but it still requires human drivers to take over when needed. If you're taking a lot of road trips this holiday season, maybe you've wished your car could just drive itself to Grandma's house. The auto industry has been working on autonomous driving for years, and companies like Waymo and Cruise are testing fully autonomous driving -- in some cities, you can already hop in a driverless taxi.
The debate topic was: "This house believes that AI will never be ethical." Not a day passes without a fascinating snippet on the ethical challenges created by "black box" artificial intelligence systems. These use machine learning to figure out patterns within data and make decisions, often without a human giving them any moral basis for how to do it. Classics of the genre are the credit cards accused of awarding bigger loans to men than women, based simply on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase "field hockey" or the first name "Jared". More seriously, former Google CEO Eric Schmidt recently teamed up with Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast that they could react to hypersonic missiles by firing nuclear weapons before any human entered the decision-making loop.