Street maps help to inform a wide range of decisions. Drivers, cyclists, and pedestrians use them for search and navigation. Rescue workers responding to disasters such as hurricanes, tsunamis, and earthquakes rely on street maps to understand where people are and to locate individual buildings.23 Transportation researchers consult street maps to conduct transportation studies, such as analyzing pedestrian accessibility to public transport.25 Indeed, as the need for accurate street maps grows, companies are spending hundreds of millions of dollars to map roads globally.a However, street maps are incomplete or lag behind new construction in many parts of the world. In rural Indonesia, for example, entire groups of villages are missing from OpenStreetMap, a popular open map dataset.3 In many of these villages, the closest mapped road is miles away. In Qatar, construction of new infrastructure has boomed in preparation for the FIFA World Cup 2022.
Although great progress has been made in automatic speech recognition (ASR), significant performance degradation still exists in very noisy environments. Over the past few years, Chinese startup AISpeech has been developing very deep convolutional neural networks (VDCNN),21 a new architecture the company recently began applying to ASR use cases. Unlike traditional deep CNN models for computer vision, VDCNN features novel filter designs, pooling operations, input feature map selection, and padding strategies, all of which lead to more accurate and robust ASR performance. VDCNN is further extended with adaptation, which can significantly alleviate the mismatch between training and testing conditions. Factor-aware training and cluster-adaptive training are explored to fully exploit environmental variety and quickly adapt model parameters.
Chinese AI businesses have been growing rapidly since 2010, attracting significant investment from Internet giants, and a vast number of new AI companies have emerged. Over the past decade, Chinese AI start-ups have gradually moved past the early hype and landed in an investment boom. In 2020, when people were fighting the pandemic, CloudMinds, an AI start-up based in Beijing, developed a humanoid service robot named Cloud Ginger XR-1. Ginger played an important role in local hospitals, delivering food and medication to patients in a contactless manner when it was needed most. Moreover, Ginger entertained patients, freeing up doctors and medical teams to focus on more critical health matters.
Tesla's 'Full Self Driving' capability, which does not actually allow cars to drive themselves, is finally rolling out following a delay the company attributed to unspecified "issues." In a tweet yesterday, Tesla CEO Elon Musk said that the company was "seeing some issues" with the new update and would temporarily roll it back to the previous version. The Full Self Driving beta was originally meant to be released at midnight on 7 October, but a few "last minute concerns" about the build delayed it until 11 October. On 24 October, Musk said Tesla engineers had found a "regression in some left turns at traffic lights," and that undefined "other issues" meant the beta had to be delayed. Tesla's $10,000 "Full Self-Driving" option does not mean that the vehicles can drive themselves.
A British-built robot that uses artificial intelligence and a mechanical arm to create art has been released by customs officials in Egypt ahead of an exhibition this week. Ai-Da, named after the mathematician Ada Lovelace, was seized by officials earlier this month over concerns "her" machinery could contain espionage tools. The device was held for 10 days as the British embassy worked with Cairo on the matter. "The Embassy is glad to see that Ai-Da the artist robot has now been cleared through customs," the UK's embassy in Cairo said in a statement. "Customs clearance procedures can be lengthy, and are required before importation of any artworks or IT equipment."
A certain type of artificial intelligence agent can learn the cause-and-effect basis of a navigation task during training. Neural networks can learn to solve all sorts of problems, from identifying cats in photographs to steering a self-driving car. But whether these powerful, pattern-recognizing algorithms actually understand the tasks they are performing remains an open question. For example, a neural network tasked with keeping a self-driving car in its lane might learn to do so by watching the bushes at the side of the road, rather than learning to detect the lanes and focus on the road's horizon. Researchers at MIT have now shown that a certain type of neural network is able to learn the true cause-and-effect structure of the navigation task it is being trained to perform.
Today's artificial intelligence technology is intended to mimic nature, replicating in a computer the same decision-making abilities that people develop naturally. Artificial neural networks, like living brains, are made up of many individual cells. When a cell becomes active, it transmits a signal to the cells connected to it. At each receiving cell, the incoming signals are added together to determine whether that cell becomes active as well. The system's behavior is determined by the way one cell influences the activity of the next.
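The summation-and-activation behavior described above can be sketched as a single artificial neuron. This is a minimal illustration, not any particular system's implementation; the weights, bias, and input values below are invented for the example.

```python
import math

def neuron(inputs, weights, bias):
    """Sum the weighted incoming signals, then apply an activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A sigmoid squashes the summed input to (0, 1): values near 1 mean
    # the cell "fires" strongly, values near 0 mean it stays quiet.
    return 1 / (1 + math.exp(-total))

# Two upstream cells send signals 0.5 and 1.0; the weights encode how
# strongly each upstream cell influences this one (illustrative values).
activity = neuron([0.5, 1.0], weights=[0.8, -0.3], bias=0.1)
```

Training a network amounts to adjusting those weights and biases so the cells' collective behavior matches the desired task.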
The Air Force Research Laboratory (AFRL), in partnership with the United Kingdom's Defence Science and Technology Laboratory (Dstl), has demonstrated for the first time the ability of the U.S. and the U.K. to jointly develop, select, train, and deploy state-of-the-art machine learning algorithms in support of the armed forces of both nations. This research is designed to support adjacent, collaborating U.S. and U.K. brigades with enduring wide-area situational awareness, which aims to improve decision-making, increase operational tempo, reduce risk to life, and reduce manpower burden. The in-person, virtual demonstration was hosted jointly at AFRL's Information Directorate in Rome, New York, and at Dstl's site near Salisbury, U.K., on Oct. 18. The demonstration highlighted integrated AI technologies across the two nations, showcasing the ability to share data and algorithms through a common development and deployment platform to enable the rapid selection, testing, and deployment of AI capabilities. The event was made possible by a U.K. and U.S. partnership agreement concerning autonomy and AI collaboration established in December 2020.
The potential impact of the ongoing worldwide data explosion continues to excite the imagination. A 2018 report estimated that every second of every day, every person produces 1.7 MB of data on average; annual data creation has more than doubled since then and is projected to more than double again by 2025. A report from McKinsey Global Institute estimates that skillful uses of big data could generate an additional $3 trillion in economic activity, enabling applications as diverse as self-driving cars, personalized health care, and traceable food supply chains. But adding all this data to the system is also creating confusion about how to find it, use it, manage it, and legally, securely, and efficiently share it. Where did a certain dataset come from?
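To put the per-second figure in perspective, a quick back-of-the-envelope calculation shows the daily volume it implies per person (using the report's 1.7 MB/s estimate and decimal units, 1 GB = 1,000 MB):

```python
# Back-of-the-envelope check on the 2018 estimate of 1.7 MB of data
# produced per person, per second.
MB_PER_SECOND = 1.7
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

mb_per_day = MB_PER_SECOND * SECONDS_PER_DAY  # 146,880 MB
gb_per_day = mb_per_day / 1000                # ~147 GB per person, per day
```

Roughly 147 GB per person every day helps explain why finding, managing, and sharing all of it has become such a challenge.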