KEY POINTS The information technology (IT) sector is poised for a strong year, with 5.0 percent growth projected. The IT Industry Business Confidence Index notched one of its highest ratings ever heading into the first quarter of 2018. Executives cite robust customer demand and the uptake of emerging product and service categories as key contributors to the positive sentiment. Revenue growth should follow suit: Global Forecasts projects growth of 5.0 percent across the global tech sector in 2018, and, if everything falls into place, the upside of the forecast could push growth into the 7 percent-plus range. According to IDC, global information technology spending will top $4.8 trillion in 2018, with the U.S. accounting for approximately $1.5 trillion of the market.
As the United Kingdom's largest automobile manufacturer and the largest investor in research and development in the UK manufacturing sector, Jaguar Land Rover is the combination of two iconic British car brands--Jaguar, which features luxury sports cars and sedans, and Land Rover, maker of premium all-wheel-drive vehicles. Both brands began in the middle of the 20th century and gained a reputation for innovation.
Abstract--This paper investigates vision-based autonomous driving with deep learning and reinforcement learning methods. Unlike end-to-end learning methods, our method breaks the vision-based lateral control system down into a perception module and a control module. The perception module, based on a multi-task learning neural network, first takes a driver-view image as its input and predicts the track features. The control module, based on reinforcement learning, then makes a control decision from these features. To improve data efficiency, we propose visual TORCS (VTORCS), a deep reinforcement learning environment built on the open racing car simulator (TORCS). By means of the provided functions, one can train an agent with the input of an image or various physical sensor measurements, or evaluate a perception algorithm on this simulator. The trained reinforcement learning controller outperforms the linear quadratic regulator (LQR) and model predictive control (MPC) controllers on different tracks. The experiments demonstrate that the perception module shows promising performance and that the controller is capable of driving the vehicle along the track center using visual input. In recent years, artificial intelligence (AI) has flourished in many fields such as autonomous driving, games, and engineering applications. As one of the most popular topics, autonomous driving has drawn great attention from both the academic and industrial communities and is thought to be the next revolution in intelligent transportation systems. An autonomous driving system mainly consists of four modules: an environment perception module, a trajectory planning module, a control module, and an actuator mechanism module. The initial perception methods are based on expensive LIDARs, which usually cost tens of thousands of dollars.
The high cost limits their large-scale application to ordinary vehicles. Recently, more attention has been paid to image-based methods, whose core sensor, the camera, is relatively cheap and already fitted on most vehicles. Some of these perception methods have been developed into products. In this paper, we focus on the lateral control problem based on images captured by the onboard camera.
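The modular decomposition described above can be sketched in a few lines. This is a toy illustration, not the paper's architecture: the feature names (`track_pos`, `heading_angle`), the stand-in networks, and the weights are assumptions chosen only to show how a perception module feeding an RL controller differs from a single end-to-end mapping.

```python
import numpy as np

def perception_module(image, weights):
    """Stand-in for the multi-task perception network: maps a
    driver-view image to track features (lateral offset from the
    track center and heading angle, both squashed to [-1, 1])."""
    flat = image.reshape(-1)
    features = np.tanh(weights @ flat)
    return {"track_pos": float(features[0]),
            "heading_angle": float(features[1])}

def rl_controller(features, policy_weights):
    """Stand-in for the trained RL policy: maps the predicted track
    features to a steering command in [-1, 1]."""
    x = np.array([features["track_pos"], features["heading_angle"]])
    return float(np.tanh(policy_weights @ x))

rng = np.random.default_rng(0)
image = rng.random((8, 8))                 # toy "camera frame"
W_perc = rng.normal(size=(2, 64)) * 0.1    # hypothetical perception weights
w_pol = np.array([-1.0, -0.5])             # steer against offset and heading error

feats = perception_module(image, W_perc)
steer = rl_controller(feats, w_pol)
```

Because the interface between the two modules is just a small feature vector, the perception network and the controller can be trained and evaluated separately, which is the data-efficiency argument the abstract makes for VTORCS.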
You could argue that Waymo, the self-driving subsidiary of Alphabet, has the safest autonomous cars around. It's certainly covered the most miles. But in recent years, serious accidents involving early systems from Uber and Tesla have eroded public trust in the nascent technology. To win it back, putting in the miles on real roads just isn't enough. So today Waymo announced that its vehicles have clocked more than 10 million miles since 2009.
Machine-learning and artificial intelligence algorithms used in sophisticated applications such as autonomous cars are not foolproof and can be easily manipulated by introducing errors, Indian Institute of Science (IISc) researchers have warned. Machine-learning and AI software are trained with initial sets of data, such as images of cats, and learn to identify feline images as more such data are fed in. A common example is Google returning better results as more people search for the same information. AI applications are becoming mainstream in areas such as healthcare, payments processing, deploying drones to monitor crowds, and facial recognition in offices and airports. "If your data input is not clear and vetted, the AI machine could throw up surprising results, and that could end up being hazardous."
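The kind of manipulation the researchers warn about can be demonstrated on even a trivial model. The sketch below is a toy, FGSM-style adversarial example against a hand-set linear classifier; the weights, input, and perturbation budget are assumptions for illustration and have nothing to do with the IISc work itself.

```python
import numpy as np

# A "trained" linear classifier (weights chosen by hand for the demo).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Binary decision: 1 if the score is positive, else 0."""
    return 1 if w @ x + b > 0 else 0

x = np.array([1.0, 0.2, 0.4])   # original input, score = 0.9 -> class 1

# FGSM-style attack: nudge every feature by eps against the sign of
# the gradient of the score (for a linear model, that gradient is w).
eps = 0.4
x_adv = x - eps * np.sign(w)

# The perturbation is small per feature, yet the decision flips.
```

The point mirrors the quote above: a model that looks accurate on clean, vetted inputs can be pushed into the wrong answer by a small, deliberately crafted change to its input.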
The words "fly like an eagle" are famously part of a song, but they may also be words that make some scientists scratch their heads. Especially when it comes to soaring birds like eagles, falcons and hawks, which seem to ascend to great heights over hills, canyons and mountain tops with ease. Scientists realize that upward currents of warm air assist the birds in their flight, but they don't know how the birds find and navigate these thermal plumes. To figure it out, researchers from the University of California San Diego used reinforcement learning to train gliders to autonomously navigate atmospheric thermals, soaring to heights of 700 meters--nearly 2,300 feet. The novel research results, published in the Sept. 19 issue of Nature, highlight the role of vertical wind accelerations and roll-wise torques as viable biological cues for soaring birds.
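The core idea, learning to exploit rising air from a coarse local cue rather than a map of the thermal, can be sketched with tabular Q-learning. This toy is not the study's setup: the 1-D thermal model, the two-value state (did the climb improve or worsen?), and the keep/reverse action set are all simplifying assumptions made for illustration.

```python
import random

random.seed(0)

def lift(x):
    """Toy thermal: lift peaks at the core (x = 0) and fades with distance."""
    return max(0.0, 1.0 - abs(x))

KEEP, REVERSE = 0, 1
# Q[state][action]; state 0 = climb worsened/unchanged, state 1 = climb improved.
Q = [[0.0, 0.0], [0.0, 0.0]]
alpha, gamma, eps = 0.1, 0.9, 0.1

for _ in range(500):                       # training episodes
    x = random.uniform(-2.0, 2.0)          # glider position
    d = random.choice([-1, 1])             # current travel direction
    prev = lift(x)
    x = max(-2.0, min(2.0, x + 0.2 * d))
    s = 1 if lift(x) > prev else 0
    for _ in range(30):
        # epsilon-greedy action choice
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = KEEP if Q[s][KEEP] >= Q[s][REVERSE] else REVERSE
        if a == REVERSE:
            d = -d
        prev = lift(x)
        x = max(-2.0, min(2.0, x + 0.2 * d))
        r = lift(x)                        # reward: stay in rising air
        s2 = 1 if r > prev else 0
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the greedy policy recovers the classic soaring heuristic: keep going while the climb improves, reverse when it worsens, without the agent ever being told where the thermal core is.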
For any autonomous driving vehicle, the control module determines its road performance and safety; i.e., its precision and stability should stay within a carefully designed range. Nonetheless, control algorithms require vehicle dynamics (such as longitudinal dynamics) as inputs, which, unfortunately, are difficult to calibrate in real time. As a result, to achieve reasonable performance, most, if not all, research-oriented autonomous vehicles are calibrated manually, one by one. Since manual calibration is not sustainable once mass production begins, we introduce a machine-learning-based auto-calibration system for autonomous driving vehicles. In this paper, we show how we build a data-driven longitudinal calibration procedure using machine learning techniques. We first generate an offline calibration table from human driving data. The offline table serves as an initial guess for later use and requires only twenty minutes of data collection and processing. We then use an online-learning algorithm to update the initial (offline) table based on real-time performance analysis. This longitudinal auto-calibration system has been deployed on more than one hundred Baidu Apollo self-driving vehicles (including hybrid family vehicles and electric delivery-only vehicles) since April 2018. By August 27, 2018, it had been tested for more than two thousand hours and over ten thousand kilometers (6,213 miles) and proven effective.
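The two-stage procedure described above can be sketched as a small gridded table. This is a hypothetical sketch, not Apollo's implementation: the grid resolution, learning rate, and the toy (speed, command, measured acceleration) logs are assumptions chosen only to show the offline-average-then-online-correct pattern.

```python
import numpy as np

speeds = np.linspace(0, 20, 5)     # vehicle-speed bins, m/s
commands = np.linspace(-1, 1, 5)   # throttle/brake command bins

def build_offline_table(logs):
    """Stage 1: average measured acceleration per (speed, command)
    cell from human driving logs -- the 'initial guess' table."""
    table = np.zeros((len(speeds), len(commands)))
    counts = np.zeros_like(table)
    for v, u, a in logs:
        i = int(np.abs(speeds - v).argmin())
        j = int(np.abs(commands - u).argmin())
        table[i, j] += a
        counts[i, j] += 1
    return np.divide(table, counts, out=table, where=counts > 0)

def online_update(table, v, u, measured_accel, lr=0.2):
    """Stage 2: nudge one cell toward the acceleration actually
    observed in real time (a simple online-learning correction)."""
    i = int(np.abs(speeds - v).argmin())
    j = int(np.abs(commands - u).argmin())
    table[i, j] += lr * (measured_accel - table[i, j])
    return table

# Toy logs: (speed m/s, command, measured acceleration m/s^2)
logs = [(5.0, 0.5, 1.2), (5.0, 0.5, 1.0), (10.0, -0.5, -2.0)]
table = build_offline_table(logs)          # cell (5.0, 0.5) -> 1.1
table = online_update(table, 5.0, 0.5, 1.4)
```

The offline pass needs only a short batch of logged data, while the online pass keeps each cell tracking the vehicle's actual response as it drifts over time, which is the division of labor the abstract describes.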
Artificial intelligence (AI) systems, blending data and advanced algorithms to mimic the cognitive functions of the human mind, have begun to simplify and enhance even the simplest aspects of our everyday experiences -- and the automotive industry is no exception. A Tractica market intelligence study forecasts that demand for automotive AI hardware, software, and services will explode from $404 million in 2016 to $14 billion by 2025. Semi-autonomous and fully autonomous vehicles must rely heavily on AI systems to ensure dependable, fail-safe navigation and to earn the trust of drivers and passengers. In February 2017, Ford invested $1 billion -- Detroit's biggest investment yet -- in the self-driving car startup Argo AI, which was founded by two top engineers from Google and Uber. Tesla founder Elon Musk speculates that AI will surpass solely human-based efforts by the year 2030.
Listen to your vehicle -- this is advice that all car and motorcycle owners are given when they're getting to know their vehicle. Now, a new AI service developed by 3Dsignals, an Israel-based start-up, is doing just that. The AI system can detect an impending failure in cars or other machines just by listening to their sound. The system relies on deep learning techniques to identify the noise patterns of a car. According to a report by IEEE Spectrum, 3Dsignals promises to reduce machinery downtime by 40% and improve efficiency.
Don't hold your breath waiting for the first fully autonomous car to hit the streets anytime soon. Car manufacturers have projected for years that we might have fully automated cars on the roads by 2018. But for all the hype, it may be years, if not decades, before self-driving systems can reliably avoid accidents, according to a blog post published Tuesday in The Verge. The million-dollar question is whether self-driving cars will keep getting better -- like image search, voice recognition and other artificial intelligence "success stories" -- or whether they will run into a "generalization" problem, as chatbots did (some chatbots couldn't produce novel responses to questions). Generalization, author Russell Brandom explained in the post "Self-driving cars are headed toward an AI roadblock," can be difficult for conventional deep learning systems.