Apple's secretive efforts to develop a self-driving car -- its so-called "Project Titan" -- have taken a hard turn in 2019 after it emerged that the iPhone-maker has reassigned 200 employees previously involved in its development. That's according to CNBC which, citing sources, reported that a portion of the 200 staff were moved to other projects inside Apple, while others -- and it isn't clear how many -- were let go altogether. The news was enough to prompt Apple to respond with a confirmation that included a rare mention of its automotive ambitions. "We have an incredibly talented team working on autonomous systems and associated technologies at Apple. As the team focuses their work on several key areas for 2019, some groups are being moved to projects in other parts of the company, where they will support machine learning and other initiatives, across all of Apple. We continue to believe there is a huge opportunity with autonomous systems, that Apple has unique capabilities to contribute, and that this is the most ambitious machine learning project ever," a spokesperson said.
Advancements in artificial intelligence continue to reshape industries such as aviation, manufacturing, and technology. This is because AI, machine learning, and deep learning can help companies become more efficient. But one industry witnessing especially dramatic change is the automotive sector. AI is revolutionizing this industry, enabling entirely new ways for people to get around, and will also change the way traffic is managed in cities. Attempts to create driverless cars are gaining momentum with the availability of advanced technologies, notably AI.
For the first issue of the PCMag Digital Edition in 2019, we're fast-forwarding to envision what technology--and our tech-driven society--will look like in 2039. We wanted to explore the myriad ways in which tech will be more intertwined with our lives and will have changed our culture. To do so, we interviewed a select group of futurists, execs, academics, researchers, and a speculative fiction writer, who gave us some thoughtful predictions. Each of our interviewees has a unique perspective on the most important factors that will influence our tech-driven future, including artificial intelligence, automation, biotechnology, nanotechnology, autonomous vehicles, Internet of Things devices, smart cities, and much more. They also speculate on how broader issues such as climate change and online privacy and security will affect us and the technology with which we'll be living. It's our best educated guess at predicting what our world and technology's role in it will look like--whether our lives will be dystopian, utopian, or somewhere in that vast gray area in the middle.

Jason Silva is host of the Emmy-nominated series Brain Games on National Geographic. He also created and hosts the YouTube series "Shots of Awe." The ebullient Venezuelan-born documentary filmmaker, speaker, and TV personality--who was once described by The Atlantic as "a Timothy Leary of the viral video age"--is a techno-optimist whose ideas are influenced by (among others) fellow futurist Ray Kurzweil and Wired founding editor Kevin Kelly and his concept of the Technium.

In the next 20 years, we're going to see exponential progress in some of these nascent technologies, like virtual reality and augmented reality. I think the next thing to dematerialize is the smartphone itself. What that looks like, who knows?
Maybe it's a pair of eyeglasses we put on that are connected to some kind of computational device, and it will beam an augmented reality interface that fully overlays, that is contextually aware, and enhances the way we interface with the world--so that essentially, each one of us has that kind of personalized experience of reality.
What DeepRacer is not, technically speaking, is an artificially intelligent car. To be accurate about this, the "intelligence" that drives the car resides in AWS' cloud. There, the DeepRacer system learns about the car's environment and sends operating instructions over a wireless link to the car. That makes the device an autonomous car, though in that respect not an AI car.
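Cloud-side training of this kind is driven by a user-supplied reward function. As a minimal sketch, the function below follows the shape of the reward functions shown in the DeepRacer console examples (a Python function over a `params` dict with keys such as `track_width` and `distance_from_center`); the tier boundaries and reward values are assumptions chosen for illustration.

```python
def reward_function(params):
    """Reward the agent for staying near the track center line.

    The params keys used here (all_wheels_on_track, track_width,
    distance_from_center) follow the DeepRacer sample functions;
    the tiers and values below are illustrative assumptions.
    """
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward for leaving the track

    track_width = params["track_width"]
    distance = params["distance_from_center"]

    # Tiered reward: higher when the car is closer to the center line.
    if distance <= 0.1 * track_width:
        return 1.0
    if distance <= 0.25 * track_width:
        return 0.5
    if distance <= 0.5 * track_width:
        return 0.1
    return 1e-3
```

The training loop in the cloud repeatedly evaluates this function on simulated episodes and updates the policy that is later downloaded to the car.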
Alphabet's self-driving spinoff Waymo achieved some noteworthy milestones this year, in August surpassing 10 million real-world miles with its driverless cars and last week launching Waymo One, a commercial driverless taxi service. But its researchers have their eyes fixed on the future. In a blog post published today on Medium, researchers Mayank Bansal and Abhijit Ogale detailed an approach to AI driver training that taps labeled data -- that is to say, Waymo's millions of annotated miles from expert driving demonstrations -- in a supervised manner. "In recent years, the supervised training of deep neural networks using large amounts of labeled data has rapidly improved the state-of-the-art in many fields, particularly in the area of object perception and prediction, and these technologies are used extensively at Waymo," the researchers wrote. "Following the success of neural networks for perception, we naturally asked ourselves the question: … can we train a skilled driver using a purely supervised deep learning approach?"
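The supervised approach the researchers describe is, at its core, behavior cloning: fit a policy so its outputs match expert driving labels. The toy below is not Waymo's method; it is a minimal numpy sketch, with an invented 3-feature state and a hidden linear "expert," showing how a policy can be regressed onto demonstration labels.

```python
import numpy as np

# Toy behavior cloning: fit a policy to mimic expert steering labels.
# The 3-feature state (lane offset, heading error, curvature) and the
# linear "expert" are invented here purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))           # states from demonstrations
w_expert = np.array([-1.0, -0.5, 0.8])  # hidden expert policy
y = X @ w_expert                        # expert steering commands

w = np.zeros(3)                         # learned policy weights
lr = 0.1
for _ in range(200):                    # plain gradient descent on MSE
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= lr * grad

# The learned weights converge to the expert's.
print(np.allclose(w, w_expert, atol=1e-3))
```

A real driving policy replaces the linear map with a deep network and the synthetic states with millions of logged, annotated miles, but the supervised objective is the same.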
KEY POINTS
- The information technology (IT) sector is poised for strong growth, with 5.0 percent growth projected.
- The IT Industry Business Confidence Index notched one of its highest ratings ever heading into the first quarter of 2018.
- Executives cite robust customer demand and the uptake of emerging product and service categories as key contributors to the positive sentiment. Revenue growth should follow suit.
- Global Forecasts projects growth of 5.0 percent across the global tech sector in 2018; if everything falls into place, the upside of the forecast could push growth into the 7 percent-plus range.
- According to IDC, global information technology spending will top $4.8 trillion in 2018, with the U.S. accounting for approximately $1.5 trillion of the market.
Autonomous driving is a challenging domain that entails multiple aspects: a vehicle should be able to drive to its destination as fast as possible while avoiding collisions, obeying traffic rules, and ensuring the comfort of passengers. In this paper, we present a deep learning variant of thresholded lexicographic Q-learning for the task of urban driving. Our multi-objective DQN agent learns to drive on multi-lane roads and intersections, yielding and changing lanes according to traffic rules. We also propose an extension for factored Markov Decision Processes to the DQN architecture that provides auxiliary features for the Q function. This is shown to significantly improve data efficiency. We then show that the learned policy is able to zero-shot transfer to a ring road without sacrificing performance. To our knowledge, this is the first reinforcement learning based autonomous driving agent in the literature that can handle multi-lane intersections with traffic rules.
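The core idea of thresholded lexicographic selection is that higher-priority objectives (e.g., safety) constrain the action set before lower-priority ones (e.g., speed) are optimized. As an illustrative sketch only, not the paper's implementation, the function below keeps actions whose safety Q-value is within a slack of the best, then maximizes a secondary objective among the survivors; the Q-arrays and slack are invented.

```python
import numpy as np

def lexicographic_action(q_safety, q_speed, slack=0.1):
    """Thresholded lexicographic action selection (illustrative sketch).

    Keep actions whose safety Q-value is within `slack` of the best
    safety value, then maximize the speed objective among survivors.
    The two Q-arrays and the slack value are assumptions for this example.
    """
    q_safety = np.asarray(q_safety, dtype=float)
    q_speed = np.asarray(q_speed, dtype=float)
    ok = q_safety >= q_safety.max() - slack   # near-optimal on safety
    masked = np.where(ok, q_speed, -np.inf)   # rule out unsafe actions
    return int(np.argmax(masked))
```

For example, with safety values [1.0, 0.95, 0.2] and speed values [0.1, 0.9, 2.0], the fastest action (index 2) is rejected as unsafe and the agent picks index 1, the fastest among the safe actions.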
As the United Kingdom's largest automobile manufacturer and the largest investor in research and development in the UK manufacturing sector, Jaguar Land Rover is the combination of two iconic British car brands--Jaguar, which features luxury sports cars and sedans, and Land Rover, maker of premium all-wheel-drive vehicles. Both brands began in the middle of the 20th century and gained a reputation for innovation.
Abstract--This paper investigates vision-based autonomous driving with deep learning and reinforcement learning methods. Different from the end-to-end learning method, our method breaks the vision-based lateral control system down into a perception module and a control module. The perception module, which is based on a multi-task learning neural network, first takes a driver-view image as its input and predicts the track features. The control module, which is based on reinforcement learning, then makes a control decision based on these features. In order to improve data efficiency, we propose visual TORCS (VTORCS), a deep reinforcement learning environment based on the open racing car simulator (TORCS). By means of the provided functions, one can train an agent with the input of an image or various physical sensor measurements, or evaluate the perception algorithm on this simulator. The trained reinforcement learning controller outperforms the linear quadratic regulator (LQR) controller and model predictive control (MPC) controller on different tracks. The experiments demonstrate that the perception module shows promising performance and that the controller is capable of driving the vehicle well along the track center with visual input.

In recent years, artificial intelligence (AI) has flourished in many fields such as autonomous driving, games, and engineering applications. As one of the most popular topics, autonomous driving has drawn great attention from both the academic and industrial communities and is thought to be the next revolution in the intelligent transportation system. The autonomous driving system mainly consists of four modules: an environment perception module, a trajectory planning module, a control module, and an actuator mechanism module. The initial perception methods are based on expensive LIDARs which usually cost tens of thousands of dollars. The high cost limits their large-scale application to ordinary vehicles. Recently, more attention has been paid to image-based methods, whose core sensor, the camera, is relatively cheap and already equipped on most vehicles. Some of these perception methods have been developed into products. In this paper, we focus on the lateral control problem based on the image captured by the onboard camera.
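The two-module decomposition can be sketched as a pipeline: a perception stage maps the camera view to track features, and a control stage maps those features to a steering command. The stand-ins below (a pass-through perception stub and a proportional controller in place of the learned RL policy, with invented gains and toy kinematics) are illustrative assumptions, not the paper's networks; they only show the interface between the two modules and that the closed loop returns the car to the track center.

```python
import numpy as np

def perception(state):
    """Stand-in for the multi-task perception network: extract track
    features (lateral offset, heading error) from the current state.
    A real system would predict these from a driver-view image."""
    offset, heading = state
    return np.array([offset, heading])

def controller(features, k=(0.8, 1.2)):
    """Stand-in for the learned RL policy: map track features to a
    steering command. Gains are illustrative assumptions."""
    return -k[0] * features[0] - k[1] * features[1]

# Toy closed loop: the car starts off-center and the controller
# steers it back toward the track center line.
offset, heading, dt = 1.0, 0.0, 0.1
for _ in range(100):
    steer = controller(perception((offset, heading)))
    heading += steer * dt    # steering changes the heading
    offset += heading * dt   # heading changes the lateral offset

print(abs(offset))  # small: the car has converged near the center line
```

Swapping the proportional controller for a trained policy, and the stub for a CNN, recovers the modular architecture the abstract describes.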
You could argue that Waymo, the self-driving subsidiary of Alphabet, has the safest autonomous cars around. It's certainly covered the most miles. But in recent years, serious accidents involving early systems from Uber and Tesla have eroded public trust in the nascent technology. To win it back, putting in the miles on real roads just isn't enough. Still, today Waymo announced that its vehicles have clocked more than 10 million miles since 2009.