AI Computing for Automotive: The Battle for Autonomy - EE Times Asia

#artificialintelligence

The 2025 market for automotive AI, including ADAS and robotic vehicles, is estimated at $2.75 billion, of which $2.5 billion will be "ADAS only"... Artificial Intelligence (AI) is gradually invading our lives through everyday objects like smartphones, smart speakers, and surveillance cameras. The hype around AI has led some players to treat it as a secondary objective, more or less difficult to achieve, rather than as the central tool for reaching the real objective: autonomy. Who are the winners and losers in the race for autonomy? "AI is gradually invading our lives, and this will be particularly true in the automotive world," asserts Yohann Tschudi, Technology & Market Analyst, Computing & Software at Yole Développement (Yole). "AI could be the central tool to achieve AD [autonomous driving]; in the meantime, some players are afraid of overinflated hype and do not put AI at the center of their AD strategy."


Julia Language in Machine Learning: Algorithms, Applications, and Open Issues

arXiv.org Machine Learning

Machine learning is driving development across many fields in science and engineering. A simple and efficient programming language could accelerate applications of machine learning in various fields. Currently, the programming languages most commonly used to develop machine learning algorithms include Python, MATLAB, and C/C++. However, none of these languages strikes a good balance between efficiency and simplicity. The Julia language is a fast, easy-to-use, and open-source programming language that was originally designed for high-performance computing, and it balances efficiency and simplicity well. This paper summarizes the related research work and developments in the application of the Julia language in machine learning. It first surveys the popular machine learning algorithms that are developed in the Julia language. Then, it investigates applications of the machine learning algorithms implemented with the Julia language. Finally, it discusses the open issues and the potential future directions that arise in the use of the Julia language in machine learning.


A Little Fog for a Large Turn

arXiv.org Machine Learning

Small, carefully crafted perturbations called adversarial perturbations can easily fool neural networks. However, these perturbations are largely additive and not naturally found. We turn our attention to the field of autonomous navigation, wherein adverse weather conditions such as fog have a drastic effect on the predictions of these systems. These weather conditions are capable of acting like natural adversaries that can help in testing models. To this end, we introduce a general notion of adversarial perturbations, which can be created using generative models, and provide a methodology inspired by Cycle-Consistent Generative Adversarial Networks to generate adversarial weather conditions for a given image. Our formulation and results show that these images provide a suitable testbed for steering models used in autonomous navigation. Our work also presents a more natural and general definition of adversarial perturbations based on perceptual similarity. Autonomous navigation has occupied a central position in the efforts of computer vision researchers in recent years. Autonomous vehicles can not only aid navigation in urban areas but also provide critical support in disaster-affected areas, places with unknown topography (such as Mars), and many more. The vast potential of these applications and the feasibility of the solutions in contemporary times have led to the growth of several organizations across industry, academia, and government institutions that are investing significant effort in self-driving vehicles.
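
As a rough illustration of the kind of objective the paper describes, here is a minimal sketch assuming PyTorch; FogGenerator, steering_model, and perceptual_fn are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (assumes PyTorch is installed). FogGenerator, steering_model
# and perceptual_fn are placeholders, not the authors' code.
import torch
import torch.nn as nn

class FogGenerator(nn.Module):
    """Toy image-to-image generator standing in for the clear->fog mapping."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def adversarial_fog_loss(generator, steering_model, clear_img, perceptual_fn, eps=0.3):
    """Reward a large change in the predicted steering angle while keeping the
    foggy image perceptually close to the clear one."""
    foggy = generator(clear_img)
    turn_gap = (steering_model(foggy) - steering_model(clear_img)).abs().mean()
    perceptual = perceptual_fn(foggy, clear_img)      # e.g. an LPIPS-style distance
    return -turn_gap + torch.relu(perceptual - eps)   # big turn gap, bounded perceptual distance
```

A full CycleGAN-style setup would add a fog-to-clear generator plus cycle-consistency and discriminator losses on top of this objective.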


Why Jamaica urgently needs a minister of artificial intelligence

#artificialintelligence

In 2016, in a face-to-face talk, I advised The University of the West Indies (UWI) Mona's artificial intelligence lecturer to introduce artificial neural networks, also known as modern artificial intelligence (AI), as a standalone course in the computer science degree. Artificial neural networks power many smart applications today, ranging from self-driving cars to automated disease diagnosis. Also in 2016, I founded the Machine Learning Jamaica Institute, which later, on June 2, 2018, launched a free online curriculum, including a 2017 artificial intelligence book I wrote myself. The goal was to establish a physical building dedicated to artificial intelligence, to prepare the nation for growing automation. Of note, this year Abu Dhabi, capital of the United Arab Emirates, announced it would start "the world's first University of Artificial Intelligence", capitalising on the fact that an increasing number of countries are becoming more and more serious about artificial intelligence education.


NVIDIA Announces DRIVE AGX Orin, One of the Most Advanced Platforms for Autonomous Vehicles

#artificialintelligence

At Nvidia's GTC technology conference in China this week, the chipmaker unveiled its latest NVIDIA DRIVE platform, the AGX Orin. Orin is an advanced processor for autonomous vehicles and robots, the result of four years of R&D investment by Nvidia. The new platform is powered by a new system-on-a-chip (SoC) consisting of 17 billion transistors. The Orin SoC integrates NVIDIA's next-generation GPU architecture and Arm Hercules CPU cores, combined with new deep learning and computer vision accelerators that deliver 200 trillion operations per second (200 TOPS), which Nvidia says is 7 times the performance of the company's previous-generation Xavier SoC, which delivers 30 TOPS. Orin can move over 200 gigabytes of data per second while using just 60 to 70 watts of power, according to Danny Shapiro, Nvidia's senior director of automotive.
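
As a quick check of the quoted figures (the arithmetic is not spelled out in the article), 200 TOPS against Xavier's 30 TOPS is a ratio of 200 / 30 ≈ 6.7, which NVIDIA rounds up to the claimed 7x improvement.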


A Start-up's Evolution from AI Lab to AI Business

#artificialintelligence

For Preferred Networks, building tech for self-driving cars and smart factories is the daily routine. One of its biggest opportunities is to devise a business model that complements its technology. If you live outside Japan or work outside of the machine learning community, you may not have heard of Preferred Networks (PFN). This Tokyo-based start-up has been incrementally realising the potential of AI to reshape the internet of things (IoT) – ever since shifting focus from search engines to deep learning (and dropping its original moniker, Preferred Infrastructure) in 2014. But some of the company's biggest breakthroughs appear modest at first glance.


Paper by "Deep Learning Conspiracy" in Nature

#artificialintelligence

In the context of convolutional neural networks (ConvNets), LBH (LeCun, Bengio, and Hinton) mention pooling, but not its pioneer (Weng, 1992), who replaced Fukushima's (1979) spatial averaging with max-pooling, a technique widely used today, including by LBH, who write: "ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012," citing Hinton's 2012 paper (Krizhevsky et al., 2012). Earlier, committees of max-pooling ConvNets had been accelerated on GPUs (Ciresan et al., 2011a) and used to achieve the first superhuman visual pattern recognition in a controlled machine learning competition, namely the highly visible IJCNN 2011 traffic sign recognition contest in Silicon Valley (relevant for self-driving cars). That system performed twice as well as humans and three times better than the nearest artificial competitor, a system co-authored by LeCun of LBH. It also broke several other machine learning records and surely was not "forsaken" by the machine-learning community. In fact, the later system (Krizhevsky et al., 2012) was very similar to the earlier 2011 system.
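
To make the pooling distinction concrete, here is a small illustrative comparison of spatial averaging versus max-pooling on a single feature map (assuming PyTorch; this is not code from any of the cited papers).

```python
# Illustrative comparison: average pooling vs. max-pooling over 2x2 windows.
import torch
import torch.nn as nn

x = torch.tensor([[[[1., 5., 2., 0.],
                    [3., 4., 1., 1.],
                    [0., 2., 6., 2.],
                    [1., 1., 3., 7.]]]])  # one 4x4 feature map

avg = nn.AvgPool2d(kernel_size=2)(x)  # spatial averaging: mean of each 2x2 window
mx  = nn.MaxPool2d(kernel_size=2)(x)  # max-pooling: keep only the strongest response

print(avg)  # [[3.25, 1.0], [1.0, 4.5]]
print(mx)   # [[5., 2.], [2., 7.]]
```

Max-pooling keeps only the strongest activation in each window, which gives a degree of translation invariance that averaging smooths away.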


StradVision, ushering in the era of the fully autonomous vehicle - PetaCrunch

#artificialintelligence

StradVision has raised $16.6M in total. We talked with Junhwan Kim, its CEO. How would you describe StradVision in a single tweet? StradVision is a pioneer in deep learning-based vision processing technology, providing the software that will allow Advanced Driver-Assistance Systems (ADAS) in autonomous vehicles to reach the next level of safety and usher in the era of the fully autonomous vehicle. How did it all start and why?


Flow: A Modular Learning Framework for Autonomy in Traffic

arXiv.org Artificial Intelligence

The rapid development of autonomous vehicles (AVs) holds vast potential for transportation systems through improved safety, efficiency, and access to mobility. However, due to numerous technical, political, and human factors challenges, new methodologies are needed to design vehicles and transportation systems for these positive outcomes. This article tackles important technical challenges arising from the partial adoption of autonomy (hence termed mixed autonomy, to involve both AVs and human-driven vehicles): partial control, partial observation, complex multi-vehicle interactions, and the sheer variety of traffic settings represented by real-world networks. To enable the study of the full diversity of traffic settings, we first propose to decompose traffic control tasks into modules, which may be configured and composed to create new control tasks of interest. These modules include salient aspects of traffic control tasks: networks, actors, control laws, metrics, initialization, and additional dynamics. Second, we study the potential of model-free deep Reinforcement Learning (RL) methods to address the complexity of traffic dynamics. The resulting modular learning framework is called Flow. Using Flow, we create and study a variety of mixed-autonomy settings, including single-lane, multi-lane, and intersection traffic. In all cases, the learned control law exceeds human driving performance (measured by system-level velocity) by at least 40% with only 5-10% adoption of AVs. In the case of partially-observed single-lane traffic, we show that a low-parameter neural network control law can eliminate commonly observed stop-and-go traffic. In particular, the control laws surpass all known model-based controllers, achieving near-optimal performance across a wide spectrum of vehicle densities (even with a memoryless control law) and generalizing to out-of-distribution vehicle densities.
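
As a loose illustration of the modular decomposition described above, the sketch below shows how a traffic control task might be assembled from independently chosen modules. This is a hypothetical Python sketch; the names do not match Flow's actual API.

```python
# Hypothetical sketch of composing a traffic control task from modules
# (networks, actors, control laws, metrics, initialization); not Flow's API.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TrafficTask:
    network: str                              # e.g. "single_lane_ring"
    actors: Dict[str, int]                    # e.g. {"av": 1, "human": 21}
    control_law: Callable[[dict], float]      # maps an observation to an acceleration
    metric: Callable[[List[float]], float]    # e.g. mean system-level velocity
    initialization: str                       # e.g. "uniform_spacing"

def system_velocity(speeds: List[float]) -> float:
    """System-level velocity metric used to score a control law."""
    return sum(speeds) / len(speeds)

# Compose a mixed-autonomy task by picking one option per module:
ring_task = TrafficTask(
    network="single_lane_ring",
    actors={"av": 1, "human": 21},
    control_law=lambda obs: 0.0,   # placeholder; a learned RL policy would go here
    metric=system_velocity,
    initialization="uniform_spacing",
)
```

Because each module is swappable, new settings (multi-lane roads, intersections, different AV penetration rates) can be created by changing one field at a time rather than writing a new task from scratch.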


How Japan can win in the ongoing AI war - The Japan Times

#artificialintelligence

Can Japan compete in the global battle for dominance in artificial intelligence and robotics that is under way? A long-standing strength in AI research gives the United States an advantage that is reinforced by the deep bench of AI talent at its numerous universities and tech giants like Apple, Amazon, Facebook, Google and Microsoft. China's government incentives and growing leadership in the mobile economy have led to a data advantage: its internet giants like Tencent, Alibaba, Baidu and DiDi have an unparalleled view into the minutiae of everyday economic activities across hundreds of millions of consumers, data that feeds into increasingly sophisticated deep learning systems powering AI-native applications ranging from news filtering to medical diagnostics. Japan does not have to be left behind as the U.S. and China race ahead of the rest of the world. But building dominance in this new generation of technologies will require change and planning.