

Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and of our lives. It is considered the new electricity that is revolutionizing the world. AI attracts heavy investment from both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results on many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs driven by over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI: its history, its potential, and its limitations. We believe that AI is a helper, not a ruler, of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations, methods, and machine learning techniques for AI are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what should be done in the future. In the appendix, we look at the development of AI education, especially from the perspective of course contents at our own university.


The NVIDIA PilotNet Experiments

arXiv.org Artificial Intelligence

Four years ago, an experimental system known as PilotNet became the first NVIDIA system to steer an autonomous car along a roadway. This system represents a departure from the classical approach to self-driving, in which the process is manually decomposed into a series of modules, each performing a different task. In PilotNet, by contrast, a single deep neural network (DNN) takes pixels as input and produces a desired vehicle trajectory as output; there are no distinct internal modules connected by human-designed interfaces. We believe that handcrafted interfaces ultimately limit performance by restricting information flow through the system, and that a learned approach, in combination with other artificial intelligence systems that add redundancy, will lead to systems that perform better overall. We continue to conduct research toward that goal. This document describes the PilotNet lane-keeping effort, carried out over the past five years by our NVIDIA PilotNet group in Holmdel, New Jersey. Here we present a snapshot of system status in mid-2020 and highlight some of the work done by the PilotNet group.
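The pixels-in, trajectory-out idea is easy to make concrete. Below is a minimal PyTorch sketch whose layer sizes follow the original 2016 PilotNet publication; as a simplification it predicts a single steering value rather than the full trajectory described above, and the class and variable names are illustrative, not NVIDIA's code.

```python
import torch
import torch.nn as nn

class PilotNetSketch(nn.Module):
    """End-to-end steering network in the spirit of the 2016 PilotNet paper.

    Input: a normalized 3x66x200 camera image.
    Output: a single steering value (the 2020 system predicts a trajectory;
    this sketch keeps the simpler single-output head for illustration).
    """
    def __init__(self):
        super().__init__()
        # Five convolutional layers extract road features from raw pixels.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        # Three fully connected layers map features to a steering command.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),  # steering output
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PilotNetSketch()
frame = torch.randn(1, 3, 66, 200)  # one normalized camera frame
steering = model(frame)
print(steering.shape)  # torch.Size([1, 1])
```

In the actual effort, a network of this kind is trained by behavior cloning: recorded camera frames serve as inputs and the human driver's commands as labels, so no module boundaries or hand-designed interfaces ever enter the pipeline.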


NVIDIA Builds Supercomputer to Build Self-Driving Cars - NVIDIA Blog

#artificialintelligence

In a clear demonstration of why AI leadership demands the best compute capabilities, NVIDIA today unveiled the world's 22nd fastest supercomputer -- DGX SuperPOD -- which provides AI infrastructure that meets the massive demands of the company's autonomous-vehicle deployment program. The system was built in just three weeks with 96 NVIDIA DGX-2H supercomputers and Mellanox interconnect technology. Delivering 9.4 petaflops of processing capability, it has the muscle for training the vast number of deep neural networks required for safe self-driving vehicles. Customers can buy this system in whole or in part from any DGX-2 partner based on our DGX SuperPOD design. AI training of self-driving cars is the ultimate compute-intensive challenge.


OpenEI: An Open Framework for Edge Intelligence

arXiv.org Artificial Intelligence

In the last five years, edge computing has attracted tremendous attention from industry and academia due to its promise to reduce latency, save bandwidth, improve availability, and protect data privacy. At the same time, we have witnessed the proliferation of AI algorithms and models that have accelerated the successful deployment of intelligence, mainly in cloud services. These two trends, combined, have created a new horizon: Edge Intelligence (EI). The development of EI requires much attention from both the computer systems research community and the AI community. However, existing computing techniques used in the cloud do not apply directly to edge computing, due to the diversity of computing resources and the distribution of data sources. We envision that a framework is missing that can be rapidly deployed on the edge to enable edge AI capabilities. To address this challenge, in this paper we first present a definition and a systematic review of EI. Then, we introduce an Open Framework for Edge Intelligence (OpenEI), a lightweight software platform to equip edges with intelligent processing and data-sharing capabilities. We analyze four fundamental EI techniques that are used to build OpenEI and identify several open problems and potential research directions. Finally, four typical application scenarios enabled by OpenEI are presented.
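The summary above stays architectural, but the core edge-intelligence loop it motivates (run inference on the device itself, keep the raw data local, and watch the latency budget) can be sketched briefly. Everything below, including the EdgeRuntime and EdgeModel names and the latency check, is a hypothetical illustration of that idea, not the OpenEI API.

```python
import time
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class EdgeModel:
    """A locally deployed model plus the metadata an edge runtime needs."""
    name: str
    predict: Callable[[Any], Any]   # the on-device inference function
    max_latency_ms: float           # latency target for this edge device

class EdgeRuntime:
    """Hypothetical mini-runtime: register models on the device, run
    inference locally (raw data never leaves the edge), and flag requests
    that miss their latency target."""

    def __init__(self):
        self._models: dict[str, EdgeModel] = {}

    def register(self, model: EdgeModel) -> None:
        self._models[model.name] = model

    def infer(self, name: str, data: Any) -> tuple[Any, float]:
        model = self._models[name]
        start = time.perf_counter()
        result = model.predict(data)          # inference happens on-device
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > model.max_latency_ms:
            print(f"warning: {name} took {elapsed_ms:.1f} ms "
                  f"(target {model.max_latency_ms} ms)")
        return result, elapsed_ms

# Usage: a trivial stand-in for a real compressed DNN.
runtime = EdgeRuntime()
runtime.register(EdgeModel("toy-classifier", lambda x: sum(x) > 0, 5.0))
label, ms = runtime.infer("toy-classifier", [0.2, -0.1, 0.4])
print(label, f"{ms:.3f} ms")
```

A real framework would add model selection, compression, and data-sharing layers on top of a loop like this; the sketch only shows why keeping both the model and the data on the device addresses the latency and privacy motivations named in the abstract.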


Robotics Heavyweights Embrace NVIDIA's Jetson AGX Xavier For AI Edge Intelligence

#artificialintelligence

NVIDIA's Isaac platform with Jetson Xavier is a computer designed specifically for robotics. Robots are a well-established part of manufacturing but have the opportunity to unlock new efficiencies in industries such as retail, food service, and healthcare. To date, robots have primarily been enclosed in, or segmented into, specific areas to protect people from possible injuries. Today, companies want to integrate robots into various types of workplaces, but this requires a new design paradigm for robotics. Allowing a robot to move freely in an unpredictable environment requires fast, reliable, intelligent computing within the robot. The difficulty of delivering this level of complex computing in a small component, at a low price point, has held the robotics industry back.


NVIDIA expands deep learning institute with new offerings - AI News

#artificialintelligence

NVIDIA is expanding its Deep Learning Institute (DLI) with new partnerships and educational courses. DLI, which trains thousands of students, developers, and data scientists in the critical skills needed to apply artificial intelligence, has joined hands with Booz Allen Hamilton and deeplearning.ai. DLI and Booz Allen Hamilton will provide hands-on training for data scientists to solve challenging problems in healthcare, cybersecurity, and defense. NVIDIA is also expanding its reach with the new NVIDIA University Ambassador Program, which enables instructors worldwide to teach students critical job skills and practical applications of AI at no cost. The GPU designer is already working with professors at several universities, including Arizona State, Harvard, Hong Kong University of Science and Technology, and UCLA.


GTC 2018 Keynote with NVIDIA CEO Jensen Huang

#artificialintelligence

Watch a replay of NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2018 in Silicon Valley, where he unveiled a series of advances to NVIDIA's deep learning computing platform that deliver a 10x performance boost on deep learning workloads; launched the Quadro GV100 GPU, transforming workstations with 118.5 TFLOPS of deep learning performance; introduced NVIDIA DRIVE Constellation to run self-driving car systems for billions of simulated miles, and much more.


NVIDIA unveils advances in AI platform

#artificialintelligence

The 10th edition of NVIDIA Corporation's annual GPU Technology Conference (GTC 2018) for GPU developers opened on Tuesday to an audience of 8,500, where the $9.71-billion company's founder, president, and CEO, Jensen Huang, unveiled a series of advances to its deep learning computing platform. For over two hours, Huang took the audience through some "amazing graphics, amazing science, amazing AI and amazing robots." Introducing NVIDIA RTX technology, which runs on a Quadro GV100 processor, he said: "This technology is the most important advance in computer graphics in 15 years as we can now bring real-time ray tracing to the market. Virtually everyone is adopting it." Elaborating on its relevance, he said that the gaming industry, which makes 400 games a year, uses ray tracing to render entire games in advance.


Watch NVIDIA's GTC keynote in under 15 minutes

Engadget

As usual, NVIDIA CEO Jensen Huang revealed a ton of news during his keynote at the company's GPU Technology Conference yesterday. There's the new Quadro GV100 GPU, which is based on NVIDIA's Volta architecture and will power its new RTX ray tracing technology. The company also revealed its Drive Constellation system for testing self-driving cars in virtual reality, which will certainly help now that it's pausing real-world testing. Finally, NVIDIA made some major announcements around AI: its new DGX-2 "personal supercomputer" is insanely powerful, and it's also partnering with ARM to bring its deep learning technology into upcoming Trillium mobile chips.


NVIDIA swings for the AI fences

ZDNet

NVIDIA, as I've written about several times, is the company that started in gaming and graphics but has rapidly transformed into an organization focused on AI. And it isn't content to stop there: NVIDIA is swinging for the fences, leveraging its GPU technology, deep learning, its Volta architecture, its CUDA GPU programming platform, and a dizzying array of partnerships to move beyond mere tech and become an industrial powerhouse. CEO and founder Jensen Huang gave the Sunday night keynote at CES, a prized time slot once dominated by Microsoft.