Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. AI attracts heavy investment in both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI, its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations, methods, and machine learning approaches for AI are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but little use has been made of them in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what should be done in the future. In the appendix, we look at the development of AI education, especially from the perspective of the curriculum at our own university.


The NVIDIA PilotNet Experiments

arXiv.org Artificial Intelligence

Four years ago, an experimental system known as PilotNet became the first NVIDIA system to steer an autonomous car along a roadway. This system represents a departure from the classical approach for self-driving in which the process is manually decomposed into a series of modules, each performing a different task. In PilotNet, on the other hand, a single deep neural network (DNN) takes pixels as input and produces a desired vehicle trajectory as output; there are no distinct internal modules connected by human-designed interfaces. We believe that handcrafted interfaces ultimately limit performance by restricting information flow through the system and that a learned approach, in combination with other artificial intelligence systems that add redundancy, will lead to better overall performing systems. We continue to conduct research toward that goal. This document describes the PilotNet lane-keeping effort, carried out over the past five years by our NVIDIA PilotNet group in Holmdel, New Jersey. Here we present a snapshot of system status in mid-2020 and highlight some of the work done by the PilotNet group.
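
To make the end-to-end idea concrete, here is a minimal PyTorch sketch loosely following the convolutional stack described in the original 2016 PilotNet paper (five convolutional layers over a 66x200 camera frame, followed by fully connected layers). The trajectory head, waypoint count, and exact layer sizes are illustrative assumptions, not NVIDIA's production network.

```python
# Minimal end-to-end driving network: pixels in, trajectory out.
# Loosely after Bojarski et al. (2016); sizes and trajectory head are
# illustrative assumptions, not NVIDIA's actual PilotNet code.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self, num_waypoints: int = 10):
        super().__init__()
        self.num_waypoints = num_waypoints
        # Convolutional feature extractor: raw pixels to learned features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        # Fully connected head regresses an (x, y) pair per waypoint;
        # 64 * 1 * 18 is the flattened feature size for a 66x200 input.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, num_waypoints * 2),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, 66, 200) normalized camera frames.
        out = self.head(self.features(image))
        return out.view(-1, self.num_waypoints, 2)

model = EndToEndDriver()
trajectory = model(torch.randn(1, 3, 66, 200))  # shape: (1, 10, 2)
```

The point of the sketch is the absence of hand-designed internal interfaces: the same gradient signal trains perception and trajectory prediction jointly, end to end.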


NVIDIA Builds Supercomputer to Build Self-Driving Cars - NVIDIA Blog

#artificialintelligence

In a clear demonstration of why AI leadership demands the best compute capabilities, NVIDIA today unveiled the world's 22nd fastest supercomputer -- DGX SuperPOD -- which provides AI infrastructure that meets the massive demands of the company's autonomous-vehicle deployment program. The system was built in just three weeks with 96 NVIDIA DGX-2H supercomputers and Mellanox interconnect technology. Delivering 9.4 petaflops of processing capability, it has the muscle for training the vast number of deep neural networks required for safe self-driving vehicles. Customers can buy this system in whole or in part from any DGX-2 partner based on our DGX SuperPOD design. AI training of self-driving cars is the ultimate compute-intensive challenge.
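
For a sense of scale, a back-of-the-envelope division gives the average per-node figure, assuming the quoted 9.4 petaflops is the aggregate measured throughput across all 96 DGX-2H nodes:

```python
# Average measured throughput per DGX-2H node, assuming the quoted
# 9.4 petaflops is the aggregate figure for all 96 nodes.
total_pflops = 9.4
nodes = 96
print(f"~{total_pflops * 1000 / nodes:.0f} TFLOPS per node")  # ~98 TFLOPS
```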


How To Train Your Self-Driving Car

#artificialintelligence

The ever-growing field of autonomous driving constantly yields fresh research to explore, innovative technology to test, and new skills to learn. With so much ground to cover, just figuring out where to begin can be a daunting task. That's why at this year's NVIDIA GPU Technology Conference in San Jose, experts will be on-site to provide industry-leading expertise on the foremost topics in self-driving vehicle development. Attendees can learn how to build AI applications for autonomous vehicles in hands-on, instructor-led training offered by the NVIDIA Deep Learning Institute (DLI). Developers can explore how to build on NVIDIA DRIVE AGX and NVIDIA DriveWorks with the guidance of a DLI-certified instructor.


NVIDIA expands deep learning institute with new offerings - AI News

#artificialintelligence

NVIDIA is expanding its Deep Learning Institute (DLI) with new partnerships and educational courses. DLI, which trains thousands of students, developers, and data scientists in the critical skills needed to apply artificial intelligence, has joined hands with Booz Allen Hamilton and deeplearning.ai. DLI and Booz Allen Hamilton will provide hands-on training for data scientists to solve challenging problems in healthcare, cybersecurity, and defense. NVIDIA is also expanding its reach with the new NVIDIA University Ambassador Program, which enables instructors worldwide to teach students critical job skills and practical applications of AI at no cost. The GPU designer is already working with professors at several universities, including Arizona State, Harvard, Hong Kong University of Science and Technology, and UCLA.


GTC 2018 Keynote with NVIDIA CEO Jensen Huang

#artificialintelligence

Watch a replay of NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2018 in Silicon Valley, where he unveiled a series of advances to NVIDIA's deep learning computing platform that deliver a 10x performance boost on deep learning workloads; launched the Quadro GV100 GPU, transforming workstations with 118.5 TFLOPS of deep learning performance; introduced NVIDIA DRIVE Constellation to run self-driving car systems for billions of simulated miles; and much more.


The AI Revolution Is Eating Software: NVIDIA Is Powering It - NVIDIA Blog

#artificialintelligence

The remarkable success of our GPU Technology Conference this month demonstrated to anyone still in doubt the extraordinary momentum of the AI revolution. Throughout the four-day event here in Silicon Valley, attendees from the world's leading companies in media and entertainment, manufacturing, healthcare and transportation shared stories of their breakthroughs made possible by GPU computing. The numbers tell a powerful story. With more than 7,000 attendees, 150 exhibitors and 600 technical sessions, our eighth annual GTC was our largest yet. The world's top 15 tech companies were there, as were the world's top 10 automakers, and more than 100 startups focusing on AI and VR.


Deep Learning Institute Workshop hosted by Dedicated Computing, NVIDIA and Milwaukee School of Engineering

#artificialintelligence

Dedicated Computing is co-hosting a Deep Learning Institute workshop in collaboration with NVIDIA and Milwaukee School of Engineering (MSOE). The workshop will take place at MSOE on April 13, 2017. Deep learning is a new area of machine learning that seeks to use algorithms, big data, and parallel computing to enable real-world applications and deliver results. Machines are now able to learn at the speed, accuracy, and scale required for true artificial intelligence. This technology is used to improve self-driving cars, aid mega-city planners, and help discover new drugs to cure disease.


Artificial Intelligence Market - Impact of $16 Billion by 2022 in Semiconductor Industry

#artificialintelligence

Artificial intelligence (AI) can be understood as the science, engineering, and deployment of machines that perform tasks with human-like intelligence. Since its inception some 60 years ago, AI has seen significant growth, particularly in recent years. Initially, AI was considered a topic for academics, but in recent years, with the development of various technologies, it has become a reality and is influencing many lives and businesses. Additionally, the evolution of supplementary technologies such as cloud computing, machine learning, and cognitive computing is collectively paving the way for the growth of the AI market. According to Mr. Sachin Garg, Associate Director at MarketsandMarkets, who tracks the global semiconductor market, the global artificial intelligence chipset market is expected to be worth USD 16.06 billion by 2022, growing at a CAGR of 62.9% between 2016 and 2022.
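
As a quick sanity check on these figures, the CAGR relationship final = base x (1 + rate)^years can be inverted to estimate the implied 2016 market size; the base value computed below is derived here, not quoted in the article:

```python
# Invert the CAGR formula to back out the implied 2016 base value.
# final = base * (1 + rate) ** years  =>  base = final / (1 + rate) ** years
final_2022 = 16.06   # USD billions, 2022 forecast (from the article)
cagr = 0.629         # 62.9% compound annual growth rate
years = 6            # 2016 -> 2022

base_2016 = final_2022 / (1 + cagr) ** years
print(f"Implied 2016 market size: ~USD {base_2016:.2f} billion")
# Implied 2016 market size: ~USD 0.86 billion
```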