

NVIDIA AI (@NvidiaAI)

#artificialintelligence

Vantage is proud to announce that we have been named a partner of NVIDIA's DGX-Ready Data Center Colocation Program. This will enable Vantage to help customers scale their enterprise infrastructure to support NVIDIA's machine learning and AI initiatives.


DIY AI Has Arrived With Jetson Nano eLearningInside News

#artificialintelligence

In the past, DIY (do-it-yourself) was usually a scissors-and-glue or hammer-and-nails affair. Today, as artificial intelligence (AI) continues to transform how we work, learn, and live, it is becoming increasingly accessible to non-experts and is even beginning to transform our hobbies. Cait Etherington – When most of us think about AI, we don't think about DIY. Can you first tell us a bit about Jetson Nano? What is it, and how does it make AI a DIY affair?


A Decade of Accelerated Computing Augurs Well For GPUs

#artificialintelligence

While accelerators have been around for some time to boost the performance of simulation and modeling applications, accelerated computing didn't gain traction for most people until Nvidia commercialized its Tesla line of GPUs for general-purpose computing. This year marked the tenth annual Nvidia GPU Technology Conference (GTC). I have been to all but one, starting with the inaugural event in 2009. Back then it was a much smaller group; attendance has since leaped 10X, with this year's meeting attracting over 9,000 participants.


NVIDIA's EGX Platform Brings AI to the Edge - PC Perspective

#artificialintelligence

While NVIDIA's graphics cards, Studio laptops, and Super rumors made up the majority of green team Computex coverage, the company also took the wraps off EGX, which could have a much bigger impact on consumers' daily lives than those consumer-facing products, albeit indirectly. EGX is NVIDIA's initiative to push AI (accelerated by GPUs and CUDA APIs, of course) and, more specifically, to bring the computing power and software running AI algorithms as close to the "edge" (end users, data sources, and devices) as possible, while balancing on-premises costs, capabilities, and latencies against cloud-hosted strategies. NVIDIA EGX edge servers are aimed at the healthcare, retail, manufacturing, transportation, and telecommunications industries, which process and make decisions based on real-time streaming data. NVIDIA notes that by 2025 more than 150 billion sensors and Internet of Things (IoT) devices will stream data that needs to be processed by software that can perceive, understand, and act on it. EGX solutions span from small form factor single-board computers like the Jetson Nano, running between 5 and 10 watts and delivering ½ TOPS (e.g.
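
As a rough back-of-the-envelope sketch (plain Python, using only the ½ TOPS and 5–10 W figures quoted above), the per-watt budget of such an edge board works out as follows:

```python
# Rough efficiency estimate from the figures quoted above:
# a Jetson Nano-class board delivering ~0.5 TOPS in a 5-10 W envelope.
tops = 0.5  # trillions of operations per second (from the excerpt)

for watts in (5, 10):
    gops_per_watt = tops * 1000 / watts  # convert TOPS to GOPS, divide by power
    print(f"{watts} W -> {gops_per_watt:.0f} GOPS per watt")

# Prints: 5 W -> 100 GOPS per watt, 10 W -> 50 GOPS per watt
```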


AI is changing the entire nature of compute ZDNet

#artificialintelligence

The world of computing, from chips to software to systems, is going to change dramatically in the coming years as a result of the spread of machine learning. We may still refer to these computers as "Universal Turing Machines," as we have for eighty years or more, but in practice they will be different from the way they have been built and used up to now. Such a change is of interest both to anyone who cares about what computers do and to anyone who's interested in machine learning in all its forms. In February, Facebook's head of A.I. research, Yann LeCun, gave a talk at the International Solid-State Circuits Conference in San Francisco, one of the longest-running computer chip conferences in the world.


Radiology has Always Been a Technology Trendsetter. Here are the Top Technologies that are Bringing Hospitals into the AI Future

#artificialintelligence

I'm at the Society for Imaging Informatics in Medicine (SIIM) annual meeting this week, looking forward to collaborating with the industry and sharing our latest work at the intersection of AI and medicine with the informatics community. Radiology has a history of pushing leading-edge technology in hospitals. For example, many of the earliest computer networks installed in healthcare were required because of the demands of the earliest networked modalities transmitting images to storage. That trend has continued ever since, with everyone from young startups to industry titans exploring the tremendous potential AI holds to save the medical imaging field time and money while improving patient care. The field of radiology is embracing this opportunity.


Volvo & NVIDIA to develop AI platform for autonomous trucks

#artificialintelligence

The Volvo Group has signed an agreement with NVIDIA to jointly develop the decision making system of autonomous commercial vehicles and machines. Utilising NVIDIA's end-to-end artificial intelligence platform for training, simulation and in-vehicle computing, the resulting system is designed to safely handle fully autonomous driving on public roads and highways. The solution will be built on NVIDIA's full software stack for sensor processing, perception, map localisation and path planning, enabling a wide range of possible autonomous driving applications, such as freight transport, refuse and recycling collection, public transport, construction, mining, forestry and more. "Automation creates real-life benefits for both our customers and the society in terms of safety, energy efficiency and as a consequence productivity. We continue to gradually introduce automated applications in the entire spectrum of automation, from driver support systems to fully autonomous vehicles and machines. This partnership with NVIDIA is an important next step on that journey," says Martin Lundstedt, President and CEO of the Volvo Group.


World-Record AI Chip Announced By Habana Labs

#artificialintelligence

Out of the tsunami of AI chip startups that hit the scene in the last few years, Israeli startup Habana Labs stands out from the crowd. The company surprised and impressed many with the announcement last fall of a chip designed to process a trained neural network (a task called "inference") with record performance at low power. At the time, Eitan Medina, the company's Chief Business Officer, promised a second chip called Gaudi that could challenge NVIDIA in the market for training those neural networks. On Monday, the company made good on that promise, announcing a very fast chip that also includes an on-die standards-based fabric to build large networks of accelerators and systems. Availability is set for the second half of 2019.


Nvidia unveiled a new AI engine that renders virtual worlds in real time – Fanatical Futurist by International Keynote Speaker Matthew Griffin

#artificialintelligence

Nvidia has announced a new Artificial Intelligence (AI) deep learning model that "aims to catapult the graphics industry into the AI Age," and the result is the first ever interactive, AI-rendered virtual world. In short, Nvidia now has an AI capable of rendering high-definition virtual environments that can be used to create Virtual Reality (VR) games and simulations in real time. That's big because it takes the effort and cost out of designing and building them from scratch. To work their magic, the researchers used what they called a Conditional Generative Neural Network as a starting point and trained it to render new 3D environments; the breakthrough will allow developers and artists of all kinds to create interactive 3D virtual worlds based on videos from the real world, dramatically lowering the cost and time it takes to create them. "NVIDIA has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network," said Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, who led the research team. "Neural networks – specifically, generative models like these – are going to change the way graphics are created."
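
The excerpt doesn't include NVIDIA's actual model, but as a minimal sketch of what "conditional" generation means here, the toy PyTorch module below (the ToyConditionalGenerator name, layer sizes, and the 8-class label map are all invented for illustration) takes a one-hot semantic label map as its conditioning input and emits an RGB frame:

```python
# Toy sketch of a conditional generator (not NVIDIA's actual model):
# the network is conditioned on a semantic label map and emits an RGB frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyConditionalGenerator(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, label_map: torch.Tensor) -> torch.Tensor:
        # label_map: one-hot segmentation, shape (N, num_classes, H, W)
        return self.net(label_map)

# Example: "render" a 256x256 frame from a random one-hot label map.
labels = torch.randint(0, 8, (1, 256, 256))
one_hot = F.one_hot(labels, num_classes=8).permute(0, 3, 1, 2).float()
frame = ToyConditionalGenerator()(one_hot)  # shape (1, 3, 256, 256)
```

A real video-to-video system also conditions on previous frames and past outputs, which is what keeps the rendered world temporally coherent; this toy version only shows the single-frame conditioning idea.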


Summit Achieves 445 Petaflops on New 'HPL-AI' Benchmark

#artificialintelligence

Traditionally, supercomputer performance is measured using the High-Performance Linpack (HPL) benchmark, which is the basis for the Top500 list that biannually ranks the world's fastest supercomputers. The Linpack benchmark tests a supercomputer's ability to conduct high-performance tasks (like simulations) that use double-precision math. On June's Top500 list, announced Monday, Summit's 148 Linpack petaflops earn it first place by a comfortable margin. Using that same machine configuration, Oak Ridge National Laboratory (ORNL) and Nvidia have tested Summit on HPL-AI and gotten a result of 445 petaflops. While the HPL benchmark tests supercomputers' performance in double-precision math, AI is a rapidly growing use case for supercomputers, and most AI models use mixed-precision math.
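
For intuition about the double- versus mixed-precision distinction, the sketch below (plain NumPy/SciPy, not the actual HPL-AI benchmark code) factors a test system in single precision and then recovers a double-precision-quality solution through iterative refinement, which is broadly the idea HPL-AI exploits with far faster low-precision hardware:

```python
# Minimal sketch of a mixed-precision solve with iterative refinement
# (illustrative only; the real benchmark runs at supercomputer scale).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

# Cheap, low-precision factorization (stand-in for the fast mixed-precision step).
lu, piv = lu_factor(A.astype(np.float32))
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

# Iterative refinement: residuals in double precision, corrections from the
# low-precision factors, until the answer reaches double-precision quality.
for _ in range(5):
    r = b - A @ x
    x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)

print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

The refined answer still has to pass the usual HPL accuracy check, but because most of the arithmetic runs on fast low-precision hardware, a machine can post a much higher HPL-AI number (445 petaflops here) than its double-precision Linpack score (148 petaflops).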