
ProAI: An Efficient Embedded AI Hardware for Automotive Applications - a Benchmark Study

arXiv.org Artificial Intelligence

Development in the field of Single Board Computers (SBCs) has been increasing for several years. They provide a good balance between computing performance and power consumption, which is usually required for mobile platforms such as in-vehicle applications for Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD). However, there is an ever-increasing need for more powerful and efficient SBCs that can run power-intensive Deep Neural Networks (DNNs) in real time and can also satisfy the necessary functional safety requirements, such as the Automotive Safety Integrity Level (ASIL). ProAI is being developed by ZF mainly to run powerful and efficient applications such as multitask DNNs, and on top of that it also has the safety certification required for AD. In this work, we compare and discuss state-of-the-art SBCs on the basis of a power-intensive multitask DNN architecture called Multitask-CenterNet, with respect to performance measures such as FPS and power efficiency. As an automotive supercomputer, ProAI delivers an excellent combination of performance and efficiency, managing nearly twice the number of FPS per watt of a modern workstation laptop and almost four times that of the Jetson Nano. Furthermore, the CPU and GPU utilization during the benchmark shows that ProAI still has power in reserve for further and more complex tasks.
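The efficiency comparison in the abstract boils down to a simple FPS-per-watt ratio. The sketch below, with made-up numbers rather than the paper's measurements, illustrates how such a figure is computed for the devices under test.

```python
# Illustrative only: FPS-per-watt efficiency as used to compare the boards.
# The device names and readings below are placeholders, not the paper's data.

def fps_per_watt(fps: float, avg_power_w: float) -> float:
    """Frames per second delivered for each watt of average power draw."""
    return fps / avg_power_w

# Hypothetical readings for two devices running the same DNN workload
devices = {
    "device_a": {"fps": 60.0, "power_w": 30.0},
    "device_b": {"fps": 25.0, "power_w": 10.0},
}

for name, d in devices.items():
    print(f"{name}: {fps_per_watt(d['fps'], d['power_w']):.2f} FPS/W")
```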


How to Use NVIDIA GPU Accelerated Libraries - KDnuggets

#artificialintelligence

If you are working on an AI project, then it's time to take advantage of NVIDIA GPU-accelerated libraries if you aren't doing so already. It wasn't until the late 2000s that AI projects became viable, when neural networks trained on GPUs drastically sped up the process. Since that time, NVIDIA has been creating some of the best GPUs for deep learning, allowing GPU-accelerated libraries to become a popular choice for AI projects. If you are wondering how you can take advantage of NVIDIA GPU-accelerated libraries for your AI projects, this guide will help answer your questions and get you started on the right path. When it comes to AI or, more broadly, machine learning, using GPU-accelerated libraries is a great option.
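As one concrete illustration of what using a GPU-accelerated library looks like in practice, the minimal sketch below offloads a matrix multiply to the GPU with CuPy, a drop-in NumPy replacement built on NVIDIA's CUDA libraries. It assumes a CUDA-capable GPU and a matching CuPy wheel are installed.

```python
# Minimal sketch: offloading a matrix multiply to the GPU with CuPy,
# which wraps NVIDIA's CUDA libraries (cuBLAS, cuRAND, ...).
# Assumes a CUDA-capable GPU and an installed CuPy wheel matching your CUDA version.
import numpy as np
import cupy as cp

a_cpu = np.random.rand(2048, 2048).astype(np.float32)
b_cpu = np.random.rand(2048, 2048).astype(np.float32)

# Move data to the GPU, compute there, then bring the result back
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu          # matrix multiply runs on the GPU
c_cpu = cp.asnumpy(c_gpu)      # copy result back to host memory

print(c_cpu.shape)
```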


Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks

ZDNet

MLPerf, the benchmark suite that measures how long it takes to train a computer to perform machine learning tasks, has a new contender with the release Wednesday of results showing Graphcore, the Bristol, U.K.-based startup, notching respectable times against the two consistent heavyweights, Nvidia and Google. Graphcore, which was founded five years ago and has $710 million in financing, didn't take the top score in any of the MLPerf tests, but it reported results that are significant when compared with the other two in terms of the number of chips used. Moreover, leaving aside Google's submission, which isn't commercially available, Graphcore was the only competitor besides Nvidia to place in the top five commercially available results. "It's called the democratization of AI," said Matt Fyles, the head of software for Graphcore, in a press briefing. Companies that want to use AI, he said, "can get a very respectable result as an alternative to Nvidia, and it only gets better over time, we'll keep pushing our system."


Nintendo's upgraded Switch may use NVIDIA DLSS for 4K gaming

Engadget

Nintendo's next Switch may use an NVIDIA GPU that supports Deep Learning Super Sampling (DLSS), allowing it to output higher-quality graphics, Bloomberg has reported. The new system-on-chip would enable output at up to 4K quality when the Switch is connected to a TV, and will also reportedly include an upgraded CPU and increased memory. The next-gen Switch is set to have a built-in 7-inch 720p OLED display and 4K output, according to a previous Bloomberg report. Since NVIDIA's DLSS allows for good-quality 4K upscaling, it's not clear whether the upgraded NVIDIA GPU would render natively at 4K or upscale from a lower resolution. The current generation of Switch uses NVIDIA's Tegra graphics to output game visuals at up to 1080p.


The Decline of Computers as a General Purpose Technology

Communications of the ACM

Perhaps in no other technology have there been so many decades of large year-over-year improvements as in computing. It is estimated that a third of all productivity increases in the U.S. since 1974 have come from information technology [4], making it one of the largest contributors to national prosperity. The rise of computers is due to technical successes, but also to the economic forces that financed them. Bresnahan and Trajtenberg [3] coined the term general purpose technology (GPT) for products, like computers, that have broad technical applicability and where product improvement and market growth can fuel each other for many decades. But they also predicted that GPTs could run into challenges at the end of their life cycle: as progress slows, other technologies can displace the GPT in particular niches and undermine this economically reinforcing cycle. We are observing such a transition today as improvements in central processing units (CPUs) slow, and applications move to specialized processors, for example, graphics processing units (GPUs), which can do fewer things than traditional universal processors but perform those functions better. Many high-profile applications are already following this trend, including deep learning (a form of machine learning) and Bitcoin mining. With this background, we can now be more precise about our thesis: "The Decline of Computers as a General Purpose Technology." We do not mean that computers, taken together, will lose technical abilities and thus 'forget' how to do some calculations.


Setting up your Nvidia GPU for Deep Learning (2020)

#artificialintelligence

This article aims to help anyone who wants to set up their Windows machine for deep learning. Although setting up your GPU for deep learning is slightly complex, the performance gain is well worth it. The steps I have taken to get my RTX 2060 ready for deep learning are explained in detail. The first step, when you search for the files to download, is to look at which version of CUDA TensorFlow supports, which can be checked here; at the time of writing this article it supports CUDA 10.1. To download cuDNN you will have to register as an Nvidia developer. I have provided the download links to all the software to be installed below.
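As a quick sanity check after installing the CUDA toolkit and cuDNN, a short script like the sketch below (assuming TensorFlow 2.x) confirms which CUDA/cuDNN versions TensorFlow was built against and whether the GPU is visible; the printed versions and device list will differ from machine to machine.

```python
# Sanity check that the CUDA/cuDNN setup is actually picked up by TensorFlow.
# Assumes TensorFlow 2.x; values printed will differ on your machine.
import tensorflow as tf

build = tf.sysconfig.get_build_info()
print("TensorFlow version:", tf.__version__)
print("Built against CUDA:", build.get("cuda_version"),
      "/ cuDNN:", build.get("cudnn_version"))
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
```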


As AI chips improve, is TOPS the best way to measure their power?

#artificialintelligence

Once in a while, a young company will claim it has more experience than would be logical -- a just-opened law firm might tout 60 years of legal experience, but actually consist of three people who have each practiced law for 20 years. The number "60" catches your eye and summarizes something, yet might leave you wondering whether you'd prefer one lawyer with 60 years of experience. There's actually no universally correct answer; your choice should be based on the type of services you're looking for. A single lawyer might be superb at certain tasks and not great at others, while three lawyers with solid experience could canvass a wider range of subjects. If you understand that example, you also understand the challenge of evaluating AI chip performance using "TOPS," a metric that means trillions of operations per second, or "tera operations per second."
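To make the metric concrete, here is a back-of-the-envelope sketch, using invented figures rather than any real chip's specifications, of how a peak TOPS number is typically derived, and why two very different designs can advertise the same headline figure.

```python
# Back-of-the-envelope illustration of where a TOPS figure comes from:
# peak TOPS = (MAC units x 2 ops per MAC x clock rate) / 1e12.
# All numbers here are made up for illustration, not specs of any real chip.

def peak_tops(mac_units: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    return mac_units * ops_per_mac * clock_hz / 1e12

# A hypothetical accelerator: 4096 INT8 MAC units at 1.2 GHz
print(f"Design A peak: {peak_tops(4096, 1.2e9):.1f} TOPS")

# A different hypothetical design reaches the same peak with many more,
# slower units -- which is exactly why the headline number alone is ambiguous.
print(f"Design B peak: {peak_tops(16384, 0.3e9):.1f} TOPS")
```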


Deep Learning NVIDIA GPU Workstations

#artificialintelligence

We understand every development environment is different, so shouldn't you have the option to choose what's best for you? All EMLI (Exxact Machine Learning Images) environments are available in the latest Ubuntu or CentOS Linux versions, and are built to perform right out of the box.


Oracle BrandVoice: GPU Chips Are Poised To Rewrite (Again) What's Possible In Cloud Computing

#artificialintelligence

At Altair, chief technology officer Sam Mahalingam is heads-down testing the company's newest software for designing cars, buildings, windmills, and other complex systems. The engineering and design software company, whose customers include BMW, Daimler, Airbus, and General Electric, is developing software that combines computer models of wind and fluid flows with machine design in the same process--so an engineer could design a turbine blade while simultaneously seeing its draft's effect on neighboring mills in a wind farm. What Altair needs for a job as hard as this, though, is a particular kind of computing power, provided by graphics processing units (GPUs) made by Silicon Valley's Nvidia and others. "When solving complex design challenges like the interaction between wind structures in windmills, GPUs help expedite computing so faster business decisions can be made," Mahalingam says. [Image caption: An aerodynamics simulation performed with Altair ultraFluidX on the Altair CX-1 concept design, modeled in Altair Inspire Studio.]


NVIDIA Announces Ampere - The Most Exciting GPU Architecture For Modern AI

#artificialintelligence

The GPU Technology Conference is the most exciting event for the AI and ML ecosystem. From researchers in academia to product managers at hyperscale cloud companies to IoT builders and makers, this conference has something relevant for everyone. As an AIoT enthusiast and a maker, I eagerly look forward to GTC. Due to the current COVID-19 situation, I was a bit disappointed to see the event turn into a virtual conference. But the keynote delivered by Jensen Huang, the CEO of NVIDIA, made me forget that it was a virtual event.