Nvidia DLSS Is Building a Walled Garden, and It's Working

#artificialintelligence

I just reviewed AMD's new Radeon RX 6600, a budget GPU that squarely targets 1080p gamers. It's a decent option, especially at a time when GPU prices are through the roof, but it exposed a trend I've seen brewing over the past few graphics card launches: Nvidia's Deep Learning Super Sampling (DLSS) tech is too good to ignore, no matter how strong AMD's competing hardware is. As resolutions and refresh rates continue to climb and demanding features like ray tracing become the norm, upscaling is essential to run the latest games in their full glory. AMD offers an alternative to DLSS in the form of FidelityFX Super Resolution (FSR).


Council Post: 8 Important Industry Functions Quantum Computing Could Soon Revolutionize

#artificialintelligence

From AI to 5G, tech that once seemed as if it belonged in the realm of science fiction is starting to impact our everyday lives. The next sci-fi crossover may well be quantum computers. Headlines on the accomplishments of supercomputers have popped up regularly in the past decade or so, with stories touting their help with issues ranging from predicting climate change and mapping the human bloodstream to defeating human champions on Jeopardy! Through the use of multidimensional representation, quantum computers leave supercomputers in the dust. In 2019, Google's quantum computer, Sycamore, took 200 seconds to perform a mathematical computation that would have taken IBM's Summit supercomputer 10,000 years.


Multi-model Machine Learning Inference Serving with GPU Spatial Partitioning

arXiv.org Artificial Intelligence

As machine learning techniques are applied to a widening range of applications, high-throughput machine learning (ML) inference servers have become critical for online service applications. Such ML inference servers pose two challenges: first, they must provide bounded latency for each request to support consistent service-level objectives (SLOs), and second, they must serve multiple heterogeneous ML models in one system, since certain tasks involve invoking multiple models and consolidating models can improve system utilization. To address these two requirements, this paper proposes a new ML inference scheduling framework for multi-model ML inference servers. The paper first shows that under SLO constraints, current GPUs are not fully utilized for ML inference tasks. To maximize the resource efficiency of inference servers, the key mechanism proposed in this paper is to exploit hardware support for spatial partitioning of GPU resources. With this partitioning mechanism, a new abstraction layer of configurable GPU resources is created. The scheduler assigns requests to virtual GPUs, called gpu-lets, with the most effective amount of resources. The paper also investigates a remedy for potential interference effects when two ML tasks run concurrently on a GPU. Our prototype implementation shows that spatial partitioning enhances throughput by 102.6% on average while satisfying SLOs.
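
To make the gpu-let idea concrete, here is a minimal Python sketch of an SLO-aware assignment policy over partitioned GPU slices. It is an illustrative assumption, not the paper's actual scheduler: the GpuLet and Request records, the inverse-fraction latency model, and the "smallest feasible partition" rule are all placeholders for the profiling and scheduling machinery the paper describes.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GpuLet:
    """A virtual slice of a physical GPU (e.g., a spatial partition)."""
    gpu_id: int
    fraction: float          # share of the physical GPU's compute resources (0..1]
    busy_until: float = 0.0  # time (ms) at which currently assigned work finishes

@dataclass
class Request:
    model: str
    arrival_ms: float
    slo_ms: float
    full_gpu_latency_ms: float  # hypothetical profiled latency on a whole GPU

def estimated_latency(req: Request, g: GpuLet) -> float:
    # Toy model: latency scales inversely with the assigned resource fraction.
    return req.full_gpu_latency_ms / g.fraction

def schedule(req: Request, gpulets: List[GpuLet], now_ms: float) -> Optional[GpuLet]:
    """Assign the request to the smallest gpu-let that still meets its SLO."""
    feasible = []
    for g in gpulets:
        start = max(now_ms, g.busy_until)
        finish = start + estimated_latency(req, g)
        if finish - req.arrival_ms <= req.slo_ms:
            feasible.append((g.fraction, finish, g))
    if not feasible:
        return None  # caller may queue, reject, or repartition
    _, finish, best = min(feasible, key=lambda t: (t[0], t[1]))
    best.busy_until = finish
    return best

if __name__ == "__main__":
    # Two physical GPUs, each split into a 30% and a 70% gpu-let.
    gpulets = [GpuLet(0, 0.3), GpuLet(0, 0.7), GpuLet(1, 0.3), GpuLet(1, 0.7)]
    req = Request("resnet50", arrival_ms=0.0, slo_ms=40.0, full_gpu_latency_ms=10.0)
    print(schedule(req, gpulets, now_ms=0.0))
```

A real system would also have to model the interference between concurrent partitions, which is exactly the effect the paper investigates.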


Representation of binary classification trees with binary features by quantum circuits

arXiv.org Machine Learning

We propose a quantum representation of binary classification trees with binary features based on a probabilistic approach. By using the quantum computer as a processor for probability distributions, a probabilistic traversal of the decision tree can be realized via measurements of a quantum circuit. We describe how tree induction and the prediction of class labels for query data can be integrated into this framework. An on-demand sampling method enables predictions with a constant number of classical memory slots, independent of the tree depth. We experimentally study our approach using both a quantum computing simulator and actual IBM quantum hardware. To our knowledge, this is the first realization of a decision tree classifier on a quantum device.
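
As a toy illustration of the probabilistic-traversal idea (not the authors' circuit construction), the sketch below encodes a single hypothetical split probability of a depth-1 tree into an RY rotation and samples it on a simulator. It assumes qiskit and qiskit-aer are installed; the split probability is a made-up value.

```python
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Toy depth-1 "tree": the root sends a sample to the right child with probability p.
p_right = 0.3                              # hypothetical split probability
theta = 2 * np.arcsin(np.sqrt(p_right))    # RY angle that encodes this probability

qc = QuantumCircuit(1, 1)
qc.ry(theta, 0)    # amplitude-encode the split probability
qc.measure(0, 0)   # one shot = one probabilistic traversal of the root node

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=2000).result().get_counts()
print(counts)      # roughly 70% '0' (left leaf) and 30% '1' (right leaf)
```

Deeper trees would chain controlled rotations so that each measured qubit conditions the split at the next level, which is the role the circuit plays in the paper's framework.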


ProAI: An Efficient Embedded AI Hardware for Automotive Applications - a Benchmark Study

arXiv.org Artificial Intelligence

Development in the field of Single Board Computers (SBCs) has been accelerating for several years. They provide a good balance between computing performance and power consumption, which is typically required for mobile platforms such as in-vehicle applications for Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD). However, there is an ever-increasing need for more powerful and efficient SBCs that can run power-intensive Deep Neural Networks (DNNs) in real time while also satisfying necessary functional safety requirements such as the Automotive Safety Integrity Level (ASIL). ProAI is being developed by ZF mainly to run powerful and efficient applications such as multitask DNNs, and it additionally carries the safety certification required for AD. In this work, we compare and discuss state-of-the-art SBCs using a power-intensive multitask DNN architecture called Multitask-CenterNet, with respect to performance measures such as FPS and power efficiency. As an automotive supercomputer, ProAI delivers an excellent combination of performance and efficiency, managing nearly twice the FPS per watt of a modern workstation laptop and almost four times that of the Jetson Nano. Furthermore, the CPU and GPU utilization observed during the benchmark shows that ProAI still has power in reserve for further and more complex tasks.
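
The headline metric here is FPS per watt. A minimal sketch of how that comparison can be tabulated is shown below; the device names and numbers are placeholders purely for illustration, not measurements from the paper.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkRun:
    device: str
    frames: int         # frames processed during the run
    duration_s: float   # wall-clock duration of the run
    avg_power_w: float  # average board power during the run

    @property
    def fps(self) -> float:
        return self.frames / self.duration_s

    @property
    def fps_per_watt(self) -> float:
        return self.fps / self.avg_power_w

# Placeholder values purely for illustration -- not the paper's results.
runs = [
    BenchmarkRun("device_a", frames=6000, duration_s=60.0, avg_power_w=30.0),
    BenchmarkRun("device_b", frames=9000, duration_s=60.0, avg_power_w=90.0),
]
for r in runs:
    print(f"{r.device}: {r.fps:.1f} FPS, {r.fps_per_watt:.2f} FPS/W")
```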


Council Post: How Quantum Computing Will Transform Cybersecurity

#artificialintelligence

Paul Lipman has worked in cybersecurity for 10 years. Quantum computing is based on quantum mechanics, which governs how nature works at the smallest scales. The smallest classical computing element is a bit, which can be either 0 or 1. The quantum equivalent is a qubit, which can also be 0 or 1, or in what's called a superposition: any combination of 0 and 1. Evaluating a function for every possible state of two classical bits (00, 01, 10, and 11) requires four separate calculations. A quantum computer can perform the calculation on all four states simultaneously.
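
A small NumPy sketch can make the "four states at once" point concrete: a two-qubit register is a vector of four amplitudes, and a single unitary matrix multiplication updates all four basis states (00, 01, 10, 11) in one step. This is an illustrative classical simulation of the math, not a claim about any particular quantum hardware.

```python
import numpy as np

# Two-qubit state vector: amplitudes for |00>, |01>, |10>, |11>.
state = np.zeros(4, dtype=complex)
state[0] = 1.0  # start in |00>

# A Hadamard gate on each qubit puts the register into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
U = np.kron(H, H)          # 4x4 unitary acting on both qubits
state = U @ state          # one matrix-vector product updates all four amplitudes

probs = np.abs(state) ** 2
for bits, p in zip(["00", "01", "10", "11"], probs):
    print(bits, round(p, 3))   # each basis state has probability 0.25
```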


IBM and CERN want to use quantum computing to unlock the mysteries of the universe

ZDNet

It is likely that future quantum computers will significantly boost the analysis of data from CERN's gigantic particle collider. The potential of quantum computers is currently being discussed in settings ranging from banks to merchant ships, and now the technology has been taken even further afield – or rather, lower down. One hundred meters below the Franco-Swiss border sits the world's largest machine, the Large Hadron Collider (LHC) operated by the European laboratory for particle physics, CERN. And to better understand the mountains of data produced by such a colossal system, CERN's scientists have been asking IBM's quantum team for some assistance. The partnership has been successful: in a new paper, which is yet to be peer-reviewed, IBM's researchers have established that quantum algorithms can help make sense of the LHC's data, meaning that it is likely that future quantum computers will significantly boost scientific discoveries at CERN. With CERN's mission statement being to understand why anything in the universe happens at all, this could have big implications for anyone interested in all things matter, antimatter, dark matter and so on.


5 years until enterprise quantum, but your prep begins now

#artificialintelligence

Quantum computing technology is advancing rapidly and is on track to solve extraordinarily complex business problems through enhanced optimization, machine learning, and simulation. Make no mistake, the technology promises to be one of the most disruptive of all time. In fact, I believe quantum computing will hand a significant competitive advantage to the companies that can successfully leverage its potential to transform their business and their industries. While quantum technologies are still maturing, companies are already preparing, with spending on quantum computing projected to surge from $260 million in 2020 to $9.1 billion by 2030, according to research from Tractica. Companies are pursuing the promise of quantum aggressively, as evidenced by the recently announced combination of Honeywell Quantum Solutions and Cambridge Quantum Computing.


How to Use NVIDIA GPU Accelerated Libraries - KDnuggets

#artificialintelligence

If you are working on an AI project, then it's time to take advantage of NVIDIA GPU accelerated libraries if you aren't doing so already. It wasn't until the late 2000s that AI projects became viable, when GPU-trained neural networks began to drastically speed up the process. Since that time, NVIDIA has been creating some of the best GPUs for deep learning, and GPU accelerated libraries have become a popular choice for AI projects. If you are wondering how you can take advantage of NVIDIA GPU accelerated libraries for your AI projects, this guide will help answer your questions and get you started on the right path. When it comes to AI or, more broadly, machine learning, using GPU accelerated libraries is a great option.
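
As one possible starting point, a drop-in NumPy-style library such as CuPy (one of several Python options built on NVIDIA's CUDA libraries) shows the basic pattern: allocate arrays on the GPU, compute there, then copy results back to the host. The sketch assumes CuPy is installed and a CUDA-capable GPU is available.

```python
import cupy as cp

# Allocate two matrices directly in GPU memory.
a = cp.random.rand(4096, 4096, dtype=cp.float32)
b = cp.random.rand(4096, 4096, dtype=cp.float32)

# The matrix multiply runs on the GPU (backed by NVIDIA's cuBLAS library).
c = a @ b

cp.cuda.Stream.null.synchronize()  # wait for the GPU to finish
result = cp.asnumpy(c)             # copy the result back to host memory
print(result.shape, result.dtype)
```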


Building your own Data Science Infrastructure for Deep Learning

#artificialintelligence

Do you want to get started with data science but lack the appropriate infrastructure, or are you already a professional but still have knowledge gaps in deep learning? Then you have two options: 1. Rent a virtual machine from a cloud provider such as Amazon, Microsoft Azure or Google Cloud, or 2. Build your own deep learning machine. To build our own system, we need to consider several points in advance. One of the key points is the choice of the right OS. We have the option to choose between Windows 10 Pro, Linux and Mac OS X.
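
Whichever route you choose, it is worth verifying early that the deep learning stack actually sees the GPU. A minimal check, assuming PyTorch is the framework of choice on the new machine:

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU:", torch.cuda.get_device_name(0))
    print("CUDA version used by PyTorch:", torch.version.cuda)
else:
    device = torch.device("cpu")
    print("No CUDA-capable GPU detected; falling back to CPU.")

# Quick sanity check: run a small tensor operation on the selected device.
x = torch.randn(1024, 1024, device=device)
print((x @ x).sum().item())
```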