AI's rapid evolution is producing an explosion of new hardware accelerators for machine learning and deep learning. Some people call this a "Cambrian explosion," an apt metaphor for the current period of fervent innovation. In the original Cambrian explosion, roughly half a billion years ago, a burst of evolutionary innovation produced the ancestors of most modern animal lineages; from that point onward, these creatures--ourselves included--fanned out to occupy, exploit, and thoroughly transform every ecological niche on the planet. The range of innovative AI hardware-accelerator architectures continues to expand. Although you may think that graphics processing units (GPUs) are the dominant AI hardware architecture, that is far from the truth.
Google recently released TensorFlow Quantum, a toolset for combining state-of-the-art machine learning techniques with quantum algorithm design--an essential step in building tools for developers working on quantum applications. At the same time, Google has focused on improving quantum computing hardware performance by integrating a set of quantum firmware techniques and building a TensorFlow-based toolset that works from the bottom of the stack--the hardware level--up. The fundamental driver for this work is tackling noise and error in quantum computers. What follows is a brief overview of these efforts and of how the impact of noise and imperfections, critical challenges in quantum hardware, is suppressed.
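To see why noise is the fundamental driver here, consider a minimal, illustrative sketch (not Google's actual firmware stack or the TensorFlow Quantum API) of a single qubit passing through a depolarizing error channel. The error probability `p` and the circuit depths are assumed values chosen for illustration; the point is that fidelity decays with every noisy gate layer, so deeper circuits suffer disproportionately:

```python
import numpy as np

# Pauli matrices used by the depolarizing channel.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    """Apply a depolarizing channel with error probability p to density matrix rho."""
    return (1 - p) * rho + (p / 3) * sum(P @ rho @ P for P in (X, Y, Z))

def fidelity_with_zero(rho):
    """Overlap of the state with the ideal |0> state."""
    return rho[0, 0].real

p = 0.01  # assumed per-layer error rate
for depth in (1, 10, 100):
    state = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # ideal |0><0|
    for _ in range(depth):
        state = depolarize(state, p)
    print(depth, round(fidelity_with_zero(state), 4))
```

Even a 1% per-layer error rate leaves the 100-layer state closer to the maximally mixed state than to the ideal one, which is why error suppression has to be engineered from the hardware level up.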
Nvidia's greatest chip growth in 2017 was in the AI and cloud-based sectors, and that growth should continue in 2018. This year, tech companies will begin moving AI toward the "edge" of access, pairing trained machine learning software with cloud-based computing, according to a VentureBeat.com article. The authors, Daniel Li, a principal, and S. Somasegar, a managing director, predicted four new trends in 2018: machine learning models will operate outside of data centers, on phones and personal-assistant devices like Alexa and Siri, to reduce power and bandwidth consumption, lower latency, and ensure privacy; specialized chips for AI will perform better than all-purpose chips, and computers built to optimize AI are already being designed; and text, voice, gestures, and vision will all be used more widely to communicate with systems.
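The bandwidth argument for edge inference is easy to quantify with a back-of-the-envelope sketch. The numbers below are hypothetical (a 16 kHz, 16-bit audio stream and a short result string), but they show why sending only an on-device model's output beats uploading raw sensor data to the cloud:

```python
# Hypothetical comparison: streaming 1 s of raw 16 kHz 16-bit audio to a
# cloud model vs. running a keyword-spotting model on-device and sending
# only the recognized label back.
cloud_bytes = 16_000 * 2          # raw samples uploaded per second
edge_bytes = 16                   # short result string, e.g. "wake_word"
print(cloud_bytes // edge_bytes)  # bandwidth reduction factor: 2000
```

On top of the bandwidth savings, the raw audio never leaves the device, which is the privacy benefit the authors point to.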
Machine learning is playing an increasingly significant role in emerging mobile application domains such as AR/VR and ADAS. Accordingly, hardware architects have designed customized hardware for machine learning algorithms, especially neural networks, to improve compute efficiency. However, machine learning is typically just one processing stage in complex end-to-end applications, which involve multiple components in a mobile system-on-a-chip (SoC). Focusing on ML accelerators alone misses bigger optimization opportunities at the system (SoC) level. This paper argues that hardware architects should expand the optimization scope to the entire SoC. We demonstrate one case study in the domain of continuous computer vision, where the camera sensor, image signal processor (ISP), memory, and NN accelerator are synergistically co-designed to achieve optimal system-level efficiency.
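The paper's core argument is essentially Amdahl's law applied to an SoC pipeline, and a short sketch makes it concrete. The per-stage latencies below are invented for illustration, not taken from the paper: even a 10x faster NN accelerator yields only a modest end-to-end speedup when the camera, ISP, and memory stages are left untouched.

```python
# Hypothetical per-frame latency (ms) of each stage in a continuous-vision
# pipeline on a mobile SoC. Numbers are illustrative, not measured.
stage_ms = {"camera": 5.0, "isp": 8.0, "dram": 6.0, "nn_accel": 11.0}

def frame_latency(stages):
    """Total latency of one frame through the whole pipeline."""
    return sum(stages.values())

def speedup_nn_only(stages, factor):
    """End-to-end speedup if only the NN accelerator gets `factor`x faster."""
    faster = dict(stages, nn_accel=stages["nn_accel"] / factor)
    return frame_latency(stages) / frame_latency(faster)

print(round(speedup_nn_only(stage_ms, 10.0), 2))  # ~1.49x, far below 10x
```

Co-designing the camera, ISP, and memory alongside the accelerator attacks the stages that dominate once the NN is fast, which is the system-level opportunity the paper targets.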
Researchers at NVIDIA have come up with a clever machine learning technique for taking 2D images and fleshing them out into 3D models. Normally this happens in reverse--these days, it's not all that difficult to take a 3D model and flatten it into a 2D image. But to create a 3D model without feeding a system 3D data is far more challenging. And there's information to be gained from doing the opposite--a model that could infer a 3D object from a 2D image would be able to perform better object tracking, for example. What the researchers came up with is a rendering framework called DIB-R, which stands for differentiable interpolation-based renderer.
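The "differentiable interpolation" idea can be illustrated with a toy sketch (this is not the DIB-R implementation, just the principle): if a pixel's color is a barycentric-weighted blend of the triangle's vertex attributes, the rendered image is a smooth function of those attributes, so gradients from an image loss can flow back to the 3D model.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p inside triangle (a, b, c)."""
    T = np.column_stack((b - a, c - a))
    w1, w2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - w1 - w2, w1, w2])

def shade(p, verts, colors):
    """Pixel color = barycentric-weighted blend of the vertex colors.
    The blend is linear in `colors`, so the gradient of the pixel with
    respect to each vertex color is simply its barycentric weight."""
    w = barycentric(p, *verts)
    return w @ colors

# A toy triangle with red, green, and blue vertices.
verts = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(shade(np.array([0.25, 0.25]), verts, colors))  # [0.5, 0.25, 0.25]
```

Because every step is differentiable, plugging such a renderer into a neural network lets the network adjust predicted 3D geometry and appearance until the rendered 2D image matches a photograph--the training signal DIB-R exploits.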