Nvidia has announced a new artificial intelligence (AI) deep learning model that "aims to catapult the graphics industry into the AI Age," and the result is the first interactive AI-rendered virtual world. In short, Nvidia now has an AI capable of rendering high-definition virtual environments in real time, which can be used to create virtual reality (VR) games and simulations. That matters because it removes much of the effort and cost of designing and building such environments from scratch. The researchers started from what they call a conditional generative neural network and trained it to render new 3D environments; the breakthrough will allow developers and artists of all kinds to create interactive 3D virtual worlds based on videos from the real world, dramatically lowering the cost and time it takes to create them. "NVIDIA has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network," said Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, who led the research team. "Neural networks – specifically, generative models like these – are going to change the way graphics are created."
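To illustrate the conditioning idea only — this is a toy structural sketch, not Nvidia's actual model or architecture — a conditional generator produces an image as a function of a semantic input (here a label map) rather than from noise alone. The class labels and the random "palette" standing in for learned weights are assumptions for illustration:

```python
import numpy as np

NUM_CLASSES = 3  # e.g. 0 = sky, 1 = road, 2 = tree (labels are assumptions)

def one_hot(label_map, num_classes=NUM_CLASSES):
    """Convert an (H, W) integer label map to an (H, W, C) one-hot tensor."""
    return np.eye(num_classes)[label_map]

def toy_generator(label_map, rng):
    """Map each semantic class to a colour via a random palette matrix.

    A real conditional generative network applies many learned layers;
    here a single per-pixel linear map stands in for the key idea that
    the output image is conditioned on the semantic input.
    """
    weights = rng.random((NUM_CLASSES, 3))  # class -> RGB "palette"
    return one_hot(label_map) @ weights     # (H, W, 3) image in [0, 1)

rng = np.random.default_rng(0)
labels = np.zeros((4, 4), dtype=int)
labels[2:, :] = 1                           # bottom half: "road"
labels[0, 3] = 2                            # one "tree" pixel
image = toy_generator(labels, rng)
print(image.shape)                          # (4, 4, 3)
```

Every pixel with the same label receives the same colour, which is the essence of conditioning: change the label map and the rendered output changes accordingly.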
Despite the widespread use of convolutional neural networks (CNNs), the convolution operations used in standard CNNs have some limitations. To overcome them, researchers from NVIDIA and the University of Massachusetts Amherst developed a new type of convolution operation that dynamically adapts to input images, generating filters specific to the content. The researchers will present their work at the annual Computer Vision and Pattern Recognition (CVPR) conference in Long Beach, California this week. "Convolutions are the fundamental building blocks of CNNs," the researchers wrote in the paper. "The fact that their weights are spatially shared is one of the main reasons for their widespread use, but it is also a major limitation, as it makes convolutions content-agnostic." To improve the efficiency of CNNs, the team proposed Pixel-Adaptive Convolution (PAC), a generalization of the standard convolution that mitigates this limitation.
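To make the content-agnostic point concrete, here is a deliberately naive NumPy sketch of pixel-adaptive convolution on a single-channel image. The spatially shared filter weights are modulated, per pixel, by a kernel computed on a guidance feature map. The function name, the Gaussian choice of kernel, and the zero padding are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pixel_adaptive_conv(x, guide, w, sigma=1.0):
    """Naive Pixel-Adaptive Convolution (PAC) on a single-channel image.

    Standard convolution shares w across all positions, hence is
    content-agnostic. PAC modulates the shared weights with a kernel K
    on guidance features f, making the effective filter content-adaptive:

        y[i] = sum_j K(f[i], f[i+j]) * w[j] * x[i+j]

    K here is a Gaussian on guidance-feature differences (an assumption;
    the formulation admits other kernels).
    """
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    gp = np.pad(guide, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + kh, j:j + kw]
            gpatch = gp[i:i + kh, j:j + kw]
            # Content-adaptive modulation of the shared filter weights.
            k = np.exp(-0.5 * ((gpatch - guide[i, j]) / sigma) ** 2)
            out[i, j] = np.sum(k * w * patch)
    return out
```

A useful sanity check: with a constant guidance map the modulating kernel is 1 everywhere, so PAC reduces exactly to a standard convolution; with a guidance edge running through the window, contributions from across the edge are suppressed.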
Volvo Group and NVIDIA are delivering autonomy to the world's transportation industries, using AI to revolutionize how people and products move all over the world. At its headquarters in Gothenburg, Sweden, Volvo Group announced Tuesday that it's using the NVIDIA DRIVE end-to-end autonomous driving platform to train, test and deploy self-driving AI vehicles, targeting public transport, freight transport, refuse and recycling collection, construction, mining, forestry and more. By injecting AI into these industries, Volvo Group and NVIDIA can create amazing new vehicles and deliver more productive services. The two companies are co-locating engineering teams in Gothenburg and Silicon Valley. Together, they will build on the DRIVE AGX Pegasus platform for in-vehicle AI computing and utilize the full DRIVE AV software stack for 360-degree sensor processing, perception, map localization and path planning.
Nvidia Corp on Monday said it will make its chips work with processors from ARM Holdings Inc to build supercomputers, deepening Nvidia's push into systems used to model climate change and nuclear weapons. Nvidia was long known as a supplier of graphics chips that make video games on personal computers look more realistic, but researchers now also use its chips inside data centers to speed up artificial intelligence work such as training computers to recognize images. To do so, Nvidia's so-called accelerator chips work alongside central processors from companies such as Intel Corp and International Business Machines Corp. At a supercomputing conference held in Germany on Monday, Nvidia said its accelerator chips will work with ARM processors by the end of the year. ARM, owned by Japan's SoftBank Group Corp, provides the underlying processor technology for the chips in most mobile phones.
From discovering drugs, to locating black holes, to finding safer nuclear energy sources, high performance computing systems around the world have enabled breakthroughs across all scientific domains. Japan's fastest supercomputer, ABCI, powered by NVIDIA Tensor Core GPUs, enables similar breakthroughs by taking advantage of AI. The system is the world's first large-scale, open AI infrastructure serving researchers, engineers and industrial users to advance their science. The software used to drive these advances is as critical as the servers the software runs on. However, installing an application on an HPC cluster is complex and time consuming.
Activity recognition is the ability to identify and recognize the actions or goals of an agent, where an agent is any object or entity that performs actions with end goals. A single agent may perform an action, or a group of agents may act together or interact. Human activity recognition has gained popularity due to demand in many practical applications such as entertainment, healthcare, simulation and surveillance systems. Vision-based activity recognition has the advantage of requiring no human intervention or physical contact with the subject; moreover, networked sets of cameras can track and recognize an agent's activities. Traditional applications for tracking or recognizing human activities relied on wearable devices, which require physical contact with the person. To overcome this, a vision-based activity recognition system can be used, with a camera to record video and a processor to perform the recognition. The work is implemented in two stages. In the first stage, an approach for activity recognition is proposed using background subtraction of images followed by 3D convolutional neural networks, and the impact of applying background subtraction before the 3D-CNN is reported. In the second stage, the work is extended and implemented on a Raspberry Pi, which records a video stream and then recognizes the activity in it. This provides a proof of concept for activity recognition on a small IoT device, which can enhance the system and extend its applications through greater portability, networking, and other device capabilities.
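The first-stage pipeline above can be sketched in NumPy: a median-background subtraction step that isolates moving foreground pixels, followed by a naive 3D convolution of the kind a 3D-CNN layer computes over a frame stack. The function names, sizes, threshold, and synthetic clip are assumptions for illustration; a real system would use a trained network and, on a Raspberry Pi, a camera capture loop:

```python
import numpy as np

def subtract_background(frames, threshold=0.1):
    """Median-background subtraction: keep only moving foreground pixels."""
    background = np.median(frames, axis=0)           # (H, W) static estimate
    mask = np.abs(frames - background) > threshold   # (T, H, W) motion mask
    return frames * mask

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution, the building block of a 3D-CNN layer."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# Synthetic clip: a bright dot moving across a static grey background.
frames = np.full((8, 16, 16), 0.5)
for ti in range(8):
    frames[ti, 8, ti + 4] = 1.0                      # moving foreground
fg = subtract_background(frames)                     # static pixels zeroed out
features = conv3d_valid(fg, np.ones((3, 3, 3)))      # one spatio-temporal filter
print(fg.shape, features.shape)                      # (8, 16, 16) (6, 14, 14)
```

Subtracting the background before the 3D convolution means the spatio-temporal filters respond only to motion, which is the reported benefit of applying it ahead of the 3D-CNN.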
With a little help from AI, you can now create a Bob Ross-style landscape in seconds. In March, researchers from NVIDIA unveiled GauGAN, a system that uses AI to transform images scribbled onto a Microsoft Paint-like canvas into photorealistic landscapes -- just choose a label such as "water," "tree," or "mountain" the same way you'd normally choose a color, and the AI takes care of the rest. At the time, they described GauGAN as a "smart paintbrush" -- and now, they've released an online beta demo so you can try it out for yourself. The level of detail included in NVIDIA's system is remarkable. Draw a vertical line with a circle at the top using the "tree" label, for example, and the AI knows to make the bottom part the trunk and the top part the leaves.
Wearable biometric monitoring devices (BMDs) and artificial intelligence (AI) enable the remote measurement and analysis of patient data in real time. These technologies have generated a lot of "hype," but their real-world effectiveness will depend on patients' uptake. Our objective was to describe patients' perceptions of the use of BMDs and AI in healthcare. We recruited adult patients with chronic conditions in France from the "Community of Patients for Research" (ComPaRe). Participants (1) answered quantitative and open-ended questions about the potential benefits and dangers of using these new technologies and (2) participated in a case-vignette experiment to assess their readiness for using BMDs and AI in healthcare.
At the Computex event in Taiwan, NVIDIA unveiled EGX, a multi-cloud and AI-enabled edge computing platform for enterprises. NVIDIA EGX is a unified edge computing stack that can span from the tiny Jetson Nano to a full rack of T4 servers. Customers can start small with EGX and gradually scale to support full-blown GPUs. NVIDIA is optimizing the software stack to power devices ranging from drones to dedicated servers that can handle AI inferencing at scale. NVIDIA Edge Stack is an optimized platform powered by NVIDIA drivers, a CUDA Kubernetes plugin, a CUDA container runtime, CUDA-X libraries and containerized AI frameworks and applications such as TensorRT, TensorRT Inference Server and DeepStream SDK.
Michael Balint is a senior manager of applied solutions engineering at NVIDIA. Previously, Michael was a White House Presidential Innovation Fellow, where he brought his technical expertise to projects like Vice President Biden's Cancer Moonshot program and Code.gov. Michael has had the good fortune of applying software engineering and data science to many interesting problems throughout his career, including tailoring genetic algorithms to optimize air traffic, harnessing NLP to summarize product reviews, and automating the detection of melanoma via machine learning.