IT Hardware


New AI Chips Set to Reshape Data Centers - EE Times India

#artificialintelligence

AI chip startups are hot on the heels of GPU leader Nvidia. At the same time, there is also significant competition in data center inference... New computing models such as machine learning and quantum computing are becoming more important for delivering cloud services. The most immediate change has been the rapid adoption of ML/AI for consumer and business applications. This new model requires processing vast amounts of data to develop usable information and, eventually, to build knowledge models. These models are rapidly growing in complexity – doubling every 3.5 months.
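For a sense of scale, a doubling period of 3.5 months compounds to roughly an order of magnitude per year. A quick back-of-the-envelope check, taking the article's 3.5-month figure at face value:

```python
# Back-of-the-envelope: annual growth implied by "doubling every 3.5 months".
doubling_period_months = 3.5
growth_per_year = 2 ** (12 / doubling_period_months)
print(f"~{growth_per_year:.1f}x per year")  # ~10.8x per year
```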


Oracle BrandVoice: GPU Chips Are Poised To Rewrite (Again) What's Possible In Cloud Computing

#artificialintelligence

At Altair, chief technology officer Sam Mahalingam is heads-down testing the company's newest software for designing cars, buildings, windmills, and other complex systems. The engineering and design software company, whose customers include BMW, Daimler, Airbus, and General Electric, is developing software that combines computer models of wind and fluid flows with machine design in the same process, so an engineer could design a turbine blade while simultaneously seeing the effect of its draft on neighboring mills in a wind farm. What Altair needs for a job as hard as this, though, is a particular kind of computing power, provided by graphics processing units (GPUs) made by Silicon Valley's Nvidia and others. "When solving complex design challenges like the interaction between wind structures in windmills, GPUs help expedite computing so faster business decisions can be made," Mahalingam says. (Image caption: An aerodynamics simulation performed with Altair ultraFluidX on the Altair CX-1 concept design, modeled in Altair Inspire Studio.)


For Pac-Man's 40th birthday, Nvidia uses AI to make new levels

PCWorld

Pac-Man turns 40 today, and even though the days of quarter-munching arcade machines in hazy bars are long behind us, the legendary game's still helping to push the industry forward. On Friday, Nvidia announced that its researchers have trained an AI to create working Pac-Man games without teaching it about the game's rules or giving it access to an underlying game engine. Nvidia's "GameGAN" simply watched 50,000 Pac-Man games to learn the ropes. That's an impressive feat in its own right, but Nvidia hopes the "generative adversarial network" (GAN) technology underpinning the project can be used in the future to help developers create games faster and train autonomous robots. "This is the first research to emulate a game engine using GAN-based neural networks," Nvidia researcher Seung-Wook Kim said in a press release.


Gaming company NVIDIA shows off AI that recreated Pacman in just four days after watching gameplay

Daily Mail - Science & tech

Gaming company Nvidia says that it's developed an artificial intelligence that can recreate playable games just by watching them. The AI absorbs a game's visual output along with whatever actions the player enters, and can then reproduce a playable version of the game. In a demonstration, Nvidia showed how its AI was able to reconstruct a playable version of Pacman after just four days of watching gamers play it, learning from the gameplay footage and the corresponding user inputs.


NVIDIA's AI built Pac-Man from scratch in four days

Engadget

When Pac-Man hit arcades on May 22nd, 1980, it held the record for time spent in development, having taken a whopping 17 months to design, code and complete. Now, 40 years later to the day, NVIDIA needed just four days to train its GameGAN AI to wholly recreate it based only on watching another AI play through. GameGAN is a generative adversarial network (hence, GAN) similar to those used to generate (and detect) photo-realistic images of people who do not exist. The generator is trained on a large sample dataset and then instructed to generate an image based on what it saw. The discriminator then compares the generated image to the sample dataset to determine how closely the two resemble one another.
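The generator/discriminator interplay described above looks roughly like the following minimal sketch. This is generic PyTorch on toy data, not NVIDIA's GameGAN; the layer sizes and the stand-in "real" dataset are arbitrary placeholders.

```python
# Minimal GAN training loop: a generator learns to produce samples the
# discriminator cannot tell apart from a reference dataset.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how "real" a sample looks (raw logit).
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0  # stand-in "sample dataset"
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Discriminator step: tell real samples apart from generated ones.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```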


AI chips in 2020: Nvidia and the challengers

ZDNet

Omri Geller, Run:AI co-founder and CEO, told ZDNet that Nvidia's announcement about "fractionalizing" GPUs, or running separate jobs within a single GPU, is revolutionary for GPU hardware. Geller said Run:AI has seen many customers with this need, especially for inference workloads: why utilize a full GPU for a job that does not require the full compute and memory of a GPU? "We believe, however, that this is more easily managed in the software stack than at the hardware level, and the reason is flexibility. While hardware slicing creates 'smaller GPUs' with a static amount of memory and compute cores, software solutions allow for the division of GPUs into any number of smaller GPUs, each with a chosen memory footprint and compute power. In addition, fractionalizing with a software solution is possible with any GPU or AI accelerator, not just Ampere servers - thus improving TCO for all of a company's compute resources, not just the latest ones. This is, in fact, what Run:AI's fractional GPU feature enables." InAccel is a Greek startup built around the premise of providing an FPGA manager that allows the distributed acceleration of large data sets across clusters of FPGA resources using simple programming models.
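As a rough illustration of the software-side approach (this is plain PyTorch, not Run:AI's scheduler), a single process can cap its own share of a GPU's memory so that several jobs coexist on one card; the 0.25 fraction below is an arbitrary example value.

```python
# Illustration only: capping one process's share of a GPU in software,
# the kind of "fractional GPU" idea described above.
import torch

if torch.cuda.is_available():
    # Limit this process's caching allocator to ~25% of device 0's memory.
    torch.cuda.set_per_process_memory_fraction(0.25, device=0)

    # Work proceeds as usual; allocations beyond the cap raise an out-of-memory
    # error, so several such processes can share one physical GPU.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    print(y.sum().item())
```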


Nvidia researchers propose technique to transfer AI trained in simulation to the real world

#artificialintelligence

In a preprint paper published this week on arXiv.org, Nvidia and Stanford University researchers propose a novel approach to transferring AI models trained in simulation to real-world autonomous machines. It uses segmentation as the interface between perception and control, leading to what the coauthors characterize as "high success" in workloads like robot grasping. Simulators have advantages over the real world when it comes to model training in that they're safe and almost infinitely scalable. But generalizing strategies learned in simulation to real-world machines -- whether autonomous cars, robots, or drones -- requires adjustment, because even the most accurate simulators can't account for every perturbation.
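A minimal sketch of the "segmentation as interface" idea: the control policy consumes only a segmentation mask, never raw pixels, so a policy trained on simulated masks can later be fed masks produced by a segmentation model on real camera images. This is an illustration under assumed shapes (the class count, action dimension, and network sizes are made-up placeholders), not the paper's implementation.

```python
# Control policy whose only observation is a segmentation mask.
import torch
import torch.nn as nn

NUM_CLASSES, ACTION_DIM = 4, 7  # e.g. background/object/gripper/table; 7-DoF arm

class SegToControlPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(NUM_CLASSES, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, ACTION_DIM)

    def forward(self, mask_logits):
        # Per-pixel class probabilities, not RGB, are the policy's only input.
        probs = torch.softmax(mask_logits, dim=1)
        return self.head(self.encoder(probs))

# In simulation the mask can come from ground-truth labels; on the real robot,
# from any segmentation network trained on real images.
policy = SegToControlPolicy()
fake_mask_logits = torch.randn(1, NUM_CLASSES, 64, 64)
action = policy(fake_mask_logits)
print(action.shape)  # torch.Size([1, 7])
```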


Accelerating Medical Image Segmentation with NVIDIA Tensor Cores and TensorFlow 2 - NVIDIA Developer Blog

#artificialintelligence

Medical image segmentation is a hot topic in the deep learning community. Proof of that is the number of challenges, competitions, and research projects being conducted in this area, which only rises year over year. Among all the different approaches to this problem, U-Net has become the backbone of many of the top-performing solutions for both 2D and 3D segmentation tasks, owing to its simplicity, versatility, and effectiveness. When practitioners are confronted with a new segmentation task, the first step is commonly to use an existing implementation of U-Net as a backbone.
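On NVIDIA GPUs, Tensor Core acceleration in TensorFlow 2 typically comes from mixed precision. Below is a minimal sketch of enabling it for a Keras model; the tiny fully-convolutional network is a placeholder, not the U-Net from the blog post.

```python
# Minimal sketch: enabling mixed precision in TensorFlow 2 so matrix math can
# run in float16 on Tensor Cores.
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")

inputs = tf.keras.Input(shape=(128, 128, 1))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(x)
# Keep the final softmax in float32 for numerical stability.
outputs = tf.keras.layers.Conv2D(2, 1, activation="softmax", dtype="float32")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```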


BMW Group Selects NVIDIA to Redefine Factory Logistics

#artificialintelligence

BMW Group has selected the new NVIDIA Isaac robotics platform for use in its automotive factories, utilizing logistics robots built on advanced AI computing and visualization technologies. The collaboration centers on implementing an end-to-end system based on NVIDIA technologies, from training and testing through to deployment, with robots developed using one software architecture running on NVIDIA's open Isaac robotics platform. Autonomous AI-powered logistics robots now assist the current production process, helping assemble customized vehicles on the same production line. (Image caption: A full BMW Smart Transport Robot (STR) mission modeled in NVIDIA Isaac Sim. The window shows the robot perspective, the Isaac SDK Sight visualization of the warehouse view, global/local maps, the pose tree, and the compute graph.)


Are giant AI chips the future of AI hardware?

#artificialintelligence

New types of AI chips that adopt different ways of organizing memory, compute and networking could reshape the way leading enterprises design and deploy AI algorithms. At least one vendor, Cerebras Systems, has begun testing a single chip about the size of an iPad that moves data around thousands of times faster than existing AI chips. This could open opportunities for developers to experiment with new kinds of AI algorithms. "This is a massive market opportunity and I see a complete rethink of computer architecture in progress," said Ashmeet Sidana, chief engineer at Engineering Capital, a VC firm. The rethink is long overdue, Sidana noted.