Nvidia introduced its GPU-accelerated computing model slightly over a decade ago, Huang noted, and while AI has only recently come to dominate tech news headlines, the company was working on the foundation long before that. Nvidia's tech now resides in many of the world's most powerful supercomputers, and its applications include fields that were once considered beyond the reach of modern computing. Nvidia's graphics hardware now occupies a far more pivotal role, according to Huang – and the company's long list of high-profile partners, including Microsoft and Facebook, bears him out. GTC, in other words, has evolved into arguably the world's biggest developer event focused on artificial intelligence.
I asked Huang to compare the GTC of eight years ago to the GTC of today, given how much of Nvidia's focus has changed. "We invented a computing model called GPU accelerated computing and we introduced it slightly over 10 years ago," Huang said, noting that while AI has only recently come to dominate tech news headlines, the company was working on the foundation long before that. "And so we started evangelizing all over the world. GTC is our developers conference for that. The first year, with just a few hundred people, we were mostly focused on two areas: computer graphics, of course, and physics simulation, whether it's finite element analysis or fluid simulations or molecular dynamics."
San Francisco: Jensen Huang, co-founder, president and chief executive officer of Santa Clara-based Nvidia Corp., says the rapid adoption of artificial intelligence (AI) technologies such as machine learning, deep learning, natural language processing and computer vision augurs well for his company's growth prospects. His confidence stems from the fact that Nvidia designs the chips that deliver the extra computing power clients need in an algorithm-driven world, one that increasingly uses these AI technologies to make business sense of the voluminous data users generate and thereby gain a competitive edge. These chips, called graphics processing units (GPUs), helped Nvidia fuel the growth of the personal-computer gaming market nearly two decades ago, and Huang hopes the increasing use of GPUs for AI will help the company repeat that success. He argues that increasing the number of central processing unit (CPU) transistors in a computer yields only a small gain in application performance, whereas GPUs are designed to handle many tasks simultaneously, making them far better suited to high-performance computing.
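Huang's serial-versus-parallel argument can be loosely illustrated in code. The sketch below uses NumPy's vectorized operations as a CPU-side stand-in for the data-parallel model a GPU provides (a real GPU workload would go through a framework such as CUDA); the arrays, sizes and timings here are purely illustrative, not drawn from the article.

```python
import time

import numpy as np

# Two large input arrays (illustrative size)
n = 1_000_000
rng = np.random.default_rng(42)
a = rng.random(n)
b = rng.random(n)

# Serial style: one element at a time, the way a single CPU core would
start = time.perf_counter()
out_serial = np.empty(n)
for i in range(n):
    out_serial[i] = a[i] + b[i]
serial_time = time.perf_counter() - start

# Data-parallel style: one operation applied across all elements at once,
# analogous to a GPU dispatching the same instruction to thousands of threads
start = time.perf_counter()
out_parallel = a + b
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.3f}s, parallel-style: {parallel_time:.4f}s")
```

The point of the analogy is that the second form expresses the whole computation as one operation over all elements, which is exactly the shape of work a GPU accelerates.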
Nvidia became famous for the graphics processing unit chips that power some of the hottest gaming personal computers. Today, Chief Executive Jen-Hsun Huang signaled that he's aiming even higher in a bid to reinvent the data center and cloud computing. The company announced a new chip and new computers, both focused on artificial intelligence, in particular the fast-rising branch called deep learning, which attempts to mimic the activity in layers of neurons in the brain. The technology is the basis for recent breakthroughs in speech and image recognition, self-driving cars and other technology-driven products and services. "Our company has gone all-in on deep learning," Huang said at the Apr. 5 opening of its annual GPU Technology Conference in San Jose, where he made the announcements.
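For readers unfamiliar with what "layers of neurons" means concretely, a minimal sketch follows: each layer computes a weighted sum of its inputs followed by a nonlinearity, and deep learning stacks such layers. All names, shapes and values here are illustrative, not Nvidia's code or any particular framework.

```python
import numpy as np

def dense_layer(x, weights, bias):
    """One 'layer of neurons': each output neuron takes a weighted sum
    of its inputs, then applies a nonlinearity (here, ReLU)."""
    return np.maximum(0.0, weights @ x + bias)

# Illustrative network: 4 inputs -> 3 hidden neurons -> 2 outputs
rng = np.random.default_rng(0)
x = rng.normal(size=4)                              # input signal
w1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
w2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)

hidden = dense_layer(x, w1, b1)        # first layer of "neurons"
output = dense_layer(hidden, w2, b2)   # second layer stacked on the first
```

Each layer is dominated by a matrix-vector multiply, which is why the highly parallel GPUs discussed here are a natural fit for training and running such networks.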
The "HGX" is the world's most complex motherboard, according to Nvidia CEO Jensen Huang, able to accommodate eight of the company's "A100" GPU chips, shown here as eight giant heat sinks. Nvidia chief executive officer Jensen Huang on Thursday held the virtual version of the company's annual "GTC" conference and unveiled the latest architectural innovations for its flagship data center graphics processing units, or GPUs. As in the past, the company has drawn on the names of famous scientists, in this case the French scientist André-Marie Ampère, following previous architectures named for Volta, Pascal and Maxwell. The first chip manufactured with the new architecture, the "A100," is already shipping to customers. Huang said all the major cloud providers, including Microsoft Azure, Google Cloud and Amazon AWS, will be using the new part in servers of various kinds built around it.