


How AI is Pushing Virtual Reality To The Next Level -- AI Daily - Artificial Intelligence News

#artificialintelligence

For VR environments to be fully immersive, we must strive towards graphics that mirror reality. Unfortunately, such high-fidelity graphics are difficult to attain in real time without framerate drops and stuttering gameplay, which break immersion and can cause motion sickness in players. The consequence is that the majority of VR experiences must use simplistic graphics to keep the experience as smooth as possible. Fortunately, computer graphics titan Nvidia has been utilising deep learning techniques to make such dreams possible. "Deep Learning Super Sampling" (DLSS) is a technology developed by Nvidia that generates high-resolution images from a low-resolution input, making high-quality VR graphics far less costly to render than before.
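NVIDIA has not published the internals of DLSS, but the general idea of learned upscaling can be illustrated with a toy example. The sketch below (assuming PyTorch) uses a made-up `SimpleUpscaler` network with sub-pixel convolution to turn a 960x540 rendered frame into a 1920x1080 output; it is a generic super-resolution illustration, not NVIDIA's actual method.

```python
# Minimal sketch of learned 2x upscaling in the spirit of DLSS-style super sampling.
# DLSS itself is proprietary; this is a generic, untrained illustration only.
import torch
import torch.nn as nn

class SimpleUpscaler(nn.Module):
    """Toy 2x super-resolution network using sub-pixel convolution (PixelShuffle)."""
    def __init__(self, channels: int = 3, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Produce scale**2 feature maps per output channel, then rearrange
            # them into a higher-resolution image with PixelShuffle.
            nn.Conv2d(64, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.body(low_res)

if __name__ == "__main__":
    model = SimpleUpscaler()
    frame = torch.rand(1, 3, 540, 960)   # a 960x540 "low-res" rendered frame
    upscaled = model(frame)              # upscaled to 1920x1080
    print(upscaled.shape)                # torch.Size([1, 3, 1080, 1920])
```

In practice such a network would be trained on pairs of low- and high-resolution frames, so that the upscaled output recovers detail that a naive interpolation would miss.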


Nvidia in advanced talks to buy chipmaker Arm from SoftBank

The Japan Times

LONDON/NEW YORK – Nvidia Corp. is in advanced talks to acquire Arm Ltd., the chip designer that SoftBank Group Corp. bought for $32 billion four years ago, according to people familiar with the matter. The two parties aim to reach a deal in the next few weeks, the people said, asking not to be identified because the information is private. Nvidia is the only suitor in concrete discussions with SoftBank, according to the people. A deal for Arm could be the largest ever in the semiconductor industry, which has been consolidating in recent years as companies seek to diversify and add scale. But any deal with Nvidia, which is a customer of Arm, would likely trigger regulatory scrutiny as well as a wave of opposition from other firms.


University of Florida, NVIDIA to Build Fastest AI Supercomputer in Academia – The Official NVIDIA Blog

#artificialintelligence

The University of Florida and NVIDIA on Tuesday unveiled a plan to build the world's fastest AI supercomputer in academia, delivering 700 petaflops of AI performance. The effort is anchored by a $50 million gift: $25 million from alumnus and NVIDIA co-founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA. "We've created a replicable, powerful model of public-private cooperation for everyone's benefit," said Malachowsky, who serves as an NVIDIA Fellow, at an online event featuring leaders from both UF and NVIDIA. UF will invest an additional $20 million to create an AI-centric supercomputing and data center. The $70 million public-private partnership promises to make UF one of the leading AI universities in the country, advance academic research and help address some of the state's most complex challenges.


EETimes - Nvidia, Google Both Claim MLPerf Training Crown

#artificialintelligence

The third round of MLPerf training benchmark scores for eight different AI models is out, with rivals Nvidia and Google both staking a claim to the crown. While both companies claimed victory, the results bear further scrutiny. Scores are based on systems, not individual accelerator chips. While Nvidia swept the board for commercially available systems with its Ampere A100-based supercomputer, Google's massive TPU v3 system and smaller TPU v4 systems, which it entered under the research category, make the search giant a strong contender. Nvidia took first place in normalized results for all benchmarks in the commercially available systems category with its A100-based systems.
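MLPerf reports time-to-train for each submitted system, and press comparisons typically normalize those times against a reference submission. The snippet below is a hypothetical illustration of that arithmetic; the system names and minutes are placeholders, not actual MLPerf v0.7 results.

```python
# Illustrative only: comparing per-system MLPerf-style time-to-train results.
# The times below are made-up placeholders, not actual MLPerf v0.7 scores.

def speedup_vs_reference(results_minutes: dict, reference: str) -> dict:
    """Return each system's speedup relative to a chosen reference submission.
    MLPerf reports time-to-train, so fewer minutes means a higher speedup."""
    ref_time = results_minutes[reference]
    return {system: ref_time / minutes for system, minutes in results_minutes.items()}

if __name__ == "__main__":
    # Hypothetical time-to-train (minutes) on a single benchmark task.
    results = {
        "reference_system": 120.0,
        "a100_system": 30.0,
        "tpu_system": 28.0,
    }
    for system, speedup in speedup_vs_reference(results, "reference_system").items():
        print(f"{system}: {speedup:.1f}x")
```

Because the scores are per system rather than per chip, a larger submission can top the table even if its individual accelerators are no faster, which is why the per-chip "normalized" view matters in these comparisons.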


Nvidia Dominates Latest MLPerf Training Benchmark Results

#artificialintelligence

MLPerf.org released its third round of training benchmark (v0.7) results today, and Nvidia again dominated, claiming 16 new records. Meanwhile, Google provided early benchmarks for its next-generation TPU v4 accelerator, and Intel previewed performance on third-gen processors (Cooper Lake). Notably, the MLPerf benchmarking organization continues to demonstrate growth; it now has 70 members, a jump from 40 last July when training benchmarks were last released. Fresh from the launch of its new A100 GPU in May and a top-ten finish by Selene (a DGX A100 SuperPOD) in June on the most recent Top500 list, Nvidia was able to run the MLPerf training benchmarks on its new offerings in time for the July MLPerf release. Impressively, Nvidia set records for both scaled-out system performance and single-node performance.


Nvidia and Google claim bragging rights in MLPerf benchmarks as AI computers get bigger and bigger

ZDNet

Nvidia and Google on Wednesday each announced that they had aced a series of tests called MLPerf, staking their claims to the biggest and best hardware and software for crunching common artificial intelligence tasks. The devil's in the details, but both companies' achievements show that the trend in AI continues to be bigger and bigger machine learning endeavors, backed by brawnier computers. Benchmark tests are never without controversy, and some upstart competitors of Nvidia and Google, notably Cerebras Systems and Graphcore, continued to avoid the benchmark competition. In the results announced Wednesday by the MLPerf organization, an industry consortium that administers the tests, Nvidia took top marks across the board for a variety of machine learning "training" tasks, meaning the computing operations required to develop a machine learning neural network from scratch. The full roster of results can be seen in spreadsheet form.


NVIDIA Considers Arm Acquisition In A Deal That Could Upend The Chip Industry

#artificialintelligence

Multiple reports yesterday claim that graphics and data center AI silicon powerhouse NVIDIA has expressed interest in acquiring Arm. Arm's Japanese holding company SoftBank has been exploring a potential sale or an IPO of Arm for some time, more recently courting Apple for a possible deal. Apple reportedly decided not to pursue a bid, and a Bloomberg source now claims NVIDIA has stepped up with specific interest in a deal. For reference, Arm's core processing IP is heavily licensed around the globe, and the company's technologies power virtually every smartphone chip on the market, from Apple silicon to Qualcomm, Huawei and others. Arm core processor technologies also power a huge range of connected devices, from the IoT and the connected home to automotive applications and even supercomputing.


Exxact Extends Deep Learning Infrastructure Solutions with NVIDIA DGX A100 Systems

#artificialintelligence

The NVIDIA DGX A100 is a high-performance computing system for AI training, inference and analytics. It sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor and replacing legacy infrastructure silos with one flexible platform that can support every AI workload. "With the NVIDIA DGX A100, NVIDIA has really changed the game for AI in terms of extreme performance, scale and flexibility. By offering colocation services and flexible lease options, we're making this technology more accessible than ever before," said Jason Chen, Vice President of Exxact Corporation. More than just a server, the DGX A100 includes exclusive access to Exxact's team of AI-fluent experts, who offer prescriptive planning, deployment and optimization expertise to help fast-track AI transformation. Available now, the NVIDIA DGX A100 can be bundled with an optional three-year warranty and support package to improve productivity by reducing downtime on production systems.
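As a rough sanity check on that density figure, the arithmetic below (illustrative only, assuming a standard 42U rack) spreads the quoted 5 petaFLOPS per 6U chassis across a full rack; it deliberately ignores power, cooling and networking constraints.

```python
# Back-of-the-envelope compute density from the figures quoted above:
# 5 petaFLOPS of AI performance in a 6U DGX A100 chassis.
PETAFLOPS_PER_SYSTEM = 5.0
RACK_UNITS_PER_SYSTEM = 6

density = PETAFLOPS_PER_SYSTEM / RACK_UNITS_PER_SYSTEM   # ~0.83 PFLOPS per rack unit
systems_per_rack = 42 // RACK_UNITS_PER_SYSTEM           # 7 chassis fit in a 42U rack
rack_total = systems_per_rack * PETAFLOPS_PER_SYSTEM     # 35 PFLOPS per rack, ignoring
                                                         # power/cooling/network limits

print(f"{density:.2f} PFLOPS per U, up to {rack_total:.0f} PFLOPS per 42U rack")
```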


NVIDIA: AI, Robotics and Self-Driving Cars

#artificialintelligence

NVIDIA (NVDA) is the pioneer and leading designer of graphics processing unit (GPU) chips, which initially were built into computers to improve video gaming quality, asserts Bruce Kaser, editor of Cabot Undervalued Stocks Advisor. However, they were discovered to be nearly ideal for other uses that required immense and accelerated processing power, including data centers and artificial intelligence applications such as professional visualization, robotics and self-driving cars. In April, NVIDIA completed the $6.9 billion acquisition of Mellanox Technologies, an innovator in high-performance interconnect technology routinely used in supercomputers and hyperscale data centers. The firm's data center business now represents about 50% of total revenues. Its shares have increased 17x since the start of 2015 and now trade essentially at their all-time high.


WWT Named Partner of the Year for Deep Learning AI by NVIDIA

#artificialintelligence

ST. LOUIS, MO – July 17, 2020 – World Wide Technology (WWT) today announced that it has been selected by the NVIDIA Partner Network (NPN) as the 2019 Deep Learning AI Partner of the Year for the Americas. This is the third year that WWT has been honored in this category. The NPN selected WWT for its ongoing AI research and development program. To help customers develop AI leadership, WWT published six white papers on leveraging the compute power of NVIDIA DGX systems to develop machine learning and deep learning models for real-time edge video analytics, network optimization, and performance comparisons of multiple reference architectures for ML model development. WWT's research into ML and deep learning is tied to real-world business outcomes, including improvements in mining safety, utility grid optimization, and resource management for manufacturing.