Hardware


Look what's inside Linus Torvalds' latest Linux development PC

ZDNet

In a Linux Kernel Mailing List (LKML) post about the latest progress on the next version of Linux, Linux 5.7-rc7, Linus Torvalds, Linux's top developer, mentioned that, "for the first time in about 15 years, my desktop isn't Intel-based." But a computer is more than just a processor, no matter how speedy it is, so I talked with Torvalds to see exactly what's in his new box. First, he's already impressed by its performance: "My 'allmodconfig' test builds are now three times faster than they used to be, which doesn't matter so much right now during the calming down period, but I will most definitely notice the upgrade during the next merge window," said Torvalds. The AMD Threadripper 3970X he chose comes with 32 cores.
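For readers curious how such a build benchmark is measured, the sketch below times the same kind of workload. It is a minimal illustration, not Torvalds' actual setup: it assumes a kernel source checkout at a hypothetical path (~/linux) and a working build toolchain, and simply times an allmodconfig configuration and build at a chosen parallelism.

```python
import os
import subprocess
import time

# Hypothetical paths/values -- adjust for your own machine.
KERNEL_TREE = os.path.expanduser("~/linux")   # kernel source checkout
JOBS = os.cpu_count() or 1                    # e.g. 64 threads on a 32-core/64-thread CPU

def timed(cmd):
    """Run a build step inside the kernel tree and return elapsed seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, cwd=KERNEL_TREE, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    timed(["make", "allmodconfig"])           # generate the "everything as modules" config
    seconds = timed(["make", f"-j{JOBS}"])    # build the kernel and all modules
    print(f"allmodconfig build with -j{JOBS}: {seconds / 60:.1f} minutes")
```

Running the same script before and after a hardware upgrade gives a rough, repeatable way to see the kind of speedup Torvalds describes.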


Meltdown

Communications of the ACM

Moritz Lipp is a Ph.D. candidate at Graz University of Technology, Austria. Michael Schwarz is a postdoctoral researcher at Graz University of Technology, Austria. Daniel Gruss is an assistant professor at Graz University of Technology, Austria. Thomas Prescher is a chief architect at Cyberus Technology GmbH, Dresden, Germany. Werner Haas is the chief technology officer at Cyberus Technology GmbH, Dresden, Germany.


Oracle BrandVoice: GPU Chips Are Poised To Rewrite (Again) What's Possible In Cloud Computing

#artificialintelligence

At Altair, chief technology officer Sam Mahalingam is heads-down testing the company's newest software for designing cars, buildings, windmills, and other complex systems. The engineering and design software company, whose customers include BMW, Daimler, Airbus, and General Electric, is developing software that combines computer models of wind and fluid flows with machine design in a single process, so an engineer could design a turbine blade while simultaneously seeing the effect of its draft on neighboring mills in a wind farm. For a job as hard as this, Altair needs a particular kind of computing power, provided by graphics processing units (GPUs) made by Silicon Valley's Nvidia and others. "When solving complex design challenges like the interaction between wind structures in windmills, GPUs help expedite computing so faster business decisions can be made," Mahalingam says. (Pictured: an aerodynamics simulation performed with Altair ultraFluidX on the Altair CX-1 concept design, modeled in Altair Inspire Studio.)


Australia's new quantum-supercomputing innovation hub and CSIRO roadmap

ZDNet

The Pawsey Supercomputing Centre and Canberra-based quantum computing hardware startup Quantum Brilliance have announced a new hub that aims to combine innovations from both sectors. Under the partnership, Pawsey staff will build up quantum expertise, install and provide access to a quantum emulator at Pawsey, and work alongside Australian researchers. The Pawsey centre is an unincorporated joint venture between the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Curtin University, Edith Cowan University, Murdoch University, and the University of Western Australia. It currently serves over 1,500 researchers across Australia who are involved in more than 150 supercomputing projects. Quantum Brilliance, meanwhile, is using diamond to develop quantum computers that can operate at room temperature, without the cryogenics or the complex infrastructure that it says other quantum technologies require.


NVIDIA Announces Ampere - The Most Exciting GPU Architecture For Modern AI

#artificialintelligence

The GPU Technology Conference is the most exciting event for the AI and ML ecosystem. From researchers in academia to product managers at hyperscale cloud companies to IoT builders and makers, this conference has something relevant for each of them. As an AIoT enthusiast and a maker, I eagerly look forward to GTC. Due to the current COVID-19 situation, I was a bit disappointed to see the event turning into a virtual conference. But the keynote delivered by Jensen Huang, the CEO of NVIDIA, made me forget that it was a virtual event.


Nvidia Jetson Xavier NX review: Redefining GPU accelerated machine learning

#artificialintelligence

Nvidia launched the Jetson Xavier NX embedded System-on-Module (SoM) at the end of last year. It is pin-compatible with the Jetson Nano SoM and includes a CPU, a GPU, PMICs, DRAM, and flash storage. However, it was missing an important accessory: its own development kit. Since a SoM is an embedded board with just a row of connector pins, it is hard to use out of the box. A development board connects all the pins on the module to ports like HDMI, Ethernet, and USB.
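Once the development kit exposes the module's ports and an OS image is flashed, the quickest sanity check is confirming that software can actually see the onboard GPU. Here is a minimal sketch, assuming a Python environment with a CUDA-enabled PyTorch build installed on the Jetson; the review itself does not prescribe this toolchain, so treat it as one illustrative option.

```python
import torch

# Report whether CUDA (the Xavier NX's integrated GPU) is visible to PyTorch.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU:", torch.cuda.get_device_name(0))

    # Tiny matrix multiply on the GPU as a smoke test.
    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    c = a @ b
    torch.cuda.synchronize()
    print("Matmul result shape:", tuple(c.shape))
else:
    print("CUDA not available -- check the JetPack / PyTorch install.")
```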


Quantum machine learning concepts TensorFlow Quantum

#artificialintelligence

Google's quantum supremacy experiment used 53 noisy qubits to demonstrate it could perform a calculation in 200 seconds on a quantum computer that would take 10,000 years on the largest classical computer using existing algorithms. This marks the beginning of the Noisy Intermediate-Scale Quantum (NISQ) computing era. In the coming years, quantum devices with tens-to-hundreds of noisy qubits are expected to become a reality. Quantum computing relies on properties of quantum mechanics to compute problems that would be out of reach for classical computers. A quantum computer uses qubits.
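To make the qubit idea concrete, below is a minimal sketch of the kind of hybrid model TensorFlow Quantum is built for: a single qubit, a parameterized rotation defined in Cirq, and a Keras layer that exposes the rotation angle as a trainable weight. The circuit and observable are illustrative choices, not taken from the article.

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

# One qubit and one trainable rotation angle.
qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")
model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))

# Observable to measure: the Pauli-Z expectation value of the qubit.
readout = cirq.Z(qubit)

# An empty circuit serves as trivial input "quantum data".
inputs = tfq.convert_to_tensor([cirq.Circuit()])

# PQC layer: runs the parameterized circuit and manages theta as a Keras weight.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    tfq.layers.PQC(model_circuit, readout),
])

print(model(inputs))  # expectation value in [-1, 1] for the current theta
```

Wrapping the circuit in a Keras layer is what lets classical optimizers train quantum parameters alongside ordinary neural-network weights, which is the core workflow the library targets for the NISQ era.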


EETimes - Nvidia Reinvents GPU, Blows Previous Generation Out of the Water

#artificialintelligence

Jensen Huang's much-anticipated keynote speech today, postponed from Nvidia's GPU Technology Conference (GTC) in March, will unveil the company's eighth-generation GPU architecture. Emerging three years after the debut of the previous generation Volta architecture, Ampere is said to be the biggest generational leap in the company's history. Ampere is built to accelerate both AI training and inference, as well as data analytics, scientific computing and cloud graphics. The first chip built on Ampere, the A100, has some pretty impressive vital statistics. Nvidia claims the A100 has 20x the performance of the equivalent Volta device for both AI training (single precision, 32-bit floating point numbers) and AI inference (8-bit integer numbers).


Nvidia's bleeding-edge Ampere GPU architecture revealed: 5 things PC gamers need to know

PCWorld

Nearly a year and a half after the GeForce RTX 20-series launched with Nvidia's Turing architecture inside, and three years after the launch of the data center-focused Volta GPUs, CEO Jensen Huang unveiled graphics cards powered by the new Ampere architecture during a digital GTC 2020 keynote on Thursday morning. It looks like an absolute monster. Ampere debuts in the form of the A100, a humongous data center GPU powering Nvidia's new DGX-A100 systems. Make no mistake: This 6,912 CUDA core-packing beast targets data scientists, with internal hardware optimized around deep learning tasks. You won't be using it to play Cyberpunk 2077.


Nvidia and Databricks announce GPU acceleration for Spark 3.0

ZDNet

At its GPU Technology Conference (GTC) event today, consumer graphics and AI silicon powerhouse Nvidia is announcing its next-generation Graphics Processing Unit (GPU) architecture, dubbed Ampere, and its first Ampere-based GPU, the A100. For more details, see the coverage of all of today's Nvidia Ampere-related news by ZDNet's Natalie Gagliordi. Specifically, Nvidia is announcing new GPU-acceleration capabilities coming to Apache Spark 3.0, the release of which is anticipated in late spring. The GPU acceleration functionality is based on the open source RAPIDS suite of software libraries, themselves built on CUDA-X AI. The acceleration technology, named (logically enough) the RAPIDS Accelerator for Apache Spark, was developed collaboratively by Nvidia and Databricks (the company founded by Spark's creators).
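As a rough illustration of what GPU acceleration looks like from the Spark user's side, the sketch below configures a PySpark session to load the RAPIDS Accelerator plugin. The plugin class and configuration keys follow Nvidia's published documentation for the accelerator, but jar paths and exact option names vary by release, so treat this as a template rather than a drop-in recipe.

```python
from pyspark.sql import SparkSession

# Hypothetical local path to the RAPIDS Accelerator jar -- adjust for your install.
RAPIDS_JARS = "/opt/sparkRapidsPlugin/rapids-4-spark.jar"

spark = (
    SparkSession.builder
    .appName("rapids-accelerated-etl")
    .config("spark.jars", RAPIDS_JARS)
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")   # load the accelerator
    .config("spark.rapids.sql.enabled", "true")              # run supported SQL ops on the GPU
    .getOrCreate()
)

# A simple aggregation; supported operators are pushed to the GPU transparently,
# with no changes to the DataFrame code itself.
df = spark.range(0, 10_000_000).selectExpr("id % 100 AS key", "id AS value")
df.groupBy("key").sum("value").show(5)
```

The point of the design is visible in the example: existing Spark SQL and DataFrame code stays unchanged, and acceleration is switched on purely through configuration.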