Benchmarking Edge Computing

#artificialintelligence

The arrival of new hardware designed to run machine learning models at vastly increased speeds, inside a relatively low power envelope, and without needing a connection to the cloud, makes edge-based computing a much more attractive proposition. Especially as, alongside this new hardware, we've seen the release of TensorFlow 2.0 as well as TensorFlow Lite for microcontrollers and new ultra-low-powered hardware like the SparkFun Edge. The ecosystem around edge computing is starting to feel far more mature, which means that the biggest growth area in machine learning practice over the next year or two could well be around inferencing rather than training. Time to run some benchmarking and find out.
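In practice, this kind of benchmarking usually boils down to timing repeated inference calls on the device and averaging the result. The sketch below is a minimal, hypothetical harness for doing that: run_inference() is a stand-in for whatever call actually executes the model (for example, invoking a TensorFlow Lite interpreter), and the warm-up and run counts are arbitrary choices, not anything prescribed by the article.

```cpp
#include <chrono>
#include <iostream>
#include <vector>

// Hypothetical stand-in for a single on-device inference call; replace it
// with the invoke/run call of whichever runtime is being benchmarked.
void run_inference() { /* ... */ }

int main() {
  const int warmup_runs = 10;   // exclude one-off allocation and cache effects
  const int timed_runs  = 100;  // arbitrary sample size for the average

  for (int i = 0; i < warmup_runs; ++i) run_inference();

  std::vector<double> latencies_ms;
  latencies_ms.reserve(timed_runs);
  for (int i = 0; i < timed_runs; ++i) {
    const auto start = std::chrono::steady_clock::now();
    run_inference();
    const auto stop = std::chrono::steady_clock::now();
    latencies_ms.push_back(
        std::chrono::duration<double, std::milli>(stop - start).count());
  }

  double total_ms = 0.0;
  for (const double t : latencies_ms) total_ms += t;
  std::cout << "mean inference latency: " << total_ms / timed_runs << " ms\n";
}
```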


Deep Learning Stretches Up to Scientific Supercomputers

#artificialintelligence

Researchers delivered 15-petaflop deep-learning software and ran it on Cori, a supercomputer at the National Energy Research Scientific Computing Center (NERSC), a Department of Energy Office of Science user facility. Machine learning, a form of artificial intelligence, enjoys unprecedented success in commercial applications, but its use in high-performance computing for science has been limited. Why? Advanced machine learning tools weren't designed for big data sets like those used to study stars and planets. A team from Intel, NERSC, and Stanford changed that situation.


CES 2018: Intel's 49-Qubit Chip Shoots for Quantum Supremacy

IEEE Spectrum Robotics

Intel has passed a key milestone while running alongside Google and IBM in the marathon to build quantum computing systems. The tech giant has unveiled a superconducting quantum test chip with 49 qubits: enough qubits to possibly enable quantum computing that begins to exceed the practical limits of modern classical computers.


Join NVIDIA at Supercomputing 2019 (SC19)

#artificialintelligence

Join the PGI Compilers & Tools team to learn how to use the PGI C++ compiler to program NVIDIA GPUs using standard C++17 parallel algorithms, and increase your GPU programming productivity with the OpenACC Fortran, C, and C++ compilers. See how 200 science and engineering applications have been parallelized using OpenACC for both GPUs and multi-core CPUs, delivering world-class acceleration. Watch a demo, pick up a free PGI and NVIDIA beanie, and have a chance to win one of five NVIDIA Jetson Nano developer kits.
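As a rough illustration of what "standard C++17 parallel algorithms" means here, the sketch below expresses a SAXPY-style loop with std::transform and an execution policy. With a compiler that maps these policies onto the GPU (such as the PGI/NVIDIA C++ compiler with its stdpar-style offload option), the same source can target either GPUs or multi-core CPUs; the example is illustrative and not taken from the session itself.

```cpp
#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

int main() {
  const std::size_t n = 1 << 20;
  const float a = 2.0f;
  std::vector<float> x(n, 1.0f), y(n, 2.0f);

  // SAXPY (y = a*x + y) written as a standard C++17 parallel algorithm.
  // A stdpar-capable compiler can offload this to an NVIDIA GPU; any
  // conforming C++17 compiler will run it on multi-core CPU threads.
  std::transform(std::execution::par_unseq, x.begin(), x.end(), y.begin(),
                 y.begin(),
                 [a](float xi, float yi) { return a * xi + yi; });

  std::cout << "y[0] = " << y[0] << '\n';  // expect 4
}
```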


IBM's cloud adds support for Nvidia's fastest GPUs yet

#artificialintelligence

IBM today announced that users on its Bluemix cloud will soon be able to add two Nvidia Tesla P100 accelerator cards to their bare metal servers. The company says this feature will launch later this month and, when it goes live, IBM will likely be the first major cloud provider to offer support for these chips, which deliver up to 4.7 teraflops of double-precision performance and come with 16 gigabytes of memory. There is still a chance that Google could beat IBM to market, though. Late last year, Google also announced that it would support Nvidia's newest GPUs early this year, but we haven't heard exactly when the company plans to launch this feature. We asked Google for an updated timeline but haven't heard back yet.