Scientific Computing


HPC AI Advisory Council Conference Returns to Perth Aug. 28-29 - insideHPC

#artificialintelligence

The HPC Advisory Council has posted the agenda for its upcoming meeting in Perth, Australia. Hosted by the Pawsey Supercomputing Centre, the event takes place August 28-29. The hosts have added an international dialogue session to the second annual conference agenda, featuring an impressive list of leading HPC centres and industry representatives. This year's program also includes a wide variety of invited and contributed talks on high performance computing, artificial intelligence, and cutting-edge research and development from industry notables throughout the region and beyond. The 2018 agenda also features hands-on tutorials covering the latest trends, newest technologies, and breakthrough work, along with current best practices in applications, tools and techniques.


NVIDIA Unveils Nine New High-Performance Computing Containers NVIDIA Blog

#artificialintelligence

As part of our effort to speed the deployment of GPU-accelerated high-performance computing and AI, we've more than tripled the number of containers available from our NVIDIA GPU Cloud (NGC) since launch last year. Users can now take advantage of 35 deep learning, high-performance computing, and visualization containers from NGC, a story we'll be telling in depth at this week's International Supercomputing Conference in Frankfurt. Over the past three years, containers have become a crucial tool for deploying applications on a shared cluster and speeding up that work, especially for researchers and data scientists running AI workloads. These containers make deploying deep learning frameworks -- the building blocks for designing, training and validating deep neural networks -- faster and easier, because installing the frameworks manually is complicated and time-consuming.
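As a concrete illustration of the workflow described above, here is a minimal sketch that pulls an NGC deep learning container and runs it with GPU access using the Docker SDK for Python. The image tag is an illustrative assumption (check NGC for current image names), and GPU passthrough assumes the NVIDIA container runtime is installed on the host.

```python
# Minimal sketch: pulling and running an NGC container via the Docker SDK for Python.
# Assumes the `docker` Python package, a local Docker daemon, and the NVIDIA
# container runtime are installed; the image tag below is illustrative only.
import docker

client = docker.from_env()

# Pull a deep learning framework container from the NVIDIA GPU Cloud registry.
image = "nvcr.io/nvidia/tensorflow:18.07-py3"  # hypothetical tag; pick a current one
client.images.pull(image)

# Run a quick command inside the container with GPU access exposed
# through the NVIDIA runtime (the `--runtime=nvidia` equivalent).
output = client.containers.run(
    image,
    "python -c \"import tensorflow as tf; print(tf.__version__)\"",
    runtime="nvidia",
    remove=True,
)
print(output.decode())
```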


The US may have just pulled even with China in the race to build supercomputing's next big thing

MIT Technology Review

There was much celebrating in America last month when the US Department of Energy unveiled Summit, the world's fastest supercomputer. Now the race is on to achieve the next significant milestone in processing power: exascale computing. This involves building a machine within the next few years that's capable of a billion billion calculations per second, or one exaflop, which would make it five times faster than Summit. Every person on Earth would have to do a calculation every second of every day for just over four years to match what an exascale machine will be able to do in a flash. This phenomenal power will enable researchers to run massively complex simulations that spark advances in many fields, from climate science to genomics, renewable energy, and artificial intelligence.
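A quick back-of-the-envelope check of that comparison, assuming roughly 7.6 billion people each doing one calculation per second (the population figure is an assumption, not from the article):

```python
# Rough check of the "every person on Earth for four years" comparison.
# World population is an assumed round figure; the article does not state one.
exaflop = 1e18            # calculations per second for an exascale machine
population = 7.6e9        # assumed world population, one calculation each per second
seconds_per_year = 365.25 * 24 * 3600

years = exaflop / population / seconds_per_year
print(f"{years:.1f} years")  # ~4.2 years, consistent with "just over four years"
```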


High-performance computing aids traumatic brain injury research - Verdict Medical Devices

#artificialintelligence

A new multi-year project involving several American universities and national laboratories aims to use supercomputing resources and artificial intelligence (AI) to enable a precision medicine approach to treating traumatic brain injury (TBI). The project began in March of this year and is still in its initial stages. The participating institutions include the Department of Energy's (DOE) Lawrence Livermore (LLNL), Lawrence Berkeley (LBNL) and Argonne (ANL) national laboratories, in collaboration with the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) consortium led by the University of California, San Francisco (UCSF) and involving other leading universities across the US. Funded primarily by the National Institutes of Health's National Institute of Neurological Disorders and Stroke (NINDS), DOE scientists will analyse some of the largest and most complex TBI patient data sets collected to date, including advanced computed tomography (CT) and magnetic resonance imaging (MRI) scans, proteomic and genomic biomarkers, and clinical outcomes. To do this, they will use AI-based technologies and supercomputing resources.


Enterprise High Performance Computing

#artificialintelligence

Traditional and AI-Focused HPC Compute, Storage, Software, and Services: Market Analysis and Forecasts

Over the past two decades, enterprises have realized the value of using clusters of computers to solve complex mathematical, computational, and simulation/modeling problems. By addressing these massive problems using parallel computing techniques (allowing the problem to be split into parts that can be tackled by individual or groups of processors), the time to complete a solution can be drastically reduced. However, as enterprises have become more focused on automating manual processes, as well as incorporating some degree of cognition or intelligence into their systems, it has become clear that these processes require the ingestion and analysis of large amounts of data, and single workstation or server-based processing would simply lack the speed and power to provide results in a reasonable amount of time. Tractica forecasts that the overall market for enterprise high performance computing (HPC) hardware, software, storage, and networking equipment will reach $31.5 billion annually by 2025, an increase from approximately $18.8 billion in 2017. The market is currently dominated by HPC equipment utilized for traditional use cases, or situations in which an HPC system is used for heavy-duty number crunching, simulation, and analysis, techniques that require the brute force of cluster computing to reduce the time to complete complex calculations.
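To make the parallel-computing point concrete, here is a minimal sketch (generic, not tied to any system Tractica covers) that splits a large numerical job into parts handled by independent worker processes and then combines the results:

```python
# Minimal illustration of splitting a large computation into parts that
# independent workers process in parallel, then combining the results.
# Generic sketch using the Python standard library, not a model of any HPC product.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over a half-open range [start, stop)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    print(total)  # same answer as a serial loop, computed in parallel
```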


Nvidia unveils the HGX-2, a server platform for HPC and AI workloads

ZDNet

Nvidia on Tuesday announced a new server platform, the HGX-2, designed to meet the needs of the growing number of applications that seek to leverage both high-performance computing (HPC) and artificial intelligence. The platform, Nvidia says, is the first to offer multi-precision computing capabilities to handle both HPC and AI workloads: it uses FP64 and FP32 for scientific computing and simulations, while enabling FP16 and Int8 for AI training and inference. The HGX-2 server platform consists of a pair of baseboards, each carrying eight Nvidia Tesla V100 GPUs. The 16 GPUs are fully connected through 12 NVSwitches to collectively deliver two petaflops of AI performance.
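The precision modes listed above trade numerical range and accuracy for speed and memory. A small NumPy sketch (an illustration of the general trade-off, not of Nvidia's API; the INT8 calibration range is an assumption) shows how the same value is represented more coarsely as precision drops:

```python
# Illustration of the precision trade-off behind FP64/FP32/FP16/Int8 modes:
# lower-precision formats use less memory and run faster on tensor hardware,
# but represent numbers more coarsely. Generic NumPy demo, not Nvidia-specific.
import numpy as np

x = 3.141592653589793

for dtype in (np.float64, np.float32, np.float16):
    v = np.array(x, dtype=dtype)
    print(f"{np.dtype(dtype).name:8s} {v.item():.10f}  ({v.nbytes} bytes)")

# Int8 inference typically quantizes real values onto a small integer grid.
scale = 127 / 4.0                       # assumed calibration range of [-4, 4]
q = np.clip(np.round(x * scale), -128, 127).astype(np.int8)
print("int8 quantized:", q, "-> dequantized:", q / scale)
```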


What If All Data Were Hot Data? Flash Storage Hits Two Inflection Points

Forbes Technology

Within the enterprise data center, today's state of the art is the All-Flash Array, which depends upon non-volatile Flash storage to provide high performance and durability, as compared to the spinning disk technology that preceded it. Until recently, however, Flash arrays were decidedly more expensive than hard drives, prompting enterprises to implement a mix of different storage technologies for different purposes. At the high end, Flash supports high performance computing (HPC) and certain mission-critical tasks that require real-time processing of data – what we call 'hot data.' For top performance, hot data require expensive network protocols like InfiniBand or similarly costly storage-area networks (SANs) that depend upon Fibre Channel networking technology. In the middle are 'warm data' on hard drives that leverage earlier spinning disk technology.


Pawsey gets new GPU nodes to bolster AI capability

ZDNet

Australia's Pawsey Supercomputing Centre has announced its cloud service Nimbus has received a processing boost in the name of artificial intelligence. Currently, Nimbus consists of AMD Opteron central processing units (CPUs), making up 3,000 cores and 288 terabytes of storage. The expansion announced on Wednesday will see Nimbus score six HPE SX40 nodes, each containing two Nvidia Tesla V100 16GB graphics processing units (GPUs). "These bad boys are built to accelerate artificial intelligence, HPC [high-performance computing], and graphics," Pawsey said in a statement. Powered by Nvidia's Volta architecture, a single GPU offers the performance of up to 100 CPUs.


Simulating the human brain: an exascale effort - IEEE Future Directions

#artificialintelligence

As of spring 2018 the fastest computer is the Sunway TaihuLight in Wuxi, China. It has 10,649,600 processing cores, clustered in groups of 260, and delivers an overall performance of 125.44 petaflops (millions of billions of floating-point operations per second) while drawing some 20 MW of power. In the US, the National Strategic Computing Initiative aims to develop the first exascale computer (roughly eight times faster than the Sunway TaihuLight), and the race is on against China, South Korea, and Europe. We might see the winner this year (the TOP500 list will be revised next month; it is updated twice a year). These supercomputers are used today to study the Earth's climate and earthquakes, simulate weapons effects, design new drugs, and simulate the folding of proteins.
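A quick derivation from the figures quoted above (core count, aggregate performance, and power draw) shows the per-core and per-watt throughput they imply, and why an exascale machine works out to roughly eight times faster:

```python
# Back-of-the-envelope figures derived from the numbers quoted above.
peak_flops = 125.44e15      # Sunway TaihuLight, 125.44 petaflops
cores = 10_649_600
power_watts = 20e6          # ~20 MW
exaflop = 1e18

print(f"per core : {peak_flops / cores / 1e9:.1f} GFLOPS")       # ~11.8 GFLOPS per core
print(f"per watt : {peak_flops / power_watts / 1e9:.2f} GFLOPS")  # ~6.27 GFLOPS per watt
print(f"exascale vs TaihuLight: {exaflop / peak_flops:.1f}x")     # ~8x
```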


International Neuroscience Initiatives through the Lens of High-Performance Computing

IEEE Computer

Neuroscience initiatives aim to develop new technologies and tools to measure and manipulate neuronal circuits. To deal with the massive amounts of data generated by these tools, the authors envision the co-location of open data repositories in standardized formats together with high-performance computing hardware running open-source, optimized analysis codes.