Scientific Computing


Will Supercomputers Be Super-Data and Super-AI Machines?

Communications of the ACM

High-performance computing (HPC) plays an important role in enabling scientific discovery, addressing grand-challenge problems, and driving social and economic development. Over the past several decades, China has put significant effort into improving its HPC capabilities through a series of key projects under its national research and development program. The development of supercomputing systems has advanced parallel applications in various fields in China, along with related software and hardware technology, and has supported the country's technological innovation and social development. To meet the requirements of multidisciplinary and multidomain applications, new challenges in architecture, system software, and application technologies must be addressed in developing next-generation exascale supercomputing systems.


When supercomputing and AI meets the cloud ZDNet

#artificialintelligence

The Irish Centre for High-End Computing (ICHEC) is preparing for the installation of a new national supercomputer which will also be accessible via the cloud to researchers across Ireland. Linux-based cloud and high-performance computing company Penguin Computing is one of the companies involved with the project along with Intel. ZDNet talked to Penguin CEO Tom Coull to find out more. ZDNet: Tell me a little about your company. Coull: We are a platform and performance scale-out company and we have been around for almost 20 years.


Microsoft Azure now supports NVIDIA GPU Cloud for AI, HPC workloads

ZDNet

Microsoft has added a new level of support for NVIDIA GPU projects to Azure, which may benefit those running deep-learning and other high-performance computing (HPC) workloads. The pair are touting the availability of pre-configured containers with GPU-accelerated software as helping data scientists, developers, and researchers circumvent a number of integration and testing steps before running their HPC tasks. As NVIDIA noted, these same NVIDIA GPU Cloud (NGC) containers work across Azure instance types, even with different types or quantities of GPUs. A pre-configured Azure virtual machine image with everything needed to run NGC containers is available in the Microsoft Azure Marketplace. Microsoft also made "Azure CycleCloud" generally available today, which officials described as "a tool for creating, managing, operating and optimizing HPC clusters of any scale in Azure."
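
As a rough sketch of the workflow described above, pulling a pre-configured NGC container onto a GPU-enabled VM and running it might look like the following Python example using the Docker SDK. The image tag is a hypothetical example, and the sketch assumes Docker with native GPU support and an NGC registry login are already in place (as on the pre-configured Azure VM image the article mentions); it is an illustration, not Microsoft's or NVIDIA's code.

# Minimal sketch: launch a GPU-accelerated NGC container from Python.
# Assumes Docker with native GPU support and the docker SDK for Python
# are installed (pip install docker), and that the host is already
# logged in to nvcr.io with an NGC API key. The image tag is illustrative.
import docker

client = docker.from_env()

image_tag = "nvcr.io/nvidia/tensorflow:18.08-py3"  # example tag, not prescriptive
client.images.pull(image_tag)

# Run a short command inside the container, exposing all GPUs to it.
output = client.containers.run(
    image_tag,
    command="nvidia-smi",
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    remove=True,  # clean up the container when the command exits
)
print(output.decode())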


Garvan Institute gets new supercomputer for genomic research

ZDNet

The Garvan Institute of Medical Research has announced that it will be receiving a new high-performance computing (HPC) system to support genomic research and analysis. Genomics is the study of the information encoded in an individual's DNA, allowing researchers to examine how genes affect health and disease. The institute's mission is to make significant contributions to medical research that change the direction of science and medicine and have a major impact on human health. The new supercomputing system, to be delivered by Dell EMC, will be used by Garvan's Data Intensive Computer Engineering (DICE) group. The Garvan Institute is one of Australia's largest medical research institutions, focused specifically on research into cancer, diabetes and metabolism, genomics and epigenetics, immunology and inflammation, osteoporosis and bone biology, and neuroscience. According to Dr Warren Kaplan, chief of informatics at Garvan's Kinghorn Centre for Clinical Genomics, genomics requires significant computational power to analyse the data.


HPC AI Advisory Council Conference Returns to Perth Aug. 28-29 - insideHPC

#artificialintelligence

The HPC Advisory Council has posted the agenda for its upcoming meeting in Perth, Australia. Hosted by the Pawsey Supercomputing Centre, the event takes place August 28-29. The hosts have added a powerful international dialog session to the second annual conference agenda, featuring an impressive list of leading HPC centres and industry representatives. This year's program also includes a wide variety of invited and contributed talks on high-performance computing, artificial intelligence, and cutting-edge research and development from industry notables throughout the region and beyond. The 2018 agenda also features hands-on tutorials covering the latest trends, newest technologies, and breakthrough work, along with current best practices in applications, tools, and techniques.


NVIDIA Unveils Nine New High-Performance Computing Containers NVIDIA Blog

#artificialintelligence

As part of our effort to speed the deployment of GPU-accelerated high-performance computing and AI, we've more than tripled the number of containers available from our NVIDIA GPU Cloud (NGC) since its launch last year. Users can now take advantage of 35 deep learning, high-performance computing, and visualization containers from NGC, a story we'll be telling in depth at this week's International Supercomputing Conference in Frankfurt. Over the past three years, containers have become a crucial tool for deploying applications on a shared cluster and speeding up the work, especially for researchers and data scientists running AI workloads. These containers make deploying deep learning frameworks -- the building blocks for designing, training, and validating deep neural networks -- faster and easier. Installing frameworks manually is complicated and time-consuming.
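
To make the benefit concrete, the kind of sanity check a researcher might run inside one of these framework containers, with no manual installation, looks roughly like the sketch below. It assumes a container that ships PyTorch with CUDA support; the specifics are illustrative, not NVIDIA's code.

# Illustrative sanity check one might run inside a GPU-enabled framework
# container (for example, an NGC PyTorch image) to confirm the pre-installed,
# GPU-accelerated stack works -- no manual framework installation required.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    # A tiny matrix multiplication placed on the GPU.
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    c = a @ b
    print("Result shape:", tuple(c.shape))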


The US may have just pulled even with China in the race to build supercomputing's next big thing

MIT Technology Review

There was much celebrating in America last month when the US Department of Energy unveiled Summit, the world's fastest supercomputer. Now the race is on to achieve the next significant milestone in processing power: exascale computing. This involves building, within the next few years, a machine capable of a billion billion calculations per second, or one exaflop, which would make it five times faster than Summit. Every person on Earth would have to do a calculation every second of every day for just over four years to match what an exascale machine will be able to do in a flash. This phenomenal power will enable researchers to run massively complex simulations that spark advances in many fields, from climate science to genomics, renewable energy, and artificial intelligence.
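
As a back-of-the-envelope check on those comparisons, the short sketch below reproduces both the five-times figure and the "just over four years" illustration. The inputs (Summit's peak at roughly 200 petaflops, a world population of about 7.6 billion) are assumptions for the arithmetic, not figures from the article.

# Back-of-the-envelope check of the exascale comparisons above
# (assumed inputs: Summit at roughly 200 petaflops peak, ~7.6 billion people).
EXAFLOP = 1e18            # calculations per second for an exascale machine
SUMMIT_PEAK = 200e15      # Summit's peak, roughly 200 petaflops

print("Speedup over Summit: ~%.1fx" % (EXAFLOP / SUMMIT_PEAK))

population = 7.6e9                      # people on Earth (2018 estimate)
seconds_per_year = 365.25 * 24 * 3600   # seconds in a year

years = EXAFLOP / (population * seconds_per_year)
print("Years of one calculation per person per second: ~%.1f" % years)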


High-performance computing aids traumatic brain injury research - Verdict Medical Devices

#artificialintelligence

A new multi-year project involving several American universities and national laboratories aims to use supercomputing resources and artificial intelligence (AI) to enable a precision-medicine approach to treating traumatic brain injury (TBI). The project began in March of this year but is still in its initial stages. The participating institutions include the Department of Energy's (DOE) Lawrence Livermore (LLNL), Lawrence Berkeley (LBNL), and Argonne (ANL) national laboratories, in collaboration with the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) consortium, which is led by the University of California, San Francisco (UCSF) and involves other leading universities across the US. Funded primarily by the National Institutes of Health's National Institute of Neurological Disorders and Stroke (NINDS), DOE scientists will analyse some of the largest and most complex TBI patient data sets collected, including advanced computed tomography (CT) and magnetic resonance imaging (MRI) scans, proteomic and genomic biomarkers, and clinical outcomes. To do this, they will use AI-based technologies and supercomputing resources.


Enterprise High Performance Computing

#artificialintelligence

Traditional and AI-Focused HPC Compute, Storage, Software, and Services: Market Analysis and Forecasts

Over the past two decades, enterprises have realized the value of using clusters of computers to solve complex mathematical, computational, and simulation/modeling problems. By addressing these massive problems with parallel computing techniques (splitting a problem into parts that can be tackled by individual processors or groups of processors), the time to reach a solution can be drastically reduced. However, as enterprises have become more focused on automating manual processes and incorporating some degree of cognition or intelligence into their systems, it has become clear that these processes require the ingestion and analysis of large amounts of data, and that single-workstation or server-based processing simply lacks the speed and power to deliver results in a reasonable amount of time. Tractica forecasts that the overall market for enterprise high-performance computing (HPC) hardware, software, storage, and networking equipment will reach $31.5 billion annually by 2025, up from approximately $18.8 billion in 2017. The market is currently dominated by HPC equipment used for traditional use cases, in which an HPC system performs heavy-duty number crunching, simulation, and analysis, techniques that require the brute force of cluster computing to reduce the time needed to complete complex calculations.
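
The parallel-computing idea described above, splitting one large problem into independent parts handled by separate processors, can be illustrated with a toy Python sketch. The workload (a sum of squares) and the worker count are arbitrary stand-ins, not anything from the Tractica analysis.

# Toy illustration of the parallel-computing idea: split a large numerical
# problem into chunks and let a pool of worker processes tackle the chunks
# independently. The workload here is deliberately trivial; it only stands in
# for the heavy simulation or modelling kernels an HPC cluster would run.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over one chunk of the overall range."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4
    chunk = n // workers
    # Split the problem into equal, independent pieces.
    pieces = [(w * chunk, (w + 1) * chunk if w < workers - 1 else n)
              for w in range(workers)]

    with Pool(processes=workers) as pool:
        total = sum(pool.map(partial_sum, pieces))

    print("Sum of squares below", n, "=", total)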