Scientific Computing


Pawsey gets new GPU nodes to bolster AI capability

ZDNet

Australia's Pawsey Supercomputing Centre has announced that its cloud service Nimbus has received a processing boost in the name of artificial intelligence. Currently, Nimbus consists of AMD Opteron central processing units (CPUs), comprising 3,000 cores and 288 terabytes of storage. The expansion announced on Wednesday will see Nimbus gain six HPE SX40 nodes, each containing two Nvidia Tesla V100 16GB graphics processing units (GPUs). "These bad boys are built to accelerate artificial intelligence, HPC [high-performance computing], and graphics," Pawsey said in a statement. Powered by Nvidia's Volta architecture, a single V100 GPU offers the performance of up to 100 CPUs.


Simulating the human brain: an exascale effort - IEEE Future Directions

#artificialintelligence

As of spring 2018, the fastest computer in the world is the Sunway TaihuLight in Wuxi, China. It has 10,649,600 processing cores, clustered in groups of 260, and delivers an overall performance of 125.44 petaFLOPS (a petaFLOPS is a quadrillion floating-point operations per second) while drawing some 20MW of power. In the US, the National Strategic Computing Initiative aims at developing the first exascale computer (roughly 8 times faster than the Sunway TaihuLight), and the race is on against China, South Korea and Europe. We might be seeing the winner this year (the TOP500 list will be revised next month; it is updated twice a year). These supercomputers are used today to study the Earth's climate and earthquakes, simulate weapons effects, design new drugs, and simulate the folding of proteins.
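As a quick sanity check on the "roughly 8 times faster" figure, here is a back-of-the-envelope calculation in Python comparing one exaFLOPS with TaihuLight's 125.44 petaFLOPS; the group count of 40,960 is derived from the article's totals (10,649,600 cores in groups of 260):

```python
# Back-of-the-envelope check of the exascale speedup claim:
# 1 exaFLOPS = 10**18 FLOPS; TaihuLight delivers 125.44 * 10**15 FLOPS.
taihulight_flops = 125.44e15   # 125.44 petaFLOPS
exascale_flops = 1e18          # 1 exaFLOPS
print(f"speedup: {exascale_flops / taihulight_flops:.2f}x")  # ~7.97x

# Core count implied by the article: 40,960 groups of 260 cores each.
print(f"total cores: {40_960 * 260:,}")  # 10,649,600
```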


International Neuroscience Initiatives through the Lens of High-Performance Computing

IEEE Computer

Neuroscience initiatives aim to develop new technologies and tools to measure and manipulate neuronal circuits. To deal with the massive amounts of data these tools generate, the authors envision co-locating open data repositories, in standardized formats, with high-performance computing hardware running open-source, optimized analysis codes.


Sylabs Brings Singularity Container Platform to Enterprises

#artificialintelligence

Sylabs is bringing to market a new option for enterprises to integrate a container architecture into their cloud operations. Those efforts are based on the Linux-based Singularity container platform, which Sylabs CEO Gregory Kurtzer developed in late 2015 for high-performance computing (HPC) and scientific workloads.


Dell EMC high performance computing bundles aimed at AI, deep learning

ZDNet

Dell EMC has announced systems that aim to meld high-performance computing and data analytics for mainstream enterprises. These systems are designed for fraud detection, image processing, financial analysis, and personalized medicine, and are aimed at industries such as scientific imaging, oil and gas, and financial services.


WekaIO Partners with HPE to Develop All-Flash Storage for HPC and AI - insideHPC

#artificialintelligence

Today WekaIO announced a partnership with HPE to deliver integrated flash-based parallel file system capabilities that can significantly accelerate compute-intensive workloads. "At HPE we're committed to providing innovative solutions for our customers in the rapidly growing markets for high-performance computing, artificial intelligence and machine learning," said Bill Mannel, Vice President and General Manager, HPC and AI Segment Solutions, Hewlett Packard Enterprise. "The combination of WekaIO Matrix with HPE Apollo Systems is an option that enables customers to maximize the throughput of their HPC environment by making it easy to scale storage capacity and performance to new levels without the requirement to modify compute codes or HPC workflows." The agreement creates an offering that targets the space, energy, and processor attributes of supercomputing. The HPE portfolio of HPC and AI solutions--including the Apollo 2000, Apollo 6000, Apollo 6500, and SGI 8600--provides rich architectures for leveraging high-performance flash storage, both within the compute platforms themselves and across high-performance interconnect fabrics.


NCI doubles supercomputer throughput with Power

ZDNet

The supercomputing market is largely dominated by the x86 architecture, of which Intel holds the majority market share. According to Dr Muhammad Atif, manager of high performance computing (HPC) systems and cloud services at the National Computational Infrastructure (NCI), when there is only one big vendor, it does its own thing, which results in certain applications or features not being enabled or present in its architecture. As a result, NCI turned to IBM to boost the research capacity of the biggest supercomputing cluster in the Southern Hemisphere, Raijin, currently benchmarked at 1.67 petaflops, Atif told ZDNet. NCI, Australia's national research computing service, purchased four IBM Power System servers for HPC in December, in a bid to advance its research efforts through artificial intelligence (AI), deep learning, high performance data analytics, and other compute-heavy workloads. The upgrades added much-needed capacity to the Raijin system, Atif explained.


Weather supercomputing 'heads to Italy'

BBC News

The next-generation supercomputer that will drive Europe's medium-range weather forecasts looks set to be housed in Bologna, Italy, from 2020. It would succeed the current system based in Reading, UK. Member states of the European Centre for Medium-range Weather Forecasts (ECMWF) made the indicative decision to relocate the facility on Wednesday. Detailed negotiations will now be held with Italian authorities. The intention is to confirm the choice in June.


Python: High Performance or Not? You Might Be Surprised

#artificialintelligence

The concept of an "accelerated Python" is relatively new, and it has made Python worth another look for Big Data and High Performance Computing (HPC) applications. Python is relatively slow because it is an interpreted (not a compiled) language, but it lets us learn and explore interactively, including running a "Hello, World!" program at the prompt. Thanks to some Python aficionados at Intel, who have put the well-known Intel Math Kernel Library (MKL) to work under the covers, we can all use an accelerated Python that yields big returns in performance without requiring any changes to our Python code. The reason an "accelerated Python" can be so effective is that Python has mature and widely used packages and libraries, and these libraries can be accelerated underneath without our needing to change our Python code at all. All we have to do is install an accelerated Python.
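A minimal sketch of why this works (an illustration, not from the article): NumPy delegates heavy linear algebra to whatever BLAS/LAPACK library it was built against, so swapping in an MKL-linked build speeds up the identical source code. `numpy.show_config()` reports which implementation a given install is using:

```python
import time
import numpy as np

# Report which BLAS/LAPACK backend (e.g. MKL, OpenBLAS) this NumPy
# build is linked against; an "accelerated Python" differs only here.
np.show_config()

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b  # dispatched to the linked BLAS matrix-multiply routine
elapsed = time.perf_counter() - start

# A dense n x n matrix multiply costs roughly 2*n**3 floating-point ops.
gflops = 2 * n**3 / elapsed / 1e9
print(f"{n}x{n} matmul: {elapsed:.3f} s (~{gflops:.1f} GFLOPS)")
```

The same script runs unmodified on a stock build or on an MKL-accelerated distribution; only the measured GFLOPS changes.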


Directorate for Computer & Information Science & Engineering (CISE)

AITopics Original Links

The National Strategic Computing Initiative is a whole-of-nation effort to accelerate scientific discovery and economic competitiveness by maximizing the benefits of high-performance computing research, development, and deployment.