Scientific Computing


Hewlett Packard Enterprise Signs Huge Supercomputer Deal with Defense Department

#artificialintelligence

The Defense Department is paying $57 million to Hewlett Packard Enterprise for supercomputers that it plans to use for tasks like designing helicopters and weather forecasting. The Air Force Research Laboratory and Defense Department's Supercomputing Resource Center at the Wright-Patterson Air Forc...


Sylabs Brings Singularity Container Platform to Enterprises

#artificialintelligence

Sylabs is bringing to market a new option for enterprises to integrate a container architecture into their cloud operations. Those efforts are based on the Linux-based Singularity container platform, which was developed in late 2015 by Sylabs CEO Gregory Kurtzer for use in high-performance computing...


Samsung launches 800GB Z-SSD for supercomputing

ZDNet

SZ985 boasts an ultra-low latency of 16 microseconds. Samsung Electronics has launched an 800GB solid state drive, the SZ985 Z-SSD, aimed at supercomputing, the firm announced. The new offering has one-fifth the latency of NVMe SSDs and targets high-speed cache and log data processing, the company said. The single-port, four-lane Z-SSD uses Z-NAND with 10 times higher cell read performance than 3-bit V-NAND. Its ultra-low-latency controller, paired with 1.5GB of LPDDR4 DRAM, delivers 1.7 times faster random read performance at 750,000 IOPS and a write latency of 16 microseconds, one-fifth that of the NVMe SSD PM963.


Dell EMC high performance computing bundles aimed at AI, deep learning

ZDNet

Dell EMC has introduced systems that aim to meld high performance computing and data analytics for mainstream enterprises. These systems are designed for workloads such as fraud detection, image processing, financial analysis, and personalized medicine, and are aimed at industries such as scientific imaging, oil and gas, and financial services.


WekaIO Partners with HPE to Develop All-Flash Storage for HPC and AI - insideHPC

#artificialintelligence

Today WekaIO announced a partnership with HPE to deliver integrated flash-based parallel file system capabilities that can significantly accelerate compute-intensive workloads. The WekaIO Matrix software-defined storage solution is validated for deployment within HPE environments – including the HPE Apollo Gen10 System platform that delivers rich capabilities for high-performance computing (HPC), artificial intelligence (AI) and machine learning (ML) use cases. "At HPE we're committed to providing innovative solutions for our customers in the rapidly growing markets for high-performance computing, artificial intelligence and machine learning," said Bill Mannel, Vice President and General Manager, HPC and AI Segment Solutions, Hewlett Packard Enterprise. "The combination of WekaIO Matrix with HPE Apollo Systems is an option that enables customers to maximize the throughput of their HPC environment by making it easy to scale storage capacity and performance to new levels without the requirement to modify compute codes or HPC workflows." The agreement creates an offering that targets the space, energy, and processor attributes of supercomputing.


NCI doubles supercomputer throughput with Power

ZDNet

The supercomputing market is largely dominated by the x86 architecture, of which Intel holds the majority of the market share. According to Dr Muhammad Atif, manager of high performance computing (HPC) systems and cloud services at the National Computational Infrastructure (NCI), when there is only one big vendor, they do their own thing, which results in certain applications or features not being enabled or not being present in their architecture. As a result, NCI turned to IBM to boost the research capacity of the biggest supercomputing cluster in the Southern Hemisphere, Raijin, which is currently benchmarked at 1.67 petaflops, Atif told ZDNet. NCI, Australia's national research computing service, purchased four IBM Power System servers for HPC in December, in a bid to advance its research efforts through artificial intelligence (AI), deep learning, high performance data analytics, and other compute-heavy workloads. The upgrades added much-needed capacity to the Raijin system, Atif explained.


NVIDIA Reports Strong First Quarter on Record Datacenter Revenue

#artificialintelligence

NVIDIA has reported Q1 revenue of $1.94 billion, buoyed by record datacenter sales of $409 million. Without a doubt, NVIDIA's fastest growing segment is the datacenter business, which includes traditional high performance computing (HPC), deep learning, and data analytics. This quarter's $409 million figure represents 21.1 percent of NVIDIA's revenue over the last three months. To keep that momentum growing, the company plans to train 100,000 additional AI developers this year through the NVIDIA Deep Learning Institute.


Weather supercomputing 'heads to Italy'

BBC News

Member states of the European Centre for Medium-range Weather Forecasts (ECMWF) made the indicative decision to relocate the facility on Wednesday. The ECMWF is an independent intergovernmental organisation supported by 22 full member states from Europe, with another 12 co-operating nations. The centre's forecasts are shared with member states' national meteorological agencies, such as Meteo France and the UK's Met Office. "It has been clear for a while now that the current data centre facility does not offer the required flexibility for future growth and changes in high-performance computing technology," ECMWF's Director-General Florence Rabier said in a statement.


Python: High Performance or Not? You Might Be Surprised

#artificialintelligence

The concept of an "accelerated Python" is relatively new, and it has made Python worth another look for Big Data and High Performance Computing (HPC) applications. Python on its own is relatively slow because it is an interpreted, not a compiled, language, but it lets us learn and explore interactively, including writing a "Hello, World!" program at the prompt. Thanks to some Python aficionados at Intel, who have put the well-known Intel Math Kernel Library (MKL) to work under the covers, we can all use an accelerated Python that yields big performance returns without requiring that we change our Python code. The reason an "accelerated Python" can be so effective is that Python has mature and widely used packages and libraries, and those libraries can be accelerated without any changes to our Python code at all. All we have to do is install an accelerated Python.
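A minimal sketch of why this works: NumPy's matrix multiply delegates to whatever BLAS library the Python distribution ships (MKL in Intel's accelerated distribution, OpenBLAS in many others), so the very same script runs faster on an accelerated install with no source changes. The matrix size and timing approach below are illustrative assumptions, not details from the article.

```python
import time
import numpy as np

# Identical code on a stock or MKL-accelerated distribution; any
# speedup comes from the BLAS library NumPy dispatches to underneath.
n = 1000
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))

start = time.perf_counter()
c = a @ b  # delegates to the underlying BLAS (e.g. MKL or OpenBLAS)
elapsed = time.perf_counter() - start

print(f"{n}x{n} matmul took {elapsed:.3f}s")
```

Running this under a plain CPython install and again under an accelerated distribution makes the "no code changes" claim concrete: only the timing differs.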


Directorate for Computer & Information Science & Engineering (CISE)

AITopics Original Links

The National Strategic Computing Initiative is a whole-of-nation effort to accelerate scientific discovery and economic competitiveness by maximizing the benefits of high-performance computing research, development, and deployment.