Member states of the European Centre for Medium-Range Weather Forecasts (ECMWF) made the indicative decision to relocate the facility on Wednesday. The ECMWF is an independent intergovernmental organisation supported by 22 full member states from Europe, with another 12 co-operating nations. The centre's forecasts are shared with member states' national meteorological agencies, such as Météo-France and the UK's Met Office. "It has been clear for a while now that the current data centre facility does not offer the required flexibility for future growth and changes in high-performance computing technology," ECMWF's Director-General Florence Rabier said in a statement.
The National Computational Infrastructure (NCI), Australia's national research computing service, has purchased four IBM Power System servers for high-performance computing in a bid to advance its research efforts through artificial intelligence, deep learning, high-performance data analytics, and other compute-heavy workloads. Friday's announcement follows a development process NCI undertook with the IBM Australia Development Laboratory and its Linux and Open Technology team. According to NCI, the lab provides OpenPower development capability and locally develops IBM's Power System firmware, and direct access to the local IBM Power development team strongly influenced the decision to purchase the new servers. Prime Minister Malcolm Turnbull announced last year that the government would provide AU$1.5 billion over 10 years for the NCRIS, committing AU$150 million each year for 2015-16 and 2016-17, with funding of AU$153.5 million to be provided in 2017-18 and on an ongoing basis, indexed for inflation.
For the past few years, compute silicon vendors have increased their influence on the supercomputing market. Cloud infrastructure manufacturers such as Dell EMC, HPE, IBM, Huawei, QCT, Inspur, and Sugon, along with storage vendors such as DDN, Dell EMC, and HPE, were also in the spotlight. Prior to 2000, innovation entered the computing market through HPC; HPC was the top of the technology waterfall. Increasing compute density means increasing both rack-level power consumption and rack-level heat dissipation; the two are tightly linked.
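The density/power/heat link can be sketched with rough arithmetic. The figures below (node counts, per-node draw) are hypothetical assumptions for illustration, not measurements from any real system; the underlying point is just conservation of energy, since essentially all electrical power a rack consumes must be removed again as heat.

```python
# Illustrative sketch with hypothetical figures: denser racks draw more
# power, and nearly all of that power must be removed as heat.

def rack_load_kw(nodes_per_rack: int, watts_per_node: float) -> float:
    """Electrical draw of one rack, in kilowatts."""
    return nodes_per_rack * watts_per_node / 1000.0

# Assumed densities -- not vendor data.
sparse_kw = rack_load_kw(nodes_per_rack=18, watts_per_node=500)  # 9.0 kW
dense_kw = rack_load_kw(nodes_per_rack=72, watts_per_node=500)   # 36.0 kW

# Power in is heat out: cooling capacity must track power capacity
# one-for-one as density grows.
cooling_needed_kw = dense_kw

print(sparse_kw, dense_kw, cooling_needed_kw)
```

Quadrupling node density in this sketch quadruples both the rack's power budget and the cooling load, which is why the two are engineered together.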
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has kicked off the hunt for a new supercomputer to replace its existing Bragg accelerator cluster, a system the organisation currently uses to solve big data challenges in fields such as bioscience, image analysis, fluid dynamics modelling, and environmental science. According to CSIRO's acting deputy chief information officer for scientific computing, Angus Macoustra, Bragg's replacement will be capable of "petaflop" speeds to support the broad range of projects undertaken by the organisation's researchers. Bragg is accessible to any research project within the organisation, and Macoustra noted that CSIRO also makes the system available to external partners on request. Sticking within the organisation's AU$4 million budget, the new system is expected to significantly exceed the existing computer's performance.
Back in 2010, when the term "cloud computing" was still laden with peril and mystery for many users in enterprise and high performance computing, HPC cloud startup Nimbix stepped out to tackle that perceived risk for some of the most challenging, latency-sensitive applications. Users provide the data and performance parameters, and the system orchestrates the workflows using JARVICE and Nimbix's container-based approach to application delivery. What is interesting here is that just as companies that have specialized in HPC hardware are finding their gear is a good fit for deep learning training and broader machine learning applications, so too is Nimbix finding a potential new path. The company has managed to carve out a niche in supercomputing and a few other areas, but so far, there aren't many robust, tuned high performance hardware options as a service that fit the machine learning bill.
Rutgers is taking a leading role in an IBM-sponsored World Community Grid project that will use supercomputing power to identify potential drug candidates against the Zika virus. IBM created World Community Grid in 2004 to address researchers' critical need for supercomputing power. Partially hosted on IBM's SoftLayer cloud technology, World Community Grid provides massive amounts of supercomputing power to scientists for free by harnessing the unused computing power of volunteers' computers and Android devices. In 2011, Perryman designed, developed, and ran the grid's Global Online Fight Against Malaria (GO FAM) project, which has resulted in identifying promising tool compounds for treating malaria and drug-resistant tuberculosis.
According to the center's director, Thomas Schulthess, teams there laid the GPU foundation early, beginning with a proposal in 2008 that pushed for rapid code refactoring across the many application areas served by CSCS in weather, engineering, biosciences, and beyond. The center decided on the Cray, GPU-backed architecture when it evaluated what would be required for applications running on Piz Daint in 2012. Had it waited long enough, the OpenPower Foundation would have revealed a hybrid GPU supercomputer featuring the intranode NVLink interconnect, which avoids the hops over PCIe required on other machines; even so, Schulthess says they likely would still have gone with a Cray system, giving up the intranode interconnect provided via NVLink. Schulthess tells us that the center already has test systems sporting NVLink on site for development purposes, but that teams are most interested in getting the Pascal nodes on the floor for the boost in memory performance, something that is increasingly important than the floating point advantages alone. Schulthess tells The Next Platform that the upgrade will bolster CSCS's ability to crunch Large Hadron Collider data, push current research on the Human Brain Project's High Performance Computing and Analytics platform, which is rooted in Piz Daint, and run a large cadre of other scientific codes for European institutions.
Prabhat leads the Data and Analytics Services team at NERSC. His current research interests include scientific data management, parallel I/O, high performance computing, and scientific visualization. He is also interested in applied statistics, machine learning, computer graphics, and computer vision. Prabhat received an ScM in Computer Science from Brown University (2001) and a B.Tech in Computer Science and Engineering from IIT-Delhi (1999).