Scientific Computing


NetApp Teams with NVIDIA to Accelerate HPC and AI with Turnkey Supercomputing Infrastructure

#artificialintelligence

NetApp, a global, cloud-led, data-centric software company, announced that NetApp EF600 all-flash NVMe storage combined with the BeeGFS parallel file system is now certified for NVIDIA DGX SuperPOD. The new certification simplifies artificial intelligence (AI) and high-performance computing (HPC) infrastructure to enable faster implementation of these use cases. Since 2018, NetApp and NVIDIA have served hundreds of customers with a range of solutions, from building AI Centers of Excellence to solving massive-scale AI training challenges. The qualification of NetApp EF600 and the BeeGFS file system for DGX SuperPOD is the latest addition to a complete set of AI solutions developed by the two companies. "The NetApp and NVIDIA alliance has delivered industry-leading innovation for years, and this new qualification for NVIDIA DGX SuperPOD builds on that momentum," said Phil Brotherton, Vice President of Solutions and Alliances at NetApp.


The Increase in Demand for High-Performance Computing (HPC) and AI

#artificialintelligence

As the world increasingly turns to renewable energy sources to power our homes and businesses, the need for high-performance computing (HPC) and artificial intelligence (AI) is also increasing. HPC and AI are used to model and predict complex phenomena, such as weather patterns and climate change, and to optimize the design of renewable energy systems. Demand for HPC and AI is therefore growing in many industries that are critical to the transition to a low-carbon economy. In addition, a great deal of research and development (R&D) now builds on these technologies, leading to breakthroughs that promise to change the way people live and work. With supercomputing technology in the limelight and companies focused on enhancing their data centers' performance, it is easy to get caught up in the hype surrounding new systems that boast high computing power. But many people are unsure where all of this is heading, or why it is such a big deal.


Seminar for machine learning and UQ in scientific computing - Nazanin Abedini

#artificialintelligence

Title: Convergence properties of a data-assimilation method based on a Gauss-Newton iteration

Abstract: Data assimilation is broadly used in many practical situations, such as weather forecasting, oceanography and subsurface modelling. Studying these physical systems poses several challenges: for example, their state cannot be observed directly and accurately, or the underlying time-dependent system is chaotic, so that small changes in the initial conditions can lead to large changes in prediction accuracy. The aim of data assimilation is to correct errors in the state estimate by incorporating information from measurements into the mathematical model. Among the most widely used data-assimilation methods are variational methods.
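
For readers unfamiliar with the setup, the sketch below shows the general flavour of a Gauss-Newton iteration applied to a toy variational data-assimilation problem: estimate a state x by minimising the least-squares misfit between a nonlinear observation operator H(x) and noisy measurements y. The operator, noise level and dimensions are invented for illustration and are not taken from the talk.

```python
import numpy as np

def gauss_newton_assimilate(y, H, jac_H, x0, n_iter=20, tol=1e-10):
    """Minimise 0.5 * ||y - H(x)||^2 with Gauss-Newton updates.

    y      : observed data, shape (m,)
    H      : nonlinear observation operator, x -> (m,)
    jac_H  : Jacobian of H at x, returns an (m, n) matrix
    x0     : initial state estimate, shape (n,)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = y - H(x)                  # residual between data and model prediction
        J = jac_H(x)                  # linearisation of H around the current state
        # Gauss-Newton step: least-squares solution of J dx ~= r
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:  # stop once the update is negligible
            break
    return x

# Toy observation operator (purely illustrative)
H = lambda x: np.array([x[0] ** 2 + x[1], x[0] * x[1], x[1] ** 2])
jac_H = lambda x: np.array([[2 * x[0], 1.0],
                            [x[1],     x[0]],
                            [0.0,      2 * x[1]]])

rng = np.random.default_rng(0)
x_true = np.array([1.5, -0.5])
y = H(x_true) + 0.01 * rng.standard_normal(3)      # noisy measurements
x_est = gauss_newton_assimilate(y, H, jac_H, x0=np.array([1.0, 1.0]))
print(x_est)  # for this starting guess, lands close to x_true
```

In practice, variational data assimilation also weights the misfit by observation- and background-error covariances and adds a background term to the cost function; the sketch omits both for brevity.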


Exscalate: supercomputing and artificial intelligence for drug discovery and design

#artificialintelligence

Despite tremendous technological advances in drug discovery and medicinal chemistry, the failure rate of new molecular entities remains extremely high, and drug development remains costly and slow. Dompé, a global biopharmaceutical company with a 130-year legacy of medical innovation, aims to solve this problem. Leveraging strong drug development capabilities and more than 20 years of experience, Dompé has developed the most advanced intelligent supercomputing platform for drug testing and the largest enumerated chemical library in the world for preclinical and candidate identification, enabling faster, more efficient and less expensive drug discovery. "Our virtual screening platform, Exscalate, leverages high-performance computing, big data and artificial intelligence (AI) to perform in silico drug testing and design," explained Andrea R. Beccari, senior director and head of the discovery platform. "The platform not only has unprecedented speed, quality and scalability, but is also open to the scientific community to drive innovation."


AI/ML, Data Science Jobs #hiring

#artificialintelligence

Altair Engineering Inc. is an American multinational information technology company headquartered in Troy, Michigan. It provides software and cloud solutions for simulation, IoT, high performance computing (HPC), data analytics, and artificial intelligence (AI). Altair Engineering is the creator of the HyperWorks CAE software product, among numerous other software packages and suites. The company was founded in 1985 and went public in 2017.


San Diego Supercomputer Center to Offer Two Summer Institutes - insideHPC

#artificialintelligence

The San Diego Supercomputer Center at UC San Diego has planned summer institutes for June and August, one focused on cyberinfrastructure-enabled machine learning and the other on high-performance computing (HPC) and data science. Application deadlines are April 15 and May 13, respectively. The Cyberinfrastructure-Enabled Machine Learning (CIML) Summer Institute will be held June 27-29 (with a preparatory session on June 22). The institute will introduce machine learning (ML) researchers, developers and educators to the techniques and methods needed to migrate their ML applications from smaller, locally run resources (such as laptops and workstations) to high-performance computing (HPC) systems (e.g., SDSC's Expanse supercomputer). The CIML application deadline is Friday, April 15.
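
A first step in the kind of migration the CIML institute targets is making training code device-agnostic, so the same script runs on a laptop CPU and on a cluster GPU node. The hypothetical PyTorch snippet below illustrates that pattern with a placeholder model and random data; it is a generic sketch, not course material from SDSC.

```python
import torch
import torch.nn as nn

# Pick whichever accelerator the current machine offers: a laptop will
# typically fall back to CPU, while an HPC GPU node exposes CUDA devices.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Placeholder data; on a real cluster this would come from a DataLoader
# reading from the shared or parallel file system.
inputs = torch.randn(256, 128, device=device)
targets = torch.randint(0, 10, (256,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

print(f"final loss on {device}: {loss.item():.4f}")
```

On an HPC system such as Expanse, a script like this would typically be submitted through the cluster's batch scheduler and extended with real data loading and checkpointing, but the device-selection pattern stays the same.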


Nvidia describes Arm-based Grace CPU 'Superchip'

#artificialintelligence

Nvidia offered details on its Grace central processing unit (CPU) "Superchip" during CEO Jensen Huang's keynote speech at its virtual Nvidia GTC 2022 event. Huang said the chip would double the performance and energy efficiency of today's leading server chips. It is on schedule to ship next year, he said, and it is a "superchip," essentially two chips connected together. The chip is Nvidia's own variant of the Arm Neoverse architecture, and it is a discrete datacenter CPU designed for AI infrastructure and high-performance computing, providing the highest performance and twice the memory bandwidth and energy efficiency compared with today's leading server chips, Huang said.


DDN and Aspen Systems announce partnership

#artificialintelligence

DDN, the global leader in artificial intelligence (AI) and multi-cloud data management solutions, and Aspen Systems, the premier manufacturer of HPC products, have partnered to deliver custom AI and HPC solutions that enable data-intensive organizations to generate more value and reduce the time needed to analyze data, on-premises and in the cloud. As datasets continue to grow in size, organizations require personalized high-performance computing (HPC) solutions that are quick to deploy and easy to use in order to facilitate complex projects and achieve faster time to results. "Aspen Systems is passionate about the latest technologies that relate to our industry, and our clients value us for our precision craftsmanship and expert support," said Mako Furukawa, senior sales engineer, Aspen Systems, Inc. "The need to provide high-speed parallel storage to our customers drove the partnership with DDN over a decade ago. DDN is an excellent partner for us because of their technical knowledge, ongoing customer preference for their solutions as well as their depth of understanding customers' complex requirements." DDN and Aspen Systems serve a breadth of organizations conducting computational science research, ranging from government agencies, higher education, aerospace and automotive to pharmacology, psychology and biology.


Czech Digital Innovation Hubs: who are they and how can they help?

Robohub

In the Czech Republic, the initial network of Digital Innovation Hubs (DIHs) in 2016 was very small: it included only two pioneer hubs that grew out of Horizon 2020 European projects, one focused on digital manufacturing (DIGIMAT, located in Kuřim, was the first) and one on high-performance computing (the National Supercomputing Centre IT4Innovations, located in Ostrava, was the first DIH registered in the Czech Republic). Since then, the Czech network has grown to twelve DIHs, with specialisations ranging from manufacturing, cybersecurity, artificial intelligence, robotics and high-performance computing to sectoral focuses such as agriculture and the food industry, health, and the development of smart cities and smart regions. How are the Czech DIHs built and structured? They are often based on a leading competence centre, research organisation, technical university, science park, technology centre or one of the most active innovation-supporting NGOs, which has created an efficient consortium of partners around it, always in line with the selected field of specialisation. The DIHs in Czechia, as elsewhere in Europe, have not been built "on a green meadow", as the Czech saying goes, but on a record of projects, activities and shared references that together create the optimal mixture of the DIH's expertise and capacities.


How High Performance Computing and Artificial Intelligence are working together to tackle the challenges of data overload

#artificialintelligence

Initially, many supercomputers were based on mainframes; however, their cost and complexity were significant barriers to entry for many institutions. The idea of using multiple low-cost PCs over a network as a cost-effective form of parallel computing led research institutions down the path of high performance computing (HPC) clusters, starting with "Beowulf" clusters in the 1990s. More recently, we have witnessed the advancement of HPC from the original CPU-based clusters to systems that do the bulk of their processing on graphics processing units (GPUs), resulting in the growth of GPU-accelerated computing. While HPC was scaling up with more compute resources, data was growing at a far faster pace. This has presented big data challenges for storage, processing, and transfer.