Scientific Computing


NetApp Teams with NVIDIA to Accelerate HPC and AI with Turnkey Supercomputing Infrastructure

#artificialintelligence

NetApp, a global, cloud-led, data-centric software company, announced that NetApp EF600 all-flash NVMe storage combined with the BeeGFS parallel file system is now certified for NVIDIA DGX SuperPOD. The new certification simplifies artificial intelligence (AI) and high-performance computing (HPC) infrastructure, enabling faster deployment of these use cases. Since 2018, NetApp and NVIDIA have served hundreds of customers with a range of solutions, from building AI Centers of Excellence to solving massive-scale AI training challenges. The qualification of the NetApp EF600 and the BeeGFS file system for DGX SuperPOD is the latest addition to the complete set of AI solutions the two companies have developed. "The NetApp and NVIDIA alliance has delivered industry-leading innovation for years, and this new qualification for NVIDIA DGX SuperPOD builds on that momentum," said Phil Brotherton, Vice President of Solutions and Alliances at NetApp.


Top 10 Python Code Generators that Data Scientists Should Know

#artificialintelligence

Python code generators are in high demand in the data science world for completing data science projects. Code-generation tools help with productivity, simplification, consistency, and portability, although data scientists who adopt them must also weigh two issues: maintenance and complexity. Let's explore some of the top Python code generators that data scientists can use efficiently in 2022. PyTorch, an open-source machine learning framework, is one of the top picks, supporting both research prototyping and production deployment.
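
As a rough illustration of that dual role, here is a minimal PyTorch training loop on synthetic data; the model, shapes, and hyperparameters are illustrative stand-ins, not anything from the article:

import torch
import torch.nn as nn

# A tiny regression model trained on synthetic data; purely illustrative.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(64, 10)  # synthetic inputs
y = torch.randn(64, 1)   # synthetic targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()  # autograd computes gradients
    optimizer.step()

The same few lines serve for quick prototyping, and PyTorch's deployment tooling can then carry a trained model toward production.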


The Increase in Demand for High-Performance Computing (HPC) and AI

#artificialintelligence

As the world increasingly turns to renewable energy sources to power our homes and businesses, the need for high-performance computing (HPC) and artificial intelligence (AI) is also increasing. HPC and AI are used to model and predict complex phenomena, like weather patterns and climate change, as well as to optimize the design of renewable energy systems. The demand for HPC and AI is therefore increasing in many industries that are critical to the transition to a low-carbon economy. In addition, a great deal of research and development (R&D) now builds on these technologies, leading to breakthroughs that promise to change the way people live and work. With supercomputing technology in the limelight and companies focusing on enhancing their data centers' performance, it's easy to get caught up in the hype surrounding new computer systems that boast high computing power. But many people aren't sure where all of this is going or why it's such a big deal.


Exscalate: supercomputing and artificial intelligence for drug discovery and design

#artificialintelligence

Despite tremendous technological advances in drug discovery and medicinal chemistry, the failure rate of new molecular entities remains extremely high, and drug development costly and slow. Dompé, a global biopharmaceutical company with a 130-year legacy of medical innovation, aims to solve that problem. Leveraging strong drug development capabilities and more than 20 years of experience, Dompé has developed the most advanced intelligent supercomputing platform for drug testing, and the largest enumerated chemical library in the world for preclinical and candidate identification, enabling faster, more efficient, and less expensive drug discovery. "Our virtual screening platform, Exscalate, leverages high-performance computing, big data and artificial intelligence (AI) to perform in silico drug testing and design," explained Andrea R. Beccari, senior director and head of the discovery platform. "The platform not only has unprecedented speed, quality and scalability, but is also open to the scientific community to drive innovation."
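
Exscalate's internals are proprietary, but the basic shape of virtual screening is easy to sketch: score each molecule in an enumerated library against a protein target and keep the best-ranked candidates. Below is a deliberately simplified Python sketch in which the scoring function is a hypothetical placeholder, not anything resembling Dompé's models:

import heapq

def docking_score(molecule, target):
    # Hypothetical placeholder: a real platform estimates binding affinity
    # with physics-based docking or a learned model, typically on HPC nodes.
    return (hash((molecule, target)) % 1000) / 1000.0

def screen(library, target, top_k=10):
    # Rank the whole library and keep only the top-scoring candidates.
    return heapq.nlargest(top_k, library, key=lambda m: docking_score(m, target))

hits = screen([f"mol_{i}" for i in range(100_000)], target="target_protein")
print(hits[:3])

At Exscalate's scale this ranking step is distributed across supercomputer nodes; the sketch shows only the logical structure.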


AI/ML, Data Science Jobs #hiring

#artificialintelligence

Altair Engineering Inc. is an American multinational information technology company headquartered in Troy, Michigan. It provides software and cloud solutions for simulation, IoT, high-performance computing (HPC), data analytics, and artificial intelligence (AI). Altair Engineering is the creator of the HyperWorks CAE software product, among numerous other software packages and suites. The company was founded in 1985 and went public in 2017.


Python's increasing popularity in scientific and high-performance computing

#artificialintelligence

Python is an experiment in how much freedom programmers need. Too much freedom and nobody can read another's code; too little and expressiveness is endangered. Last year, Python was named the most popular programming language. The language's growing popularity can be attributed to the rise of data science and the machine learning ecosystem, with software libraries like pandas, TensorFlow, PyTorch, and NumPy, among others. The fact that it is so easy to learn also helps Python gain favour among programmers.
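
A tiny example of the expressiveness behind that popularity: grouping and summarizing a dataset in a few lines. The data below is synthetic and purely illustrative:

import numpy as np
import pandas as pd

# Synthetic dataset: a group-by aggregation that would take far more
# ceremony in most lower-level languages.
df = pd.DataFrame({
    "group": np.random.choice(["a", "b", "c"], size=1000),
    "value": np.random.randn(1000),
})
print(df.groupby("group")["value"].mean())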


San Diego Supercomputer Center to Offer Two Summer Institutes - insideHPC

#artificialintelligence

The San Diego Supercomputer Center at UC San Diego has planned summer institutes for June and August, one focused on cyberinfrastructure-enabled machine learning and the other on high-performance computing (HPC) and data science. Application deadlines are April 15 and May 13, respectively. The Cyberinfrastructure-Enabled Machine Learning (CIML) Summer Institute will be held June 27-29 (with a preparatory session on June 22). The institute will introduce machine learning (ML) researchers, developers and educators to the techniques and methods needed to migrate their ML applications from smaller, locally run resources (such as laptops and workstations) to high-performance computing (HPC) systems (e.g., SDSC's Expanse supercomputer). The CIML application deadline is Friday, April 15.


Jack Dongarra, who made supercomputers usable, awarded 2021 ACM Turing Award

ZDNet

"Science is driven by simulation," observed Dongarra. "It's that match between the hardware capability, and the necessity of the simulations to use that hardware, where my software fits in." A good chunk of Jack J. Dongarra's life has been spent shuttling between two worlds. In one world, a group of mathematicians sit with pen and paper and imagine things that could be figured out with computers. In another world, a colossus of integrated circuits sits with incredible power but also incredible constraints -- speed, memory, energy, cost.


Nvidia describes Arm-based Grace CPU 'Superchip'

#artificialintelligence

Nvidia offered details on its Grace central processing unit (CPU) "Superchip" during CEO Jensen Huang's keynote speech at its virtual Nvidia GTC 2022 event. Huang said the chip would double the performance and energy efficiency of Nvidia's current chips. It is on schedule to ship next year, he said, and it can be a "superchip," essentially two chips connected together. The chip is Nvidia's own variant of the Arm Neoverse architecture, and it is a discrete datacenter CPU designed for AI infrastructure and high-performance computing, providing the highest performance and twice the memory bandwidth and energy efficiency of today's leading server chips, Huang said.


Why You Should (or Shouldn't) be Using Google's JAX in 2022

#artificialintelligence

Since Google's JAX hit the scene in late 2018, it has been steadily growing in popularity, and for good reason. DeepMind announced in 2020 that it is using JAX to accelerate its research, and a growing number of publications and projects from Google Brain and others are using JAX. With all of this buzz, it seems like JAX is the next big Deep Learning framework, right? In this article we'll clarify what JAX is (and isn't), why you should care (or shouldn't, but you probably should), and whether you should (or shouldn't) use it. If you're already familiar with JAX and want to skip the benchmarks, you can jump ahead to our recommendations on when to use it. It may be best to start off with what JAX is not: JAX is not a Deep Learning framework or library, nor is it designed ever to be one in and of itself. In a sentence, JAX is a high-performance numerical computing library that incorporates composable function transformations [1]. This is the universal aspect of JAX that is relevant for any use case.
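
Those composable transformations are easiest to see in a small sketch: grad, vmap, and jit stacking on one ordinary Python function. The function and shapes below are illustrative only:

import jax
import jax.numpy as jnp

def loss(w, x):
    # An ordinary Python function on arrays.
    return jnp.sum((x @ w) ** 2)

# Transformations compose: differentiate with respect to w, vectorize
# over a batch of inputs, then JIT-compile the result with XLA.
grad_fn = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0)))

w = jnp.ones(3)
xs = jnp.arange(12.0).reshape(4, 3)  # a batch of four inputs
print(grad_fn(w, xs).shape)          # (4, 3): one gradient per input

Because each transformation returns a plain function, they can be stacked in whatever order the problem calls for.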