scientific computing


High performance computing – at condo prices

#artificialintelligence

The San Diego Supercomputer Center makes high-performance computing resources available to researchers via a "condo cluster" model. Many homebuyers have found that the most affordable path to homeownership leads to a condominium, in which the purchaser buys a piece of a much larger building. The same model is in play today in the high-performance computing centers at many universities. Under this "condo cluster" model, faculty researchers buy a piece of a much larger HPC system. In a common scenario, researchers use equipment-purchase funds from grants or other funding sources to buy compute nodes that are added to the cluster.


Local teams make it to global AI competition

#artificialintelligence

Two student teams are representing Malaysia in an ongoing international Artificial Intelligence (AI) competition, and one of them is from Curtin University Malaysia. The third annual Asia Pacific High Performance Computing – Artificial Intelligence (APAC HPC-AI) Competition runs from May 20 to Oct 15 and is co-organised by the HPC-AI Advisory Council and the Singapore National Supercomputing Centre. This year's edition encourages teams across the Asia Pacific to showcase their high-performance computing and AI expertise in a friendly yet spirited competition that builds critical skills, professional relationships, competitive spirit and lifelong camaraderie. Held remotely, the competition has drawn a record 30 teams of undergraduate and graduate competitors from some of the region's leading academic institutions.


Asetek Collaborates With HPE on Next-Gen HPC Server Cooling Solutions

#artificialintelligence

Asetek's direct liquid cooling solution will be delivered in HPE Apollo Systems for increased power density and high-wattage processing in data centers. Asetek announced a collaboration with Hewlett Packard Enterprise (HPE) to deliver its premium data center liquid cooling solutions in HPE Apollo Systems, which are high-performing and density-optimized to target high-performance computing (HPC) and Artificial Intelligence (AI) needs. The integration enables deployment of high-wattage processors in high-density configurations to support compute-intensive workloads. When developing its next-generation HPC server solutions, HPE worked closely with Asetek to define a plug-and-play HPC system, integrated, installed, and serviced by HPE, that complements HPE's Gen10 Plus platform. The resulting solution lets HPE maximize processor and interconnect performance by efficiently cooling high-density computing clusters. HPE will deploy these direct liquid cooling (DLC) systems, which support warm-water cooling, this calendar year.


How scientists are using supercomputers to combat COVID-19

#artificialintelligence

Alongside the White House Office of Science and Technology Policy (OSTP), IBM announced in March that it would help coordinate an effort to provide hundreds of petaflops of compute to scientists researching the coronavirus. As part of the newly launched COVID-19 High Performance Computing (HPC) Consortium, IBM pledged to assist in evaluating proposals and to provide access to resources for projects that "make the most immediate impact." Much work remains, but some of the Consortium's most prominent members -- among them Microsoft, Intel, and Nvidia -- claim that progress is being made. Powerful computers allow researchers to undertake high volumes of calculations in epidemiology, bioinformatics, and molecular modeling, many of which would take months on traditional computing platforms (or years if done by hand). Moreover, because the computers are available in the cloud, they enable teams to collaborate from anywhere in the world. Insights generated by the experiments can help advance our understanding of key aspects of COVID-19, such as viral-human interaction, viral structure and function, small molecule design, drug repurposing, and patient trajectory and outcomes.


Australia's new quantum-supercomputing innovation hub and CSIRO roadmap

ZDNet

The Pawsey Supercomputing Centre and Canberra-based quantum computing hardware startup Quantum Brilliance have announced a new hub that aims to combine innovations from both sectors. Under the partnership, Pawsey staff will develop quantum expertise, install and provide access to a quantum emulator at Pawsey, and work alongside Australian researchers. The Pawsey centre is an unincorporated joint venture between the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Curtin University, Edith Cowan University, Murdoch University, and the University of Western Australia. It currently serves over 1,500 researchers across Australia who are involved in more than 150 supercomputing projects. Quantum Brilliance, meanwhile, is using diamond to develop quantum computers that can operate at room temperature, without the cryogenics or what the company describes as the complex infrastructure of other quantum technologies.


Machine learning with Python: A guide to getting started

#artificialintelligence

"Machine learning" has an almost cinematic quality to it, doesn't it? It evokes the work Isaac Asimov and Arthur C. Clarke. Science fiction has often been the predecessor to true scientific advancement, and in regards to artificial intelligence this is definitely the case, though not in the ways that authors and filmmakers have predicted. Machine learning is very real, and not as impenetrable as you might think. If you've used a search engine, tagged a friend in a Facebook photo, or noticed a lack of spam in your email inbox, then you've used technology that utilizes machine learning.


An information-geometric approach to feature extraction and moment reconstruction in dynamical systems

arXiv.org Machine Learning

We propose a dimension reduction framework for feature extraction and moment reconstruction in dynamical systems that operates on spaces of probability measures induced by observables of the system rather than directly in the original data space of the observables themselves as in more conventional methods. Our approach is based on the fact that orbits of a dynamical system induce probability measures over the measurable space defined by (partial) observations of the system. We equip the space of these probability measures with a divergence, i.e., a distance between probability distributions, and use this divergence to define a kernel integral operator. The eigenfunctions of this operator create an orthonormal basis of functions that capture different timescales of the dynamical system. One of our main results shows that the evolution of the moments of the dynamics-dependent probability measures can be related to a time-averaging operator on the original dynamical system. Using this result, we show that the moments can be expanded in the eigenfunction basis, thus opening up the avenue for nonparametric forecasting of the moments. If the collection of probability measures is itself a manifold, we can in addition equip the statistical manifold with a Riemannian metric and use techniques from information geometry. We present applications to ergodic dynamical systems on the 2-torus and the Lorenz 63 system, and show on a real-world example that a small number of eigenvectors is sufficient to reconstruct the moments (here the first four moments) of an atmospheric time series, i.e., the real-time multivariate Madden-Julian oscillation index.
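The pipeline can be read as: turn windows of a trajectory into empirical probability measures, measure divergences between them, build a kernel from those divergences, and expand moments in the kernel's eigenbasis. The numpy sketch below illustrates that reading on a toy signal; the window length, histogram binning, Hellinger distance, and Gaussian kernel bandwidth are all illustrative assumptions rather than the paper's exact construction.

```python
# Schematic sketch: sliding windows induce empirical probability measures,
# a divergence between measures defines a kernel, and the kernel's
# eigenvectors give a basis in which window moments can be expanded.
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.3 * rng.standard_normal(4000)

win, nbins = 200, 20
edges = np.linspace(x.min(), x.max(), nbins + 1)
windows = np.lib.stride_tricks.sliding_window_view(x, win)[::50]

# Each window becomes an empirical probability measure (a histogram).
P = np.stack([np.histogram(w, bins=edges)[0] for w in windows]).astype(float)
P /= P.sum(axis=1, keepdims=True)

# Hellinger distance between measures, then a Gaussian kernel on it.
sq = np.sqrt(P)
D = np.linalg.norm(sq[:, None, :] - sq[None, :, :], axis=2) / np.sqrt(2)
K = np.exp(-D**2 / np.median(D) ** 2)

# Eigenfunctions of the (symmetric) kernel operator, largest eigenvalues first.
eigvals, eigvecs = np.linalg.eigh(K)
basis = eigvecs[:, ::-1][:, :5]

# Expand the first moment of each window in the eigenbasis and reconstruct.
m1 = windows.mean(axis=1)
coeffs, *_ = np.linalg.lstsq(basis, m1, rcond=None)
m1_hat = basis @ coeffs
print("relative reconstruction error:", np.linalg.norm(m1 - m1_hat) / np.linalg.norm(m1))
```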


Simple Local Models for Complex Dynamical Systems

Neural Information Processing Systems

We present a novel mathematical formalism for the idea of a "local model," a model of a potentially complex dynamical system that makes only certain predictions in only certain situations. As a result of its restricted responsibilities, a local model may be far simpler than a complete model of the system. We then show how one might combine several local models to produce a more detailed model. We demonstrate our ability to learn a collection of local models on a large-scale example and perform a preliminary empirical comparison between learning a collection of local models and other model-learning methods.
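As a toy illustration of the idea, the sketch below gives each local model an explicit region of responsibility and combines whichever models claim a query point. The piecewise-linear models and interval regions are assumptions for illustration, not the paper's formalism.

```python
# Toy "local model" sketch: each model predicts only inside its own region,
# and a combiner averages the predictions of all responsible models.
import numpy as np

class LocalLinearModel:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi  # region of responsibility
        self.w = None

    def responsible(self, x):
        return (self.lo <= x) & (x < self.hi)

    def fit(self, x, y):
        mask = self.responsible(x)
        self.w = np.polyfit(x[mask], y[mask], deg=1)  # a simple local line

    def predict(self, x):
        return np.polyval(self.w, x)

def combined_predict(models, x):
    # Average the predictions of every model responsible for each point.
    preds, counts = np.zeros_like(x), np.zeros_like(x)
    for m in models:
        mask = m.responsible(x)
        preds[mask] += m.predict(x[mask])
        counts[mask] += 1
    return preds / np.maximum(counts, 1)

x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(x)  # a nonlinear "system" approximated piecewise by local lines
models = [LocalLinearModel(a, a + np.pi / 2) for a in np.arange(0, 2 * np.pi, np.pi / 2)]
for m in models:
    m.fit(x, y)
print("max error:", np.abs(combined_predict(models, x) - y).max())
```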


Nonparametric Bayesian Learning of Switching Linear Dynamical Systems

Neural Information Processing Systems

Many nonlinear dynamical phenomena can be effectively modeled by a system that switches among a set of conditionally linear dynamical modes. We consider two such models: the switching linear dynamical system (SLDS) and the switching vector autoregressive (VAR) process. In this paper, we present a nonparametric approach to the learning of an unknown number of persistent, smooth dynamical modes by utilizing a hierarchical Dirichlet process prior. We develop a sampling algorithm that combines a truncated approximation to the Dirichlet process with an efficient joint sampling of the mode and state sequences. The utility and flexibility of our model are demonstrated on synthetic data, sequences of dancing honey bees, and the IBOVESPA stock index.
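For readers unfamiliar with the model class, the following sketch simulates a small switching linear system of the kind described: a sticky Markov chain selects which linear dynamics matrix drives the state at each step. The transition matrix and dynamics matrices here are illustrative; the paper's contribution is to learn them, and the number of modes, nonparametrically via a hierarchical Dirichlet process prior.

```python
# Minimal simulation of a switching linear dynamical system with two
# persistent modes. All parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

# Two modes: a slow rotation and a contraction.
A = [np.array([[0.99, -0.10], [0.10, 0.99]]),
     np.array([[0.90, 0.00], [0.00, 0.90]])]
T = np.array([[0.98, 0.02],   # sticky transitions keep modes persistent
              [0.02, 0.98]])

z, x = 0, np.array([1.0, 0.0])
modes, states = [], []
for _ in range(500):
    z = rng.choice(2, p=T[z])                      # (rarely) switch modes
    x = A[z] @ x + 0.05 * rng.standard_normal(2)   # mode's linear dynamics + noise
    modes.append(z)
    states.append(x)

states = np.array(states)
print("fraction of time in mode 0:", np.mean(np.array(modes) == 0))
```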


Learning to Correspond Dynamical Systems

arXiv.org Machine Learning

Many dynamical systems exhibit similar structure, as often captured by hand-designed simplified models that can be used for analysis and control. We develop a method for learning to correspond pairs of dynamical systems via a learned latent dynamical system. Given trajectory data from two dynamical systems, we learn a shared latent state space and a shared latent dynamics model, along with an encoder-decoder pair for each of the original systems. With the learned correspondences in place, we can use a simulation of one system to produce an imagined motion of its counterpart. We can also simulate in the learned latent dynamics and synthesize the motions of both corresponding systems, as a form of bisimulation. We demonstrate the approach using pairs of controlled bipedal walkers, as well as by pairing a walker with a controlled pendulum.
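A schematic reading of the architecture: per-system encoder-decoder pairs wrapped around a single shared latent dynamics model, trained so that aligned states from the two systems map to the same latent point. The PyTorch sketch below illustrates that structure; all layer sizes, observation dimensions, and loss terms are assumptions for illustration, not the paper's exact design.

```python
# Schematic sketch: two systems share one latent state space and one latent
# dynamics model, each with its own encoder/decoder pair.
import torch
import torch.nn as nn

LATENT = 8  # shared latent dimension (an assumed value)

class System(nn.Module):
    def __init__(self, obs_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, obs_dim))

# One latent dynamics model shared by both systems.
latent_dynamics = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, LATENT))
walker, pendulum = System(obs_dim=12), System(obs_dim=2)

def correspondence_loss(xa, xb):
    """xa, xb: time-aligned trajectories (time, obs_dim) from the two systems."""
    za, zb = walker.enc(xa), pendulum.enc(xb)
    # Reconstruction: each decoder must invert its own encoder.
    recon = ((walker.dec(za) - xa) ** 2).mean() + ((pendulum.dec(zb) - xb) ** 2).mean()
    # Alignment: corresponding states share one latent representation.
    align = ((za - zb) ** 2).mean()
    # One-step prediction in the shared latent dynamics.
    pred = ((latent_dynamics(za[:-1]) - za[1:]) ** 2).mean()
    return recon + align + pred

# "Imagined motion": encode system A, roll the latent dynamics forward,
# and decode the result as system B.
with torch.no_grad():
    z = walker.enc(torch.randn(1, 12))
    for _ in range(10):
        z = latent_dynamics(z)
    imagined = pendulum.dec(z)
```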