Education


Project to elucidate the structure of atomic nuclei at the femtoscale

MIT News

The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, has selected 10 data science and machine learning projects for its Aurora Early Science Program (ESP). Set to be the nation's first exascale system upon its expected 2021 arrival, Aurora will be capable of performing a quintillion calculations per second, making it 10 times more powerful than the fastest computer that currently exists. The Aurora ESP, which commenced with 10 simulation-based projects in 2017, is designed to prepare key applications, libraries, and infrastructure for the architecture and scale of the exascale supercomputer. Researchers in the Laboratory for Nuclear Science's Center for Theoretical Physics have been awarded funding for one of the projects under the ESP. Associate professor of physics William Detmold, assistant professor of physics Phiala Shanahan, and principal research scientist Andrew Pochinsky will use new techniques developed by the group, coupling novel machine learning approaches and state-of-the-art nuclear physics tools, to study the structure of nuclei.


GPU computing: Accelerating the deep learning curve

ZDNet

Artificial intelligence (AI) may be what everyone's talking about, but getting involved isn't straightforward. You'll need a more than decent grasp of maths and theoretical data science, plus an understanding of neural networks and deep learning fundamentals -- not to mention a good working knowledge of the tools required to turn those theories into practical models and applications. You'll also need an abundance of processing power -- beyond that required by even the most demanding of standard applications. One way to get this is via the cloud but, because deep learning models can take days or even weeks to come up with the goods, that can be hugely expensive. In this article, therefore, we'll look at on-premises alternatives and why the once-humble graphics controller is now the must-have accessory for the would-be AI developer.
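To make the hardware point concrete, here is a minimal sketch of how a framework hands work to a GPU. PyTorch is an assumption here (the article names no framework); the snippet simply prefers a CUDA device when one is present and falls back to the CPU.

```python
# Minimal PyTorch sketch: run a small model on the GPU when one is present.
# PyTorch is an assumption -- the article does not prescribe a framework.
import torch
import torch.nn as nn

# Prefer a CUDA-capable GPU; fall back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy fully connected network, just to show the device round-trip.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)

x = torch.randn(32, 128, device=device)  # a batch of 32 random inputs
logits = model(x)                         # forward pass executes on `device`
print(logits.shape, logits.device)
```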


AI's Ultimate Impact on Jobs is in Limbo and the Quantum Quandary

#artificialintelligence

Welcome to the club if you are still behind the artificial intelligence curve. This is the last chapter of my AI series, and I hope it has shed a humble light upon the linchpin of the Fourth Industrial Revolution (4IR). Included below are links to previous installments. You do not want to miss the mini-documentary in part 3. Keep the following quotes in mind as I prognosticate today on AI jobs for the near term. "I have all the tools and gadgets. I tell my son, who is a producer.


Machine learning spotlight: Industry 4.0 and predictive maintenance

#artificialintelligence

Industry 4.0 is characterized by applying cloud and cognitive computing to current automated and computerized industrial systems, resulting in smart factories that monitor physical processes, identify issues or optimizations, and perform iterative refinement or proactive maintenance and updates. Emory University and Presenso recently released a study called The Future of IIoT Predictive Maintenance, which focuses on the current state of predictive maintenance, its implementation, its impact, and the future needs identified within smart factories. Over 100 operations and maintenance professionals across Europe, North America, and Asia Pacific participated. The results showed that while satisfaction with existing predictive maintenance environments was good, the modeling and machine learning aspects are lagging behind: spreadsheet-based statistical modeling has not yet been replaced by more advanced capabilities.
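The study itself publishes no code, but as a rough sketch of what could replace spreadsheet-based statistics, an unsupervised anomaly detector over sensor readings is a common starting point. Everything below is illustrative: the scikit-learn IsolationForest, the synthetic vibration and temperature features, and the contamination setting are assumptions, not anything from the study.

```python
# Sketch of ML-based predictive maintenance: flag anomalous sensor readings.
# IsolationForest and the synthetic features are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Normal operation: vibration (mm/s) and bearing temperature (deg C).
normal = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(1000, 2))
# A failing machine drifts toward higher vibration and heat.
failing = rng.normal(loc=[5.0, 80.0], scale=[0.5, 3.0], size=(20, 2))

detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)

# -1 marks an anomaly; schedule maintenance before the machine fails.
flags = detector.predict(np.vstack([normal[:5], failing[:5]]))
print(flags)  # expect mostly 1s for normal rows, -1s for failing rows
```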


Deep Learning on the Edge – Towards Data Science

#artificialintelligence

Scalable Deep Learning services are contingent on several constraints. Depending on your target application, you may require low latency, enhanced security, or long-term cost effectiveness. Hosting your Deep Learning model on the cloud may not be the best solution in such cases. Computing on the edge alleviates these issues and provides other benefits. "Edge" here refers to computation performed locally on the consumer's device.
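As a sketch of the deployment side, one common route to the edge is exporting a trained model to a portable format that a lightweight on-device runtime can execute, avoiding the round-trip to a cloud endpoint. The ONNX path below is an assumption (the article prescribes no toolchain), and the toy CNN merely stands in for a real model.

```python
# Sketch: export a model for on-device ("edge") inference via ONNX.
# The toolchain choice is an assumption; the article names none.
import torch
import torch.nn as nn

# A small CNN stands in for whatever model would ship to the device.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
).eval()

dummy = torch.randn(1, 3, 64, 64)  # one example input fixes the graph shape
torch.onnx.export(model, dummy, "edge_model.onnx",
                  input_names=["image"], output_names=["score"])
# The .onnx file can then run locally under a lightweight runtime
# (e.g., onnxruntime) instead of calling out to a cloud service.
```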


How Quantum Computing & Machine Learning Work Together

#artificialintelligence

"The most important benefit of quantum computers is the speed at which it can solve complex problems," says Bansal. While they're lightning quick at what they do, Bansal notes, "they don't provide capabilities to solve problems from undecidable or NP Hard problem classes." There is a problem set that quantum computing will be able to solve, however it's not applicable for all computing problems. Typically, the problem set that quantum computers are good at solving involves number or data crunching with a huge amount of inputs, such as "complex optimisation problems and communication systems analysis problems" – calculations that would typically take supercomputers days, years, even billions of years to brute force. The application that's regularly trotted out as an example that quantum computers will be able to instantly solve is strong RSA encryption.


Life lessons from artificial intelligence: What Microsoft's AI chief wants computer science grads to know about the future

#artificialintelligence

In addition to awarding bachelor's, master's, and Ph.D. degrees, the Allen School recognized two 2018 Alumni Impact Award recipients, Yaw Anokwa and Eileen Bjorkman.


Machine Learning's Limits

#artificialintelligence

Semiconductor Engineering sat down with Rob Aitken, an Arm fellow; Raik Brinkmann, CEO of OneSpin Solutions; Patrick Soheili, vice president of business and corporate development at eSilicon; and Chris Rowen, CEO of Babblelabs. What follows are excerpts of that conversation.

SE: Where are we with machine learning? What problems still have to be resolved?

Aitken: We're in a state where things are changing so rapidly that it's really hard to keep up with where we are at any given instant. We've seen that machine learning has been able to take some of the things we used to think were very complicated and render them simple to do.


Towards Dependability Metrics for Neural Networks

arXiv.org Machine Learning

Artificial neural networks (NN) are instrumental in realizing highly automated driving functionality. An overarching challenge is to identify best safety engineering practices for NN and other learning-enabled components. In particular, there is an urgent need for an adequate set of metrics for measuring all-important NN dependability attributes. We address this challenge by proposing a number of NN-specific and efficiently computable metrics for measuring NN dependability attributes, including robustness, interpretability, completeness, and correctness.
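The abstract doesn't spell out the proposed metrics, but a robustness-style metric in this spirit might compare a classifier's accuracy on clean inputs against its accuracy under input perturbation. The sketch below is an illustrative stand-in, not the paper's definition; the function name and the Gaussian-noise choice are assumptions.

```python
# Illustrative robustness-style metric (not the paper's exact definition):
# how far does a classifier's accuracy fall when inputs get Gaussian noise?
import numpy as np

def noise_robustness(predict, X, y, sigma=0.1, seed=0):
    """Ratio of accuracy on noise-perturbed inputs to clean accuracy."""
    rng = np.random.default_rng(seed)
    clean_acc = np.mean(predict(X) == y)
    noisy_acc = np.mean(predict(X + rng.normal(0.0, sigma, X.shape)) == y)
    return noisy_acc / clean_acc  # 1.0 = fully robust at this noise level

# Usage with any model exposing a predict(X) -> labels function:
#   score = noise_robustness(model.predict, X_test, y_test, sigma=0.05)
```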


Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning

arXiv.org Artificial Intelligence

To improve the quality of computation experience for mobile devices, mobile-edge computing (MEC) is a promising paradigm that provides computing capabilities in close proximity within a sliced radio access network (RAN) supporting both traditional communication and MEC services. Nevertheless, the design of computation offloading policies for a virtual MEC system remains challenging. Specifically, whether to execute a computation task at the mobile device or to offload it to an MEC server for execution should adapt to the time-varying network dynamics. In this paper, we consider MEC for a representative mobile user (MU) in an ultra-dense sliced RAN, where multiple base stations (BSs) are available for computation offloading. The problem of finding an optimal computation offloading policy is modelled as a Markov decision process, where our objective is to maximize the long-term utility performance, and an offloading decision is made based on the task queue state, the energy queue state, and the channel qualities between the MU and the BSs. To break the curse of high dimensionality in the state space, we first propose a double deep Q-network (DQN) based strategic computation offloading algorithm that learns the optimal policy without a priori knowledge of the network dynamics. Then, motivated by the additive structure of the utility function, a Q-function decomposition technique is combined with the double DQN, which leads to a novel learning algorithm for solving the stochastic computation offloading problem. Numerical experiments show that our proposed learning algorithms achieve a significant improvement in computation offloading performance compared with the baseline policies.
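For readers unfamiliar with the double DQN machinery the abstract leans on, the key idea is that the online network selects the next action while a separate target network evaluates it, which curbs the over-estimation bias of vanilla Q-learning. The sketch below shows just that target computation; the network sizes and the offloading-specific state encoding (task queue, energy queue, channel qualities) are placeholder assumptions.

```python
# Sketch of the double DQN target used when learning an offloading policy:
# the online network picks the next action, the target network scores it.
# State encoding and network shapes are assumptions for illustration.
import torch
import torch.nn as nn

n_state, n_action = 8, 5  # e.g., queue states + channel qualities; BS choices

def make_q_net():
    return nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(),
                         nn.Linear(64, n_action))

online, target = make_q_net(), make_q_net()
target.load_state_dict(online.state_dict())  # periodically synced in training

def double_dqn_target(reward, next_state, gamma=0.99):
    with torch.no_grad():
        best_action = online(next_state).argmax(dim=1, keepdim=True)   # select
        next_q = target(next_state).gather(1, best_action).squeeze(1)  # evaluate
    return reward + gamma * next_q  # bootstrap target for the TD update

batch = torch.randn(32, n_state)  # a batch of next states
rewards = torch.rand(32)          # placeholder utility rewards
print(double_dqn_target(rewards, batch).shape)  # torch.Size([32])
```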