Physics-informed Reduced-Order Learning from the First Principles for Simulation of Quantum Nanostructures
Veresko, Martin, Cheng, Ming-Cheng
Multi-dimensional direct numerical simulation (DNS) of the Schrödinger equation is needed for the design and analysis of quantum nanostructures, which offer numerous applications in biology, medicine, materials, and electronic/photonic devices. In large-scale nanostructures, the extensive computational effort required by DNS may become prohibitive due to the high degrees of freedom (DoF). This study employs a physics-based reduced-order learning algorithm, enabled by the first principles, to simulate the Schrödinger equation with high accuracy and efficiency. The proposed simulation methodology is applied to two quantum-dot structures: one operates under an external electric field, and the other is influenced by an internal potential variation with periodic boundary conditions. The former resembles typical operation of nanoelectronic devices, and the latter is of interest for the simulation and design of nanostructures and materials, such as in applications of density functional theory. In each structure, cases within and beyond the training conditions are examined. Using the proposed methodology, a very accurate prediction is achieved with a reduction in the DoF by more than 3 orders of magnitude and in the computational time by 2 orders, compared to DNS. Accurate prediction beyond the training conditions, including a higher external field and a larger internal potential in untrained quantum states, is also achieved. A comparison between the physics-based learning and Fourier-based plane-wave approaches is also carried out for a periodic case.
State Advantage Weighting for Offline RL
Lyu, Jiafei, Gong, Aicheng, Wan, Le, Lu, Zongqing, Li, Xiu
We present state advantage weighting for offline reinforcement learning (RL). In contrast to the action advantage $A(s,a)$ commonly adopted in QSA learning, we leverage the state advantage $A(s,s^\prime)$ and QSS learning for offline RL, hence decoupling the action from values. We expect the agent to reach high-reward states, with the action determined by how the agent can get to the corresponding state. Experiments on D4RL datasets show that our proposed method achieves remarkable performance against common baselines. Furthermore, our method shows good generalization capability when transferring from offline to online settings.
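The weighting idea in the abstract can be illustrated with a short hedged sketch (the exponential weighting form, the temperature `beta`, and the toy values below are assumptions modeled on advantage-weighted regression, not details taken from the paper): transitions whose state advantage $A(s, s^\prime) = Q(s, s^\prime) - V(s)$ is larger receive exponentially larger training weight.

```python
import math

# Hedged sketch of state advantage weighting. All numbers and names
# here are illustrative toy values, not from the paper.
beta = 1.0          # assumed temperature hyperparameter
V_s = 2.0           # hypothetical state value V(s)
Q_s_next = {"s1": 2.5, "s2": 1.0, "s3": 3.0}  # hypothetical Q(s, s')

# w(s, s') proportional to exp(A(s, s') / beta), A(s, s') = Q(s, s') - V(s)
weights = {s2: math.exp((q - V_s) / beta) for s2, q in Q_s_next.items()}
total = sum(weights.values())
weights = {s2: w / total for s2, w in weights.items()}

# Transitions to higher-advantage next states get larger weight,
# so the learned model is biased toward reachable high-reward states.
```

With these toy values, the transition to `s3` (advantage +1.0) dominates the weighting, while `s2` (advantage -1.0) is strongly down-weighted.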
Estimating Q(s,s') with Deep Deterministic Dynamics Gradients
Edwards, Ashley D., Sahni, Himanshu, Liu, Rosanne, Hung, Jane, Jain, Ankit, Wang, Rui, Ecoffet, Adrien, Miconi, Thomas, Isbell, Charles, Yosinski, Jason
In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at \url{sites.google.com/view/qss-paper}.
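The $Q(s, s^\prime)$ formulation can be made concrete with a minimal tabular sketch (the toy chain MDP and all names below are illustrative, not from the paper): value iteration on the QSS-style Bellman equation $Q(s, s^\prime) = r(s, s^\prime) + \gamma \max_{s^{\prime\prime}} Q(s^\prime, s^{\prime\prime})$, with next states chosen greedily in place of an explicit policy over actions.

```python
# Minimal tabular QSS sketch on a deterministic 5-state chain.
# States 0..4; from state s you may stay within {s-1, s+1} (clipped).
# Reward 1.0 for entering state 4, else 0. Purely illustrative.

GAMMA = 0.9
N = 5

def neighbors(s):
    return {max(s - 1, 0), min(s + 1, N - 1)}

def reward(s, s2):
    return 1.0 if s2 == N - 1 else 0.0

# Q[s][s2] = value of transitioning s -> s2, then acting optimally.
Q = {s: {s2: 0.0 for s2 in neighbors(s)} for s in range(N)}

# Value iteration on Q(s, s') = r(s, s') + gamma * max_{s''} Q(s', s'')
for _ in range(100):
    for s in range(N):
        for s2 in Q[s]:
            Q[s][s2] = reward(s, s2) + GAMMA * max(Q[s2].values())

# Greedy "policy over next states": pick the neighbor with highest Q.
def next_state(s):
    return max(Q[s], key=Q[s].get)

path = [0]
while path[-1] != N - 1:
    path.append(next_state(path[-1]))
print(path)  # → [0, 1, 2, 3, 4]
```

In the paper's setting a learned forward dynamics model would propose the maximizing next state instead of this exhaustive neighbor enumeration; the sketch only shows how values attach to state transitions rather than to actions.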
Artificial intelligence on Hadoop: Does it make sense? (ZDNet)
This week MapR announced a new solution called Quick Start Solution (QSS), focusing on deep learning applications. MapR touts QSS as a distributed deep learning (DL) product and services offering that enables the training of complex deep learning algorithms at scale. Here's the idea: deep learning requires lots of data, and it is complex. If MapR's Converged Data Platform is your data backbone, then QSS gives you what you need to use your data for DL applications. It makes sense, and it is in line with MapR's strategy.
Adaptive Parallel Iterative Deepening Search
Many of the artificial intelligence techniques developed to date rely on heuristic search through large spaces. Unfortunately, the size of these spaces and the corresponding computational effort reduce the applicability of otherwise novel and effective algorithms. A number of parallel and distributed approaches to search have considerably improved the performance of the search process. Our goal is to develop an architecture that automatically selects parallel search strategies for optimal performance on a variety of search problems. In this paper we describe one such architecture realized in the Eureka system, which combines the benefits of many different approaches to parallel heuristic search. Through empirical and theoretical analyses we observe that features of the problem space directly affect the choice of optimal parallel search strategy. We then employ machine learning techniques to select the optimal parallel search strategy for a given problem space. When a new search task is input to the system, Eureka uses features describing the search space and the chosen architecture to automatically select the appropriate search strategy. Eureka has been tested on a MIMD parallel processor, a distributed network of workstations, and a single workstation using multithreading. Results generated from fifteen-puzzle problems, robot arm motion problems, artificial search spaces, and planning problems indicate that Eureka outperforms each of the tested strategies used exclusively across all problem instances and is able to greatly reduce the search time for these applications.