Customized Monte Carlo Tree Search for LLVM/Polly's Composable Loop Optimization Transformations Artificial Intelligence

Polly is the LLVM project's polyhedral loop nest optimizer. Recently, user-directed loop transformation pragmas were proposed based on LLVM/Clang and Polly. The search space exposed by the transformation pragmas is a tree, wherein each node represents a specific combination of loop transformations that can be applied to the code resulting from the parent node's loop transformations. We have developed a search algorithm based on Monte Carlo tree search (MCTS) to find the best combination of loop transformations. Our algorithm consists of two phases: exploring loop transformations at different depths of the tree to identify promising regions in the tree search space and exploiting those regions by performing a local search. Moreover, a restart mechanism is used to avoid the MCTS getting trapped in a local solution. The best and worst solutions are transferred from the previous phases of the restarts to leverage the search history. We compare our approach with random, greedy, and breadth-first search methods on PolyBench kernels and ECP proxy applications. Experimental results show that our MCTS algorithm finds pragma combinations with a speedup of 2.3x over Polly's heuristic optimizations on average.
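The explore/exploit-with-restarts idea can be sketched in a few dozen lines. This is an illustration only, not the paper's implementation: the `TRANSFORMS` list and the `cost()` oracle below are hypothetical stand-ins for the Clang/Polly pragma space and measured run times.

```python
# Minimal MCTS sketch over a loop-transformation tree. Each node is a
# sequence of transformations applied on top of its parent's sequence.
# TRANSFORMS and cost() are hypothetical; cost() stands in for benchmarking
# the transformed kernel (lower is better).
import math
import random

TRANSFORMS = ["tile", "interchange", "unroll", "vectorize"]  # hypothetical

def cost(seq):
    # Hypothetical oracle: pretend tiling helps, and tiling plus
    # vectorization helps more, with a small penalty per pragma.
    base = 1.0
    if "tile" in seq:
        base *= 0.7
    if "tile" in seq and "vectorize" in seq:
        base *= 0.6
    return base * (1.0 + 0.05 * len(seq))

class Node:
    def __init__(self, seq):
        self.seq, self.children = seq, {}
        self.visits, self.value = 0, 0.0

def uct_pick(node, c=1.4):
    # Upper-confidence selection among fully expanded children.
    return max(node.children.values(),
               key=lambda ch: ch.value / (ch.visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

def mcts(iterations=200, max_depth=3, restarts=3, seed=0):
    rng = random.Random(seed)
    best_seq, best_cost = [], cost([])
    for _ in range(restarts):          # restarts avoid local optima;
        root = Node([])                # best result carries across restarts
        for _ in range(iterations):
            node, path = root, [root]
            while len(node.seq) < max_depth:
                untried = [t for t in TRANSFORMS if t not in node.children]
                if untried:            # expansion
                    t = rng.choice(untried)
                    node.children[t] = Node(node.seq + [t])
                    node = node.children[t]
                    path.append(node)
                    break
                node = uct_pick(node)  # selection
                path.append(node)
            c = cost(node.seq)
            if c < best_cost:
                best_seq, best_cost = node.seq, c
            reward = 1.0 / c           # lower cost -> higher reward
            for n in path:             # backpropagation
                n.visits += 1
                n.value += reward
    return best_seq, best_cost
```

In the real system the reward would come from compiling and timing the kernel under each pragma combination rather than from a closed-form cost.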

Randomized Algorithms for Scientific Computing (RASC) Artificial Intelligence

Randomized algorithms have propelled advances in artificial intelligence and represent a foundational research area in advancing AI for Science. Future advancements in DOE Office of Science priority areas such as climate science, astrophysics, fusion, advanced materials, combustion, and quantum computing all require randomized algorithms to surmount challenges of complexity, robustness, and scalability. This report summarizes the outcomes of the workshop "Randomized Algorithms for Scientific Computing (RASC)," held virtually across four days in December 2020 and January 2021.
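As one concrete illustration of the kind of algorithm surveyed (this example is ours, not taken from the report), a randomized range finder in the style of Halko, Martinsson, and Tropp computes a truncated SVD of a large matrix from a small random sketch, trading a tiny controlled error for a large reduction in work:

```python
# Randomized truncated SVD via a Gaussian random sketch: project A onto a
# random (k + oversample)-dimensional subspace, orthonormalize to estimate
# its range, then do a small exact SVD on the compressed matrix.
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # approximate range of A
    B = Q.T @ A                                       # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# On an exactly rank-5 matrix, a rank-5 randomized SVD is near-exact.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For matrices with rapidly decaying spectra the same sketch gives near-optimal low-rank approximations at a fraction of the cost of a full SVD.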

Automated Mathematical Equation Structure Discovery for Visual Analysis Artificial Intelligence

Finding the best mathematical equation to deal with the different challenges found in complex scenarios requires a thorough understanding of the scenario and a trial-and-error process carried out by experts. In recent years, state-of-the-art equation discovery methods have been widely applied to system modeling and identification. However, equation discovery approaches can also be very useful in computer vision, particularly for feature extraction. In this paper, we build on recent AI advances to present a novel framework for automatically discovering equations from scratch, with little human intervention, to deal with the different challenges encountered in real-world scenarios. In addition, our proposal can reduce human bias by designing the search space through a generative network instead of by hand. As a proof of concept, the equations discovered by our framework are used to distinguish moving objects from the background in video sequences. Experimental results show the potential of the proposed approach and its effectiveness in discovering the best equation for video sequences.

Dynamic Attention guided Multi-Trajectory Analysis for Single Object Tracking Artificial Intelligence

Most existing single-object trackers track the target within a single local search window, making them particularly vulnerable to challenging factors such as heavy occlusion and out-of-view movement. Despite attempts to further incorporate global search, prevailing mechanisms that combine local and global search are relatively static and thus remain sub-optimal for improving tracking performance. By further studying local and global search results, we raise a question: can we allow more dynamics in combining both results? In this paper, we introduce more dynamics by devising a dynamic attention-guided multi-trajectory tracking strategy. In particular, we construct a dynamic appearance model that contains multiple target templates, each of which provides its own attention for locating the target in the new frame. Guided by these different attentions, we maintain diversified tracking results for the target to build a multi-trajectory tracking history, allowing more candidates to represent the true target trajectory. After spanning the whole sequence, we introduce a multi-trajectory selection network to find the best trajectory, which delivers improved tracking performance. Extensive experimental results show that our proposed tracking strategy achieves compelling performance on various large-scale tracking benchmarks. The project page of this paper can be found at
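At the bookkeeping level, the multi-trajectory idea can be sketched as below. This is an illustration only: the paper's attention mechanism and trajectory-selection network are replaced here by a generic `score` function, and candidates are plain values rather than bounding boxes.

```python
# Multi-trajectory tracking sketch: each template scores the per-frame
# candidates and extends its own trajectory; at the end, the trajectory
# with the highest accumulated score is selected.
def track_multi(frames_candidates, templates, score):
    # frames_candidates: list (over frames) of candidate detections
    # templates: one descriptor per maintained appearance template
    # score(template, candidate) -> float, higher is better
    trajectories = [([], 0.0) for _ in templates]
    for candidates in frames_candidates:
        updated = []
        for (path, total), t in zip(trajectories, templates):
            best = max(candidates, key=lambda c: score(t, c))
            updated.append((path + [best], total + score(t, best)))
        trajectories = updated
    # Stand-in for the selection network: pick the best-scoring trajectory.
    return max(trajectories, key=lambda traj: traj[1])[0]
```

Maintaining several trajectories in parallel is what lets a late selection step recover from frames where any single template would have drifted.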

Neural Architecture Search From Fr\'echet Task Distance Machine Learning

We formulate a Fr\'echet-type asymmetric distance between tasks based on Fisher Information Matrices. We show how the distance between a target task and each task in a given set of baseline tasks can be used to reduce the neural architecture search space for the target task. The complexity reduction in search space for task-specific architectures is achieved by building on the optimized architectures for similar tasks instead of doing a full search without using this side information. Experimental results demonstrate the efficacy of the proposed approach and its improvements over the state-of-the-art methods.
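For intuition, the classical Fréchet (2-Wasserstein) distance between zero-mean Gaussians whose covariances are diagonal Fisher Information Matrices has a simple closed form. Note this is only the underlying Fréchet construction; the paper's task distance is asymmetric and defined differently.

```python
# Fréchet distance between zero-mean Gaussians N(0, F1) and N(0, F2):
# d^2 = tr(F1 + F2 - 2 (F1^{1/2} F2 F1^{1/2})^{1/2}).
# For diagonal PSD matrices this reduces to the Euclidean distance
# between the elementwise square roots of the diagonals.
import numpy as np

def frechet_diag(f1, f2):
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return float(np.sqrt(np.sum((np.sqrt(f1) - np.sqrt(f2)) ** 2)))
```

In practice, diagonal FIM approximations (as commonly used for large networks) are what make such task distances cheap enough to compute per architecture-search candidate.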

Adapting User Interfaces with Model-based Reinforcement Learning Artificial Intelligence

Adapting an interface requires taking into account both the positive and negative effects that changes may have on the user. A carelessly picked adaptation may impose high costs on the user -- for example, due to surprise or relearning effort -- or prematurely "trap" the process in a suboptimal design. However, effects on users are hard to predict, as they depend on factors that are latent and evolve over the course of interaction. We propose a novel approach for adaptive user interfaces that yields a conservative adaptation policy: it finds beneficial changes when there are such and avoids changes when there are none. Our model-based reinforcement learning method plans sequences of adaptations and consults predictive HCI models to estimate their effects. We present empirical and simulation results from the case of adaptive menus, showing that the method outperforms both a non-adaptive and a frequency-based policy.
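A one-step version of such a conservative policy can be sketched as follows. The expected-selection-time model and the switch cost below are toy assumptions standing in for the paper's predictive HCI models and planning horizon:

```python
# Conservative one-step menu adaptation: score candidate orderings with a
# toy predictive model (items further down the menu take longer to select),
# charge a fixed relearning cost for any change, and adapt only when the
# predicted net benefit is positive.

def expected_time(menu, freq, step=0.1):
    # Toy HCI model: frequency-weighted selection time by menu position.
    total = sum(freq.values()) or 1
    return sum(freq.get(item, 0) / total * (1 + step * i)
               for i, item in enumerate(menu))

def conservative_adapt(menu, freq, switch_cost=0.05):
    candidate = sorted(menu, key=lambda it: -freq.get(it, 0))  # frequency order
    gain = expected_time(menu, freq) - expected_time(candidate, freq)
    # Adapt only if the predicted benefit outweighs the relearning cost.
    return candidate if gain > switch_cost else menu
```

The full method plans multi-step adaptation sequences rather than a single greedy reordering, but the "change only when the model predicts a net benefit" logic is the same.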

Policy Search with Rare Significant Events: Choosing the Right Partner to Cooperate with Artificial Intelligence

A typical example is that of an agent who has to choose a partner to cooperate with, while a large number of partners are simply not interested in cooperating, regardless of what the agent has to offer. An opportunity can either be seized for an immediate reward or ignored if the agent hopes to get a better reward in the future. We address this problem in the context of an independent, on-line and on-policy episodic learning task with continuous state and action spaces, using two different kinds of search methods: a gradient policy search method and a direct policy search method using an evolution strategy. The practical application addressed in this paper is that of an agent learning to choose a partner for a task that requires cooperation (e.g., predators hunting a large prey or individuals selecting a lifelong mate). The agent can choose to cooperate or not with a potential partner, based on the effort this partner is willing to invest in the cooperation. At the same time, the agent must invest enough so that its partner also accepts to cooperate. In this setup, the agent may face partners willing to invest various amounts of energy in cooperation. We show that when significant events are rare, gradient information is also scarce, making it difficult for policy gradient search methods to find an optimal policy, with or without a deep neural architecture. On the other hand, we show that direct policy search methods are invariant to the rarity of significant events, which is yet another confirmation of the unique role evolutionary algorithms have to play as a reinforcement learning method.
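The core contrast, scarce gradients versus rank-based direct search, can be reproduced in a few lines. The one-dimensional sparse reward below is a hypothetical stand-in for the partner-choice task, not the paper's environment:

```python
# With a rare "significant event" the reward is flat almost everywhere, so a
# finite-difference gradient is exactly zero at most points and gradient
# ascent stalls, whereas a simple (1+lambda) evolution strategy, which only
# compares samples, still finds the rewarding region.
import random

def sparse_reward(x):
    return 1.0 if 0.45 < x < 0.50 else 0.0   # rare event: narrow reward band

def finite_diff_gradient(x, eps=1e-3):
    return (sparse_reward(x + eps) - sparse_reward(x - eps)) / (2 * eps)

def es_search(x0=0.0, sigma=0.3, lam=20, generations=200, seed=0):
    rng = random.Random(seed)
    x, fx = x0, sparse_reward(x0)
    for _ in range(generations):
        for _ in range(lam):                  # lambda offspring per generation
            y = x + rng.gauss(0.0, sigma)
            if sparse_reward(y) > fx:         # rank-based, gradient-free
                x, fx = y, sparse_reward(y)
        if fx > 0:
            break
    return x, fx
```

The gradient at the starting point is zero, so any step-size choice leaves gradient ascent stuck, while the ES succeeds because its selection step depends only on reward comparisons, not reward slopes.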

Analytics and Machine Learning in Vehicle Routing Research Artificial Intelligence

The Vehicle Routing Problem (VRP) is one of the most intensively studied combinatorial optimisation problems for which numerous models and algorithms have been proposed. To tackle the complexities, uncertainties and dynamics involved in real-world VRP applications, Machine Learning (ML) methods have been used in combination with analytical approaches to enhance problem formulations and algorithmic performance across different problem solving scenarios. However, the relevant papers are scattered in several traditional research fields with very different, sometimes confusing, terminologies. This paper presents a first, comprehensive review of hybrid methods that combine analytical techniques with ML tools in addressing VRP problems. Specifically, we review the emerging research streams on ML-assisted VRP modelling and ML-assisted VRP optimisation. We conclude that ML can be beneficial in enhancing VRP modelling, and improving the performance of algorithms for both online and offline VRP optimisations. Finally, challenges and future opportunities of VRP research are discussed.

Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces Machine Learning

High-dimensional black-box optimisation remains an important yet notoriously challenging problem. Despite the success of Bayesian optimisation methods on continuous domains, domains that are categorical, or that mix continuous and categorical variables, remain challenging. Real-world optimisation problems are often neither low-dimensional nor continuous: many large-scale practical problems exhibit complex interactions among high-dimensional input variables, and are often categorical in nature or involve a mixture of both continuous and categorical input variables. An example of the former is the maximum satisfiability problem, whose exact solution is NP-hard (Creignou et al., 2001); an example of the latter is hyperparameter tuning for a deep neural network, where the optimisation scope comprises both continuous hyperparameters, e.g., learning rate and momentum, and categorical ones, e.g., optimiser type {sgd, Adam, ...} and learning-rate scheduler type {step decay, cosine annealing}. We propose a novel solution: we combine local optimisation with a tailored kernel design, effectively handling high-dimensional categorical and mixed search spaces whilst retaining sample efficiency. We further derive a convergence guarantee for the proposed approach. Finally, we demonstrate the proposed approach empirically.
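A tailored mixed-space kernel of this general shape combines an RBF kernel on the continuous dimensions with a categorical overlap (Hamming-style) kernel. This is an assumption-level sketch of the construction, not necessarily the paper's exact kernel:

```python
# Mixed-space GP kernel sketch: RBF over continuous inputs, overlap kernel
# over categorical inputs, blended as a convex combination of their sum and
# product (a common recipe in mixed-space Bayesian optimisation).
import math

def rbf(x1, x2, lengthscale=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-0.5 * d2 / lengthscale ** 2)

def overlap(h1, h2):
    # Fraction of categorical choices that agree.
    return sum(a == b for a, b in zip(h1, h2)) / len(h1)

def mixed_kernel(x1, h1, x2, h2, lam=0.5):
    k_cont, k_cat = rbf(x1, x2), overlap(h1, h2)
    return (1 - lam) * 0.5 * (k_cont + k_cat) + lam * k_cont * k_cat
```

The sum term keeps the kernel informative when only one subspace matches, while the product term rewards points that are close in both subspaces at once; `lam` trades off the two.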

Deep Reinforcement Learning for Combinatorial Optimization: Covering Salesman Problems Artificial Intelligence

This paper introduces a new deep learning approach to approximately solve the Covering Salesman Problem (CSP). In this approach, given the city locations of a CSP instance as input, a deep neural network model is designed to directly output the solution. It is trained with deep reinforcement learning, without supervision. Specifically, the model applies multi-head attention to capture structural patterns and uses a dynamic embedding to handle the dynamic patterns of the problem. Once trained, the model generalizes to various types of CSP tasks (different sizes and topologies) without re-training. In controlled experiments, the proposed approach shows desirable time complexity: it runs more than 20 times faster than traditional heuristic solvers with only a small optimality gap. Moreover, it significantly outperforms the current state-of-the-art deep learning approaches for combinatorial optimization in both training and inference. Compared with traditional solvers, this approach is highly desirable for the many challenging practical tasks that are large-scale and require quick decisions.
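To make the CSP itself concrete, here is a simple greedy baseline of the kind the learned model competes against (this is our illustration, not the paper's neural model): a visited city covers every city within a given radius, and the tour ends once all cities are covered.

```python
# Greedy Covering Salesman baseline: starting from city 0, repeatedly visit
# the unvisited city that covers the most still-uncovered cities (ties
# broken by distance from the current city), until everything is covered.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def covered_by(city, cities, radius):
    return {j for j, c in enumerate(cities) if dist(city, c) <= radius}

def greedy_csp(cities, radius):
    uncovered = set(range(len(cities)))
    tour, current = [0], 0
    uncovered -= covered_by(cities[0], cities, radius)
    while uncovered:
        best = max((j for j in range(len(cities)) if j not in tour),
                   key=lambda j: (len(covered_by(cities[j], cities, radius)
                                      & uncovered),
                                  -dist(cities[current], cities[j])))
        tour.append(best)
        uncovered -= covered_by(cities[best], cities, radius)
        current = best
    return tour
```

Because each city covers at least itself, the loop always terminates within `len(cities)` steps; the learned policy's advantage is producing shorter covering tours than such greedy rules at comparable speed.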