Representation & Reasoning: Overviews


A new approach to forecast service parts demand by integrating user preferences into multi-objective optimization

arXiv.org Artificial Intelligence

Service supply chain management prepares spare parts for failed products under warranty. Its goal is to reach an agreed service level at minimum cost. We convert this business problem into a preference-based multi-objective optimization problem in which two quality criteria must be optimized simultaneously: the accuracy of the demand forecast and the service level. We propose a general framework for solving preference-based multi-objective optimization problems (MOPs) with the multi-gradient descent algorithm (MGDA), which is well suited to training deep neural networks. The proposed framework treats the agreed service level as a constraint that must be met and generates a Pareto-optimal solution with the highest forecasting accuracy. The neural networks used here are two Encoder-Decoder LSTM models: one for a pre-training phase that learns distributed representations of former generations' service parts consumption data, and the other for a supervised learning phase that generates forecast quantities for current generations' service parts. Evaluated on service parts consumption data from Lenovo Group Ltd., the proposed method clearly outperforms baseline methods.
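
As a concrete illustration of the optimization machinery this abstract leans on, the sketch below runs two-objective multi-gradient descent on toy quadratic losses standing in for forecast accuracy and service level. The closed-form min-norm weighting is the standard two-task MGDA case; the loss functions and step size are illustrative assumptions, not the paper's Lenovo pipeline.

```python
# Minimal two-objective MGDA sketch on toy quadratic losses; the losses,
# learning rate, and iteration count are illustrative assumptions.
import numpy as np

def loss_accuracy(w):        # stand-in for forecast-error criterion
    return np.sum((w - np.array([1.0, 0.0])) ** 2)

def loss_service(w):         # stand-in for service-level criterion
    return np.sum((w - np.array([0.0, 1.0])) ** 2)

def grad(f, w, eps=1e-6):    # central-difference gradient, fine for a sketch
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

w = np.zeros(2)
for step in range(200):
    g1, g2 = grad(loss_accuracy, w), grad(loss_service, w)
    # Closed-form min-norm convex combination for two gradients:
    # alpha minimizes ||alpha*g1 + (1-alpha)*g2||^2, clipped to [0, 1].
    denom = np.dot(g1 - g2, g1 - g2) + 1e-12
    alpha = np.clip(np.dot(g2 - g1, g2) / denom, 0.0, 1.0)
    w -= 0.1 * (alpha * g1 + (1 - alpha) * g2)   # common descent direction

print(w, loss_accuracy(w), loss_service(w))      # settles on a Pareto point
```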


Wasserstein Reinforcement Learning

arXiv.org Machine Learning

We propose behavior-driven optimization via Wasserstein distances (WDs) to improve several classes of state-of-the-art reinforcement learning (RL) algorithms. We show that WD regularizers acting on appropriate policy embeddings efficiently incorporate behavioral characteristics into policy optimization. We demonstrate that they improve Evolution Strategy methods by encouraging more efficient exploration, can be applied in imitation learning, and speed up training of Trust Region Policy Optimization methods. Since the exact computation of WDs is expensive, we develop approximate algorithms that combine several techniques: the dual formulation of the optimal transport problem, alternating optimization, and random feature maps, to effectively replace exact WD computations in the RL tasks considered. We provide theoretical analysis of our algorithms and exhaustive empirical evaluation in a variety of RL settings.
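
The paper itself combines a dual formulation with random feature maps to approximate WDs; as a simpler, well-known stand-in, the sketch below approximates an entropic-regularized Wasserstein distance between two sets of policy embeddings with Sinkhorn iterations. The embedding dimensions and regularization strength are illustrative assumptions.

```python
# Entropic-regularized WD between two embedding clouds via Sinkhorn scaling;
# used here only as an illustrative approximation, not the paper's algorithm.
import numpy as np

def sinkhorn_wd(X, Y, reg=0.1, iters=200):
    """Approximate WD between uniform distributions on rows of X and Y."""
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # cost matrix
    K = np.exp(-C / reg)
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):                 # alternating scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]        # approximate transport plan
    return np.sum(P * C)

rng = np.random.default_rng(0)
emb_old = rng.normal(size=(64, 8))         # embeddings of the current policy
emb_new = rng.normal(0.5, 1.0, (64, 8))    # embeddings of a candidate policy
print(sinkhorn_wd(emb_old, emb_new))       # usable as a behavioral regularizer
```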


The Broad Optimality of Profile Maximum Likelihood

arXiv.org Machine Learning

We study three fundamental statistical-learning problems: distribution estimation, property estimation, and property testing. We establish the profile maximum likelihood (PML) estimator as the first unified sample-optimal approach to a wide range of learning tasks. In particular, for every alphabet size $k$ and desired accuracy $\varepsilon$: $\textbf{Distribution estimation}$ Under $\ell_1$ distance, PML yields optimal $\Theta(k/(\varepsilon^2\log k))$ sample complexity for sorted-distribution estimation, and a PML-based estimator empirically outperforms the Good-Turing estimator on the actual distribution; $\textbf{Additive property estimation}$ For a broad class of additive properties, the PML plug-in estimator uses just four times the sample size required by the best estimator to achieve roughly twice its error, with exponentially higher confidence; $\boldsymbol{\alpha}\textbf{-R\'enyi entropy estimation}$ For integer $\alpha>1$, the PML plug-in estimator has optimal $k^{1-1/\alpha}$ sample complexity; for non-integer $\alpha>3/4$, the PML plug-in estimator has sample complexity lower than the state of the art; $\textbf{Identity testing}$ In testing whether an unknown distribution equals a given distribution or is at least $\varepsilon$-far from it in $\ell_1$ distance, a PML-based tester achieves the optimal sample complexity up to logarithmic factors of $k$. With minor modifications, most of these results also hold for a near-linear-time computable variant of PML.
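
The object PML maximizes the likelihood of is the sample's profile: the multiset of symbol multiplicities, which discards symbol identities. Exact PML optimization is hard; the sketch below only illustrates the profile statistic itself.

```python
# Compute the profile of a sample: how many symbols appear i times, for each i.
from collections import Counter

def profile(sample):
    """Map a sample to its profile, forgetting symbol identities."""
    freqs = Counter(sample)                 # symbol -> multiplicity
    return Counter(freqs.values())          # multiplicity -> count of symbols

print(profile("abracadabra"))
# Counter({1: 2, 2: 2, 5: 1}): two symbols appear once (c, d),
# two appear twice (b, r), and one appears five times (a).
```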


Survey: Here's Why AI May Be the Fastest Paradigm Shift in Tech History

#artificialintelligence

RADIUS guest contributor Gary Grossman currently leads the Edelman AI Center of Excellence. As part of that role, he led development of the 2019 Edelman Artificial Intelligence Survey. Just how important is artificial intelligence (AI)? Microsoft's Chief Envisioning Officer, Dave Coplin, said recently that AI is "the most important technology that anybody on the planet is working on today." A PwC report estimates that global GDP will be 14 percent higher in 2030 as a result of AI, the equivalent of $15.7 trillion, which is more than the current output of China and India combined.


Memetic EDA-Based Approaches to Comprehensive Quality-Aware Automated Semantic Web Service Composition

arXiv.org Artificial Intelligence

Comprehensive quality-aware automated semantic web service composition is an NP-hard problem in which service composition workflows are unknown and comprehensive quality, i.e., Quality of Service (QoS) and Quality of Semantic Matchmaking (QoSM), must be optimized simultaneously. The objective is to find a solution with optimized or near-optimized overall QoS and QoSM, within polynomial time, for a given service request. In this paper, we propose novel memetic EDA-based approaches to tackle this problem. We investigate the effectiveness of several neighborhood structures for composite services by designing domain-dependent local search operators, and we propose a joint strategy that integrates the local search procedure with a modified EDA to reduce the overall computation time of our memetic approach. To better demonstrate the effectiveness and scalability of our approach, we create a more challenging, augmented version of the service composition benchmark based on WSC-08 \cite{bansal2008wsc} and WSC-09 \cite{kona2009wsc}. Experimental results on this benchmark show that one of our proposed memetic EDA-based approaches (i.e., MEEDA-LOP) significantly outperforms existing state-of-the-art algorithms.
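
To make the memetic-EDA pattern concrete, the sketch below runs a univariate-marginal EDA with a greedy local-search step on a toy binary problem (OneMax). The encoding, operators, and parameters are illustrative assumptions; they stand in for the paper's service-composition representation and domain-dependent operators.

```python
# Toy memetic EDA: sample from a univariate model, refine each sample with a
# greedy local search, then re-estimate the model from the elites.
import numpy as np

rng = np.random.default_rng(0)
n, pop, elite = 30, 60, 15
fitness = lambda x: x.sum(axis=-1)          # stand-in for QoS/QoSM quality

def local_search(x):                        # stand-in local search operator
    for i in range(len(x)):
        if x[i] == 0:
            y = x.copy()
            y[i] = 1
            if fitness(y) > fitness(x):     # accept improving one-bit flips
                x = y
    return x

p = np.full(n, 0.5)                         # univariate marginal model
for gen in range(20):
    X = (rng.random((pop, n)) < p).astype(int)       # sample candidates
    X = np.array([local_search(x) for x in X])       # memetic refinement
    best = X[np.argsort(fitness(X))[-elite:]]        # keep the elites
    p = 0.7 * p + 0.3 * best.mean(axis=0)            # smoothed model update

print(fitness(best).max())
```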


Learning to Plan Hierarchically from Curriculum

arXiv.org Artificial Intelligence

We present a framework for learning to plan hierarchically in domains with unknown dynamics. We enhance planning performance by exploiting problem structure in several ways: (i) we simplify the search over plans by leveraging knowledge of skill objectives, (ii) we generate shorter plans by enforcing aggressively hierarchical planning, and (iii) we learn transition dynamics with sparse local models for better generalisation. Our framework decomposes transition dynamics into skill effects and success conditions, which allows fast planning by reasoning on effects while conditions are learned from interactions with the world. We propose a simple method for learning new abstract skills, using successful trajectories stemming from completing the goals of a curriculum. Learned skills are then refined to leverage other abstract skills and enhance subsequent planning. We show that both conditions and abstract skills can be learned simultaneously while planning, even in stochastic domains. Our method is validated in experiments of increasing complexity, with up to 2^100 states, showing planning superior to classic non-hierarchical planners and to reinforcement learning methods. Applicability to real-world problems is demonstrated in a simulation-to-real transfer experiment on a robotic manipulator.
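
The effects-and-conditions decomposition lends itself to a compact illustration: the sketch below plans by breadth-first search over set-of-facts states, applying a skill only when its success conditions hold and reasoning forward on its effects. The skill names and facts are hypothetical stand-ins, not the paper's learned representations.

```python
# Plan over skills defined by (success conditions, effects) on fact sets.
from collections import deque

skills = {  # name: (success conditions, effects added) -- illustrative only
    "grasp": (frozenset(), frozenset({"holding"})),
    "move":  (frozenset({"holding"}), frozenset({"at_goal"})),
    "place": (frozenset({"holding", "at_goal"}), frozenset({"placed"})),
}

def plan(start, goal):
    """Breadth-first search over abstract states by reasoning on effects."""
    frontier, seen = deque([(frozenset(start), [])]), set()
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:                      # goal facts all achieved
            return path
        for name, (cond, eff) in skills.items():
            if cond <= state:                  # success conditions hold
                nxt = state | eff
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

print(plan(set(), {"placed"}))   # -> ['grasp', 'move', 'place']
```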


Declarative Learning-Based Programming as an Interface to AI Systems

arXiv.org Artificial Intelligence

Data-driven approaches are becoming more common as problem-solving techniques in many areas of research and industry. In most cases, machine learning models are the key component of these solutions, but a solution typically involves multiple such models, along with significant levels of reasoning over the models' inputs and outputs. Current technologies do not make such techniques easy to use for application experts who are not fluent in machine learning, nor for machine learning experts who aim to test ideas and models on real-world data in the context of an overall AI system. We review key efforts made by various AI communities to provide languages for high-level abstractions over the learning and reasoning techniques needed for designing complex AI systems. We classify the existing frameworks based on the types of techniques and the data and knowledge representations they use, provide a comparative study of the ways they address the challenges of programming real-world applications, and highlight some shortcomings and future directions.
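
To give a flavor of the declarative style these frameworks aim for, the sketch below declares models alongside a reasoning constraint over their joint outputs. The Classifier/Constraint names and the toy predictors are entirely hypothetical, not any surveyed framework's real API.

```python
# Hypothetical declarative interface: declare learners, then a constraint that
# reasons over their combined outputs at inference time.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Classifier:
    name: str
    predict: Callable[[str], float]          # returns a score in [0, 1]

@dataclass
class Constraint:
    check: Callable[[Dict[str, float]], bool]  # joint-output rule

spam = Classifier("spam", lambda text: float("win" in text))
important = Classifier("important", lambda text: float("boss" in text))
# Reasoning rule: a message should not be both spam and important.
exclusive = Constraint(lambda s: not (s["spam"] > 0.5 and s["important"] > 0.5))

def infer(text, models, constraints):
    """Run all declared models, then check reasoning constraints jointly."""
    scores = {m.name: m.predict(text) for m in models}
    consistent = all(c.check(scores) for c in constraints)
    return scores, consistent

print(infer("win a prize from your boss", [spam, important], [exclusive]))
```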


A Framework for Parallelizing OWL Classification in Description Logic Reasoners

arXiv.org Artificial Intelligence

In this paper we report on a black-box approach to parallelizing existing description logic (DL) reasoners for the Web Ontology Language (OWL). We focus on OWL ontology classification, an important inference service supported by every major OWL/DL reasoner. We propose a flexible parallel framework that can be applied to existing OWL reasoners in order to speed up their classification process. To test its performance, we evaluated our framework by parallelizing major OWL reasoners for concept classification. In comparison to the selected black-box reasoners, our results demonstrate that the wall-clock time of ontology classification can be improved by one order of magnitude for most real-world ontologies.
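
The black-box idea is that workers only need the reasoner's subsumption test, not its internals. The sketch below partitions concept pairs among threads, each calling an opaque subsumption check; `is_subsumed` and the tiny hierarchy are hypothetical stand-ins for a real OWL reasoner call.

```python
# Black-box parallel classification sketch: distribute subsumption tests
# across workers that treat the reasoner as an opaque oracle.
from concurrent.futures import ThreadPoolExecutor
from itertools import permutations

hierarchy = {"Dog": "Mammal", "Cat": "Mammal", "Mammal": "Animal"}

def is_subsumed(sub, sup):          # stand-in for an OWL reasoner's test
    while sub in hierarchy:
        sub = hierarchy[sub]
        if sub == sup:
            return True
    return False

concepts = ["Dog", "Cat", "Mammal", "Animal"]
pairs = list(permutations(concepts, 2))      # subsumption is asymmetric

with ThreadPoolExecutor(max_workers=4) as pool:   # parallel black-box workers
    results = list(pool.map(lambda p: (p, is_subsumed(*p)), pairs))

for (sub, sup), holds in results:
    if holds:
        print(f"{sub} ⊑ {sup}")
```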


Evaluating Ising Processing Units with Integer Programming

arXiv.org Artificial Intelligence

The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that are hardware-accelerated by these devices. In this work, we propose the idea of an Ising processing unit (IPU) as a computational abstraction for reasoning about these emerging devices. We present the challenges involved in using and benchmarking these devices, and we propose commercial mixed-integer programming solvers as a valuable tool for the validation of these disparate hardware platforms. The proposed validation methodology is demonstrated on a D-Wave 2X adiabatic quantum computer, one example of an Ising processing unit. The computational results demonstrate that the D-Wave hardware consistently produces high-quality solutions and suggest that, as IPU technology matures, it could become a valuable co-processor in hybrid-optimization algorithms.
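
The validation idea is to compare hardware samples against a provably optimal Ising ground state. The paper uses commercial mixed-integer programming solvers for that; for a toy instance, brute-force enumeration plays the same role in the sketch below, with illustrative fields and couplings.

```python
# Validate an IPU sample against the exact Ising optimum on a toy instance.
from itertools import product

h = {0: 0.5, 1: -1.0, 2: 0.2}                # local fields (illustrative)
J = {(0, 1): -1.0, (1, 2): 0.8}              # pairwise couplings (illustrative)

def energy(s):
    """Ising energy of spin assignment s, spins in {-1, +1}."""
    return (sum(h[i] * s[i] for i in h)
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

best = min(product([-1, 1], repeat=3), key=energy)
print(best, energy(best))                    # exact ground state = reference

hardware_sample = (1, 1, -1)                 # e.g., one spin readout from an IPU
gap = energy(hardware_sample) - energy(best)
print("optimality gap:", gap)                # 0 means the IPU found the optimum
```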


Multi-user Resource Control with Deep Reinforcement Learning in IoT Edge Computing

arXiv.org Machine Learning

By leveraging the concept of mobile edge computing (MEC), the massive amount of data generated by a large number of Internet of Things (IoT) devices can be offloaded to an MEC server at the edge of the wireless network for further computation-intensive processing. However, due to the resource constraints of IoT devices and the wireless network, both communication and computation resources need to be allocated and scheduled efficiently for better system performance. In this paper, we propose a joint computation offloading and multi-user scheduling algorithm for IoT edge computing systems that minimizes the long-term average weighted sum of delay and power consumption under stochastic traffic arrivals. We formulate the dynamic optimization problem as an infinite-horizon average-reward continuous-time Markov decision process (CTMDP) model. One critical challenge in solving this MDP for multi-user resource control is the curse of dimensionality: the state space of the MDP model and the computational complexity increase exponentially with the number of users or IoT devices. To overcome this challenge, we use deep reinforcement learning (RL) techniques and propose a neural network architecture to approximate the value functions for the post-decision system states. The designed algorithm for solving the CTMDP problem supports a semi-distributed, auction-based implementation in which the IoT devices submit bids to the base station (BS), which makes the resource control decisions centrally. Simulation results show that the proposed algorithm provides significant performance improvements over the baseline algorithms and also outperforms RL algorithms based on other neural network architectures.
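
The core approximation step, fitting a value function over post-decision states, can be illustrated compactly. The sketch below trains a small network with TD(0) updates on random stand-in data; the state features (e.g., queue length and channel gain per device), cost signal, and architecture are illustrative assumptions, not the paper's design.

```python
# Approximate a post-decision-state value function with a small MLP and TD(0);
# states, costs, and hyperparameters here are toy stand-ins.
import torch
import torch.nn as nn

n_devices, feat = 4, 2
value_net = nn.Sequential(                  # V(post-decision state)
    nn.Linear(n_devices * feat, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)

for step in range(200):                     # toy TD(0) loop on random data
    s = torch.rand(32, n_devices * feat)    # batch of post-decision states
    cost = s.mean(dim=1, keepdim=True)      # stand-in delay + power cost
    s_next = torch.rand(32, n_devices * feat)
    with torch.no_grad():
        target = cost + 0.95 * value_net(s_next)   # discounted TD target
    loss = nn.functional.mse_loss(value_net(s), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())
```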