Adaptive Transfer Learning of Multi-View Time Series Classification

arXiv.org Machine Learning

Time Series Classification (TSC) is an important and challenging task in data mining, especially on multivariate and multi-view time series data sets. Meanwhile, transfer learning has been widely applied in computer vision and natural language processing to improve the generalization of deep neural networks. However, few previous works have applied transfer learning frameworks to time series mining problems. In particular, measuring the similarity between source and target domains through dynamic representations, such as density estimation with importance sampling, has not previously been combined with a transfer learning framework. In this paper, we first propose a general adaptive transfer learning framework for multi-view time series data that stores inter-view importance values during knowledge transfer. Next, we represent inter-view importance through time series similarity measurements and approximate the posterior distribution in latent space for importance sampling via density estimation. We then compute the matrix norm of the sampled importance values, which controls the degree of knowledge transfer during pre-training. We evaluate our work on a range of other time series classification tasks and observe that our architecture maintains strong generalization ability. Finally, we conclude that our framework can be combined with deep learning techniques to achieve significant performance improvements.
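
As a rough illustration of the importance-sampling step described above, the sketch below estimates per-view importance weights with kernel density estimation and uses the Frobenius norm of the resulting importance matrix to set the strength of knowledge transfer. Everything here (the function names, the choice of Gaussian KDE, the Frobenius norm, the assumption that all views share the same source samples) is an illustrative assumption, not the authors' implementation.

# Hedged sketch: importance weighting between source and target views via
# kernel density estimation; the matrix norm of the sampled importance
# values controls how much source knowledge is transferred.
import numpy as np
from scipy.stats import gaussian_kde

def view_importance(source_view, target_view):
    """Estimate per-sample importance w(x) = p_target(x) / p_source(x)
    for one view, using Gaussian KDE as the density estimator.
    Views are arrays of shape (n_samples, n_features)."""
    p_src = gaussian_kde(source_view.T)   # gaussian_kde expects (dims, samples)
    p_tgt = gaussian_kde(target_view.T)
    return p_tgt(source_view.T) / np.maximum(p_src(source_view.T), 1e-12)

def transfer_strength(source_views, target_views):
    """Stack the per-view importance values into a matrix and use its
    Frobenius norm to control the degree of knowledge transfer.
    Assumes every view shares the same source samples."""
    W = np.stack([view_importance(s, t)
                  for s, t in zip(source_views, target_views)])
    return np.linalg.norm(W, ord='fro')   # larger norm -> stronger transfer

A pre-training loop could, for instance, scale a source-loss term by this norm, but how the norm enters the objective is left open here.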


Granger-causal Attentive Mixtures of Experts: Learning Important Features with Neural Networks

arXiv.org Artificial Intelligence

Knowledge of the importance of input features towards decisions made by machine-learning models is essential to increase our understanding of both the models and the underlying data. Here, we present a new approach to estimating feature importance with neural networks based on the idea of distributing the features of interest among experts in an attentive mixture of experts (AME). AMEs couple attentive gating networks with a Granger-causal objective to jointly produce accurate predictions as well as estimates of feature importance. Our experiments on an established benchmark and two real-world datasets show (i) that the feature importance estimates provided by AMEs compare favourably to those provided by state-of-the-art methods, (ii) that AMEs are significantly faster than existing methods, and (iii) that the associations discovered by AMEs are consistent with those reported by domain experts. In addition, we analyse the trade-off between predictive performance and estimation accuracy, the degree to which importance estimates of existing methods conform to predictive value, and whether a lower Granger-causal error on held-out data indicates a better feature importance estimation accuracy.
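
The following minimal PyTorch sketch illustrates the basic AME structure described above: one small expert per input feature, an attentive gate whose softmax weights double as feature-importance estimates, and an attention-weighted sum as the prediction. The Granger-causal objective is only indicated by a comment; all layer sizes and names are assumptions rather than the paper's architecture.

# Hedged sketch of an attentive mixture of experts (AME).
import torch
import torch.nn as nn

class AttentiveMixtureOfExperts(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        # one small expert per input feature
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)])
        # attentive gate that scores each expert
        self.gate = nn.Linear(n_features, n_features)

    def forward(self, x):                          # x: (batch, n_features)
        outs = torch.cat([e(x[:, i:i + 1]) for i, e in enumerate(self.experts)],
                         dim=1)                    # (batch, n_features)
        attn = torch.softmax(self.gate(x), dim=1)  # feature-importance estimates
        y_hat = (attn * outs).sum(dim=1, keepdim=True)
        # A Granger-causal auxiliary loss would additionally compare the
        # prediction error with and without each expert's contribution and
        # align that error difference with `attn`.
        return y_hat, attn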


A New Approach to Heuristic Estimations for Cost-Based Planning

AAAI Conferences

Solving relaxed problems is a commonly used technique in heuristic search to derive heuristic estimates. In heuristic planning, this is usually done by expanding a planning (reachability) graph from the current search state with the delete lists of operators removed from their definitions. Usually, this technique is used to obtain plan-length estimates. In cost-based planning, however, the goal is to find plans that minimize some criterion, which requires redefining the heuristic estimation to account for operator costs. This paper introduces a new approach to computing cost-based heuristics using planning graphs that overcomes some problems of existing heuristics, together with a common way of characterizing heuristics based on planning graphs. We explore the heuristics' behaviour in combination with two search algorithms. Results show that in some domains the new heuristics obtain good-quality plans without imposing significant overhead in running time.
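
As a concrete illustration of deriving cost estimates from a delete-relaxed problem, the sketch below propagates accumulated action costs to a fixpoint and aggregates precondition and goal costs with either sum or max. It is a generic additive/max cost heuristic over the relaxed problem, not the specific heuristics introduced in the paper, and its data structures are assumptions.

# Hedged sketch of cost propagation over a delete-relaxed planning problem.
def relaxed_cost_heuristic(state, goals, actions, aggregate=sum):
    """actions: list of (preconditions, add_effects, cost) with delete lists
    already ignored. Returns an estimated cost of reaching `goals` from `state`.
    aggregate=sum gives an additive heuristic, aggregate=max a max heuristic."""
    INF = float('inf')
    cost = {p: 0.0 for p in state}          # facts in the current state are free

    changed = True
    while changed:                          # fixpoint over the relaxed graph
        changed = False
        for pre, adds, c in actions:
            if all(p in cost for p in pre):
                pre_cost = aggregate(cost[p] for p in pre) if pre else 0.0
                for q in adds:
                    new = c + pre_cost
                    if new < cost.get(q, INF):
                        cost[q] = new       # cheaper way to achieve q found
                        changed = True

    goal_costs = [cost.get(g, INF) for g in goals]
    return aggregate(goal_costs) if goal_costs else 0.0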


Building Contextual Anchor Text Representation using Graph Regularization

AAAI Conferences

Anchor texts are useful complementary descriptions of target pages, widely used to improve search relevance. The benefits come from the additional information they introduce into document representation and from intelligent ways of estimating their relative importance. Previous work on anchor importance estimation treated each anchor text independently, without considering its context. As a result, the lack of constraints from this context fails to guarantee a stable anchor text representation. We propose an anchor graph regularization approach that incorporates constraints from this context into the anchor text weighting process, casting the task as a convex quadratic optimization problem. The constraints draw on estimates of anchor-anchor, anchor-page, and page-page similarity. Because it can build on any base estimator, our approach operates as a post-processing step that refines the estimated anchor weights, making it a plug-and-play component in search infrastructure. Comparative experiments on standard data sets (TREC 2009 and 2010) demonstrate the efficacy of our approach.
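
The refinement step can be pictured with the following sketch: starting from anchor weights w0 produced by any base estimator, it minimizes ||w - w0||^2 + lam * w^T L w over the Laplacian L of a similarity graph, a convex problem with a closed-form solution, and handles non-negativity by a simple projection rather than solving the constrained quadratic program exactly. This is an illustrative simplification of the paper's formulation, with hypothetical names.

# Hedged sketch of graph-regularized refinement of anchor weights.
import numpy as np

def refine_anchor_weights(w0, similarity, lam=1.0):
    """w0: (n,) initial anchor weights from any base estimator;
    similarity: (n, n) symmetric non-negative matrix combining the
    anchor-anchor, anchor-page, and page-page similarity estimates."""
    degree = np.diag(similarity.sum(axis=1))
    laplacian = degree - similarity                 # graph Laplacian
    n = len(w0)
    # Unconstrained minimizer of ||w - w0||^2 + lam * w^T L w:
    # solve (I + lam * L) w = w0, i.e. smooth the weights over the graph.
    w = np.linalg.solve(np.eye(n) + lam * laplacian, w0)
    # crude projection onto w >= 0 (the paper's QP enforces this exactly)
    return np.clip(w, 0.0, None)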


Estimating Node Importance in Knowledge Graphs Using Graph Neural Networks

arXiv.org Machine Learning

How can we estimate the importance of nodes in a knowledge graph (KG)? A KG is a multi-relational graph that has proven valuable for many tasks, including question answering and semantic search. In this paper, we present GENI, a method for estimating node importance in KGs, which enables several downstream applications such as item recommendation and resource allocation. While a number of approaches have been developed to address this problem for general graphs, they either do not fully utilize the information available in KGs or lack the flexibility needed to model the complex relationship between entities and their importance. To address these limitations, we explore supervised machine learning algorithms. In particular, building upon recent advances in graph neural networks (GNNs), we develop GENI, a GNN-based method designed to deal with the distinctive challenges of predicting node importance in KGs. Our method aggregates importance scores, rather than node embeddings, via a predicate-aware attention mechanism and flexible centrality adjustment. In our evaluation of GENI and existing methods on predicting node importance in real-world KGs with different characteristics, GENI achieves 5-17% higher NDCG@100 than the state of the art.
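
The sketch below illustrates the core idea of aggregating importance scores rather than embeddings: each node's score becomes an attention-weighted average of its neighbours' scores, where the attention depends on the connecting predicate, followed by a degree-based centrality adjustment. The parametrization here is a simplified assumption and not the published GENI architecture.

# Hedged sketch of score aggregation with predicate-aware attention.
import numpy as np

def aggregate_scores(scores, edges, predicate_weight, n_nodes):
    """scores: (n,) current importance scores; edges: list of
    (src, dst, predicate); predicate_weight: dict mapping each predicate to a
    (learnable) scalar used to compute attention over incoming edges."""
    weighted = np.zeros(n_nodes)
    attn_sum = np.zeros(n_nodes)
    degree = np.zeros(n_nodes)
    for src, dst, pred in edges:
        a = np.exp(predicate_weight.get(pred, 0.0))   # unnormalized attention
        weighted[dst] += a * scores[src]
        attn_sum[dst] += a
        degree[dst] += 1
    # attention-normalized average of neighbour scores; isolated nodes keep
    # their current score
    aggregated = np.where(attn_sum > 0,
                          weighted / np.maximum(attn_sum, 1e-12),
                          scores)
    # centrality adjustment: scale by the log of (in-degree + constant)
    return aggregated * np.log(degree + 2.0)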