Liu, Yue
Online NEAT for Credit Evaluation -- a Dynamic Problem with Sequential Data
Liu, Yue, Ghandar, Adam, Theodoropoulos, Georgios
In this paper, we describe the application of neuroevolution to a P2P lending problem in which a credit evaluation model is updated based on streaming data. We apply the algorithm Neuroevolution of Augmenting Topologies (NEAT), which has not been widely applied in the credit evaluation domain. In addition to comparing the methodology with other widely used machine learning techniques, we develop and evaluate several enhancements to the algorithm that make it suitable for the aspects of online learning relevant to the problem. These include handling unbalanced streaming data, high computation costs, and maintaining model similarity over time, that is, training the stochastic learning algorithm on new data while minimizing model change except where there is a clear benefit to model performance.
Risk Variance Penalization: From Distributional Robustness to Causality
Xie, Chuanlong, Chen, Fei, Liu, Yue, Li, Zhenguo
Learning under multiple environments often requires the ability of out-of-distribution generalization, so as to guarantee worst-environment performance. Some novel algorithms, e.g. Invariant Risk Minimization and Risk Extrapolation, build stable models by extracting invariant (causal) features. However, it remains unclear how these methods learn to remove the environmental features. In this paper, we focus on Risk Extrapolation (REx) and attempt to fill this gap. We first propose a framework, Quasi-Distributional Robustness, that unifies Empirical Risk Minimization (ERM), Robust Optimization (RO) and Risk Extrapolation. Then, under this framework, we show that, compared to ERM and RO, REx has a much larger robust region. Furthermore, based on our analysis, we propose a novel regularization method, Risk Variance Penalization (RVP), which is derived from REx. The proposed method is easy to implement, has a proper degree of penalization, and enjoys an interpretable tuning parameter. Finally, our experiments show that under certain conditions, a regularization strategy that encourages the equality of training risks can discover relationships which do not exist in the training data. This provides important evidence that RVP is useful for discovering causal models.
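The penalty at the heart of this line of work can be sketched in a few lines: average the per-environment training risks and add a term that grows with their spread. The snippet below is a minimal numpy illustration in the spirit of the abstract; the function name and the default value of the tuning parameter `lam` are ours, not from the paper.

```python
import numpy as np

def rvp_objective(risks, lam=1.0):
    """Mean training risk plus a penalty on the spread of per-environment
    risks. `risks` holds one empirical risk per training environment; the
    square-root-of-variance penalty encourages equality of training risks,
    as in the RVP/REx family of regularizers."""
    risks = np.asarray(risks, dtype=float)
    return risks.mean() + lam * np.sqrt(risks.var())
```

With equal risks across environments the penalty vanishes, so the objective reduces to ERM; unequal risks are penalized even when the mean risk is the same.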
Low Rank Directed Acyclic Graphs and Causal Structure Learning
Fang, Zhuangyan, Zhu, Shengyu, Zhang, Jiji, Liu, Yue, Chen, Zhitang, He, Yangbo
Despite several important advances in recent years, learning causal structures represented by directed acyclic graphs (DAGs) remains a challenging task in high dimensional settings when the graphs to be learned are not sparse. In particular, the recent formulation of structure learning as a continuous optimization problem proved to have considerable advantages over the traditional combinatorial formulation, but the performance of the resulting algorithms is still wanting when the target graph is relatively large and dense. In this paper we propose a novel approach to mitigate this problem, by exploiting a low rank assumption regarding the (weighted) adjacency matrix of a DAG causal model. We establish several useful results relating interpretable graphical conditions to the low rank assumption, and show how to adapt existing methods for causal structure learning to take advantage of this assumption. We also provide empirical evidence for the utility of our low rank algorithms, especially on graphs that are not sparse. Not only do they outperform state-of-the-art algorithms when the low rank condition is satisfied, but their performance on randomly generated scale-free graphs is also very competitive, even though the true ranks may not be as low as assumed.
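The continuous formulation referred to above scores a candidate weighted adjacency matrix with a smooth acyclicity measure, and a low-rank assumption can be imposed by factorizing that matrix. The sketch below shows the standard NOTEARS-style acyclicity function together with one simple way to parameterize a low-rank adjacency matrix; the factorization is our illustration of the idea, not the authors' exact algorithm.

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W):
    """NOTEARS-style acyclicity measure h(W) = tr(exp(W ∘ W)) - d,
    where ∘ is the elementwise product; h(W) is zero exactly when W
    is the weighted adjacency matrix of a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

def low_rank_adjacency(U, V):
    """Parameterize W = U @ V.T so that rank(W) is at most the shared
    inner dimension of U and V; optimizing over (U, V) instead of W is
    one way to exploit a low rank assumption on the adjacency matrix."""
    return U @ V.T
```

Optimizing over the factors (U, V) reduces the number of free parameters from d² to 2dr, which is where the gain on large, dense, low-rank graphs comes from.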
Stable Prediction via Leveraging Seed Variable
Kuang, Kun, Li, Bo, Cui, Peng, Liu, Yue, Tao, Jianrong, Zhuang, Yueting, Wu, Fei
In this paper, we focus on the problem of stable prediction across unknown test data, where the test distribution is agnostic and might be totally different from the training one. In such a case, previous machine learning methods might exploit subtly spurious correlations in training data, induced by non-causal variables, for prediction. Those spurious correlations are changeable across data, leading to instability of prediction. To address this problem, by assuming that the relationships between the causal variables and the response variable are invariant across data, we propose an algorithm based on conditional independence tests that separates out the causal variables using a seed variable as a prior, and adopts them for stable prediction. By further assuming independence between the causal and non-causal variables, we show, both theoretically and with empirical experiments, that our algorithm can precisely separate causal and non-causal variables for stable prediction across test data. Extensive experiments on both synthetic and real-world datasets demonstrate that our algorithm outperforms state-of-the-art methods for stable prediction.
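The separation step relies on conditional independence tests between candidate variables given the seed. As a minimal stand-in for such a test, the sketch below computes a partial correlation: the correlation left between two variables after linearly regressing out a conditioning variable. This linear-Gaussian test is our simplification for illustration, not the authors' exact test statistic.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out z; a value near
    zero suggests x is independent of y given z under linear-Gaussian
    assumptions. A simple stand-in for the conditional independence
    tests the separation algorithm relies on."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```

For two variables that are strongly correlated only through a common driver, the marginal correlation is high while the partial correlation given that driver is near zero, which is exactly the signal used to decide which variables belong with the seed.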
Causal Discovery by Kernel Intrinsic Invariance Measure
Chen, Zhitang, Zhu, Shengyu, Liu, Yue, Tse, Tim
Reasoning based on causality, instead of association, has been considered a key ingredient of real machine intelligence. However, it is a challenging task to infer causal relationships or structure among variables. In recent years, the Independent Mechanism (IM) principle was proposed, stating that the mechanism generating the cause and the one mapping the cause to the effect are independent. As a conjecture, it is argued that in the causal direction, the conditional distributions instantiated at different values of the conditioning variable have less variation than in the anti-causal direction. Existing state-of-the-art methods simply compare the variance of the RKHS mean embedding norms of these conditional distributions. In this paper, we prove that this norm-based approach sacrifices important information about the original conditional distributions. We propose a Kernel Intrinsic Invariance Measure (KIIM) to capture higher-order statistics corresponding to the shapes of the density functions. We show that our algorithm reduces to an eigendecomposition task on a kernel matrix measuring intrinsic deviance/invariance. Causal directions can then be inferred by comparing the KIIM scores of the two hypothetical directions. Experiments on synthetic and real data show the advantages of our method over existing solutions.
Seq2RDF: An end-to-end application for deriving Triples from Natural Language Text
Liu, Yue, Zhang, Tongtao, Liang, Zhicheng, Ji, Heng, McGuinness, Deborah L.
We present an end-to-end approach that takes unstructured textual input and generates structured output compliant with a given vocabulary. Inspired by recent successes in neural machine translation, we treat the triples within a given knowledge graph as an independent graph language and propose an encoder-decoder framework with an attention mechanism that leverages knowledge graph embeddings. Our model learns the mapping from natural language text to triple representation in the form of subject-predicate-object using the selected knowledge graph vocabulary. Experiments on three different data sets show that we achieve competitive F1-Measures over the baselines using our simple yet effective approach. A demo video is included.
Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion
Liu, Yue, Ge, Tao, Mathews, Kusum S., Ji, Heng, McGuinness, Deborah L.
In the medical domain, identifying and expanding abbreviations in clinical texts is a vital task for both human and machine understanding. It is challenging because many abbreviations are ambiguous, especially in intensive care medicine texts, where phrase abbreviations are frequently used. Besides the fact that there is no universal dictionary of clinical abbreviations and no universal rules for abbreviation writing, such texts are difficult to acquire, expensive to annotate, and sometimes even confusing to domain experts. This paper proposes a novel and effective approach: exploiting task-oriented resources to learn word embeddings for expanding abbreviations in clinical notes. We achieved 82.27% accuracy, close to expert human performance.
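Once embeddings are available, a common way to disambiguate an abbreviation is to pick the candidate expansion whose vector is closest to the average vector of the surrounding context words. The sketch below illustrates that selection step with toy two-dimensional vectors; the paper learns task-specific embeddings from intensive-care resources, which this simplified example does not reproduce.

```python
import numpy as np

def expand_abbreviation(context_vecs, candidates):
    """Return the candidate expansion whose embedding has the highest
    cosine similarity to the mean embedding of the context words.
    `candidates` maps expansion strings to their embedding vectors."""
    ctx = np.mean(context_vecs, axis=0)
    ctx = ctx / np.linalg.norm(ctx)
    best, best_sim = None, -np.inf
    for name, vec in candidates.items():
        sim = float(ctx @ (vec / np.linalg.norm(vec)))
        if sim > best_sim:
            best, best_sim = name, sim
    return best
```

The quality of this nearest-expansion lookup depends entirely on the embeddings, which is why training them on task-oriented clinical resources, as the paper does, matters.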