Goto

Collaborating Authors

Kelleher, John


Poisoning Knowledge Graph Embeddings via Relation Inference Patterns

arXiv.org Artificial Intelligence

We study the problem of generating data poisoning attacks against Knowledge Graph Embedding (KGE) models for the task of link prediction in knowledge graphs. To poison KGE models, we propose to exploit their inductive abilities, which are captured through relation inference patterns such as symmetry, inversion and composition in the knowledge graph. Specifically, to degrade the model's prediction confidence on target facts, we propose to improve the model's prediction confidence on a set of decoy facts. Thus, we craft adversarial additions that can improve the model's prediction confidence on decoy facts through different inference patterns. Our experiments demonstrate that the proposed poisoning attacks outperform state-of-the-art baselines on four KGE models for two publicly available datasets. We also find that the symmetry pattern based attacks generalize across all model-dataset combinations, which indicates the sensitivity of KGE models to this pattern.
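The symmetry-based attack described above can be illustrated with a toy sketch. The idea: for a target triple (s, r, o) with a symmetric relation r, adding triples of the form (o_decoy, r, s) lets a KGE model that has learned symmetry infer the decoy fact (s, r, o_decoy), which then competes with the true target during link-prediction ranking. All names below are illustrative, not the authors' actual code.

```python
# Hypothetical sketch of symmetry-pattern adversarial additions.
# A model that captures symmetry (r(x, y) => r(y, x)) will, after training
# on the added triples, assign higher confidence to the decoy facts.

def symmetry_attack_candidates(target, entities):
    """For target (s, r, o) with symmetric relation r, propose adversarial
    additions (e, r, s) for every other entity e. Via symmetry, each
    addition boosts the decoy (s, r, e), diluting the rank of (s, r, o)."""
    s, r, o = target
    return [(e, r, s) for e in entities if e not in (s, o)]

adds = symmetry_attack_candidates(("alice", "married_to", "bob"),
                                  ["alice", "bob", "carol", "dave"])
# adds == [("carol", "married_to", "alice"), ("dave", "married_to", "alice")]
```

In the full attack, candidates would be filtered by the model's own scoring function so that only the most plausible decoys are added.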


Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods

arXiv.org Artificial Intelligence

Despite the widespread use of Knowledge Graph Embeddings (KGE), little is known about the security vulnerabilities that might disrupt their intended behaviour. We study data poisoning attacks against KGE models for link prediction. These attacks craft adversarial additions or deletions at training time to cause model failure at test time. To select adversarial deletions, we propose to use the model-agnostic instance attribution methods from Interpretable Machine Learning, which identify the training instances that are most influential to a neural model's predictions on test instances. We use these influential triples as adversarial deletions. We further propose a heuristic method to replace one of the two entities in each influential triple to generate adversarial additions. Our experiments show that the proposed strategies outperform the state-of-the-art data poisoning attacks on KGE models and improve the MRR degradation due to the attacks by up to 62% over the baselines.
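One simple member of the instance-attribution family sketched above is a gradient dot-product score: a training triple is influential for a target prediction when its loss gradient aligns with the target's. The sketch below is a hypothetical illustration of selecting adversarial deletions this way, not the paper's implementation; gradient vectors are assumed precomputed.

```python
import numpy as np

def influence_scores(test_grad, train_grads):
    """Score each training instance by the dot product of its loss
    gradient with the target test triple's gradient; a larger score
    means removing that instance should hurt the prediction more."""
    return np.asarray(train_grads) @ np.asarray(test_grad)

def select_deletions(test_grad, train_grads, triples, k=2):
    """Return the k most influential training triples as adversarial deletions."""
    scores = influence_scores(test_grad, train_grads)
    top = np.argsort(scores)[::-1][:k]
    return [triples[i] for i in top]

triples = [("a", "r1", "b"), ("b", "r2", "c"), ("a", "r2", "c")]
deletions = select_deletions(test_grad=[1.0, 0.0],
                             train_grads=[[0.0, 1.0], [2.0, 0.0], [1.0, 1.0]],
                             triples=triples)
# deletions == [("b", "r2", "c"), ("a", "r2", "c")]
```

The heuristic additions in the abstract would then be formed by replacing one entity in each selected triple (e.g. ("b", "r2", "c") becomes ("b", "r2", "d")).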


Beef Cattle Instance Segmentation Using Fully Convolutional Neural Network

arXiv.org Machine Learning

We present an instance segmentation algorithm trained and applied to a CCTV recording of beef cattle during a winter finishing period. A fully convolutional network was transformed into an instance segmentation network that learns to label each instance of an animal separately. We introduce a conceptually simple framework that the network uses to output a single prediction for every animal. These results are a contribution towards behaviour analysis in winter finishing beef cattle for early detection of animal welfare-related problems.


Visual Salience and Reference Resolution in Situated Dialogues: A Corpus-based Evaluation

AAAI Conferences

Dialogues between humans and robots are necessarily situated. Exophoric references to objects in the shared visual context are very frequent in situated dialogues, for example when a human is verbally guiding a tele-operated mobile robot. We present an approach to automatically resolving exophoric referring expressions in a situated dialogue based on the visual salience of possible referents. We evaluate the effectiveness of this approach and a range of different salience metrics using data from the SCARE corpus which we have augmented with visual information. The results of our evaluation show that our computationally lightweight approach is successful, and so promising for use in human-robot dialogue systems.
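The "computationally lightweight" resolution strategy above reduces to ranking candidate referents by a visual salience score. A minimal sketch, assuming a simple weighted combination of on-screen size and centrality (the metric and weights here are illustrative, not the exact metrics evaluated on the SCARE corpus):

```python
# Hypothetical salience metric: each candidate object carries a normalized
# on-screen size and a centrality value (1.0 at image centre, 0.0 at edge).

def salience(obj, w_size=0.5, w_center=0.5):
    """Combine size and centrality into a single visual salience score."""
    return w_size * obj["size"] + w_center * obj["centrality"]

def resolve_reference(candidates):
    """Resolve an exophoric expression like 'that one' to the most
    visually salient candidate in the shared visual context."""
    return max(candidates, key=salience)

scene = [{"name": "red box",   "size": 0.8, "centrality": 0.2},
         {"name": "blue ball", "size": 0.4, "centrality": 0.9}]
# resolve_reference(scene)["name"] == "blue ball"  (0.65 > 0.50)
```

Evaluating alternatives then amounts to swapping in different salience functions and comparing resolution accuracy against the annotated corpus.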


Situating Spatial Templates for Human-Robot Interaction

AAAI Conferences

Through empirical validation and computational application, template-based models of situated spatial term meaning have proven their usefulness to human-robot dialogue, but we argue in this paper that important contextual features are being ignored, resulting in over-generalization and failure to account for actual usage in situated context. This is significant for human-robot dialogue because it constrains the manner in which we create interactive systems that can discuss their own physical actions and surroundings. To this end, in this paper we describe a study which we conducted to determine how acceptability ratings for spatial term meaning altered for oblique landmark orientations. Results demonstrated that spatial term meaning was indeed altered by interlocutor perspective in a way not predicted by current approaches to spatial term semantics.