Delany, Sarah Jane
Wider Vision: Enriching Convolutional Neural Networks via Alignment to External Knowledge Bases
Liu, Xuehao, Delany, Sarah Jane, McKeever, Susan
Deep learning models suffer from opaqueness. For Convolutional Neural Networks (CNNs), current strategies for explaining models focus on the target classes within the associated training dataset. As a result, the understanding of hidden feature map activations is limited to the discriminative knowledge gleaned during training. The aim of our work is to explain and expand CNN models by mirroring, or aligning, the CNN to an external knowledge base. This allows us to give a semantic context or label to each visual feature: we match CNN feature activations to nodes in the external knowledge base, which supports knowledge-based interpretation of the features associated with model decisions. To demonstrate our approach, we build two separate graphs and use an entity alignment method to align the feature nodes of a CNN with the nodes of a ConceptNet-based knowledge graph. We then measure the proximity of CNN graph nodes to semantically meaningful knowledge base nodes. Our results show that in the aligned embedding space, knowledge graph nodes lie close to the CNN feature nodes with similar meanings, indicating that nodes from an external knowledge base can act as explanatory semantic references for features in the model. We analyse a variety of graph-building methods in order to improve the resulting embedding space. We further demonstrate that, by using hierarchical relationships from the external knowledge base, we can locate new classes outside the CNN training set in the embedding space, based on visual feature activations. This suggests that our approach can be adapted to identify unseen classes from CNN feature activations. Aligning a CNN with an external knowledge base in this way paves the way to reasoning about, and beyond, the trained model, with future adaptations to explainable models and zero-shot learning.
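The labelling step described in this abstract can be sketched as a nearest-neighbour lookup in the shared embedding space. The sketch below is illustrative only (toy vectors, invented node and feature names, plain cosine similarity), not the paper's actual alignment method: once CNN feature nodes and knowledge-graph nodes live in one aligned space, each feature is explained by its closest knowledge-graph node.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_kg_label(feature_vec, kg_embeddings):
    """Label a CNN feature node with the closest knowledge-graph node."""
    return max(kg_embeddings, key=lambda name: cosine(feature_vec, kg_embeddings[name]))

# Toy aligned embeddings: three ConceptNet-style nodes and two
# hypothetical CNN feature nodes (all values made up for illustration).
kg = {
    "dog":   [0.9, 0.1, 0.0],
    "wheel": [0.0, 0.8, 0.2],
    "fur":   [0.7, 0.0, 0.3],
}
cnn_features = {
    "conv5_unit_12": [0.85, 0.05, 0.1],   # responds to furry animals
    "conv4_unit_3":  [0.05, 0.9, 0.15],   # responds to round objects
}

labels = {f: nearest_kg_label(v, kg) for f, v in cnn_features.items()}
```

In this toy space, `conv5_unit_12` is labelled `dog` and `conv4_unit_3` is labelled `wheel`; the paper's proximity measurement plays the role of this nearest-neighbour lookup.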
Algorithmic Bias and Regularisation in Machine Learning
Cunningham, Padraig, Delany, Sarah Jane
Often, what is termed algorithmic bias in machine learning is due to historic bias in the training data. Sometimes, however, the bias is introduced (or at least exacerbated) by the algorithm itself. The ways in which algorithms can actually accentuate bias have not received much attention, with researchers focusing directly on methods to eliminate bias, no matter the source. In this paper we report on initial research to understand the factors that contribute to bias in classification algorithms. We believe this is important because underestimation bias is inextricably tied to regularisation: measures to address overfitting can themselves accentuate bias.
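The regularisation effect the abstract points to can be shown on a toy example. The sketch below is my own construction, not the authors' experiment: on class-imbalanced data, a heavier L2 penalty shrinks the weight toward zero, so a logistic model drifts toward predicting the majority class and underestimates the minority class.

```python
import math

def train_logreg(xs, ys, lam, epochs=2000, lr=0.1):
    """One-feature logistic regression via gradient descent;
    L2 penalty (strength lam) on the weight, intercept unpenalised."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x / n
            gb += (p - y) / n
        w -= lr * (gw + lam * w)
        b -= lr * gb
    return w, b

def positive_rate(xs, w, b):
    """Fraction of examples predicted positive at the 0.5 threshold."""
    return sum(1 for x in xs if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5) / len(xs)

# Imbalanced toy data: 20% minority positives at x=2, 80% negatives at x=-1.
xs = [2.0] * 20 + [-1.0] * 80
ys = [1] * 20 + [0] * 80

w_lo, b_lo = train_logreg(xs, ys, lam=0.001)  # light regularisation
w_hi, b_hi = train_logreg(xs, ys, lam=10.0)   # heavy regularisation
```

With light regularisation the model predicts positives at the true 20% rate; with heavy regularisation the weight collapses, the intercept encodes the majority prior, and no positives are predicted at all: the minority class is underestimated.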
Sentiment Classification Using Negation as a Proxy for Negative Sentiment
Ohana, Bruno (Dublin Institute of Technology) | Tierney, Brendan (Dublin Institute of Technology) | Delany, Sarah Jane (Dublin Institute of Technology)
We explore the relationship between negated text and negative sentiment in the task of sentiment classification. We propose a novel adjustment factor, based on negation occurrences, as a proxy for negative sentiment that can be applied to lexicon-based classifiers equipped with a negation detection pre-processing step. In an experiment on a multi-domain customer reviews dataset we obtained accuracy improvements over a baseline, and we further improved our results by using out-of-domain data to calibrate the adjustment factor. Future work includes exploring refinements to negation detection and expanding the experiment to a broader spectrum of opinionated discourse beyond customer reviews.
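The mechanism described above can be sketched in a few lines. This is a minimal illustration, not the authors' system: the lexicon, negator list, and the adjustment weight `ALPHA` are all invented for the example; in the paper the factor is calibrated, including against out-of-domain data.

```python
# Toy sentiment lexicon and negator list (illustrative, not from the paper).
LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "awful": -1.5}
NEGATORS = {"not", "never", "no"}
ALPHA = 0.5  # hypothetical negation adjustment factor

def score(text):
    """Lexicon score with negation flipping, minus a penalty per negation:
    the negation count acts as a proxy for negative sentiment."""
    total, negations, flip = 0.0, 0, False
    for word in text.lower().split():
        if word in NEGATORS:
            negations += 1
            flip = True          # negate the next sentiment-bearing word
            continue
        if word in LEXICON:
            total += -LEXICON[word] if flip else LEXICON[word]
            flip = False
    return total - ALPHA * negations

def classify(text):
    return "positive" if score(text) > 0 else "negative"
```

For example, "the service was not good" flips "good" to -1.0 and pays the negation penalty, scoring -1.5 (negative), while "it was not bad" flips "bad" to +1.0 and still scores 0.5 (positive) after the adjustment.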
Report on the 21st International Conference on Case-Based Reasoning
Ontanon, Santiago (Drexel University) | Delany, Sarah Jane (Dublin Institute of Technology) | Cheetham, William E. (Capital District Physicians')
In cooperation with the Association for the Advancement of Artificial Intelligence (AAAI), the twenty-first International Conference on Case-Based Reasoning (ICCBR), the premier international meeting on research and applications in Case-Based Reasoning (CBR), was held in July 2013 in Saratoga Springs, NY. ICCBR is the annual meeting of the CBR community and the leading conference on this topic. This year ICCBR featured the Industry Day, the fifth annual Doctoral Consortium and three workshops.
The main conference track featured 16 research paper presentations, nine posters, and two invited speakers. The papers and posters reflected the state of the art of case-based reasoning, dealing both with open problems at the core of CBR (especially similarity assessment, case adaptation, and case-base maintenance) and with trending applications of CBR (especially recommender systems and computer games) and the intersections of CBR with other areas such as multiagent systems. The first invited speaker, Igor Jurisica from the Ontario Cancer Institute and the University of Toronto, spoke about how to scale up case-based reasoning for "big data" applications. The Case-Based Reasoning in Health Sciences workshop, organized by Isabelle Bichindaritz, Cindy Marling, and Stefania Montani, and the EXPPORT workshop (Experience Reuse: Provenance, Process-Orientation and Traces), organized by David Leake, Béatrice Fuchs, Juan A. Recio Garcia, and Stefania Montani, were held jointly.