
Relational Forward Backward Algorithm for Multiple Queries

AAAI Conferences

The lifted dynamic junction tree algorithm (LDJT) efficiently answers filtering and prediction queries for probabilistic relational temporal models by building and then reusing a first-order cluster representation of a knowledge base for multiple queries and time steps. Specifically, this paper contributes (i) a relational forward-backward algorithm with LDJT, (ii) smoothing for hindsight queries, and (iii) different approaches to instantiate a first-order cluster representation during a backward pass. Further, our relational forward-backward algorithm makes hindsight queries with large lags feasible. LDJT answers multiple temporal queries faster than the static lifted junction tree algorithm applied to an unrolled model, which performs smoothing during message passing.
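As background for readers unfamiliar with smoothing, the sketch below shows the classical propositional forward-backward recursion on a toy hidden Markov model with made-up numbers; LDJT's contribution is to lift this kind of computation to first-order cluster representations, which the sketch does not attempt to reproduce.

```python
# Illustrative sketch only: propositional forward-backward smoothing on a toy
# HMM with invented parameters, not the paper's lifted algorithm.
import numpy as np

T = np.array([[0.7, 0.3],        # transition model P(x_t | x_{t-1})
              [0.2, 0.8]])
O = np.array([[0.9, 0.1],        # observation model P(e_t | x_t)
              [0.3, 0.7]])
prior = np.array([0.5, 0.5])
evidence = [0, 1, 1, 0]          # observed symbols for t = 1..4

# Forward pass: alpha_t proportional to P(x_t, e_{1:t})
alphas = []
alpha = prior
for e in evidence:
    alpha = O[:, e] * (T.T @ alpha)
    alpha /= alpha.sum()
    alphas.append(alpha)

# Backward pass: beta_t proportional to P(e_{t+1:T} | x_t)
beta = np.ones(2)
smoothed = [None] * len(evidence)
for t in reversed(range(len(evidence))):
    gamma = alphas[t] * beta
    smoothed[t] = gamma / gamma.sum()   # hindsight marginal P(x_t | e_{1:T})
    beta = T @ (O[:, evidence[t]] * beta)

print(smoothed)  # smoothing answers "hindsight" queries about past time steps
```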


On Rational Monotony and Weak Rational Monotony for Inference Relations Induced by Sets of Minimal C-Representations

AAAI Conferences

Reasoning in the context of a conditional knowledge base containing rules of the form 'If A then usually B' can be defined in terms of preference relations on possible worlds. These preference relations can be modeled by ranking functions that assign a degree of disbelief to each possible world. In general, there are multiple ranking functions that accept a given knowledge base. Several nonmonotonic inference relations have been proposed using c-representations, a subset of all ranking functions. These inference relations take subsets of all c-representations based on various notions of minimality into account, and they operate in different inference modes, i.e., skeptical, weakly skeptical, or credulous. For nonmonotonic inference relations, weaker versions of monotonicity like rational monotony (RM) and weak rational monotony (WRM) have been developed. In this paper, we investigate which of the inference relations induced by sets of minimal c-representations satisfy rational monotony or weak rational monotony.
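To make the acceptance condition concrete, here is a small self-contained sketch (our own toy example, not the paper's machinery): a ranking function kappa accepts a conditional (B|A), read "if A then usually B", exactly when kappa(A and B) < kappa(A and not B), and skeptical versus credulous inference quantify over a set of such ranking functions.

```python
# Toy illustration of ranking functions (OCFs) and inference modes; the ranks
# below are invented and do not correspond to any particular c-representation.
from itertools import product

ATOMS = ["bird", "flies"]

def worlds():
    return [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=len(ATOMS))]

def kappa(rank, formula):
    """Rank of a formula = minimum rank over the worlds satisfying it."""
    ranks = [rank[tuple(w.values())] for w in worlds() if formula(w)]
    return min(ranks) if ranks else float("inf")

def accepts(rank, antecedent, consequent):
    # (B|A) is accepted iff kappa(A and B) < kappa(A and not B)
    return kappa(rank, lambda w: antecedent(w) and consequent(w)) < \
           kappa(rank, lambda w: antecedent(w) and not consequent(w))

# Two hypothetical ranking functions; keys are (bird, flies) truth values.
rank1 = {(True, True): 0, (True, False): 1, (False, True): 0, (False, False): 0}
rank2 = {(True, True): 0, (True, False): 2, (False, True): 1, (False, False): 0}

bird, flies = (lambda w: w["bird"]), (lambda w: w["flies"])
# Skeptical inference: every ranking function in the set accepts (flies|bird);
# credulous inference: at least one does.
print(all(accepts(r, bird, flies) for r in (rank1, rank2)))   # skeptical
print(any(accepts(r, bird, flies) for r in (rank1, rank2)))   # credulous
```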


Reliable Discretization of Deterministic Equations in Bayesian Networks

AAAI Conferences

We focus on the problem of modeling deterministic equations over continuous variables in discrete Bayesian networks. This is typically achieved by a discretisation of both input and output variables and a degenerate quantification of the corresponding conditional probability tables. This approach, based on classical probabilities, cannot properly model the information loss induced by the discretisation. We show that a reliable modeling of such epistemic uncertainty can instead be achieved by credal sets, i.e., convex sets of probability mass functions. This transforms the original Bayesian network into a credal network, possibly returning interval-valued inferences that are robust with respect to the information loss induced by the discretisation. Algorithmic strategies for an optimal choice of the discretisation bins are also discussed.
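The following sketch illustrates the basic idea under our own simplifying assumptions (it is not the paper's algorithm): for each input bin of a deterministic equation y = f(x), the set of output bins that f can actually reach defines an interval-valued, credal column of the conditional probability table instead of a single degenerate distribution.

```python
# Minimal sketch: discretizing y = x**2. A classical CPT would put all mass on
# one output bin per input bin; here a vacuous credal set over the reachable
# output bins preserves the information loss due to discretisation.
import numpy as np

f = lambda x: x ** 2
x_bins = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]   # input discretisation
y_bins = [(0.0, 1.0), (1.0, 4.0), (4.0, 9.0)]   # output discretisation

def reachable_output_bins(x_lo, x_hi, samples=1000):
    """Output bins that f can hit on this input bin (sampling-based check; a
    sound implementation would use interval arithmetic or monotonicity of f)."""
    ys = f(np.linspace(x_lo, x_hi, samples))
    return [j for j, (lo, hi) in enumerate(y_bins) if np.any((ys >= lo) & (ys <= hi))]

for i, (x_lo, x_hi) in enumerate(x_bins):
    reach = reachable_output_bins(x_lo, x_hi)
    # Credal CPT column: vacuous [0, 1] bounds over reachable bins, [0, 0] elsewhere.
    bounds = [((0.0, 1.0) if len(reach) > 1 else (1.0, 1.0)) if j in reach else (0.0, 0.0)
              for j in range(len(y_bins))]
    print(f"x in {x_bins[i]} -> bounds on P(y-bin | x-bin): {bounds}")
```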


Identifying the Focus of Negation Using Discourse Structure

AAAI Conferences

This paper presents experimental results showing that discourse structure is a useful element in identifying the focus of negation. We define features extracted from RST-like discourse trees. We experiment with the largest publicly available corpus and an off-the-shelf discourse parser. Results show that discourse structure is especially beneficial when predicting the focus of negations in long sentences.
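Purely as an illustration of the kind of feature one might read off an RST-like tree (the paper's actual features come from an off-the-shelf discourse parser and are not reproduced here; the names and structures below are our own), a hypothetical extraction could look like this:

```python
# Hypothetical sketch of discourse-level features for the negated segment.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EDU:                          # elementary discourse unit in an RST-like tree
    text: str
    relation: Optional[str] = None  # relation to its sibling (e.g., "Contrast")
    nuclearity: str = "Nucleus"     # "Nucleus" or "Satellite"
    depth: int = 0                  # depth of the EDU in the discourse tree

def discourse_features(negated_edu: EDU, sentence_len: int) -> dict:
    return {
        "relation_of_negated_edu": negated_edu.relation,
        "negated_edu_is_nucleus": negated_edu.nuclearity == "Nucleus",
        "negated_edu_depth": negated_edu.depth,
        "long_sentence": sentence_len > 25,   # the regime where discourse helps most
    }

edu = EDU("did not leave until noon", relation="Contrast", nuclearity="Satellite", depth=2)
print(discourse_features(edu, sentence_len=31))
```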


On the Winograd Schema: Situating Language Understanding in the Data-Information-Knowledge Continuum

AAAI Conferences

The Winograd Schema (WS) challenge, proposed as an alternative to the Turing Test, has become the new standard for evaluating progress in natural language understanding (NLU). In this paper, however, we will not be concerned with how this challenge might be addressed. Instead, our aim here is threefold: (i) we will first formally 'situate' the WS challenge in the data-information-knowledge continuum, suggesting where in that continuum a good WS resides; (ii) we will show that a WS is just a special case of a more general phenomenon in language understanding, namely the missing text phenomenon (henceforth, MTP) - in particular, we will argue that what we usually call thinking in the process of language understanding involves discovering a significant amount of 'missing text' - text that is not explicitly stated, but is often implicitly assumed as shared background knowledge; and (iii) we conclude with a brief discussion on why MTP is inconsistent with the data-driven and machine learning approach to language understanding.


Flexible Approach for Computer-Assisted Reading and Analysis of Texts

AAAI Conferences

A Computer-Assisted Reading and Analysis of Texts (CARAT) process is a complex technology that connects language, text, information, and knowledge theories with computational formalizations, statistical approaches, symbolic approaches, standard and non-standard logics, etc. This process should always remain under the control of the user, according to his subjectivity, his knowledge, and the purpose of his analysis. It therefore becomes important to design platforms that support the design of CARAT tools, their management, their adaptation to new needs, and experimentation with them. Although several platforms for mining data, including textual data, have emerged in recent years, they lack flexibility and sound formal foundations. In this paper, we propose a formal model with strong logical foundations, based on typed applicative systems.


Expanding Controllability of Hybrid Recommender Systems: From Positive to Negative Relevance

AAAI Conferences

A hybrid recommender system fuses multiple data sources, usually with static, non-adjustable weightings, to deliver recommendations. One limitation of this approach is that a fixed weighting cannot match user preferences in all situations. In this paper, we present two user-controllable hybrid recommender interfaces, which offer a set of sliders to dynamically tune the impact of different sources of relevance on the final ranking. Two user studies were performed to design and evaluate the proposed interfaces.
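A minimal sketch of the underlying fusion step, under our own assumptions (the item names, source names, and scores are invented, and this is not the papers' interface code): each slider supplies a weight for one relevance source, and allowing negative weights extends controllability from boosting a source to actively demoting it.

```python
# Slider-weighted fusion of relevance sources; negative weights push items down.
import numpy as np

items = ["a", "b", "c", "d"]
sources = {                                  # hypothetical per-source relevance scores
    "collaborative": np.array([0.9, 0.2, 0.5, 0.1]),
    "content":       np.array([0.1, 0.8, 0.4, 0.6]),
    "popularity":    np.array([0.7, 0.9, 0.2, 0.3]),
}

def rank(slider_weights):
    """slider_weights: per-source weights set by the user; may be negative."""
    score = sum(w * sources[name] for name, w in slider_weights.items())
    return [items[i] for i in np.argsort(-score)]

print(rank({"collaborative": 1.0, "content": 0.5, "popularity": 0.0}))
print(rank({"collaborative": 1.0, "content": 0.5, "popularity": -0.8}))  # demote popular items
```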


Modeling the Dynamics of User Preferences for Sequence-Aware Recommendation Using Hidden Markov Models

AAAI Conferences

In a variety of online settings involving interaction with end users, it is critical for systems to adapt to changes in user preference. User preferences for items tend to change over time due to a variety of factors, such as changes in context, the task being performed, or other short-term or long-term external factors. Recommender systems, in particular, need to be able to capture these dynamics in user preferences in order to remain tuned to the most current interests of users. In this work we present a recommendation framework which takes into account the dynamics of user preferences. We propose an approach based on Hidden Markov Models (HMMs) to identify change-points in the sequence of user interactions which reflect significant changes in preference according to the sequential behavior of all the users in the data. The proposed framework leverages the identified change-points to generate recommendations using a sequence-aware non-negative matrix factorization model. We empirically demonstrate the effectiveness of the HMM-based change detection method as compared to standard baseline methods. Additionally, we evaluate the performance of the proposed recommendation method and show that it compares favorably to state-of-the-art sequence-aware recommendation models.
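The following sketch shows the change-point idea in isolation, not the authors' implementation: fit an HMM to a user's interaction sequence and treat switches in the most likely hidden-state path as candidate preference change-points; the interactions after the last change-point would then feed the sequence-aware matrix factorization model. The data, feature choice, and use of the hmmlearn library are our own assumptions.

```python
# HMM-based change-point sketch on a synthetic interaction sequence.
import numpy as np
from hmmlearn import hmm   # assumed available; any HMM library would do

rng = np.random.default_rng(0)
# Hypothetical 1-D interaction features; the user "changes preference" halfway.
seq = np.concatenate([rng.normal(0.0, 0.3, 40), rng.normal(2.0, 0.3, 40)]).reshape(-1, 1)

model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=100, random_state=0)
model.fit(seq)
states = model.predict(seq)                        # most likely hidden-state path
change_points = np.where(np.diff(states) != 0)[0] + 1
print(change_points)   # interactions after the last change-point feed the NMF model
```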


Convolutional Adversarial Latent Factor Model for Recommender System

AAAI Conferences

The accuracy of the top-N recommendation task is challenged in systems that rely mainly on implicit user feedback. Adversarial training has achieved successful results in identifying real data distributions in various domains (e.g., image processing). Nonetheless, adversarial training applied to recommendation is still challenged, especially by the interpretation of negative implicit feedback, which causes it to converge slowly and affects its convergence stability. This is often attributed to the high sparsity of implicit feedback and the discrete values characteristic of item recommendation. To face these challenges, we propose a novel model named the convolutional adversarial latent factor model (CALF), which uses adversarial training of generative and discriminative models for implicit-feedback recommendation. We assume that users prefer observed items over generated items and then apply a pairwise product to model the user-item interactions. Additionally, the resulting latent features become the input of our convolutional neural network (CNN), which learns correlations among embedding dimensions. Finally, Rao-Blackwellized sampling is adopted to deal with the discrete values, optimizing CALF and stabilizing the training step. We conducted extensive experiments on three different benchmark datasets, where our proposed model demonstrates its effectiveness for item recommendation.
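As a rough, heavily simplified sketch of one ingredient only (the adversarial objective and Rao-Blackwellized sampling are not reproduced, and the exact architecture below is our assumption, using PyTorch): a pairwise/outer product of user and item embeddings forms a 2-D interaction map that a small CNN scores, trained here with a generic pairwise ranking loss reflecting "users prefer observed items over generated ones".

```python
# Sketch: interaction map from a pairwise (outer) product of embeddings, scored by a CNN.
import torch
import torch.nn as nn

class InteractionCNN(nn.Module):
    def __init__(self, n_users, n_items, dim=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, users, items):
        u = self.user_emb(users)                        # (B, d)
        v = self.item_emb(items)                        # (B, d)
        interaction = torch.einsum("bi,bj->bij", u, v)  # pairwise product map (B, d, d)
        return self.cnn(interaction.unsqueeze(1)).squeeze(-1)

model = InteractionCNN(n_users=100, n_items=500)
users = torch.tensor([0, 1]); pos = torch.tensor([3, 7]); neg = torch.tensor([42, 11])
# Generic pairwise (BPR-style) loss: observed items should score above sampled ones.
loss = -torch.log(torch.sigmoid(model(users, pos) - model(users, neg))).mean()
loss.backward()
```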


Managing Popularity Bias in Recommender Systems with Personalized Re-Ranking

AAAI Conferences

Many recommender systems suffer from popularity bias: popular items are recommended frequently while less popular, niche products are recommended rarely or not at all. However, recommending the ignored products in the "long tail" is critical for businesses, as these items are otherwise unlikely to be discovered. In this paper, we introduce a personalized diversification re-ranking approach to increase the representation of less popular items in recommendations while maintaining acceptable recommendation accuracy. Our approach is a post-processing step that can be applied to the output of any recommender system. We show that our approach manages popularity bias more effectively than an existing method based on regularization. We also examine both new and existing metrics to measure the coverage of long-tail items in the recommendations.
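A minimal sketch of a post-processing re-ranker in this spirit (a greedy, xQuAD-style heuristic under our own assumptions, not necessarily the authors' exact formulation): relevance from the base recommender is traded off against long-tail exposure, with the trade-off personalized by the user's own share of long-tail items.

```python
# Greedy personalized re-ranking that boosts long-tail items after the base recommender.
def rerank(candidates, relevance, is_long_tail, user_tail_ratio, n=5, lam=0.4):
    """candidates: item ids ranked by the base recommender;
    relevance: dict item -> base score; is_long_tail: dict item -> bool;
    user_tail_ratio: fraction of long-tail items in the user's profile."""
    selected = []
    while len(selected) < n and len(selected) < len(candidates):
        def gain(i):
            # Diminishing bonus for long-tail items, scaled by the user's tail affinity.
            tail_bonus = user_tail_ratio * (1.0 if is_long_tail[i] else 0.0) \
                         * (1.0 - sum(is_long_tail[j] for j in selected) / max(n, 1))
            return (1 - lam) * relevance[i] + lam * tail_bonus
        best = max((i for i in candidates if i not in selected), key=gain)
        selected.append(best)
    return selected

rel = {"i1": 0.9, "i2": 0.8, "i3": 0.75, "i4": 0.7, "i5": 0.6, "i6": 0.55}
tail = {"i1": False, "i2": False, "i3": True, "i4": False, "i5": True, "i6": True}
print(rerank(list(rel), rel, tail, user_tail_ratio=0.6, n=4))
```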