
AAAI Conferences

Weight learning is a challenging problem in Markov Logic Networks (MLNs) due to the large size of the ground propositional probabilistic graphical model that underlies the first-order representation of MLNs. Though more sophisticated weight learning methods that use lifted inference have been proposed, such methods can typically scale up only in the absence of evidence, namely in generative weight learning. In discriminative learning, where the evidence typically destroys symmetries, existing approaches lack scalability. In this paper, we propose a novel, intuitive approach for learning MLNs discriminatively by utilizing approximate symmetries. Specifically, we reduce the size of the training database by clustering approximately symmetric atoms together and selecting a representative atom from each cluster. However, each choice made from the clusters induces a different distribution, increasing the uncertainty in our learned model. To reduce this uncertainty, we learn a finite mixture model by stacking the different distributions, where the parameters of the model are learned using an EM approach. Our results on several benchmarks show that our approach is more scalable and accurate than existing state-of-the-art MLN learning methods.
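
To make the two-stage idea concrete, here is a minimal sketch, not the paper's implementation: atoms are assumed to be encoded as feature vectors so that approximate symmetry reduces to feature similarity, and the per-example component log-likelihoods fed to EM are placeholders.

```python
# Hypothetical sketch: (1) cluster approximately symmetric atoms and keep
# one representative per cluster, (2) fit mixture weights over the induced
# distributions with EM. Atom features and component log-likelihoods are
# stand-ins, not the paper's actual representation.
import numpy as np
from sklearn.cluster import KMeans

def pick_representatives(atom_features, n_clusters):
    """Cluster atoms by feature similarity; return one atom index per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(atom_features)
    reps = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # representative = member closest to the cluster centroid
        d = np.linalg.norm(atom_features[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(d)])
    return np.array(reps)

def em_mixture_weights(log_liks, n_iter=100):
    """EM for mixture weights pi, given per-example component
    log-likelihoods log_liks of shape (n_examples, n_components)."""
    n, k = log_liks.shape
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities, shifted for numerical stability
        log_post = np.log(pi) + log_liks
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: mixture weights are the average responsibilities
        pi = resp.mean(axis=0)
    return pi
```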



AAAI Conferences

Domain generalization aims to apply knowledge gained from multiple labeled source domains to unseen target domains. The main difficulty comes from dataset bias: training data and test data have different distributions, and the training set contains heterogeneous samples from different distributions. Let X denote the features and Y the class labels. Existing domain generalization methods address the dataset bias problem by learning a domain-invariant representation h(X) that has the same marginal distribution P(h(X)) across multiple source domains. The functional relationship encoded in P(Y | X) is usually assumed to be stable across domains, so that P(Y | h(X)) is also invariant.
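
As a toy illustration of checking the marginal invariance of a learned representation, here is a sketch using a (biased) RBF-kernel MMD estimate; the representation h, the bandwidth gamma, and the two-domain setup are assumptions, not part of any specific method discussed above.

```python
# Estimate how far the marginals P(h(X)) of two source domains are apart,
# using a biased RBF-kernel MMD^2. h(X) samples are assumed precomputed.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(hx_dom1, hx_dom2, gamma=1.0):
    """Biased MMD^2 between two samples of the representation h(X)."""
    return (rbf_kernel(hx_dom1, hx_dom1, gamma).mean()
            + rbf_kernel(hx_dom2, hx_dom2, gamma).mean()
            - 2 * rbf_kernel(hx_dom1, hx_dom2, gamma).mean())
```

If the estimate is near zero for every pair of source domains, the marginal distributions of the representation roughly match.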


Probability Distributions in Data Science

#artificialintelligence

Having a sound statistical background can be greatly beneficial in the daily life of a Data Scientist. Every time we start exploring a new dataset, we first need to perform an Exploratory Data Analysis (EDA) in order to get a feel for the main characteristics of certain features. If we can determine whether any pattern is present in the data distribution, we can then tailor our Machine Learning models to best fit our case study. In this way, we will be able to get a better result in less time (reducing the optimisation steps). In fact, some Machine Learning models are designed to work best under certain distribution assumptions.
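
For example, a few lines of scipy make this distribution check part of EDA: fit several candidate distributions to a feature and rank them by the Kolmogorov-Smirnov statistic. The candidate list and the synthetic sample below are illustrative, not a recommendation for any particular dataset.

```python
# Fit a few candidate distributions to a feature and compare goodness of
# fit via the KS statistic (lower = closer fit).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=1.5, size=1000)  # stand-in feature

for dist in (stats.norm, stats.gamma, stats.lognorm):
    params = dist.fit(sample)                       # maximum-likelihood fit
    ks = stats.kstest(sample, dist.name, args=params)
    print(f"{dist.name:8s} KS={ks.statistic:.3f}")
```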


Aggregating Learned Probabilistic Beliefs

arXiv.org Artificial Intelligence

We consider the task of aggregating beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation technique depends on the semantic context of this task. We propose a framework, in which we assume that nature generates samples from a 'true' distribution and different experts form their beliefs based on the subsets of the data they have a chance to observe. Naturally, the ideal aggregate distribution would be the one learned from the combined sample sets. Such a formulation leads to a natural way to measure the accuracy of the aggregation mechanism. We show that the well-known aggregation operator LinOP is ideally suited for that task. We propose a LinOP-based learning algorithm, inspired by the techniques developed for Bayesian learning, which aggregates the experts' distributions represented as Bayesian networks. Our preliminary experiments show that this algorithm performs well in practice.
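
LinOP itself is just a convex combination of the experts' distributions. A minimal sketch for the discrete case, with sample-count weights standing in for the learned ones (our assumption, echoing the "combined sample sets" ideal):

```python
# LinOP (linear opinion pool): the aggregate belief is a weighted average
# of the experts' probability distributions.
import numpy as np

def linop(expert_dists, weights=None):
    """expert_dists: (n_experts, n_outcomes), rows summing to 1."""
    p = np.asarray(expert_dists, dtype=float)
    w = (np.full(len(p), 1.0 / len(p)) if weights is None
         else np.asarray(weights, dtype=float) / np.sum(weights))
    return w @ p  # a convex combination is again a distribution

# Two experts who observed 300 and 700 samples respectively:
agg = linop([[0.7, 0.2, 0.1], [0.4, 0.4, 0.2]], weights=[300, 700])
print(agg)  # [0.49 0.34 0.17]
```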


Cross-Domain Sentiment Classification via Topic-Related TrAdaBoost

AAAI Conferences

Cross-domain sentiment classification aims to tag sentiments for a target domain using labeled data from a source domain. Due to the differences between domains, the accuracy of a trained classifier may be very low. In this paper, we propose a boosting-based learning framework named TR-TrAdaBoost for cross-domain sentiment classification. We first explore the topic distribution of documents, and then combine it with the unigram-based TrAdaBoost. The topic distribution captures the domain information of documents, which is valuable for cross-domain sentiment classification. Experimental results indicate that TR-TrAdaBoost represents documents well and boosts the performance and robustness of TrAdaBoost.
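
For reference, here is a sketch of the core TrAdaBoost reweighting (Dai et al., 2007) that such a framework builds on; how the topic distribution enters (e.g., as extra per-document features alongside unigrams) is our reading, not a detail given above.

```python
# One round of TrAdaBoost weight updates: down-weight misclassified source
# documents, up-weight misclassified target documents.
import numpy as np

def tradaboost_round(w_src, w_tgt, miss_src, miss_tgt, n_rounds):
    """miss_*: boolean arrays, True where the weak learner erred."""
    # fixed shrink factor for source instances (Dai et al., 2007)
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(len(w_src)) / n_rounds))
    eps = np.sum(w_tgt * miss_tgt) / np.sum(w_tgt)  # weighted target error
    eps = min(eps, 0.499)                           # keep beta_t well defined
    beta_t = eps / (1.0 - eps)
    w_src = w_src * beta ** miss_src                      # shrink bad source weights
    w_tgt = w_tgt * beta_t ** (-miss_tgt.astype(float))   # grow bad target weights
    return w_src, w_tgt
```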