Flexible Models for Microclustering with Application to Entity Resolution

Neural Information Processing Systems

Most generative models for clustering implicitly assume that the number of data points in each cluster grows linearly with the total number of data points. Finite mixture models, Dirichlet process mixture models, and Pitman-Yor process mixture models make this assumption, as do all other infinitely exchangeable clustering models. However, for some applications, this assumption is inappropriate. For example, when performing entity resolution, the size of each cluster should be unrelated to the size of the data set, and each cluster should contain a negligible fraction of the total number of data points. These applications require models that yield clusters whose sizes grow sublinearly with the size of the data set. We address this requirement by defining the microclustering property and introducing a new class of models that can exhibit this property. We compare models within this class to two commonly used clustering models using four entity-resolution data sets.
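The linear-growth behaviour the abstract contrasts against is easy to see in a small simulation: under the Chinese restaurant process (the partition induced by a Dirichlet process mixture), the largest cluster occupies a roughly constant fraction of the data as n grows. The following is a minimal sketch, not code from the paper; the function name and the choice alpha = 1.0 are illustrative.

```python
import random

def crp_partition(n, alpha=1.0, seed=0):
    """Simulate a Chinese restaurant process partition of n points.

    A new point joins an existing cluster of size s with probability
    s / (i + alpha) and opens a new cluster with probability
    alpha / (i + alpha).
    """
    rng = random.Random(seed)
    sizes = []  # sizes[k] = number of points in cluster k
    for i in range(n):
        r = rng.random() * (i + alpha)
        acc = 0.0
        for k, s in enumerate(sizes):
            acc += s
            if r < acc:
                sizes[k] += 1
                break
        else:
            sizes.append(1)  # open a new cluster
    return sizes

# The largest cluster holds a non-negligible fraction of the data,
# and that fraction does not shrink as n grows.
for n in (1000, 10000):
    sizes = crp_partition(n, alpha=1.0)
    print(n, max(sizes) / n)
```

A model with the microclustering property would instead drive max(sizes) / n toward zero as n increases, which is what entity-resolution applications need.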

Flexible sampling of discrete data correlations without the marginal distributions

Neural Information Processing Systems

Learning the joint dependence of discrete variables is a fundamental problem in machine learning, with many applications including prediction, clustering and dimensionality reduction. More recently, the framework of copula modeling has gained popularity due to its modular parametrization of joint distributions. Among other properties, copulas provide a recipe for combining flexible models for univariate marginal distributions with parametric families suitable for potentially high dimensional dependence structures. More radically, the extended rank likelihood approach of Hoff (2007) bypasses learning marginal models completely when such information is ancillary to the learning task at hand, as in, e.g., standard dimensionality reduction problems or copula parameter estimation. The main idea is to represent data by their observable rank statistics, ignoring any other information from the marginals. Inference is typically done in a Bayesian framework with Gaussian copulas, and it is complicated by the fact that this implies sampling within a space where the number of constraints increases quadratically with the number of data points. The result is slow mixing when using off-the-shelf Gibbs sampling. We present an efficient algorithm based on recent advances in constrained Hamiltonian Markov chain Monte Carlo that is simple to implement and does not pay a cost quadratic in the sample size.
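The rank-statistics representation at the heart of the extended rank likelihood can be illustrated with a small sketch: map each observation to a Gaussian-copula latent score through its empirical rank, so that any monotone transform of the data yields identical scores. This is a simplified plug-in version, assuming no ties, rather than the constrained sampler the paper develops.

```python
from statistics import NormalDist

def normal_scores(xs):
    """Gaussian-copula latent scores from ranks alone (no ties assumed).

    Only the ordering of xs matters: the marginal distribution is
    ignored, which is the point of the extended rank likelihood.
    """
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    # r / (n + 1) keeps the plug-in CDF values strictly inside (0, 1).
    return [NormalDist().inv_cdf(r / (n + 1)) for r in ranks]

x = [3.1, 0.2, 7.7, 1.5]
# A monotone transform changes the marginal but not the ranks,
# so the latent scores are unchanged.
print(normal_scores(x) == normal_scores([v ** 3 for v in x]))  # True
```

In the full Bayesian treatment the latent scores are not fixed point estimates: they are sampled subject to the order constraints implied by the observed ranks, and it is that growing set of constraints that motivates the constrained Hamiltonian Monte Carlo algorithm.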

Using Rough Sets as Tools for Knowledge Discovery

AAAI Conferences

We present an approach to knowledge discovery that provides an effective way to discover hidden patterns and to transform information in a database into a simplified, easily understood form. During the information generalization stage, undesired attributes are eliminated, primitive data are generalized to higher-level concepts according to concept hierarchies, and the number of tuples in the generalized information system is decreased compared with the original relation. The rough sets technique, used at the data analysis and reduction stages, provides an effective tool for analyzing attribute dependencies and identifying irrelevant attributes during the information reduction process. The rules computed from the reduced information system are usually concise, expressive and strong because they are in the most generalized form and use only the necessary attributes. The rules represent data dependencies occurring in the database.
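The attribute-dependency analysis described above rests on the indiscernibility relation: an attribute is dispensable when dropping it does not change which objects the remaining attributes can tell apart. A toy sketch in Python; the table and attribute names are invented for illustration.

```python
def partition(rows, attrs):
    """Blocks of the indiscernibility relation IND(attrs):
    row indices grouped by their values on attrs."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return sorted(blocks.values())

def is_dispensable(rows, attrs, a):
    """a is dispensable in attrs if removing it leaves IND unchanged."""
    rest = [b for b in attrs if b != a]
    return partition(rows, rest) == partition(rows, attrs)

# Hypothetical information system: 'temp' duplicates the information
# in 'size', so either one (but not both) can be dropped.
rows = [
    {"color": "red",  "size": "big",   "temp": "hot"},
    {"color": "red",  "size": "small", "temp": "cold"},
    {"color": "blue", "size": "big",   "temp": "hot"},
    {"color": "blue", "size": "small", "temp": "cold"},
]
attrs = ["color", "size", "temp"]
print(is_dispensable(rows, attrs, "temp"))   # True
print(is_dispensable(rows, attrs, "color"))  # False
```

A reduct is a minimal attribute subset with no dispensable members; the concise, most-generalized rules the abstract mentions are computed over such reducts.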

When Are Description Logic Knowledge Bases Indistinguishable?

AAAI Conferences

Deciding inseparability of description logic knowledge bases (KBs) with respect to conjunctive queries is fundamental for many KB engineering and maintenance tasks including versioning, module extraction, knowledge exchange and forgetting. We study the combined and data complexity of this inseparability problem for fragments of Horn-ALCHI, including the description logics underpinning OWL 2 QL and OWL 2 EL.

Hills can't stop this all-wheel-drive robot lawn mower


This week at MWC, Husqvarna announced its first all-wheel-drive (AWD) option, the 435X. In addition to some other unique features, this new "automower" works with Amazon's Alexa and Google Home to fit in with the rest of your smart home devices. And yes, the integration with virtual assistants means you can control the robotic landscaper with your voice. AWD lets the mower handle slopes and rough terrain better. Husqvarna says the 435X can handle an incline of up to 70 percent (a grade of roughly 35 degrees), which is quite steep.