Flexible Models for Microclustering with Application to Entity Resolution

Neural Information Processing Systems

Most generative models for clustering implicitly assume that the number of data points in each cluster grows linearly with the total number of data points. Finite mixture models, Dirichlet process mixture models, and Pitman-Yor process mixture models make this assumption, as do all other infinitely exchangeable clustering models. However, for some applications, this assumption is inappropriate. For example, when performing entity resolution, the size of each cluster should be unrelated to the size of the data set, and each cluster should contain a negligible fraction of the total number of data points. These applications require models that yield clusters whose sizes grow sublinearly with the size of the data set. We address this requirement by defining the microclustering property and introducing a new class of models that can exhibit this property. We compare models within this class to two commonly used clustering models using four entity-resolution data sets.
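The linear-growth behavior this abstract describes can be seen empirically by simulating the Chinese restaurant process, the predictive rule underlying a Dirichlet process mixture. The following sketch is illustrative only (the function name, α = 1.0, and the seed are our choices, not the paper's):

```python
import random

def crp_cluster_sizes(n, alpha=1.0, seed=0):
    """Simulate n draws from a Chinese restaurant process (the
    predictive rule of a Dirichlet process mixture) and return the
    resulting cluster sizes."""
    rng = random.Random(seed)
    sizes = []
    for i in range(n):
        # With probability alpha / (i + alpha), start a new cluster;
        # otherwise join an existing cluster proportionally to its size.
        if rng.random() < alpha / (i + alpha):
            sizes.append(1)
        else:
            r = rng.random() * i
            acc = 0.0
            for k, s in enumerate(sizes):
                acc += s
                if r < acc:
                    sizes[k] += 1
                    break
    return sizes

for n in (1_000, 10_000):
    sizes = crp_cluster_sizes(n)
    # The largest cluster keeps a non-vanishing share of the data as n
    # grows -- exactly the linear growth that microclustering rules out.
    print(n, len(sizes), max(sizes) / n)
```

In an entity-resolution setting, by contrast, each cluster corresponds to a single latent entity, so its size should stay roughly constant no matter how many records are collected.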


Constraint-free Graphical Model with Fast Learning Algorithm

arXiv.org Machine Learning

In this paper, we propose a simple, versatile model for learning the structure and parameters of a multivariate distribution from a data set. Learning a Markov network from a given data set is not a simple problem, because Markov networks rigorously encode Markov properties, and this rigor imposes complex constraints on the design of the networks. Our proposed model removes these constraints while drawing on key ideas from information geometry. The proposed parameter- and structure-learning algorithms are simple to execute because they rely solely on local computation at each node. Experiments demonstrate that our algorithms behave as intended.


Flexible sampling of discrete data correlations without the marginal distributions

Neural Information Processing Systems

Learning the joint dependence of discrete variables is a fundamental problem in machine learning, with many applications including prediction, clustering and dimensionality reduction. More recently, the framework of copula modeling has gained popularity due to its modular parametrization of joint distributions. Among other properties, copulas provide a recipe for combining flexible models for univariate marginal distributions with parametric families suitable for potentially high-dimensional dependence structures. More radically, the extended rank likelihood approach of Hoff (2007) bypasses learning marginal models completely when such information is ancillary to the learning task at hand, as in, e.g., standard dimensionality reduction problems or copula parameter estimation. The main idea is to represent data by their observable rank statistics, ignoring any other information from the marginals. Inference is typically done in a Bayesian framework with Gaussian copulas, and it is complicated by the fact that this implies sampling within a space where the number of constraints increases quadratically with the number of data points. The result is slow mixing when using off-the-shelf Gibbs sampling. We present an efficient algorithm, based on recent advances in constrained Hamiltonian Markov chain Monte Carlo, that is simple to implement and avoids the quadratic cost in sample size.
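The ordering constraints behind the extended rank likelihood can be made concrete for a single variable: each latent Gaussian value z_i must lie above every latent value attached to a smaller observation and below every latent value attached to a larger one, while ties leave values mutually unconstrained. A minimal sketch of these constraint intervals (the function name and example data are ours, not the paper's):

```python
import numpy as np

def latent_intervals(y, z):
    """Given discrete observations y and current latent Gaussian values z
    whose ordering is consistent with y, return the interval (lo_i, hi_i)
    each z_i must stay inside so that y_i < y_j implies z_i < z_j.
    Ties in y leave the corresponding latent values unconstrained
    relative to one another."""
    y, z = np.asarray(y), np.asarray(z)
    lo = np.array([z[y < yi].max() if (y < yi).any() else -np.inf for yi in y])
    hi = np.array([z[y > yi].min() if (y > yi).any() else np.inf for yi in y])
    return lo, hi

y = [0, 1, 1, 2]            # ordinal observations
z = [-1.0, 0.2, 0.5, 1.3]   # latent values consistent with the ranks of y
lo, hi = latent_intervals(y, z)
print(list(zip(lo, hi)))
```

Because every pair of observations with distinct values contributes an ordering constraint, the constraint set grows quadratically in the number of data points, which is the cost that motivates the constrained Hamiltonian Monte Carlo sampler described in the abstract.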


DLMS: An Evaluation of KL-ONE in the Automobile Industry

AAAI Conferences

Ford Motor Company's Direct Labor Management System (DLMS) is the knowledge-based subsystem of a complex, multiphase manufacturing process-planning system. Since its original deployment in 1991, DLMS has been used by hundreds of users throughout Ford's automobile and truck assembly plants in North America, and it is currently being expanded to Ford's assembly plants around the world. The knowledge that drives the manufacturing assembly process in DLMS is stored in a KL-ONE knowledge representation scheme. This paper discusses the long-term implications of using KL-ONE in a dynamic environment such as automobile assembly planning, including knowledge base validation and verification, maintenance, and adaptability to changing market conditions.


The General-Motors Variation-Reduction Adviser

AI Magazine

The General Motors Variation-Reduction Adviser is a knowledge system built on case-based reasoning principles that is currently in use in eighteen General Motors assembly centers. This article reviews the overall characteristics of the system and then focuses on the AI elements critical to supporting its deployment as a production system. A key AI enabler is ontology-guided search using domain-specific ontologies.