Mitchell, Tom M.


Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach

arXiv.org Machine Learning

We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that (i) when classifiers agree, they are more likely to be correct, and (ii) when classifiers make predictions that violate the constraints, at least one of them must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions both in estimating accuracies and in combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.
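
As a toy illustration of intuition (ii), the sketch below counts constraint violations on synthetic data to obtain a label-free lower bound on the error rate. The class count, classifier accuracies, and all names are invented for illustration; this is not the paper's probabilistic-logic formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                  # unlabeled instances
true_class = rng.integers(0, 3, size=n)     # three mutually exclusive classes

def noisy_detector(target, accuracy):
    """Binary predictions for 'instance is in class target', flipped
    with probability 1 - accuracy."""
    truth = (true_class == target).astype(int)
    flip = rng.random(n) > accuracy
    return np.where(flip, 1 - truth, truth)

# One binary classifier per class; accuracies are hidden from the estimator.
preds = np.stack([noisy_detector(c, acc)
                  for c, acc in enumerate((0.9, 0.8, 0.7))])

# Mutual exclusion: at most one class may hold, so whenever k >= 2
# classifiers fire on the same instance, at least k - 1 of them are wrong.
extra_positives = np.maximum(preds.sum(axis=0) - 1, 0)
lower_bound = extra_positives.sum() / preds.size
print(f"label-free lower bound on mean error rate: {lower_bound:.3f}")
```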


A Probabilistic Generative Grammar for Semantic Parsing

arXiv.org Machine Learning

We present a framework that couples the syntax and semantics of natural language sentences in a generative model, in order to develop a semantic parser that jointly infers the syntactic, morphological, and semantic representations of a given sentence under the guidance of background knowledge. To generate a sentence in our framework, a semantic statement is first sampled from a prior, such as from a set of beliefs in a knowledge base. Given this semantic statement, a grammar probabilistically generates the output sentence. A joint semantic-syntactic parser is derived that returns the $k$-best semantic and syntactic parses for a given sentence. The semantic prior is flexible, and can be used to incorporate background knowledge during parsing, in ways unlike previous semantic parsing approaches. For example, semantic statements corresponding to beliefs in a knowledge base can be given higher prior probability, type-correct statements can be given somewhat lower probability, and beliefs outside the knowledge base can be given lower probability. The construction of our grammar invokes a novel application of hierarchical Dirichlet processes (HDPs), which, in turn, requires a novel and efficient inference approach. We present experimental results showing, for a simple grammar, that our parser outperforms a state-of-the-art CCG semantic parser and scales to knowledge bases with millions of beliefs.
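
To make the generative story concrete, here is a heavily simplified sketch: the "grammar" is a pair of hand-written templates rather than the paper's HDP construction, and the belief prior is a three-entry table with hypothetical predicates and probabilities.

```python
import random

random.seed(1)

# Hypothetical prior over semantic statements: beliefs in the knowledge
# base get high probability, a type-correct statement outside it gets less.
statement_prior = [(("servedWith", "tea", "biscuits"), 0.6),
                   (("servedWith", "coffee", "cake"), 0.3),
                   (("servedWith", "tea", "cake"), 0.1)]  # type-correct, not in KB

# Hypothetical lexicalizations of the predicate (a stand-in for the grammar).
realizations = {"servedWith": [("%s is served with %s", 0.7),
                               ("%s comes with %s", 0.3)]}

def sample(dist):
    """Draw one item from a list of (item, probability) pairs."""
    r, acc = random.random(), 0.0
    for item, p in dist:
        acc += p
        if r < acc:
            return item
    return item                                  # guard against rounding

pred, arg1, arg2 = sample(statement_prior)       # step 1: sample a statement
template = sample(realizations[pred])            # step 2: generate the sentence
print(template % (arg1, arg2))
```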


Instructable Intelligent Personal Agent

AAAI Conferences

Unlike traditional machine learning methods, humans often learn from natural language instruction. As users become increasingly accustomed to interacting with mobile devices using speech, their interest in instructing these devices in natural language is likely to grow. We introduce our Learning by Instruction Agent (LIA), an intelligent personal agent that users can teach to perform new action sequences to achieve new commands, using solely natural language interaction. LIA uses a CCG semantic parser to ground the semantics of each command in terms of primitive executable procedures defining sensors and effectors of the agent. Given a natural language command that LIA does not understand, it prompts the user to explain how to achieve the command through a sequence of steps, also specified in natural language. A novel lexicon induction algorithm enables LIA to generalize across taught commands, e.g., having been taught how to "forward an email to Alice," LIA can correctly interpret the command "forward this email to Bob." A user study involving email tasks demonstrates that users voluntarily teach LIA new commands, and that these taught commands significantly reduce task completion time. These results demonstrate the potential of natural language instruction as a significant, under-explored paradigm for machine learning.
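
The generalization idea behind the lexicon induction can be illustrated with a deliberately simplified sketch: string templates stand in for LIA's CCG machinery, and the contact list and all names are hypothetical.

```python
import re

# Hypothetical contact list; contacts are the only entity type handled here.
contacts = {"Alice": "alice@example.com", "Bob": "bob@example.com"}

def induce_template(command, steps):
    """Lift a literal taught command into a parameterized one by replacing
    a recognized contact name with a typed slot."""
    for name in contacts:
        if name in command:
            command = command.replace(name, "<contact>")
            steps = [s.replace(name, "<contact>") for s in steps]
    return command, steps

def interpret(command, templates):
    """Match a new command against induced templates, binding the slot."""
    for template, steps in templates:
        if "<contact>" not in template:
            continue
        pattern = r"(\w+)".join(re.escape(p) for p in template.split("<contact>"))
        m = re.fullmatch(pattern, command)
        if m and m.group(1) in contacts:
            return [s.replace("<contact>", m.group(1)) for s in steps]
    return None

taught = induce_template("forward an email to Alice",
                         ["open the email", "send a copy to Alice"])
print(interpret("forward an email to Bob", [taught]))
# ['open the email', 'send a copy to Bob']
```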


Never-Ending Learning

AAAI Conferences

Whereas people learn many different types of knowledge from diverse experiences over many years, most current machine learning systems acquire just a single function or data model from just a single data set. We propose a never-ending learning paradigm for machine learning, to better reflect the more ambitious and encompassing type of learning performed by humans. As a case study, we describe the Never-Ending Language Learner (NELL), which achieves some of the desired properties of a never-ending learner, and we discuss lessons learned. NELL has been learning to read the web 24 hours/day since January 2010, and so far has acquired a knowledge base with over 80 million confidence-weighted beliefs (e.g., servedWith(tea, biscuits)). NELL has also learned millions of features and parameters that enable it to read these beliefs from the web. Additionally, it has learned to reason over these beliefs to infer new beliefs, and is able to extend its ontology by synthesizing new relational predicates. NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL.
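
As a toy illustration of confidence-weighted beliefs and inference over them, consider the sketch below. The relation names, the rule, and the way confidences are combined are all invented for illustration; NELL's actual inference machinery is considerably more sophisticated.

```python
# A knowledge base of confidence-weighted beliefs, keyed by (relation, args).
beliefs = {("servedWith", "tea", "biscuits"): 0.92,
           ("beverage", "tea"): 0.98}

# Hypothetical learned rule: servedWith(X, Y) & beverage(X) -> food(Y)
RULE_CONF = 0.8
inferred = {}
for (rel, *args), conf in beliefs.items():
    if rel == "servedWith":
        x, y = args
        b = beliefs.get(("beverage", x))
        if b is not None:
            # the new belief's confidence discounts the premises' confidences
            inferred[("food", y)] = RULE_CONF * conf * b

print(inferred)   # {('food', 'biscuits'): 0.721...}
```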


Scoup-SMT: Scalable Coupled Sparse Matrix-Tensor Factorization

arXiv.org Machine Learning

How can we correlate neural activity in the human brain as it responds to words, with behavioral data expressed as answers to questions about these same words? In short, we want to find latent variables that explain both the brain activity and the behavioral responses. We show that this is an instance of the Coupled Matrix-Tensor Factorization (CMTF) problem. We propose Scoup-SMT, a novel, fast, and parallel algorithm that solves the CMTF problem and produces a sparse latent low-rank subspace of the data. In our experiments, we find that Scoup-SMT is 50-100 times faster than a state-of-the-art algorithm for CMTF, along with a fivefold increase in sparsity. Moreover, we extend Scoup-SMT to handle missing data without degradation of performance. We apply Scoup-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human subjects) tensor and a (nouns, properties) matrix, with coupling along the nouns dimension. Scoup-SMT is able to find meaningful latent variables, as well as to predict brain activity with competitive accuracy. Finally, we demonstrate the generality of Scoup-SMT by applying it to a Facebook dataset (users, friends, wall postings); there, Scoup-SMT spots spammer-like anomalies.
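
For intuition about the CMTF problem itself, here is a minimal dense alternating-least-squares sketch in numpy: a tensor X and a matrix Y are jointly factorized through a shared first-mode factor A (in BrainQ, the nouns mode). This is not Scoup-SMT, which is parallel, sparsity-promoting, and handles missing values; all sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, M, R = 20, 15, 10, 8, 3          # toy sizes; R latent variables

# Synthetic coupled data sharing factor A along the first mode.
A0, B0, C0, D0 = (rng.standard_normal(s) for s in ((I, R), (J, R), (K, R), (M, R)))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
Y = A0 @ D0.T

def ls(G, T):
    """Least-squares solve of G @ Z ~= T, returning Z.T."""
    return np.linalg.lstsq(G, T, rcond=None)[0].T

A, B, C, D = (rng.standard_normal(s) for s in ((I, R), (J, R), (K, R), (M, R)))
for _ in range(30):
    # A sees both X and Y, because the first mode is shared.
    F = np.einsum('jr,kr->jkr', B, C).reshape(J * K, R)
    A = ls(np.vstack([F, D]), np.hstack([X.reshape(I, J * K), Y]).T)
    B = ls(np.einsum('ir,kr->ikr', A, C).reshape(I * K, R),
           X.transpose(1, 0, 2).reshape(J, I * K).T)
    C = ls(np.einsum('ir,jr->ijr', A, B).reshape(I * J, R),
           X.reshape(I * J, K))
    D = ls(A, Y)

Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative tensor fit:", np.linalg.norm(Xhat - X) / np.linalg.norm(X))
```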


Toward an Architecture for Never-Ending Language Learning

AAAI Conferences

We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74% after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.
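
The two-step daily loop can be sketched schematically as follows. Every class, pattern, and function here is a hypothetical stand-in; the implemented system couples many learners, an ontology, and consistency checks.

```python
class PatternExtractor:
    """Toy extractor: 'X such as Y' sentences yield category beliefs."""
    def extract(self, corpus):
        beliefs = []
        for sentence in corpus:
            cat, found, inst = sentence.partition(" such as ")
            if found:
                beliefs.append((cat.strip(), inst.strip().rstrip(".")))
        return beliefs

    def retrain(self, kb):
        # a real learner would induce new extraction patterns from the KB
        pass

def never_ending_learner(corpus, extractors, days):
    kb = set()
    for _ in range(days):
        # (1) read: extract candidate beliefs with the current models
        for e in extractors:
            kb.update(e.extract(corpus))      # promote candidates into the KB
        # (2) learn: improve the extractors using the grown knowledge base,
        # so that tomorrow's reading is better than today's
        for e in extractors:
            e.retrain(kb)
    return kb

print(never_ending_learner(["beverages such as tea."],
                           [PatternExtractor()], days=2))
```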


In Honor of Marvin Minsky's Contributions on his 80th Birthday

AI Magazine

Marvin Lee Minsky, a founder of the field of artificial intelligence and professor at MIT, celebrated his 80th birthday on August 9, 2007. This article seizes an opportune time to honor Marvin and his contributions and influence in artificial intelligence, science, and beyond. The article provides readers with some personal insights of Minsky from Danny Hillis, John McCarthy, Tom Mitchell, Erik Mueller, Doug Riecken, Aaron Sloman, and Patrick Henry Winston -- all members of the AI community that Minsky helped to found. The article continues with a brief resume of Minsky's research, which spans an enormous range of fields. It concludes with a short biographical account of Minsky's personal history.


Does Machine Learning Really Work?

AI Magazine

Does machine learning really work? Yes. Over the past decade, machine learning has evolved from a field of laboratory demonstrations to a field of significant commercial value. Machine-learning algorithms have now learned to detect credit card fraud by mining data on past transactions, learned to steer vehicles driving autonomously on public highways at 70 miles an hour, and learned the reading interests of many individuals to assemble personally customized electronic newspapers. A new computational theory of learning is beginning to shed light on fundamental issues, such as the trade-off among the number of training examples available, the number of hypotheses considered, and the likely accuracy of the learned hypothesis. Newer research is beginning to explore issues such as long-term learning of new representations, the integration of Bayesian inference and induction, and life-long cumulative learning. This article, based on the keynote talk presented at the Thirteenth National Conference on Artificial Intelligence, samples a number of recent accomplishments in machine learning and looks at where the field might be headed.
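
The three-way trade-off mentioned here is captured by the standard PAC bound for a consistent learner over a finite hypothesis space H (a textbook result, not restated in the article itself): with probability at least 1 - delta, any hypothesis consistent with the training data has true error at most epsilon whenever

```latex
% m: number of training examples, |H|: size of the hypothesis space,
% epsilon: error tolerance, delta: failure probability
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
```

More training examples thus buy either a richer hypothesis space or a tighter accuracy guarantee, which is exactly the trade-off the abstract describes.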