 Communications: Instructional Materials


Sequential Information Guided Sensing

arXiv.org Machine Learning

We study the value of information in sequential compressed sensing by characterizing the performance of sequential information guided sensing in practical scenarios where the information is inaccurate. In particular, we assume the signal distribution is parameterized as a Gaussian or Gaussian mixture with estimated mean and covariance matrices, and that we can measure compressively through a noisy linear projection or using one-sparse vectors, i.e., observing one entry of the signal at a time. We establish a set of performance bounds on the bias and variance of the posterior-mean signal estimator, expressed in terms of the conditional entropy (a measure of the remaining uncertainty) and the additional power required, due to the inaccurate information, to reach a desired precision. Based on this, we further study how to estimate the covariance from direct samples or via covariance sketching. Numerical examples also demonstrate the superior performance of Info-Greedy Sensing algorithms compared with their random and non-adaptive counterparts.
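As a rough illustration of the Gaussian special case described above (a sketch under assumed parameter names such as `noise_std` and `n_measurements`, not the paper's implementation), Info-Greedy Sensing can be read as: measure along the leading eigenvector of the current posterior covariance, then update the posterior with a rank-one Kalman step.

```python
import numpy as np

def info_greedy_gaussian(x, mu, Sigma, n_measurements=5, noise_std=0.1, rng=None):
    """Sequentially sense x ~ N(mu, Sigma): measure along the leading
    eigenvector of the current posterior covariance (the direction of
    largest remaining uncertainty), then update the Gaussian posterior
    with a rank-one Kalman step after each noisy scalar measurement."""
    rng = np.random.default_rng() if rng is None else rng
    mu, Sigma = mu.astype(float).copy(), Sigma.astype(float).copy()
    for _ in range(n_measurements):
        _, eigvecs = np.linalg.eigh(Sigma)
        a = eigvecs[:, -1]                              # most informative direction
        y = a @ x + noise_std * rng.standard_normal()   # noisy projection y = a^T x + w
        s = a @ Sigma @ a + noise_std ** 2              # innovation variance
        k = Sigma @ a / s                               # Kalman gain
        mu = mu + k * (y - a @ mu)                      # posterior mean update
        Sigma = Sigma - np.outer(k, a @ Sigma)          # posterior covariance update
    return mu, Sigma
```

With inaccurate prior information, the same loop runs on the estimated mean and covariance; the bounds in the paper quantify how much extra measurement power this costs.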


Reports on the 2015 AAAI Workshop Program

AI Magazine

AAAI's 2015 Workshop Program was held Sunday and Monday, January 25–26, 2015, at the Hyatt Regency Austin Hotel in Austin, Texas, USA. The AAAI-15 workshop program included 15 workshops covering a wide range of topics in artificial intelligence. Most workshops were held on a single day. The titles of the workshops included AI and Ethics; AI for Cities; AI for Transportation: Advice, Interactivity and Actor Modeling; Algorithm Configuration; Artificial Intelligence Applied to Assistive Technologies and Smart Environments; Beyond the Turing Test; Computational Sustainability; Computer Poker and Imperfect Information; Incentive and Trust in E-Communities; Multiagent Interaction without Prior Coordination; Planning, Search, and Optimization; Scholarly Big Data: AI Perspectives, Challenges, and Ideas; Trajectory-Based Behaviour Analytics; World Wide Web and Public Health Intelligence; Knowledge, Skill, and Behavior Transfer in Autonomous Robots; and Learning for General Competency in Video Games.


Becoming the Expert - Interactive Multi-Class Machine Teaching

arXiv.org Machine Learning

Compared to machines, humans are extremely good at classifying images into categories, especially when they possess prior knowledge of the categories at hand. If this prior information is not available, supervision in the form of teaching images is required. To learn categories more quickly, people should see important and representative images first, followed by less important images later, or not at all. However, image importance is individual-specific, i.e., a teaching image is important to a student if it changes their overall ability to discriminate between classes. Further, students keep learning, so while image importance depends on their current knowledge, it also varies with time. In this work we propose an Interactive Machine Teaching algorithm that enables a computer to teach challenging visual concepts to a human. Our adaptive algorithm chooses, online, which labeled images from a teaching set should be shown to the student as they learn. We show that a teaching strategy that probabilistically models the student's ability and progress, based on their correct and incorrect answers, produces better 'experts'. We present results using real human participants across several varied and challenging real-world datasets.
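A toy sketch of the adaptive idea (an illustrative assumption, not the authors' algorithm): maintain a simple probabilistic estimate of the student's per-class accuracy from their answers and preferentially show images from the class the student currently discriminates worst.

```python
import random
from collections import defaultdict

class ToyTeacher:
    """Toy adaptive teacher: tracks a Beta posterior over the student's
    accuracy for each class and shows images from the class the student
    currently understands least (lowest posterior mean accuracy)."""

    def __init__(self, teaching_set):
        # teaching_set: list of (image, label) pairs
        self.pool = defaultdict(list)
        for img, label in teaching_set:
            self.pool[label].append(img)
        # Beta(1, 1) prior on per-class accuracy
        self.successes = defaultdict(lambda: 1.0)
        self.failures = defaultdict(lambda: 1.0)

    def next_example(self):
        # pick the class with the lowest estimated student accuracy
        worst = min(self.pool, key=lambda c: self.successes[c] /
                    (self.successes[c] + self.failures[c]))
        return random.choice(self.pool[worst]), worst

    def record_answer(self, label, correct):
        # update the posterior for that class with the student's answer
        if correct:
            self.successes[label] += 1.0
        else:
            self.failures[label] += 1.0
```

The paper's model is richer (it tracks discrimination between classes and how it evolves over time), but the loop structure of select, query, update is the same.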


Detecting Change Points in the Large-Scale Structure of Evolving Networks

AAAI Conferences

Interactions among people or objects are often dynamic in nature and can be represented as a sequence of networks, each providing a snapshot of the interactions over a brief period of time. An important task in analyzing such evolving networks is change-point detection, in which we both identify the times at which the large-scale pattern of interactions changes fundamentally and quantify how large and what kind of change occurred. Here, we formalize for the first time the network change-point detection problem within an online probabilistic learning framework and introduce a method that can reliably solve it. This method combines a generalized hierarchical random graph model with a Bayesian hypothesis test to quantitatively determine if, when, and precisely how a change point has occurred. We analyze the detectability of our method using synthetic data with known change points of different types and magnitudes, and show that this method is more accurate than several previously used alternatives. Applied to two high-resolution evolving social networks, this method identifies a sequence of change points that align with known external "shocks" to these networks.
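A simplified sketch of the sliding-window hypothesis-test idea (using a plain Erdős–Rényi edge-density model in place of the paper's generalized hierarchical random graph): score each window of network snapshots by the likelihood gain from splitting it into "before" and "after" segments with separate densities.

```python
import math

def bernoulli_loglik(edges, possible):
    """Log-likelihood of observing `edges` out of `possible` dyads
    under the maximum-likelihood edge probability."""
    if edges == 0 or edges == possible:
        return 0.0
    p = edges / possible
    return edges * math.log(p) + (possible - edges) * math.log(1 - p)

def window_change_score(edge_counts, possible_dyads):
    """Generalized likelihood-ratio score for a change point inside a
    window of per-snapshot edge counts: the maximum, over split points,
    of the gain from modelling the two sides with separate densities."""
    w = len(edge_counts)
    total = bernoulli_loglik(sum(edge_counts), w * possible_dyads)
    best = 0.0
    for t in range(1, w):
        left = bernoulli_loglik(sum(edge_counts[:t]), t * possible_dyads)
        right = bernoulli_loglik(sum(edge_counts[t:]), (w - t) * possible_dyads)
        best = max(best, left + right - total)
    return best  # large values suggest a change point inside the window
```

The published method replaces the single edge density with a hierarchical random graph fit to each segment and turns the score into a Bayesian test, which is what lets it also characterize what kind of change occurred.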


Mechanisation of Thought Processes

AI Classics

Biology seems to be a science in its own right, or set of sciences having common aims, and so it should have its own language and explanatory concepts; yet when any specifically biological concept is suggested and used as an explanatory concept it seems to be unsatisfactory and even mystical. There are many biological concepts of this kind: Purpose, Drive, élan vital, Entelechy, Gestalten. Physicists and engineers seem, on the other hand, to have clearly defined concepts having great power within biology.



Overview and Bibliography of Distributed Databases (Stanford KSL Report 77-27)

AI Classics

Because of the recent technological advances in computer networks and communications, and because of the cost reduction of computer hardware, there has been a great interest in distributed data bases, including some attempts at actual implementations. In this paper, we will first define what we mean by a distributed data base. Then we will give some of the reasons why people are so interested in this new field. After classifying the different types of distributed data bases, we will describe the current areas of research. Finally, we will give an annotated bibliography that lists the most important papers in this area.


STEP: A Scalable Testing and Evaluation Platform

AAAI Conferences

The emergence of online crowdsourcing sites, online work platforms, and even Massive Open Online Courses (MOOCs) has created an increasing need for reliably evaluating the skills of the participating users in a scalable way. Many platforms already allow users to take online tests and verify their skills, but the existing approaches face many problems. First of all, cheating is very common in online testing without supervision, as the test questions often "leak" and become easily available online together with the answers. Second, technical skills, such as programming, require the tests to be frequently updated in order to reflect the current state of the art. Third, there is very limited evaluation of the tests themselves and of how effectively they measure the skill that the users are tested for. In this paper, we present a Scalable Testing and Evaluation Platform (STEP) that allows continuous generation and evaluation of test questions. STEP leverages content already available on question answering sites such as StackOverflow and re-purposes these questions to generate tests. The system utilizes a crowdsourcing component for the editing of the questions, while it uses automated techniques for identifying promising QA threads that can be successfully re-purposed for testing. This continuous question generation decreases the impact of cheating and also creates questions that are closer to the real problems that the skill holder is expected to solve in real life. STEP also leverages Item Response Theory to evaluate the quality of the questions, together with external signals about the quality of the workers. These identify the questions with the strongest predictive ability in distinguishing workers who have the potential to succeed in online job marketplaces; in contrast, existing approaches use only internal consistency metrics to evaluate the questions. Finally, our system employs an automatic "leakage detector" that queries the Internet to identify leaked versions of our questions. We then mark these questions as "practice only," effectively removing them from the pool of questions used for evaluation. Our experimental evaluation shows that our system generates questions of comparable or higher quality than existing tests, at a cost of approximately 3-5 dollars per question, which is lower than the cost of licensing questions from existing test banks.
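The abstract mentions Item Response Theory for rating question quality. The sketch below (an illustrative assumption, not the STEP implementation) fits a two-parameter logistic (2PL) item response curve for a single question; the resulting discrimination parameter indicates how well the question separates strong from weak workers.

```python
import numpy as np

def irt_2pl_prob(ability, difficulty, discrimination):
    """Two-parameter logistic IRT model: probability that a worker with
    the given ability answers the item correctly."""
    return 1.0 / (1.0 + np.exp(-discrimination * (ability - difficulty)))

def fit_item(abilities, responses, lr=0.05, steps=2000):
    """Fit one question's difficulty and discrimination by gradient ascent
    on the Bernoulli log-likelihood, holding worker abilities fixed.
    abilities: array of worker ability scores; responses: 0/1 correctness."""
    difficulty, discrimination = 0.0, 1.0
    for _ in range(steps):
        p = irt_2pl_prob(abilities, difficulty, discrimination)
        err = responses - p                                    # residuals
        difficulty -= lr * np.mean(err * discrimination)       # ascent step in difficulty
        discrimination += lr * np.mean(err * (abilities - difficulty))  # ascent step in discrimination
    return difficulty, discrimination
```

Questions with near-zero (or negative) discrimination carry little information about worker skill, which is the kind of item such an analysis would flag for removal or editing.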


Preface

AAAI Conferences

Welcome to the Second AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2014), held November 2-4, 2014, in Pittsburgh, Pennsylvania. This conference is an opportunity to build on the success of the First AAAI Conference on Human Computation and Crowdsourcing, and to promote the best scholarship in this vibrant, fast-emerging, multidisciplinary area. The conference also comes on the heels of four HCOMP workshops, including two workshops hosted at the annual AAAI conference. The HCOMP conference is designed to be a venue for exchanging ideas and developments on principles, experiments, and implementations of systems that rely on programmatic access to human intellect to perform some aspect of computation, or where human perception, knowledge, reasoning, or coordinated activity contributes to the operation of larger systems and applications. Topics relevant to the discipline of human computation and crowdsourcing include human-computer interaction (HCI), computer-supported collaborative work (CSCW), cognitive psychology, organizational behavior, economics, information retrieval, databases, computer systems and programming languages, and optimization.


Statistical Estimation: From Denoising to Sparse Regression and Hidden Cliques

arXiv.org Machine Learning

These notes review six lectures given by Prof. Andrea Montanari on the topic of statistical estimation for linear models. The first two lectures cover the principles of signal recovery from linear measurements in terms of minimax risk. Subsequent lectures demonstrate the application of these principles to several practical problems in science and engineering. Specifically, these topics include denoising of error-laden signals, recovery of compressively sensed signals, reconstruction of low-rank matrices, and also the discovery of hidden cliques within large networks.
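As a concrete instance of the denoising problem the first lectures cover, here is a minimal sketch (illustrative, not taken from the notes) of soft-thresholding a sparse signal observed in Gaussian noise, using the standard universal threshold.

```python
import numpy as np

def soft_threshold(y, tau):
    """Soft-thresholding denoiser: shrink each noisy observation toward
    zero by tau, the classic estimator for sparse signals in Gaussian noise."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

# Example: recover a sparse vector from noisy entrywise observations
rng = np.random.default_rng(0)
n, sigma = 1000, 0.5
x = np.zeros(n)
x[rng.choice(n, size=20, replace=False)] = 5.0              # sparse signal
y = x + sigma * rng.standard_normal(n)                      # noisy observations
x_hat = soft_threshold(y, sigma * np.sqrt(2 * np.log(n)))   # universal threshold
```

The minimax analysis in the lectures explains why this simple nonlinearity is near-optimal over sparse signal classes, and the same shrinkage reappears in the sparse regression and low-rank matrix problems covered later.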