Social Recommendation Using Low-Rank Semidefinite Program

AAAI Conferences

The most critical challenge for a recommendation system is to achieve high prediction quality on the large-scale, sparse data contributed by its users. In this paper, we present a novel approach to the social recommendation problem that takes advantage of graph Laplacian regularization to capture the underlying social relationships among users. Unlike previous approaches, which rely on conventional gradient descent optimization, we formulate the graph-Laplacian-regularized social recommendation problem as a low-rank semidefinite program, which can be solved efficiently by a quasi-Newton algorithm. We conducted an empirical evaluation on a large-scale, highly sparse dataset; the promising experimental results show that our method is both effective and efficient for the social recommendation task.
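To make the formulation concrete, here is a minimal sketch of the idea: a low-rank factorization of the user-item rating matrix with a graph Laplacian penalty tying socially connected users' latent factors together, optimized with a quasi-Newton (L-BFGS) solver. This is an illustration, not the paper's exact low-rank semidefinite program; the squared loss, the `alpha`/`lam` hyperparameters, and all function names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.sparse import csgraph

def fit(R, mask, W, k=2, alpha=0.1, lam=0.01, seed=0):
    """Low-rank fit of rating matrix R, with a graph Laplacian penalty on the
    user factors U, solved by the quasi-Newton L-BFGS method."""
    n, m = R.shape
    L = csgraph.laplacian(W)                  # L = D - W from the social graph
    rng = np.random.default_rng(seed)
    x0 = 0.1 * rng.standard_normal(k * (n + m))

    def unpack(x):
        return x[:n*k].reshape(n, k), x[n*k:].reshape(m, k)

    def obj(x):
        U, V = unpack(x)
        E = mask * (U @ V.T - R)              # error on observed entries only
        f = (0.5 * np.sum(E**2)
             + 0.5 * alpha * np.trace(U.T @ L @ U)   # friends' factors agree
             + 0.5 * lam * (np.sum(U**2) + np.sum(V**2)))
        gU = E @ V + alpha * (L @ U) + lam * U
        gV = E.T @ U + lam * V
        return f, np.concatenate([gU.ravel(), gV.ravel()])

    res = minimize(obj, x0, jac=True, method="L-BFGS-B")
    U, V = unpack(res.x)
    return U @ V.T                            # predicted rating matrix
```

Here `W` is the users' social adjacency matrix, so the trace term penalizes socially connected users whose latent factors disagree.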


Large Scale Spectral Clustering with Landmark-Based Representation

AAAI Conferences

Spectral clustering is one of the most popular clustering approaches. Despite its good performance, its applicability to large-scale problems is limited by its high computational complexity. Recently, many approaches have been proposed to accelerate spectral clustering. Unfortunately, these methods usually sacrifice a great deal of information from the original data, resulting in degraded performance. In this paper, we propose a novel approach, called Landmark-based Spectral Clustering (LSC), for large-scale clustering problems. Specifically, we select $p\ (\ll n)$ representative data points as landmarks and represent the original data points as linear combinations of these landmarks. The spectral embedding of the data can then be efficiently computed with the landmark-based representation. The proposed algorithm scales linearly with the problem size. Extensive experiments show the effectiveness and efficiency of our approach compared to state-of-the-art methods.
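A compact sketch of the pipeline the abstract describes, under assumptions about details it does not state (Gaussian weights, k-means landmark selection, degree scaling): represent each point by its $r$ nearest landmarks, then obtain the spectral embedding from an SVD of the small representation matrix, so the cost stays linear in $n$.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def lsc(X, k, p=6, r=3, sigma=1.0, seed=0):
    """Landmark-based Spectral Clustering sketch: code each point over its
    r nearest landmarks, embed via SVD of the n x p representation matrix."""
    n = X.shape[0]
    landmarks, _ = kmeans2(X, p, seed=seed, minit="++")  # landmark selection
    d2 = ((X[:, None, :] - landmarks[None, :, :])**2).sum(-1)  # n x p
    Z = np.zeros((n, p))
    for i in range(n):
        idx = np.argsort(d2[i])[:r]               # r nearest landmarks only
        w = np.exp(-d2[i, idx] / (2 * sigma**2))  # Gaussian weights
        Z[i, idx] = w / w.sum()
    D = Z.sum(axis=0)                             # landmark "degrees"
    Zh = Z / np.sqrt(np.maximum(D, 1e-12))        # degree-scaled columns
    U, s, Vt = np.linalg.svd(Zh, full_matrices=False)
    emb = U[:, :k]                                # spectral embedding of X
    _, labels = kmeans2(emb, k, seed=seed, minit="++")
    return labels
```

Because the SVD is taken of an $n \times p$ matrix with $p \ll n$, no $n \times n$ affinity matrix is ever formed.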


Termination and Correctness Analysis of Cyclic Control

AAAI Conferences

The utility of including cyclic flows of control in plans has long been recognized by the planning community. Loops in a plan help increase both its applicability and the compactness of its representation. However, progress in finding such plans has been limited, largely due to a lack of methods for reasoning about the correctness and safety properties of loops of actions. We present an overview of recent results for determining the class of problems that a plan with loops can solve. These methods can be used to direct the construction of a rich new form of generalized plans that solve a desired class of problems.


Modeling and Monitoring Crop Disease in Developing Countries

AAAI Conferences

Information about the spread of crop disease is vital in developing countries, and as a result the governments of such countries devote scarce resources to gathering such data. Unfortunately, current surveys tend to be slow and expensive, and hence also tend to gather insufficient quantities of data. In this work we describe three general methods for improving the use of survey resources by performing data collection with mobile devices and by directing survey progress through the application of AI techniques. First, we describe a spatial disease density model based on Gaussian process ordinal regression, which offers a better representation of the disease level distribution, as compared to the statistical approaches typically applied. Second, we show how this model can be used to dynamically route survey teams to obtain the most valuable survey possible given a fixed budget. Third, we demonstrate that the diagnosis of plant disease can be automated using images taken by a camera phone, enabling data collection by survey workers with only basic training. We have applied our methods to the specific challenge of viral cassava disease monitoring in Uganda, for which we have implemented a real-time mobile survey system that will soon see practical use.
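The survey-routing idea can be sketched as follows. The paper uses Gaussian process ordinal regression; as a hedged stand-in, this sketch uses plain GP regression with an RBF kernel and routes the survey to the candidate site where the posterior variance of the disease map is largest. All names, kernel settings, and the greedy variance rule are illustrative assumptions.

```python
import numpy as np

def gp_posterior(X, y, Xs, ell=1.0, sf=1.0, noise=0.1):
    """GP regression posterior mean/variance with an RBF kernel (a plain-
    regression stand-in for the paper's GP ordinal regression)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)
    K = k(X, X) + noise**2 * np.eye(len(X))   # surveyed-site covariance
    Ks = k(Xs, X)                             # candidate-vs-surveyed
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y                        # predicted disease level
    var = sf**2 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)  # uncertainty
    return mu, var

def next_site(X, y, candidates):
    """Greedy routing rule: survey the candidate with the largest posterior
    variance, i.e. where the disease map is most uncertain."""
    _, var = gp_posterior(X, y, candidates)
    return int(np.argmax(var))
```

Repeating this rule, refitting after each new measurement, yields a simple budget-constrained survey route.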


Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function

arXiv.org Machine Learning

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth convex function and a simple nonsmooth block-separable convex function, and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\tfrac{n}{\epsilon} \log \tfrac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions the method converges linearly. This extends recent results of Nesterov [Efficiency of coordinate descent methods on huge-scale optimization problems, CORE Discussion Paper #2010/2], which cover the smooth case, to composite minimization, while at the same time improving the complexity by a factor of 4 and removing $\epsilon$ from the logarithmic term. More importantly, in contrast with the aforementioned work, in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving true iteration complexity bounds. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Finally, we demonstrate numerically that the algorithm can solve huge-scale $\ell_1$-regularized least squares and support vector machine problems with a billion variables.
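A minimal instance of the method for the $\ell_1$-regularized least squares problem mentioned at the end (blocks of size one, uniform sampling): each iteration picks a random coordinate and applies a proximal step scaled by that coordinate's own Lipschitz constant. Function names and parameters are illustrative.

```python
import numpy as np

def rbcd_lasso(A, b, lam=0.1, iters=3000, seed=0):
    """Randomized coordinate descent for the composite problem
    min_x 0.5*||Ax - b||^2 + lam*||x||_1 (smooth + separable nonsmooth)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Li = (A**2).sum(axis=0)            # per-coordinate Lipschitz constants
    x = np.zeros(n)
    r = A @ x - b                      # residual, maintained incrementally
    for _ in range(iters):
        i = rng.integers(n)            # uniformly random coordinate
        g = A[:, i] @ r                # partial gradient of the smooth part
        xi = x[i] - g / Li[i]          # gradient step on coordinate i
        xnew = np.sign(xi) * max(abs(xi) - lam / Li[i], 0.0)  # l1 prox
        r += (xnew - x[i]) * A[:, i]   # cheap residual update
        x[i] = xnew
    return x
```

Only one column of $A$ is touched per iteration, which is what makes the method viable at the billion-variable scale the abstract reports.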


Scalable Event-Based Clustering of Social Media Via Record Linkage Techniques

AAAI Conferences

We tackle the problem of grouping content available in social media applications such as Flickr, YouTube, Panoramio, etc. into clusters of documents describing the same event. This task has previously been referred to as event identification. We present a new formalization of the event identification task as a record linkage problem and show that this formulation leads to a principled and highly efficient solution. We present results on two datasets derived from Flickr (last.fm and upcoming), comparing the results in terms of Normalized Mutual Information and F-Measure against several baselines, and show that the record linkage approach outperforms all baselines as well as a state-of-the-art system. We demonstrate that our approach can scale to large amounts of data, reducing the processing time considerably compared to a state-of-the-art approach. The scalability is achieved by applying an appropriate blocking strategy and relying on a single-linkage clustering algorithm, which avoids the exhaustive computation of pairwise similarities.
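The record-linkage view can be sketched in a few lines: a cheap blocking key (here the capture day, an assumed attribute) limits which pairs are ever compared, and merging every above-threshold pair with union-find yields exactly the single-linkage clusters, i.e. the connected components of the thresholded similarity graph. The Jaccard-over-tags similarity and the threshold are illustrative choices.

```python
from itertools import combinations

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_events(docs, threshold=0.5):
    """Record-linkage-style event clustering sketch: block documents by day,
    then single-link within each block via union-find, so clusters are the
    connected components of the above-threshold similarity graph."""
    parent = list(range(len(docs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    blocks = {}
    for idx, d in enumerate(docs):
        blocks.setdefault(d["day"], []).append(idx)   # blocking key
    for members in blocks.values():
        for i, j in combinations(members, 2):         # pairs inside block only
            if jaccard(docs[i]["tags"], docs[j]["tags"]) >= threshold:
                union(i, j)
    return [find(i) for i in range(len(docs))]
```

Because pairwise similarities are computed only inside blocks, the quadratic pair enumeration over the whole corpus is avoided.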


Introduction to the Articles on Innovative Applications of Artificial Intelligence

AI Magazine

We are proud to continue this tradition with the presentation of five articles from the Twenty-Second IAAI Conference, which was held in Atlanta, Georgia, from July 11-14, 2010. We were especially honored to have Jay M. (Marty) Tenenbaum accept the Robert S. Engelmore Memorial Award for his exceptional contributions to AI in computer vision and manufacturing, as well as his visionary role in the birth of electronic commerce. This issue of AI Magazine includes an article based on his lecture, "Cancer: A Computational Disease That AI Can Cure." In that article, Jay Tenenbaum and Jeff Shrager provide a personal view of their work on the development of an AI-based system that addresses the challenge of helping to find a cure for cancer. As a cancer survivor himself, Tenenbaum has unique insight into the shortcomings of current approaches to treating this disease.


An Application of Transfer to American Football: From Observation of Raw Video to Control in a Simulated Environment

AI Magazine

Automatic transfer of learned knowledge from one task or domain to another offers great potential to simplify and expedite the construction and deployment of intelligent systems. In practice, however, there are many barriers to achieving this goal. In this article, we present a prototype system for the real-world context of transferring knowledge of American football from video observation to control in a game simulator. We trace an example play from the raw video through execution and adaptation in the simulator, highlighting the system's component algorithms along with issues of complexity, generality, and scale. We conclude with a discussion of the implications of this work for other applications, along with several possible improvements.


Human Computation

Morgan & Claypool Publishers

This book is aimed at achieving four goals: (1) defining human computation as a research area; (2) providing a comprehensive review of existing work; (3) drawing connections to a wide variety of disciplines, including AI, Machine Learning, HCI, Mechanism/Market Design and Psychology, and capturing their unique perspectives on the core research questions in human computation; and (4) suggesting promising research directions for the future. ISBN 9781608455164, 121 pages.


Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap

arXiv.org Artificial Intelligence

Hidden Markov models (HMMs) and partially observable Markov decision processes (POMDPs) provide useful tools for modeling dynamical systems. They are particularly useful for representing the topology of environments such as road networks and office buildings, which are typical for robot navigation and planning. The work presented here describes a formal framework for incorporating readily available odometric information and geometrical constraints into both the models and the algorithm that learns them. By taking advantage of such information, learning HMMs/POMDPs can be made to generate better solutions and require fewer iterations, while being robust in the face of data reduction. Experimental results, obtained from both simulated and real robot data, demonstrate the effectiveness of the approach.
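One way to see how topology enters such a model: mask the HMM transition matrix with the environment's adjacency graph so that only physically possible moves receive probability mass. This toy forward pass shows only the inference side, not the paper's learning algorithm or its use of odometry; the uniform transition weights over allowed moves are an assumption.

```python
import numpy as np

def constrained_forward(obs, adj, B, pi):
    """Forward pass of an HMM whose transitions are masked by the known
    topology (adjacency matrix) of the environment, e.g. a road network."""
    A = adj / adj.sum(axis=1, keepdims=True)   # uniform over allowed moves
    alpha = pi * B[:, obs[0]]                  # initial belief x emission
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]          # propagate along the graph
    return alpha / alpha.sum()                 # posterior over states
```

States that are not adjacent on the map get zero transition probability, so the belief can only flow along routes the robot could actually have taken.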