Fast Information Value for Graphical Models

Neural Information Processing Systems

Calculations that quantify the dependencies between variables are vital to many operations with graphical models, e.g., active learning and sensitivity analysis. Previously, pairwise information gain calculation has involved a cost quadratic in network size. In this work, we show how to perform a similar computation with cost linear in network size. The loss function that allows this is of a form amenable to computation by dynamic programming. The message-passing algorithm that results is described, and empirical results demonstrate large speedups without decrease in accuracy. In the cost-sensitive domains examined, superior accuracy is achieved.
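The abstract's key claim is a message-passing algorithm whose cost is linear in network size. As a hedged illustration only (this is not the paper's algorithm, and all function names are mine), the sketch below shows the flavor of linear-cost dynamic programming on a graphical model: forward-backward message passing on a chain-structured MRF, which recovers every node marginal, and hence every node entropy, in a single pair of linear-time sweeps.

```python
import numpy as np

def chain_marginals(unaries, pairwise):
    """Node marginals of a chain MRF via forward-backward message passing.

    unaries:  (n, k) array of unnormalized node potentials.
    pairwise: (k, k) array of edge potentials shared by every edge.
    Cost is linear in the chain length n (one forward, one backward sweep).
    """
    n, k = unaries.shape
    fwd = np.ones((n, k))
    bwd = np.ones((n, k))
    for i in range(1, n):                       # forward sweep
        fwd[i] = (fwd[i - 1] * unaries[i - 1]) @ pairwise
        fwd[i] /= fwd[i].sum()                  # rescale for numerical stability
    for i in range(n - 2, -1, -1):              # backward sweep
        bwd[i] = pairwise @ (bwd[i + 1] * unaries[i + 1])
        bwd[i] /= bwd[i].sum()
    marg = fwd * unaries * bwd
    return marg / marg.sum(axis=1, keepdims=True)

def node_entropies(marg):
    """Shannon entropy H(X_i) of each node marginal, in nats."""
    p = np.clip(marg, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)
```

With uniform potentials every marginal is uniform, so each node's entropy equals log k; computing any information-theoretic summary from these marginals inherits the linear cost of the sweeps.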


Saliency Based on Information Maximization

Neural Information Processing Systems

A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency-based models. Results on natural images are compared with experimental eye-tracking data, revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts. 1 Introduction There has long been interest in the nature of eye movements and fixation behavior following early studies by Buswell [1] and Yarbus [2]. However, a complete description of the mechanisms underlying these peculiar fixation patterns remains elusive.
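The core quantity here is Shannon's self-information, -log p(x): features that are rare in a scene carry more information and thus attract attention. As a minimal sketch only (the paper uses learned local features and a neural circuit; this toy, with names of my choosing, uses a raw intensity histogram), the self-information principle can be applied per pixel like so:

```python
import numpy as np

def self_information_saliency(img, bins=16):
    """Per-pixel saliency as Shannon self-information, -log p(feature).

    img: 2-D array of grayscale intensities in [0, 1].
    Intensities that are rare in this image get high saliency;
    common intensities get low saliency.
    """
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                               # empirical feature distribution
    idx = np.clip(np.digitize(img, edges[1:-1]), 0, bins - 1)
    return -np.log(np.clip(p[idx], 1e-12, None))        # self-information map
```

A lone bright pixel in an otherwise dark image lands in a nearly empty histogram bin, so its probability is tiny and its self-information, and hence saliency, is maximal.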


Learning Multiple Related Tasks using Latent Independent Component Analysis

Neural Information Processing Systems

We propose a probabilistic model based on Independent Component Analysis for learning multiple related tasks. In our model the task parameters are assumed to be generated from independent sources which account for the relatedness of the tasks. We use Laplace distributions to model the hidden sources, which makes it possible to identify the hidden, independent components instead of just modeling correlations. Furthermore, our model enjoys a sparsity property which makes it both parsimonious and robust. We also propose efficient algorithms for both the empirical Bayes method and point estimation. Our experimental results on two multi-label text classification data sets show that the proposed approach is promising.
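The generative assumption in the abstract, task parameters produced by mixing independent Laplace-distributed sources, can be sketched in a few lines. This is a toy illustration under my own naming, not the paper's estimation algorithm (which infers the sources and mixing from data):

```python
import numpy as np

def sample_task_parameters(n_tasks, n_sources, dim, scale=1.0, seed=0):
    """Toy generative sketch of the latent-ICA view of multi-task learning.

    Hidden sources are drawn from independent Laplace distributions; the
    heavy tails are what make the components identifiable rather than
    merely decorrelated. Each task's parameter vector is then a linear
    mix of the same shared sources, which encodes task relatedness.
    """
    rng = np.random.default_rng(seed)
    sources = rng.laplace(loc=0.0, scale=scale, size=(n_sources, dim))
    mixing = rng.standard_normal((n_tasks, n_sources))
    return mixing @ sources  # row t = parameter vector of task t
```

Because all tasks share the same few sources, the resulting parameter matrix is low-rank, which is one concrete sense in which the model is parsimonious.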



A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955

AI Magazine

The 1956 Dartmouth summer research project on artificial intelligence was initiated by this August 31, 1955 proposal, authored by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The original typescript consisted of 17 pages plus a title page. Copies of the typescript are housed in the archives at Dartmouth College and Stanford University. The first 5 pages state the proposal, and the remaining pages give qualifications and interests of the four who proposed the study. In the interest of brevity, this article reproduces only the proposal itself, along with the short autobiographical statements of the proposers.


Report on the Nineteenth International FLAIRS Conference

AI Magazine

The Nineteenth International FLAIRS Conference (FLAIRS-19) was held 11-13 May 2006 at the Crowne Plaza. The special tracks chair was Barry O'Sullivan. The 20th International FLAIRS Conference (FLAIRS-20) will be held May 7-9, 2007 at the Casa Marina Resort, which is directly on the beach in Key West, Florida, USA. FLAIRS-20 will feature technical papers, special tracks, and invited speakers on artificial intelligence. The conference is hosted by the Florida Artificial Intelligence Research Society, in cooperation with AAAI; the general chair is Geoff Sutcliffe of the University of Miami. In addition to the general conference, FLAIRS offers numerous special conference tracks, which provide researchers in focused areas the opportunity to meet and present their work.



Happy Silver Anniversary, AI!

AI Magazine

Artificial intelligence (AI), on the twenty-fifth anniversary of its naming, is a "kid, finally grown up." In this letter to his field, Feigenbaum recounts AI's stumbles and successes, its growing pains and maturation, to a place of preeminence among the sciences, standing with molecular biology, particle physics, and cosmology as owners of the best questions of science.


AI in the News

AI Magazine

The articles collected for this special section are excerpts from press coverage of AI over the years. Please note that an excerpt may not reflect the overall tenor of the article, nor contain all of the relevant content. Among the clippings: Lexington, Massachusetts is the home of Bolitho and Dr. Martin Klein, who in 1956 set a superintelligent electronic "brain" to music, emphasizing that their accomplishment was "an experiment of mathematical importance only"; and, as is now well known, a machine in operation at the Bell Telephone Laboratories can sight the approach of an attacker, compute its course, and reduce to a bare minimum the human element in the complex problem of tracking and destroying an attacking airplane. - Jon Glick, Webmaster, AI TOPICS


(AA)AI More than the Sum of Its Parts

AI Magazine

This is a wonderful opportunity, one that is very hard to match in any other position. The first AAAI conference was held at Stanford University; it was very much a research conference, a scientific event that generated a lot of excitement. The conference was small and intimate, with few parallel sessions, and there were excellent opportunities for us to talk to one another. AAAI-80 gave real substance to the organization, clearly getting AAAI off on the right foot, and it gave new identity and cohesiveness to the field. This year, 2006, has also been a big year, celebrating the 50th anniversary of the original meeting at Dartmouth College, where the name "artificial intelligence" first came into common use. Numerous events around the world, including a celebratory symposium at Dartmouth and an AAAI Fellows Symposium associated with AAAI-05, have marked this important milestone in the history of the field. Progress since our first AAAI conference has been remarkable: while each year's results may have seemed incremental, when we look back over the entire period we see some truly amazing things. A recent job at DARPA, identifying gaps in our national computing research agenda, gave me the chance to contemplate the big picture, and it occurred to me that that perspective was a very special asset to use in drafting this presidential address. In hindsight some milestones may no longer look so exciting; purists will say that it was not an "AI" system that beat Garry Kasparov but rather a highly engineered special-purpose machine. But no matter how Deep Blue actually worked, playing chess well was clearly an AI problem, in fact a classical one, and our success was historic. I also want to raise a broad issue and consider some larger questions regarding the nature of the field itself and the role that AAAI plays as an organization.