
Collaborating Authors

 Saund, Eric


Logic and MRF Circuitry for Labeling Occluding and Thinline Visual Contours

Neural Information Processing Systems

This paper presents representation and logic for labeling contrast edges and ridges in visual scenes in terms of both surface occlusion (border ownership) and thinline objects. In natural scenes, thinline objects include sticks and wires, while in human graphical communication thinlines include connectors, dividers, and other abstract devices. Our analysis is directed at both natural and graphical domains. The basic problem is to formulate the logic of the interactions among local image events, specifically contrast edges, ridges, junctions, and alignment relations, such as to encode the natural constraints among these events in visual scenes. In a sparse heterogeneous Markov Random Field framework, we define a set of interpretation nodes and energy/potential functions among them. The minimum energy configuration found by Loopy Belief Propagation is shown to correspond to preferred human interpretation across a wide range of prototypical examples including important illusory contour figures such as the Kanizsa Triangle, as well as more difficult examples. In practical terms, the approach delivers correct interpretations of inherently ambiguous hand-drawn box-and-connector diagrams at low computational cost.
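The idea of labeling contour fragments by minimizing an MRF energy can be sketched with a toy example. The node names, unary energies, and pairwise penalties below are invented for illustration and are not the paper's actual interpretation nodes or potentials; exhaustive search stands in for Loopy Belief Propagation, which approximates the same minimum-energy configuration on graphs too large to enumerate.

```python
import itertools

# Toy pairwise MRF over binary labels (0 = "occluding edge", 1 = "thinline").
# Unary and pairwise energies are made up for illustration; the paper's
# actual interpretation nodes and potentials are domain-specific.
nodes = ["e1", "e2", "e3"]
unary = {"e1": [0.0, 1.5], "e2": [0.8, 0.2], "e3": [0.5, 0.5]}
# Pairwise energy: aligned contour fragments prefer to share a label,
# so a fixed penalty is paid whenever neighbors disagree.
edges = {("e1", "e2"): 1.0, ("e2", "e3"): 1.0}

def energy(labeling):
    e = sum(unary[n][labeling[n]] for n in nodes)
    e += sum(w for (a, b), w in edges.items() if labeling[a] != labeling[b])
    return e

def min_energy_labeling():
    # Enumerate all 2^n labelings; feasible only for tiny toy graphs.
    best = min(
        (dict(zip(nodes, assign))
         for assign in itertools.product([0, 1], repeat=len(nodes))),
        key=energy,
    )
    return best, energy(best)

labeling, e = min_energy_labeling()
```

Here the smoothness penalty pulls `e2` toward label 0 despite its unary preference for label 1, mirroring how alignment constraints can override weak local evidence.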


Reports on the 2004 AAAI Fall Symposia

AI Magazine

The Association for the Advancement of Artificial Intelligence presented its 2004 Fall Symposium Series Friday through Sunday, October 22-24 at the Hyatt Regency Crystal City in Arlington, Virginia, adjacent to Washington, DC. The symposium series was preceded by a one-day AI funding seminar. The topics of the eight symposia in the 2004 Fall Symposia Series were: (1) Achieving Human-Level Intelligence through Integrated Systems and Research; (2) Artificial Multiagent Learning; (3) Compositional Connectionism in Cognitive Science; (4) Dialogue Systems for Health Communications; (5) The Intersection of Cognitive Science and Robotics: From Interfaces to Intelligence; (6) Making Pen-Based Interaction Intelligent and Natural; (7) Real-Life Reinforcement Learning; and (8) Style and Meaning in Language, Art, Music, and Design.


Reports on the 2004 AAAI Fall Symposia

AI Magazine

[...] Learning) are also available as AAAI Technical Reports. [...] through Sunday, October 22-24 at the Hyatt Regency Crystal City in Arlington, Virginia, adjacent to Washington, DC. The symposium series was preceded on Thursday, October 21 by a one-day AI funding seminar, which was open to all registered attendees. The seminar was an opportunity for new and junior researchers--as well as students and postdoctoral fellows--to get an inside look at what funding agencies expect in proposals from prospective grantees. Representatives and program [...]. There was consensus among participants that metrics in machine learning, planning, and natural language processing have driven advances in those subfields, but that those metrics have also distracted attention from how [...] be integrated and (2) architectures [...]. The topic is of increasing interest with the advent of peer-to-peer network services and with ad-hoc wireless [...]. Domains for motivating, testing, and funding this research were proposed (some during our joint session [...]), along with large numbers of agents, more complex agent behaviors, partially observable environments, and mutual adaptation.


Unsupervised Learning of Mixtures of Multiple Causes in Binary Data

Neural Information Processing Systems

This paper presents a formulation for unsupervised learning of clusters reflecting multiple causal structure in binary data. Unlike the standard mixture model, a multiple cause model accounts for observed data by combining assertions from many hidden causes, each of which can pertain to varying degree to any subset of the observable dimensions. A crucial issue is the mixing-function for combining beliefs from different cluster-centers in order to generate data reconstructions whose errors are minimized both during recognition and learning. We demonstrate a weakness inherent to the popular weighted sum followed by sigmoid squashing, and offer an alternative form of the nonlinearity. Results are presented demonstrating the algorithm's ability successfully to discover coherent multiple causal representations of noisy test data and in images of printed characters. 1 Introduction The objective of unsupervised learning is to identify patterns or features reflecting underlying regularities in data. Single-cause techniques, including the k-means algorithm and the standard mixture-model (Duda and Hart, 1973), represent clusters of data points sharing similar patterns of 1s and 0s under the assumption that each data point belongs to, or was generated by, one and only one cluster-center; output activity is constrained to sum to 1. In contrast, a multiple-cause model permits more than one cluster-center to become fully active in accounting for an observed data vector. The advantage of a multiple cause model is that a relatively small number of hidden variables can be applied combinatorially to generate a large data set.
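The contrast between the criticized summed-sigmoid mixing function and a disjunctive alternative can be sketched as follows. These are minimal hypothetical versions, not the paper's exact formulation: `activities` stands for the hidden cause activities m_k and `weights_j` for the cluster-center values for one observable dimension j; the noisy-OR form is used here to illustrate disjunctive combination.

```python
import math

def sigmoid_of_sum(activities, weights_j):
    # Popular mixing rule: weighted sum of cause contributions, then squash.
    s = sum(m * c for m, c in zip(activities, weights_j))
    return 1.0 / (1.0 + math.exp(-s))

def soft_disjunction(activities, weights_j):
    # Noisy-OR-style mixing: the predicted bit is ON unless every active
    # cause fails to assert it.  Stays in [0, 1] by construction.
    p_off = 1.0
    for m, c in zip(activities, weights_j):
        p_off *= 1.0 - m * c
    return 1.0 - p_off

# With no active causes, the summed sigmoid predicts 0.5 rather than 0,
# while the disjunctive form correctly predicts 0.
quiet = sigmoid_of_sum([0.0, 0.0], [1.0, 1.0])       # 0.5
quiet_or = soft_disjunction([0.0, 0.0], [1.0, 1.0])  # 0.0
```

The example makes the paper's point concrete: a weighted sum has no natural "off" state at zero input, whereas a disjunctive combination lets any single fully active cause account for an ON bit without the others interfering.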

