Goto

Collaborating Authors

 Rangarajan, Anand



A Convergence Proof for the Softassign Quadratic Assignment Algorithm

Neural Information Processing Systems

The softassign quadratic assignment algorithm has recently emerged as an effective strategy for a variety of optimization problems in pattern recognition and combinatorial optimization. While the effectiveness of the algorithm was demonstrated in thousands of simulations, there was no known proof of convergence. Here, we provide a proof of convergence for the most general form of the algorithm.
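For readers unfamiliar with the algorithm being analyzed, the sketch below (Python/NumPy) shows the usual form of the softassign quadratic assignment update: exponentiate the gradient of the quadratic objective, Sinkhorn-balance the result into a (nearly) doubly stochastic matrix, and slowly raise the control parameter beta. The function names and the annealing schedule are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def sinkhorn(M, iters=60):
        # Alternate row and column normalization until M is (nearly) doubly stochastic.
        for _ in range(iters):
            M = M / M.sum(axis=1, keepdims=True)
            M = M / M.sum(axis=0, keepdims=True)
        return M

    def softassign_qap(C, n, beta=0.5, beta_rate=1.05, beta_max=20.0):
        # Sketch: maximize 0.5 * sum C[a,i,b,j] * M[a,i] * M[b,j] over (approximately)
        # doubly stochastic M, with deterministic annealing on the control parameter beta.
        M = np.full((n, n), 1.0 / n) + 1e-3 * np.random.rand(n, n)
        while beta < beta_max:
            for _ in range(10):                                      # relaxation at fixed beta
                Q = np.einsum('aibj,bj->ai', C, M)                   # gradient of the quadratic objective
                M = sinkhorn(np.exp(beta * (Q - Q.max())) + 1e-12)   # exponentiate, then balance
            beta *= beta_rate                                        # anneal toward a hard permutation
        return M

As beta increases, the balanced matrix M is pushed toward a hard permutation matrix.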


Softassign versus Softmax: Benchmarks in Combinatorial Optimization

Neural Information Processing Systems

Steven Gold, Department of Computer Science, Yale University, New Haven, CT 06520-8285; Anand Rangarajan, Dept. of Diagnostic Radiology, Yale University, New Haven, CT 06520-8042. Abstract: A new technique, termed softassign, is applied for the first time to two classic combinatorial optimization problems, the traveling salesman problem and graph partitioning. Softassign, which has emerged from the recurrent neural network/statistical physics framework, enforces two-way (assignment) constraints without the use of penalty terms in the energy functions. The softassign can also be generalized from two-way winner-take-all constraints to the multiple membership constraints required for graph partitioning. The softassign technique is compared to the softmax (Potts glass). Within the statistical physics framework, softmax with a penalty term has been a widely used method for enforcing the two-way constraints common to many combinatorial optimization problems. The benchmarks present evidence that softassign has clear advantages in accuracy, speed, parallelizability and algorithmic simplicity over softmax with a penalty term in optimization problems with two-way constraints. 1 Introduction: In a series of papers in the early to mid 1980s, Hopfield and Tank introduced techniques which allowed one to solve combinatorial optimization problems with recurrent neural networks [Hopfield and Tank, 1985].
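The difference between the two constraint-handling mechanisms can be made concrete with a short sketch (Python/NumPy; the function names and the beta value are chosen for illustration): softmax normalizes each row independently, so the column constraints must be handled with an extra penalty term, whereas softassign follows the exponentiation with Sinkhorn row/column balancing so both sets of constraints are satisfied directly.

    import numpy as np

    def softmax_rows(X, beta):
        # Potts-glass softmax: each row sums to one, but column sums are unconstrained,
        # so a separate penalty term is needed to discourage two rows picking the same column.
        E = np.exp(beta * (X - X.max(axis=1, keepdims=True)))
        return E / E.sum(axis=1, keepdims=True)

    def softassign(X, beta, balance_iters=50):
        # Softassign: exponentiate, then Sinkhorn-balance so rows and columns each sum to one,
        # enforcing the two-way assignment constraints without any penalty term.
        M = np.exp(beta * (X - X.max()))
        for _ in range(balance_iters):
            M = M / M.sum(axis=1, keepdims=True)
            M = M / M.sum(axis=0, keepdims=True)
        return M

    B = np.random.rand(5, 5)                      # a toy benefit matrix
    print(softmax_rows(B, beta=5.0).sum(axis=0))  # column sums drift away from 1
    print(softassign(B, beta=5.0).sum(axis=0))    # column sums stay close to 1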


A Framework for Non-rigid Matching and Correspondence

Neural Information Processing Systems

Matching feature point sets lies at the core of many approaches to object recognition. We present a framework for nonrigid matching that begins with a skeleton module, affine point matching, then integrates multiple features to improve correspondence, and develops an object representation based on spatial regions to model local transformations.
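As a rough illustration of what such a skeleton module involves, the hedged sketch below (Python/NumPy) alternates softassign correspondences with a weighted least-squares affine fit while annealing beta. The function names, parameter values, and the assumption of equal-sized point sets without outliers are illustrative simplifications rather than details taken from the paper.

    import numpy as np

    def affine_fit(X, Y, M):
        # Weighted least-squares affine map minimizing sum_ij M[i, j] * ||A x_i + t - y_j||^2.
        # X: (n, 2) model points, Y: (n, 2) data points, M: (n, n) soft correspondences.
        Xh = np.hstack([X, np.ones((len(X), 1))])            # homogeneous model points
        w = M.sum(axis=1)                                    # total match weight per model point
        target = (M @ Y) / w[:, None]                        # weighted centroid of matched data
        sw = np.sqrt(w)[:, None]
        P, *_ = np.linalg.lstsq(Xh * sw, target * sw, rcond=None)
        return P                                             # 3x2 matrix stacking [A; t]

    def nonrigid_skeleton(X, Y, beta=1.0, beta_rate=1.2, beta_max=30.0):
        # Alternate soft correspondence (softassign) and affine re-fitting while annealing beta.
        P = np.vstack([np.eye(2), np.zeros((1, 2))])         # start from the identity transform
        while beta < beta_max:
            Xt = np.hstack([X, np.ones((len(X), 1))]) @ P    # currently transformed model points
            D = ((Xt[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            M = np.exp(-beta * (D - D.min())) + 1e-12        # small constant avoids empty rows/columns
            for _ in range(30):                              # Sinkhorn balancing of the matches
                M = M / M.sum(axis=1, keepdims=True)
                M = M / M.sum(axis=0, keepdims=True)
            P = affine_fit(X, Y, M)
            beta *= beta_rate
        return P, M

The annealing schedule hardens the correspondences gradually, which is intended to keep the alternation from committing to poor matches early on.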


Learning with Preknowledge: Clustering with Point and Graph Matching Distance Measures

Neural Information Processing Systems

Recently, the importance of such preknowledge for learning has been convincingly argued from a statistical framework [Geman et al., 1992]. Researchers have proposed that our brains may incorporate preknowledge in the form of distance measures [Shepard, 1989]. The neural network community has begun to explore this idea via tangent distance [Simard et al., 1993], model learning [Williams et al., 1993] and point matching distances [Gold et al., 1994]. However, only the point matching distances have been invariant under permutations. Here we extend that work by enhancing both the scope and function of those distance measures, significantly expanding the problem domains where learning may take place. We learn objects consisting of noisy 2-D point-sets or noisy weighted graphs by clustering with point matching and graph matching distance measures. The point matching measure is approx.
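To make the permutation-invariant point matching distance concrete, here is a small sketch (Python/NumPy, with SciPy assumed available). It computes the distance with an exact Hungarian assignment purely for illustration, whereas the paper works with an approximate matching measure, and it shows only a single hard assignment step of a clustering loop; the function names are illustrative.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def matching_distance(X, Y):
        # Permutation-invariant distance between two equal-sized 2-D point sets:
        # the minimum over one-to-one correspondences of the summed squared distances.
        cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        rows, cols = linear_sum_assignment(cost)
        return cost[rows, cols].sum()

    def assign_to_prototypes(objects, prototypes):
        # One hard assignment step of a k-means-like clustering loop over point sets:
        # each object goes to the prototype with the smallest matching distance.
        return [min(range(len(prototypes)),
                    key=lambda k: matching_distance(obj, prototypes[k]))
                for obj in objects]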


New Algorithms for 2D and 3D Point Matching: Pose Estimation and Correspondence

Neural Information Processing Systems

A fundamental open problem in computer vision, determining pose and correspondence between two sets of points in space, is solved with a novel, robust and easily implementable algorithm. The technique works on noisy point sets that may be of unequal sizes and may differ by nonrigid transformations. A 2D variation calculates the pose between point sets related by an affine transformation (translation, rotation, scale and shear). A 3D to 3D variation calculates translation and rotation. An objective describing the problem is derived from mean field theory. The objective is minimized with clocked (EM-like) dynamics. Experiments with both handwritten and synthetic data provide empirical evidence for the method.
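For the 3D-to-3D variation, the pose step given a matrix of soft correspondences has a well-known closed form (weighted Procrustes via an SVD); the sketch below (Python/NumPy) shows that step in isolation. The complete algorithm would alternate it with re-estimating the correspondences under an annealing schedule; the function name and interface here are illustrative assumptions rather than the paper's notation.

    import numpy as np

    def rigid_pose_from_soft_matches(X, Y, M):
        # Closed-form rotation R and translation t minimizing
        # sum_ij M[i, j] * ||R x_i + t - y_j||^2 (weighted Procrustes via an SVD).
        # X: (n, 3) and Y: (m, 3) point sets, M: (n, m) soft correspondence weights.
        w = M.sum()
        xbar = (M.sum(axis=1) @ X) / w                      # weighted centroid of X
        ybar = (M.sum(axis=0) @ Y) / w                      # weighted centroid of Y
        H = (X - xbar).T @ M @ (Y - ybar)                   # weighted cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ D @ U.T
        t = ybar - R @ xbar
        return R, t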