This paper entertains the hypothesis that the primary purpose of the cells of the primary visual cortex (V1) is to perceive motion and predict changes in local image contents. Specifically, we propose a model that couples vector representations of local image contents with matrix representations of the local pixel displacements caused by relative motion between the agent and the surrounding objects and scene. When the image changes from one time frame to the next due to pixel displacements, the vector at each pixel is multiplied by a matrix that represents the displacement of that pixel. We show that by learning from pairs of images that are deformed versions of each other, we can learn both the vector and the matrix representations. The units in the learned vector representations resemble V1 cells. The learned vector-matrix representations enable prediction of image frames over time and, more importantly, inference of the local pixel displacements caused by relative motion.
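The vector-matrix coupling can be sketched in a few lines. This is an illustrative toy, not the paper's learned model: we assume, hypothetically, that the displacement matrix is block-diagonal with 2x2 rotation blocks whose angles are proportional to the displacement (the per-block frequencies `w` stand in for quantities that would be learned from image pairs).

```python
import numpy as np

def displacement_matrix(d, w):
    """Matrix representation M(d) of a scalar displacement d:
    block-diagonal with 2x2 rotations, block k rotating by w[k] * d.
    (A hypothetical parameterization chosen for illustration.)"""
    n = 2 * len(w)
    M = np.zeros((n, n))
    for k, wk in enumerate(w):
        theta = wk * d
        c, s = np.cos(theta), np.sin(theta)
        M[2 * k:2 * k + 2, 2 * k:2 * k + 2] = [[c, -s], [s, c]]
    return M

rng = np.random.default_rng(0)
w = rng.normal(size=4)            # per-block frequencies (learned in the paper)
v_t = rng.normal(size=8)          # feature vector at one pixel at time t

M = displacement_matrix(0.5, w)   # matrix for a 0.5-pixel displacement
v_next = M @ v_t                  # predicted vector at the displaced pixel

# Rotation blocks are orthogonal, so the prediction preserves the norm.
print(np.isclose(np.linalg.norm(v_next), np.linalg.norm(v_t)))  # True
```

Inference of displacement then amounts to searching for the `d` whose matrix best maps the vector at time t to the observed vector at time t+1.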
Learning by induction is one important means of learning classification rules for expert systems [Buchanan and Mitchell, 1978; Michalski, 1983]. The major assumption in learning by induction is that a source of training examples exists. In many domains for which one wants to build expert systems, however, assembling libraries of training cases can present significant practical problems. Meta-DENDRAL, for example, worked with available mass spectra of just a few organic chemical compounds at a time because only a few compounds of the classes under consideration had been analyzed, and additional spectra were nearly impossible to obtain. In the present paper we illustrate one way of overcoming these kinds of problems by using a simulator to generate large numbers of training examples, and we discuss some implications of doing so.
The main aim of this paper is to analyze gradient descent (GD) algorithms with gradient errors that do not necessarily vanish asymptotically. In particular, we present sufficient conditions for both stability (almost sure boundedness of the iterates) and convergence of GD with bounded, possibly non-diminishing gradient errors. In addition to being stable, such an algorithm is shown to converge to a small neighborhood of the minimum set whose size depends on the gradient errors. It is worth noting that the main result of this paper can also be used to show that GD with asymptotically vanishing errors indeed converges to the minimum set. The results presented herein are not only more general than previous results, but, to the best of our knowledge, our analysis of GD with non-vanishing errors is new to the literature. Our work extends the contributions of Mangasarian & Solodov, Bertsekas & Tsitsiklis, and Tadic & Doucet. Using our framework, we present a simple yet effective implementation of GD based on simultaneous perturbation stochastic approximation (SPSA) with constant sensitivity parameters. Another important improvement over many previous results is that no additional restrictions are imposed on the step-sizes. In machine learning applications, where step-sizes correspond to learning rates, our assumptions, unlike those of other papers, do not constrain these learning rates. Finally, we present experimental results to validate our theory.
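A minimal sketch of GD driven by a two-sided SPSA gradient estimate with a constant sensitivity parameter `c` (so the estimation error does not diminish over iterations, the regime the abstract describes). Function names and parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def spsa_gradient(f, x, c, rng):
    """Two-sided SPSA estimate of grad f(x) with constant sensitivity c.
    Uses a single Rademacher perturbation regardless of dimension."""
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c * delta)

def gd_spsa(f, x0, lr=0.05, c=0.1, steps=500, seed=0):
    """Gradient descent using the SPSA estimate in place of the true gradient."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * spsa_gradient(f, x, c, rng)
    return x

# Toy objective with minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_star = gd_spsa(f, np.zeros(2))
```

For this quadratic the constant-`c` estimate happens to be unbiased, so the iterates approach the minimizer; in general, higher-order terms leave a persistent error, and the paper's result guarantees convergence to a small neighborhood of the minimum set rather than to the set itself.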
We present a frame-invariant method for detecting coherent structures from Lagrangian flow trajectories that may be sparse in number, as is the case in many fluid mechanics applications of practical interest. The method, based on principles used in graph coloring and spectral graph drawing algorithms, examines a measure of the kinematic dissimilarity of all pairs of fluid trajectories, obtained either experimentally, e.g. using particle tracking velocimetry, or numerically, by advecting fluid particles in the Eulerian velocity field. Coherence is assigned to groups of particles whose kinematics remain similar throughout the time interval for which trajectory data are available, regardless of their physical proximity to one another. Through several analytical and experimental validation cases, this algorithm is shown to robustly detect coherent structures using significantly less flow data than existing spectral graph theory methods require.
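The pipeline can be sketched as: pairwise kinematic dissimilarity, then a spectral partition of the resulting graph. This sketch is not the paper's exact algorithm; as an assumed stand-in for the dissimilarity measure we use the standard deviation over time of the inter-particle distance (zero for particles moving coherently together), and we split the graph with the sign of the Fiedler vector.

```python
import numpy as np

def kinematic_dissimilarity(traj):
    """traj: (n_particles, n_times, 2) positions.
    Dissimilarity = std over time of pairwise distance (illustrative choice)."""
    n = traj.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(traj[i] - traj[j], axis=1)
            D[i, j] = D[j, i] = dist.std()
    return D

def spectral_bipartition(D, sigma):
    """Affinity graph from dissimilarities; split by the Fiedler vector's sign."""
    W = np.exp(-(D / sigma) ** 2)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1] > 0                   # second-smallest eigenvector

# Synthetic trajectories: 5 translating coherently, 5 rotating coherently.
t = np.linspace(0.0, 2.0 * np.pi, 50)
rng = np.random.default_rng(1)
trans = np.stack([np.stack([t + x0, np.full_like(t, y0)], axis=1)
                  for x0, y0 in rng.normal(0.0, 0.2, (5, 2))])
rot = np.stack([np.stack([r * np.cos(t + p), r * np.sin(t + p)], axis=1)
                for r, p in zip(rng.uniform(1.0, 1.2, 5), rng.uniform(0.0, 1.0, 5))])
traj = np.concatenate([trans, rot])

labels = spectral_bipartition(kinematic_dissimilarity(traj), sigma=0.5)
```

Within each synthetic group the inter-particle distances are constant in time, so the dissimilarity is zero and each group forms a tightly connected subgraph; the Fiedler vector then separates the two groups even though the particles' positions overlap in space.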
What kind of existential problems does AI bring about? The medium-term challenge of AI is not killer robots; it is job replacement. This dynamic is already underway, and the literature suggests it is a more powerful driver of job loss than trade, though trade receives far more attention. True AI has not arrived, and automation is not AI, but robots and human-written code offer a reasonable preview of the employment challenges genuine AI will bring. Computers already manage warehouses, can drive reasonably well, and are making meaningful progress in areas such as basic lawyering and radiology that we long considered immune to change.