Results 1–8 of 8
Distortion invariant object recognition in the dynamic link architecture
 IEEE Transactions on Computers
, 1993
Abstract

Cited by 614 (82 self)
Abstract—We present an object recognition system based on the Dynamic Link Architecture, which is an extension to classical Artificial Neural Networks. The Dynamic Link Architecture exploits correlations in the fine-scale temporal structure of cellular signals in order to group neurons dynamically into higher-order entities. These entities represent a very rich structure and can code for high-level objects. In order to demonstrate the capabilities of the Dynamic Link Architecture we implemented a program that can recognize human faces and other objects from video images. Memorized objects are represented by sparse graphs, whose vertices are labeled by a multiresolution description in terms of a local power spectrum, and whose edges are labeled by geometrical distance vectors. Object recognition can be formulated as elastic graph matching, which is performed here by stochastic optimization of a matching cost function. Our implementation on a transputer network successfully achieves recognition of human faces and office objects from gray-level camera images. The performance of the program is evaluated by a statistical analysis of recognition results from a portrait gallery comprising images of 87 persons. Index Terms—Computer vision, distortion invariance, dynamic link architecture, elastic graph matching, object recognition, neural network, wavelet.
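The elastic graph matching described in this abstract can be sketched in a few lines. This is only a toy illustration of the idea, not the authors' transputer implementation: raw pixel patches stand in for the paper's Gabor-based local power spectra, and the blob image, node positions, and cost weight `lam` are all invented for the example. A sparse graph labeled with features and edge distance vectors is fit to an image by stochastic minimization of a cost that trades feature similarity against geometric distortion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 2-D intensity array with a bright blob the graph should find.
img = np.zeros((64, 64))
img[20:30, 35:45] = 1.0
img += 0.05 * rng.standard_normal(img.shape)

def patch(im, p, r=2):
    """Local feature: flattened (2r+1)x(2r+1) patch around integer position p."""
    y = min(max(int(p[0]), r), im.shape[0] - r - 1)
    x = min(max(int(p[1]), r), im.shape[1] - r - 1)
    return im[y - r:y + r + 1, x - r:x + r + 1].ravel()

# Model graph: vertices labeled by local features, edges by distance vectors.
model_pos = np.array([[22., 37.], [22., 43.], [28., 37.], [28., 43.]])
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
model_feat = [patch(img, p) for p in model_pos]
edge_vec = {e: model_pos[e[1]] - model_pos[e[0]] for e in edges}

def cost(pos, lam=0.1):
    """Feature mismatch plus geometric distortion of the labeled edges."""
    c_feat = sum(np.sum((patch(img, pos[i]) - model_feat[i]) ** 2)
                 for i in range(len(pos)))
    c_geom = sum(np.sum(((pos[j] - pos[i]) - edge_vec[(i, j)]) ** 2)
                 for (i, j) in edges)
    return c_feat + lam * c_geom

# Stochastic optimization: from a displaced graph, accept moves that lower cost.
pos = model_pos + np.array([8.0, -10.0])
best = cost(pos)
for step in range(2000):
    trial = pos.copy()
    i = rng.integers(len(pos))
    trial[i] += rng.normal(0, 1.5, size=2)
    c = cost(trial)
    if c < best:
        pos, best = trial, c
# `pos` now holds the matched vertex positions; `best` never exceeds the start cost.
```

The accept-if-better rule is the crudest form of the stochastic optimization the abstract mentions; a full implementation would use simulated annealing over both a global placement and local diffusion of vertices.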
Learning With Preknowledge: Clustering With Point and Graph Matching Distance Measures
 Neural Computation
, 1996
Abstract

Cited by 30 (10 self)
Prior knowledge constraints are imposed upon a learning problem in the form of distance measures. Prototypical 2D point sets and graphs are learned by clustering with point matching and graph matching distance measures. The point matching distance measure is approximately invariant under affine transformations (translation, rotation, scale, and shear) and under permutations. It operates between noisy images with missing and spurious points. The graph matching distance measure operates on weighted graphs and is invariant under permutations. Learning is formulated as an optimization problem. Large objectives so formulated (on the order of a million variables) are efficiently minimized using a combination of optimization techniques: softassign, algebraic transformations, clocked objectives, and deterministic annealing. 1 Introduction. While few biologists today would subscribe to Locke's description of the nascent mind as a tabula rasa, the nature of the inherent constraints (Kant's preknowledge) that helps org...
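The softassign-with-deterministic-annealing combination named above can be sketched as follows. This is a minimal illustration under assumed parameters (the temperature schedule and the negative-squared-distance compatibility are invented for the example, not taken from the paper): at each temperature a softmax of the compatibilities is made approximately doubly stochastic by alternating row and column normalization, and lowering the temperature hardens the soft match toward a permutation.

```python
import numpy as np

def softassign(compat, t_start=1.0, t_end=0.05, rate=0.9, n_sinkhorn=50):
    """Anneal a soft match matrix from the compatibility scores `compat`.
    At each temperature, exponentiate the scores, then alternate row/column
    normalization so the matrix is (approximately) doubly stochastic."""
    t = t_start
    m = np.ones_like(compat)
    while t > t_end:
        m = np.exp(compat / t)
        for _ in range(n_sinkhorn):
            m /= m.sum(axis=1, keepdims=True)   # rows sum to 1
            m /= m.sum(axis=0, keepdims=True)   # columns sum to 1
        t *= rate                               # lower the temperature
    return m

# Toy problem: match 4 points to a permuted copy of themselves.
rng = np.random.default_rng(1)
a = rng.standard_normal((4, 2))
perm = np.array([2, 0, 3, 1])
b = a[perm]

# Compatibility: negative squared distance between every candidate pair.
compat = -((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
m = softassign(compat)
recovered = m.argmax(axis=1)    # hard assignment read off the soft match
```

At low temperature the matrix concentrates on the correct correspondence, here `recovered == np.argsort(perm)`. The paper's full objective also handles missing and spurious points via slack; this sketch assumes a clean one-to-one match.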
A Comparison of Two Computer-Based Face Identification Systems With Human Perceptions of Faces.
, 1997
Abstract

Cited by 28 (2 self)
The performance of two different computer systems for representing faces was compared with human ratings of similarity and distinctiveness, and human memory performance, on a specific set of face images. The systems compared were a graph-matching system (e.g. Lades et al., 1993) and coding based on Principal Components Analysis (PCA) of image pixels (e.g. Turk
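The PCA coding of image pixels referenced here reduces to a few lines of linear algebra. The sketch below uses random arrays as stand-in "faces" (the data, image size, and component count are invented for illustration): centre the flattened images, take the top right-singular vectors as components, and represent each face by its projection coefficients, between which similarity can then be measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "face" data: 20 images of 8x8 pixels, flattened to rows of a matrix.
n, h, w = 20, 8, 8
faces = rng.standard_normal((n, h * w))

# PCA of image pixels: centre the data, then take the top right-singular vectors.
mean = faces.mean(axis=0)
x = faces - mean
u, s, vt = np.linalg.svd(x, full_matrices=False)
k = 5                       # number of principal components kept
components = vt[:k]         # each row is one component ("eigenface")

# Coding: each face is represented by its k projection coefficients.
codes = x @ components.T            # shape (n, k)
recon = codes @ components + mean   # low-dimensional reconstruction

# Similarity between two faces is then a distance in code space.
d01 = np.linalg.norm(codes[0] - codes[1])
```

Keeping all components reconstructs the centred data exactly; truncating to `k` gives the compact face code whose distances the paper compares against human similarity ratings.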
Bayesian inference on visual grammars by neural nets that optimize
 YALE COMPUTER SCIENCE DEPARTMENT
, 1991
Abstract

Cited by 15 (3 self)
We exhibit a systematic way to derive neural nets for vision problems. It involves formulating a vision problem as Bayesian inference or decision on a comprehensive model of the visual domain given by a probabilistic grammar. A key feature of this grammar is the way in which it eliminates model information, such as object labels, as it produces an image; correspondence problems and other noise removal tasks result. The neural nets that arise most directly are generalized assignment networks. Also there are transformations which naturally yield improved algorithms, such as correlation matching in scale space and the Frameville neural nets for high-level vision. Networks derived this way generally have objective functions with spurious local minima; such minima may commonly be avoided by dynamics that include deterministic annealing, for example recent improvements to Mean Field Theory dynamics. The grammatical method of neural net design allows domain knowledge to enter from all levels of the grammar, including "abstract" levels remote from the final image data, and
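The core move the abstract describes, posing vision as Bayesian inference on a generative model that discards the label while rendering the image, can be shown with a deliberately tiny stand-in for a probabilistic grammar. The templates, flip probability, and labels below are all invented for the example: a hidden label selects a binary template, pixels are independently flipped with probability `flip`, and inference inverts the model by Bayes' rule.

```python
import numpy as np

# A minimal generative model in the spirit of a probabilistic grammar:
# a hidden object label selects a binary template, which is then corrupted
# by independent pixel flips to produce the observed image.
templates = {
    "bar":   np.array([0, 1, 1, 1, 0, 0, 0, 0]),
    "block": np.array([0, 0, 1, 1, 1, 1, 0, 0]),
    "dot":   np.array([0, 0, 0, 1, 0, 0, 0, 0]),
}
prior = {k: 1 / 3 for k in templates}
flip = 0.1  # probability each pixel is flipped during "rendering"

def likelihood(image, template, p=flip):
    """p(image | label): product of per-pixel agree/disagree probabilities."""
    agree = (image == template)
    return np.prod(np.where(agree, 1 - p, p))

def posterior(image):
    """Bayes: p(label | image) proportional to p(image | label) p(label)."""
    unnorm = {k: likelihood(image, t) * prior[k] for k, t in templates.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# Observe a noisy rendering of "block" and invert the model.
image = templates["block"].copy()
image[1] ^= 1  # one flipped pixel
post = posterior(image)
```

In the paper the analogous posterior is optimized by an assignment network rather than enumerated, since the label space (correspondences) is combinatorial; the toy enumeration only shows the inference direction.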
Self-Organization of Networks in the Visual System
, 1995
Abstract
This course deals with self-organization of networks with respect to two aspects: on the one hand, the setup of patterned neural connections during ontogenesis; on the other hand, an invariant object recognition system that is robust against distortion. Both aspects are solved by similar principles of self-organization, which are based on synchronization of nervous activity and modification of synapses. The former is explained using the development of a retinotopic mapping between retina and tectum as an example, the latter by the example of face recognition. The self-organization process is quantitatively described and formalized by a system of differential equations, which will be simulated in the course. Additionally, representations of real objects with extracted features are discussed in the second part.
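The flavour of the retinotopic self-organization mentioned above can be conveyed with a one-dimensional map in the Kohonen style, a simplification of, not a substitute for, the course's differential-equation model; the unit count, learning-rate and neighbourhood schedules are all invented for the sketch. Each "tectal" unit holds a preferred retinal position; correlated activity pulls a winner and its neighbours toward each stimulus, and a shrinking neighbourhood lets an ordered mapping emerge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy retinotectal model: 20 "tectal" units, each with a preferred retinal
# position (its weight), initially unordered.
n_units = 20
weights = rng.uniform(0, 1, n_units)

def train(weights, n_steps=5000):
    w = weights.copy()
    for t in range(n_steps):
        x = rng.uniform(0, 1)                    # retinal stimulus position
        winner = np.argmin(np.abs(w - x))        # most strongly driven unit
        sigma = 3.0 * (1 - t / n_steps) + 0.5    # neighbourhood width shrinks
        lr = 0.3 * (1 - t / n_steps) + 0.01      # learning rate decays
        d = np.arange(n_units) - winner
        h = np.exp(-d ** 2 / (2 * sigma ** 2))   # neighbours co-activate
        w += lr * h * (x - w)                    # Hebbian pull toward stimulus
    return w

def quantization_error(w, samples):
    """Mean distance from a stimulus to its nearest unit's preference."""
    return np.mean([np.min(np.abs(w - s)) for s in samples])

test_pts = rng.uniform(0, 1, 200)
err_before = quantization_error(weights, test_pts)
trained = train(weights)
err_after = quantization_error(trained, test_pts)
```

Because each update is a convex pull toward a stimulus in [0, 1], the preferences stay in the retinal range; with the shrinking neighbourhood, adjacent tectal units typically end up with adjacent retinal preferences, the signature of a retinotopic map.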
5 Perception as Unconscious Inference
Abstract
Consider for a moment the spatial and chromatic dimensions of your visual experience. Suppose that as you gaze about the room you see a table, some books, and papers. Ignore for now the fact that you immediately recognize these objects to be a table with books and papers on it. Concentrate on how the table looks to you: its top spreads out in front of you, stopping at edges beyond which lies unfilled space, leading to more or less distant chairs, shelves, or expanses of floor. The books and paper on the table top create shaped visual boundaries between areas of different color, within which there may be further variation of color or visual texture. Propelled by a slight breeze, a sheet of paper slides across the table, and you experience its smooth motion before it floats out of sight. The aspects of visual perception to which I’ve drawn your attention are objects of study in contemporary perceptual psychology, which considers the perception of size, shape, distance, motion, and color. These phenomenal aspects of vision are sometimes contrasted with other, more typically cognitive aspects of perception, including our recognition that the objects in front of us include the table, books, and paper, our seeing that the table is old and well crafted, and our identifying the sheets of paper as the draft of an article in progress. All of these elements of our visual experience, whether characterized here as phenomenal or cognitive, seem to arise effortlessly as we direct our gaze here and there. Yet we know that the cognitive aspects must depend on previously attained knowledge. We are not born recognizing books and tables, but we learn to categorize these artifacts and to determine at a glance that a table is an old one of good quality. What about the phenomenal aspects? A persistent theme in the history of visual theory has been that the phenomenal aspects of visual perception are produced by inferences or judgments, which are
Neural Networks and the Bias/Variance Dilemma
 Communicated by Lawrence Jackel
Abstract
Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. In way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.