Results 1–5 of 5
Distortion invariant object recognition in the dynamic link architecture
IEEE Transactions on Computers, 1993
Cited by 491 (54 self)
Abstract: We present an object recognition system based ...
Learning With Preknowledge: Clustering With Point and Graph Matching Distance Measures
Neural Computation, 1996
Cited by 26 (9 self)
Abstract: Prior knowledge constraints are imposed upon a learning problem in the form of distance measures. Prototypical 2D point sets and graphs are learned by clustering with point matching and graph matching distance measures. The point matching distance measure is approximately invariant under affine transformations (translation, rotation, scale, and shear) and permutations. It operates between noisy images with missing and spurious points. The graph matching distance measure operates on weighted graphs and is invariant under permutations. Learning is formulated as an optimization problem. Large objectives so formulated (on the order of a million variables) are efficiently minimized using a combination of optimization techniques: softassign, algebraic transformations, clocked objectives, and deterministic annealing. 1 Introduction. While few biologists today would subscribe to Locke's description of the nascent mind as a tabula rasa, the nature of the inherent constraints (Kant's preknowledge) that helps org...
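As a concrete illustration of the softassign-with-deterministic-annealing idea this abstract mentions, the sketch below matches two 2D point sets by annealing a doubly stochastic correspondence matrix. It is a minimal sketch only: the function name, parameters, and annealing schedule are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def softassign(X, Y, beta0=0.5, beta_max=50.0, rate=1.5, n_sinkhorn=30):
    """Soft correspondence between point sets X (n, 2) and Y (m, 2).

    Deterministic annealing on the inverse temperature beta; Sinkhorn
    row/column normalisation drives the match matrix toward a doubly
    stochastic (and, at high beta, near-permutation) form.
    """
    # Benefit: negative squared distance between candidate matches.
    Q = -((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    beta = beta0
    while beta < beta_max:
        M = np.exp(beta * (Q - Q.max()))        # softmax-style lift, stabilised
        for _ in range(n_sinkhorn):             # alternate normalisations
            M /= M.sum(axis=1, keepdims=True)   # rows sum to 1
            M /= M.sum(axis=0, keepdims=True)   # columns sum to 1
        beta *= rate                            # anneal: sharpen the matches
    return M

# Toy check: Y is a shuffled copy of X, so Y[j] = X[perm[j]], and each
# row i of M should peak at the column j with perm[j] == i.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
perm = rng.permutation(5)
Y = X[perm]
M = softassign(X, Y)
print(M.argmax(axis=1))   # recovered correspondence (inverse of perm)
```

Note that for a linear assignment benefit the match matrix can be rebuilt from scratch at each temperature; real softassign implementations typically warm-start it instead.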
A Comparison of Two Computer-Based Face Identification Systems With Human Perceptions of Faces
1997
Cited by 21 (2 self)
Abstract: The performance of two different computer systems for representing faces was compared with human ratings of similarity and distinctiveness, and with human memory performance, on a specific set of face images. The systems compared were a graph-matching system (e.g. Lades et al., 1993) and coding based on Principal Components Analysis (PCA) of image pixels (e.g. Turk ...
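The PCA-of-pixels coding referred to here ("eigenfaces"-style) can be sketched in a few lines. This is a generic illustration only; the data, dimensions, and function below are made up and are not taken from the study.

```python
import numpy as np

def pca_code(images, k):
    """Code images (n_images, n_pixels) by their first k principal components.

    A minimal stand-in for pixel-based PCA face coding: centre the data,
    take the top-k right singular vectors as the basis, and return each
    image's k projection coefficients.
    """
    mean = images.mean(axis=0)
    X = images - mean
    # SVD of the centred data; rows of Vt are the principal axes.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k]                     # (k, n_pixels)
    coeffs = X @ basis.T               # (n_images, k) low-dimensional codes
    return coeffs, basis, mean

rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 64))      # stand-in for 20 flattened face images
codes, basis, mean = pca_code(faces, k=5)
recon = codes @ basis + mean           # reconstruction from only 5 coefficients
print(codes.shape)                     # (20, 5)
```

Reconstruction from the top-k coefficients always has a smaller residual than the mean face alone, which is what makes the coefficients usable as a compact face code.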
Bayesian inference on visual grammars by neural nets that optimize
Yale Computer Science Department, 1991
Cited by 15 (3 self)
Abstract: We exhibit a systematic way to derive neural nets for vision problems. It involves formulating a vision problem as Bayesian inference or decision on a comprehensive model of the visual domain given by a probabilistic grammar. A key feature of this grammar is the way in which it eliminates model information, such as object labels, as it produces an image; correspondence problems and other noise-removal tasks result. The neural nets that arise most directly are generalized assignment networks. There are also transformations which naturally yield improved algorithms, such as correlation matching in scale space and the Frameville neural nets for high-level vision. Networks derived this way generally have objective functions with spurious local minima; such minima may commonly be avoided by dynamics that include deterministic annealing, for example recent improvements to Mean Field Theory dynamics. The grammatical method of neural net design allows domain knowledge to enter from all levels of the grammar, including "abstract" levels remote from the final image data, and ...
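A toy instance of the "vision as Bayesian inference on a generative model" formulation in this abstract: hidden labels render noisy 1-D templates, and the label is recovered by maximizing the posterior. The templates, prior, and noise level below are hypothetical, chosen only to make the inference step concrete.

```python
import numpy as np

# Each hidden label renders a fixed template plus Gaussian pixel noise;
# inference inverts the generative model via Bayes' rule.
templates = np.array([[1., 0., 0., 1.],    # hypothetical "object 0"
                      [0., 1., 1., 0.]])   # hypothetical "object 1"
prior = np.array([0.5, 0.5])
sigma = 0.3                                # assumed noise level

def posterior(image):
    # log p(label | image) = log p(image | label) + log p(label) + const
    log_lik = -((image - templates) ** 2).sum(axis=1) / (2 * sigma ** 2)
    log_post = log_lik + np.log(prior)
    p = np.exp(log_post - log_post.max())  # stabilised normalisation
    return p / p.sum()

rng = np.random.default_rng(2)
image = templates[1] + sigma * rng.normal(size=4)   # render label 1 with noise
print(posterior(image))   # posterior mass should concentrate on label 1
```

The paper's setting replaces this two-label toy with a full probabilistic grammar, but the MAP-inference step being "compiled" into a neural net is of this form.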
Self-Organization of Networks in the Visual System
1995
Abstract: This course deals with self-organization of networks with respect to two aspects: on the one hand, the setup of patterned neural connections during ontogenesis; on the other, an invariant object recognition system that is robust against distortion. Both aspects are solved by similar principles of self-organization, based on synchronization of nervous activity and modification of synapses. The former is explained using the development of a retinotopic mapping between retina and tectum as an example, the latter by the example of face recognition. The self-organization process is quantitatively described and formalized by a system of differential equations, which will be simulated in the course. Additionally, the representation of real objects by extracted features is discussed in the second part.
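The kind of differential-equation simulation the course describes can be sketched minimally, assuming a replicator-style competitive growth law (not the course's actual equations): Euler integration concentrates synaptic weight on the best-correlated input fibre while keeping the weights normalized, a toy analogue of self-organizing retinotopic connections.

```python
import numpy as np

# Assumed growth law (illustrative): dw_j/dt = w_j * (c_j - sum_k w_k c_k),
# where c_j is the activity correlation of input fibre j. The subtracted
# mean fitness implements competition and keeps sum(w) constant.
c = np.array([0.2, 0.5, 0.9, 0.4])       # hypothetical fibre correlations
w = np.full(4, 0.25)                     # uniform initial synaptic weights
dt = 0.1
for _ in range(2000):                    # forward-Euler integration
    w = w + dt * w * (c - w @ c)
print(w.round(3))                        # weight concentrates on fibre 2
```

The normalization falls out of the equation itself: the change in sum(w) per step is proportional to (1 - sum(w)), so a normalized start stays normalized.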