Results 1–3 of 3
Learning compatibility coefficients for relaxation labeling processes
IEEE Trans. Pattern Anal. Machine Intell., 1994
Abstract

Cited by 39 (5 self)
Relaxation labeling processes have been widely used in many different domains, including image processing, pattern recognition, and artificial intelligence. They are iterative procedures that aim at reducing local ambiguities and achieving global consistency through a parallel exploitation of contextual information, which is quantitatively expressed in terms of a set of "compatibility coefficients." The problem of determining compatibility coefficients has received considerable attention in the past, and many heuristic, statistically based methods have been suggested. In this paper, we propose a rather different viewpoint on this problem: we derive the coefficients by attempting to optimize the performance of the relaxation algorithm over a sample of training data. No statistical interpretation is given: compatibility coefficients are simply interpreted as real numbers for which performance is optimal. Experimental results on a novel application of relaxation are given, which prove the effectiveness of the proposed approach. Index Terms: compatibility coefficients, constraint satisfaction, gradient projection, learning, neural networks, nonlinear
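The iterative procedure this abstract refers to can be sketched with the classic multiplicative relaxation labeling update (a common Rosenfeld–Hummel–Zucker-style scheme, assumed here for illustration; the paper's contribution is learning the compatibility coefficients `r`, not the update itself):

```python
import numpy as np

def relaxation_labeling(p, r, iterations=50):
    """Nonlinear relaxation labeling sketch.

    p : (n_objects, n_labels) initial label probabilities (rows sum to 1)
    r : (n_objects, n_labels, n_objects, n_labels) nonnegative
        compatibility coefficients r[i, lam, j, mu]
    """
    for _ in range(iterations):
        # support q_i(lam) = sum_j sum_mu r[i, lam, j, mu] * p_j(mu)
        q = np.einsum('iljm,jm->il', r, p)
        # multiplicative update, renormalized so each row stays a distribution
        p = p * q
        p = p / p.sum(axis=1, keepdims=True)
    return p

# Toy example: two objects, two labels, compatibilities favoring agreement.
r = np.zeros((2, 2, 2, 2))
for i in range(2):
    for l in range(2):
        for j in range(2):
            for m in range(2):
                r[i, l, j, m] = 1.0 if l == m else 0.2

p0 = np.array([[0.9, 0.1],
               [0.6, 0.4]])          # both objects lean toward label 0
p_final = relaxation_labeling(p0, r)  # iterates sharpen toward label 0
```

With agreement-favoring coefficients, the ambiguous second object is pulled toward the label the context supports, which is the "reducing local ambiguities" behavior the abstract describes.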
The Dynamics of Nonlinear Relaxation Labeling Processes
, 1997
Abstract

Cited by 31 (10 self)
We present some new results which definitively explain the behavior of the classical, heuristic nonlinear relaxation labeling algorithm of Rosenfeld, Hummel, and Zucker in terms of the Hummel-Zucker consistency theory and dynamical systems theory. In particular, it is shown that, when a certain symmetry condition is met, the algorithm possesses a Liapunov function which turns out to be (the negative of) a well-known consistency measure. This follows almost immediately from a powerful result of Baum and Eagon developed in the context of Markov chain theory. Moreover, it is seen that most of the essential dynamical properties of the algorithm are retained when the symmetry restriction is relaxed. These properties are also shown to generalize naturally to higher-order relaxation schemes. Some applications and implications of the presented results are finally outlined.
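The Liapunov result stated in the abstract can be checked numerically: with symmetric, nonnegative compatibilities, the average local consistency A(p) = Σ_i Σ_λ p_i(λ) q_i(λ) should be non-decreasing along the algorithm's trajectory (so -A serves as a Liapunov function). A minimal sketch, assuming the standard multiplicative update and the symmetry condition r[i,λ,j,μ] = r[j,μ,i,λ]:

```python
import numpy as np

def support(p, r):
    # q_i(lam) = sum_j sum_mu r[i, lam, j, mu] * p_j(mu)
    return np.einsum('iljm,jm->il', r, p)

def average_local_consistency(p, r):
    # A(p) = sum_i sum_lam p_i(lam) * q_i(lam); its negative is the
    # Liapunov function when r is symmetric and nonnegative
    return float(np.sum(p * support(p, r)))

def rhz_step(p, r):
    # one multiplicative relaxation step with row renormalization
    q = support(p, r)
    p = p * q
    return p / p.sum(axis=1, keepdims=True)

# Random symmetric nonnegative compatibilities and a random assignment.
rng = np.random.default_rng(0)
r = rng.random((3, 2, 3, 2))
r = 0.5 * (r + r.transpose(2, 3, 0, 1))  # enforce r[i,l,j,m] == r[j,m,i,l]
p = rng.random((3, 2))
p = p / p.sum(axis=1, keepdims=True)

consistency = []
for _ in range(10):
    consistency.append(average_local_consistency(p, r))
    p = rhz_step(p, r)
consistency.append(average_local_consistency(p, r))
```

The monotone growth of `consistency` is exactly the Baum-Eagon growth-transform property the abstract invokes: A(p) is a homogeneous polynomial with nonnegative coefficients, and the update is proportional to its partial derivatives.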
A Computational Theory of Visual Word Recognition
, 1988
Abstract

Cited by 15 (6 self)
A computational theory of the visual recognition of words of text is developed. The theory, based on previous studies of how people read, includes three stages: hypothesis generation, hypothesis testing, and global contextual analysis. Hypothesis generation uses gross visual features, such as those that could be extracted from the peripheral presentation of a word, to provide expectations about word identity. Hypothesis testing integrates the information determined by hypothesis generation with more detailed features that are extracted from the word image. Global contextual analysis provides syntactic and semantic information that influences hypothesis testing.
Algorithmic realization of the computational theory also consists of three stages. Hypothesis generation is implemented by extracting simple features from an input word and using those features to find a set of dictionary words with those features in common. Hypothesis testing uses this set of words to drive further selective image analysis that matches the input to one of the members of this set. This is done with a tree of feature tests that can be executed in several different ways to recognize an input word. Global contextual analysis is implemented with a process that uses knowledge of typical word-class transitions to improve the performance of the hypothesis testing stage. This is executable in parallel with hypothesis testing.
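The hypothesis-generation stage can be sketched as a dictionary indexed by a gross feature signature. The signature used below (word length plus an ascender/descender "envelope") is a hypothetical stand-in for the paper's actual feature set, chosen only to illustrate the mechanism:

```python
# Letters whose shapes rise above or drop below the x-height band.
ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")

def envelope(word):
    # Crude shape code: 'a' ascender, 'd' descender, 'x' x-height letter.
    # Illustrative only; the real system extracts features from the image.
    return "".join("a" if c in ASCENDERS else
                   "d" if c in DESCENDERS else "x"
                   for c in word)

def build_index(dictionary):
    # Group dictionary words sharing the same gross signature.
    index = {}
    for w in dictionary:
        index.setdefault((len(w), envelope(w)), []).append(w)
    return index

def generate_hypotheses(index, observed_word):
    # Peripheral vision yields only the gross signature, not the letters;
    # the returned candidate set drives later selective hypothesis testing.
    return index.get((len(observed_word), envelope(observed_word)), [])

dictionary = ["the", "cat", "hat", "tie", "car", "eat"]
index = build_index(dictionary)
candidates = generate_hypotheses(index, "cat")  # "cat" and "eat" share a shape
```

Hypothesis testing would then apply a tree of finer feature tests to discriminate among the few candidates, rather than exhaustively recognizing every character.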
This methodology is in sharp contrast to conventional machine reading algorithms, which usually segment a word into characters and recognize the individual characters; a word decision is thus arrived at as a composite of character decisions. The algorithm presented here avoids the segmentation stage, does not require an exhaustive analysis of each character, and thus is not a character recognition algorithm.
Statistical projections show the viability of all three stages of the proposed approach. Experiments with images of text show that the methodology performs well in difficult situations, such as touching and overlapping characters.