Results 1 - 5 of 5
Learning compatibility coefficients for relaxation labeling processes
IEEE Trans. Pattern Anal. Machine Intell., 1994
Abstract

Cited by 41 (5 self)
Relaxation labeling processes have been widely used in many different domains, including image processing, pattern recognition, and artificial intelligence. They are iterative procedures that aim at reducing local ambiguities and achieving global consistency through a parallel exploitation of contextual information, which is quantitatively expressed in terms of a set of “compatibility coefficients.” The problem of determining compatibility coefficients has received considerable attention in the past, and many heuristic, statistically based methods have been suggested. In this paper, we propose a rather different viewpoint on this problem: we derive the coefficients by attempting to optimize the performance of the relaxation algorithm over a sample of training data. No statistical interpretation is given: compatibility coefficients are simply interpreted as real numbers for which performance is optimal. Experimental results over a novel application of relaxation are given, which prove the effectiveness of the proposed approach.
Index Terms: Compatibility coefficients, constraint satisfaction, gradient projection, learning, neural networks, nonlinear
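The iterative procedure this abstract describes can be sketched in a few lines. The update rule below is the standard Rosenfeld-Hummel-Zucker nonlinear relaxation step (not this paper's learning method); the toy compatibility values are made up for illustration.

```python
import numpy as np

def relaxation_step(p, r):
    """One Rosenfeld-Hummel-Zucker style nonlinear relaxation update.

    p : (n, m) array; p[i, l] is the probability that object i has label l.
    r : (n, m, n, m) array of compatibility coefficients.
    """
    q = np.einsum('iljk,jk->il', r, p)          # contextual support q[i, l]
    new_p = p * (1.0 + q)                       # reward supported labels
    return new_p / new_p.sum(axis=1, keepdims=True)

# Toy problem: two objects, two labels; coefficients favor agreeing labels.
p = np.array([[0.6, 0.4],
              [0.5, 0.5]])
r = np.zeros((2, 2, 2, 2))
for i in range(2):
    for j in range(2):
        if i != j:
            r[i, 0, j, 0] = r[i, 1, j, 1] = 1.0    # same label: compatible
            r[i, 0, j, 1] = r[i, 1, j, 0] = -1.0   # different: incompatible
for _ in range(50):
    p = relaxation_step(p, r)
# The slight initial bias of object 0 pulls both objects toward label 0.
print(np.round(p, 3))
```

The learning problem the paper addresses is then to choose the entries of `r` so that iterating this step performs well on training data, rather than fixing them by statistical heuristics.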
The Dynamics of Nonlinear Relaxation Labeling Processes
1997
Abstract

Cited by 32 (10 self)
We present some new results which definitively explain the behavior of the classical, heuristic nonlinear relaxation labeling algorithm of Rosenfeld, Hummel, and Zucker in terms of the Hummel-Zucker consistency theory and dynamical systems theory. In particular, it is shown that, when a certain symmetry condition is met, the algorithm possesses a Liapunov function which turns out to be (the negative of) a well-known consistency measure. This follows almost immediately from a powerful result of Baum and Eagon developed in the context of Markov chain theory. Moreover, it is seen that most of the essential dynamical properties of the algorithm are retained when the symmetry restriction is relaxed. These properties are also shown to generalize naturally to higher-order relaxation schemes. Finally, some applications and implications of the presented results are outlined.
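The consistency measure in question is the average local consistency A(p) = Σ_{i,λ} p_i(λ) q_i(λ). A quick numeric check of the Baum-Eagon growth transform on a random symmetric, nonnegative compatibility tensor (all values here are randomly generated for illustration) shows A(p) is nondecreasing along the trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3                                    # 4 objects, 3 labels (toy sizes)

# Random nonnegative compatibilities, symmetrized so that
# r[i, l, j, mu] == r[j, mu, i, l] (the symmetry condition in the text).
r = rng.random((n, m, n, m))
r = 0.5 * (r + r.transpose(2, 3, 0, 1))

p = rng.random((n, m))
p /= p.sum(axis=1, keepdims=True)              # one probability vector per object

def consistency(p):
    """Average local consistency A(p) = sum over i, l of p[i,l] * q[i,l]."""
    q = np.einsum('iljk,jk->il', r, p)
    return float((p * q).sum())

values = []
for _ in range(20):
    values.append(consistency(p))
    q = np.einsum('iljk,jk->il', r, p)
    p = p * q / (p * q).sum(axis=1, keepdims=True)  # Baum-Eagon growth transform

# A(p) never decreases along the trajectory (the Liapunov property).
assert all(b >= a - 1e-12 for a, b in zip(values, values[1:]))
print(f"{values[0]:.4f} -> {values[-1]:.4f}")
```

This works because A(p) is a homogeneous polynomial with nonnegative coefficients in the label probabilities, which is exactly the setting of the Baum-Eagon inequality the abstract cites.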
A Computational Theory of Visual Word Recognition
1988
Abstract

Cited by 15 (6 self)
A computational theory of the visual recognition of words of text is developed. The theory, based on previous studies of how people read, includes three stages: hypothesis generation, hypothesis testing, and global contextual analysis. Hypothesis generation uses gross visual features, such as those that could be extracted from the peripheral presentation of a word, to provide expectations about word identity. Hypothesis testing integrates the information determined by hypothesis generation with more detailed features that are extracted from the word image. Global contextual analysis provides syntactic and semantic information that influences hypothesis testing.
Algorithmic realization of the computational theory also consists of three stages. Hypothesis generation is implemented by extracting simple features from an input word and using those features to find a set of dictionary words with those features in common. Hypothesis testing uses this set of words to drive further selective image analysis that matches the input to one of the members of this set. This is done with a tree of feature tests that can be executed in several different ways to recognize an input word. Global contextual analysis is implemented with a process that uses knowledge of typical word-class transitions to improve the performance of the hypothesis testing stage. This is executable in parallel with hypothesis testing.
This methodology is in sharp contrast to conventional machine reading algorithms, which usually segment a word into characters and recognize the individual characters, so that a word decision is arrived at as a composite of character decisions. The algorithm presented here avoids the segmentation stage, does not require an exhaustive analysis of each character, and thus is not a character recognition algorithm.
Statistical projections show the viability of all three stages of the proposed approach. Experiments with images of text show that the methodology performs well in difficult situations, such as touching and overlapping characters.
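The hypothesis-generation stage can be illustrated as a dictionary lookup keyed on gross features. The ascender/descender "shape envelope" used below is a stand-in feature of our choosing; the abstract does not specify the paper's actual feature set.

```python
# Hypothesis generation in the spirit of the abstract: map gross visual
# features of a word to the set of dictionary words sharing them.
ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")

def envelope(word):
    """Shape code per letter: A = ascender, D = descender, x = x-height."""
    return "".join(
        "A" if c in ASCENDERS else "D" if c in DESCENDERS else "x"
        for c in word.lower()
    )

def generate_hypotheses(observed_envelope, dictionary):
    """Return the candidate set passed on to detailed hypothesis testing."""
    return [w for w in dictionary if envelope(w) == observed_envelope]

dictionary = ["cat", "rat", "dog", "sun", "mat", "ant"]
print(generate_hypotheses("xxA", dictionary))  # ['cat', 'rat', 'mat', 'ant']
```

Hypothesis testing would then discriminate among the surviving candidates with finer feature tests, which is why whole-word features alone need not identify the word uniquely.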
Breaking Substitution Ciphers Using a Relaxation Algorithm
Abstract
In this paper, a completely automatic method for breaking substitution ciphers is presented, based on relaxation methods. Relaxation algorithms have recently been introduced in image processing [4, 6]. They are iterative parallel classification algorithms, where every element in a graph structure tries to estimate its class membership probabilities based on those of its neighbors. The process is iterated until a satisfactory classification is achieved. A new formulation of relaxation [4, 5], based on probability theory, paves the way for more general applications of relaxation. The use of relaxation for domains other than image classification is demonstrated in this paper. Section 2 describes the relaxation approach to probabilistic graph labeling; Section 4 discusses the application of this approach to substitution ciphers; and Section 5 summarizes the results obtained. Substitution ciphers are codes in which each letter of the alphabet has one fixed substitute, and the word divisions do not change. In this paper the problem of breaking substitution ciphers is represented as a probabilistic labeling problem. Every code letter is assigned probabilities of representing plaintext letters. These probabilities are updated in parallel for all code letters, using joint letter probabilities. Iterating the updating scheme results in improved estimates that finally lead to breaking the cipher. The method is applied successfully to two examples.
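The parallel updating scheme described here can be shown on a miniature scale. The sketch below uses a two-letter alphabet and invented bigram statistics (the paper works with full English text and richer statistics); it is an illustration of the update style, not the paper's exact formulation.

```python
import numpy as np

alphabet = "ab"
# Toy "plaintext" bigram probabilities: the pair 'ab' is the most common.
B = np.array([[0.1, 0.5],
              [0.3, 0.1]])

ciphertext = "xyxyxy"                 # true key: x -> a, y -> b
symbols = sorted(set(ciphertext))
idx = {s: i for i, s in enumerate(symbols)}
n, m = len(symbols), len(alphabet)

# How often cipher symbol s is immediately followed by symbol t.
pair_counts = np.zeros((n, n))
for s, t in zip(ciphertext, ciphertext[1:]):
    pair_counts[idx[s], idx[t]] += 1

p = np.full((n, m), 1.0 / m)          # every assignment starts fully ambiguous
for _ in range(50):
    # Support for "s stands for letter l" from right and left neighbors,
    # weighted by the joint (bigram) letter probabilities.
    q = pair_counts @ p @ B.T + pair_counts.T @ p @ B
    p *= q
    p /= p.sum(axis=1, keepdims=True)  # updated in parallel for all symbols

key = {s: alphabet[p[idx[s]].argmax()] for s in symbols}
print(key)  # {'x': 'a', 'y': 'b'}
```

Iterating the update sharpens the probability vectors until each cipher symbol commits to one plaintext letter, which is the sense in which the cipher is "broken".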
Maximal Derivations for Probabilistic Strings in Stochastic Languages
Abstract
A probabilistic string is a sequence of probability vectors. Each vector specifies a probability distribution over the possible symbols at its location in the string. In a probabilistic grammar, a probability is assigned to every derivation. Given a probabilistic string and a probabilistic grammar, the concept of a maximal derivation is defined. Algorithms for finding the maximal derivation for probabilistic finite-state and linear grammars are given. The case where a waveform can be segmented into several possible probabilistic strings is also considered.
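For the finite-state case, one natural reading of "maximal derivation" is a Viterbi-style dynamic program that jointly maximizes the derivation probability and the per-position symbol probabilities. The automaton, its weights, and the probability vectors below are all invented for illustration; they are not the paper's notation or examples.

```python
import numpy as np

symbols = "ab"
n_states = 2
# trans[q1, s, q2]: probability of emitting symbol s in state q1, moving to q2.
trans = np.zeros((2, 2, 2))
trans[0, 0, 0] = 0.3   # state 0, emit 'a', stay in 0
trans[0, 1, 1] = 0.7   # state 0, emit 'b', go to 1
trans[1, 0, 0] = 0.6   # state 1, emit 'a', go to 0
trans[1, 1, 1] = 0.4   # state 1, emit 'b', stay in 1
start, final = 0, 1

# The probabilistic string: one distribution over {a, b} per position.
pstring = np.array([[0.9, 0.1],    # position 0: probably 'a'
                    [0.2, 0.8],    # position 1: probably 'b'
                    [0.5, 0.5]])   # position 2: ambiguous

T = len(pstring)
best = np.full((T + 1, n_states), -np.inf)   # best log-score per (time, state)
best[0, start] = 0.0
back = np.zeros((T, n_states, 2), dtype=int)  # backpointer: (prev state, symbol)

for t in range(T):
    for q1 in range(n_states):
        if best[t, q1] == -np.inf:
            continue
        for s in range(len(symbols)):
            for q2 in range(n_states):
                w = trans[q1, s, q2] * pstring[t, s]
                if w == 0.0:
                    continue
                score = best[t, q1] + np.log(w)
                if score > best[t + 1, q2]:
                    best[t + 1, q2] = score
                    back[t, q2] = (q1, s)

# Trace back the maximal derivation that ends in the final state.
state, out = final, []
for t in range(T - 1, -1, -1):
    q1, s = back[t, state]
    out.append(symbols[s])
    state = q1
result = "".join(reversed(out))
print(result)  # abb
```

Note how the ambiguous third position is resolved by the grammar: the derivation constraint (ending in the final state, with 'b' the likelier emission there once 'b' has been read) picks the symbol, which is the point of combining the string's distributions with the grammar's probabilities.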