Results 1 - 6 of 6
Optimal perceptual inference
 In CVPR, Washington DC, 1983
Abstract

Cited by 91 (14 self)
When a vision system creates an interpretation of some input data, it assigns truth values or probabilities to internal hypotheses about the world. We present a nondeterministic method for assigning truth values that avoids many of the problems encountered by existing relaxation methods. Instead of representing probabilities with real numbers, we use a more direct encoding in which the probability associated with a hypothesis is represented by the probability that it is in one of two states, true or false. We give a particular nondeterministic operator, based on statistical mechanics, for updating the truth values of hypotheses. The operator ensures that the probability of discovering a particular combination of hypotheses is a simple function of how good that combination is. We show that there is a simple relationship between this operator and Bayesian inference, and we describe a learning rule which allows a parallel system to converge on a set of weights that optimizes its perceptual inferences.
Introduction
One way of interpreting images is to formulate hypotheses about parts or aspects of the image and then decide which of these hypotheses are likely to be correct. The probability that each hypothesis is correct is determined partly by its fit to the image and partly by its fit to other hypotheses that are taken to be correct, so the truth value of an individual hypothesis cannot be decided in isolation. One method of searching for the most plausible combination of hypotheses is to use a relaxation process in which a probability is associated with each hypothesis, and the probabilities are then iteratively modified on the basis of the fit to the image and the known relationships between hypotheses. 
An attractive property of relaxation methods is that they can be implemented in parallel hardware, where one computational unit is used for each possible hypothesis and the interactions between hypotheses are implemented by direct hardware connections between the units. Many variations of the basic relaxation idea have been suggested. However, all the current methods suffer from one or more of the following problems:
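The nondeterministic update operator the abstract describes can be sketched as follows, assuming the standard Boltzmann-machine-style formulation: each hypothesis is a binary unit, pairwise weights encode compatibility between hypotheses, and a bias term encodes the fit to the image. The energy function, names, and temperature parameter here are illustrative assumptions, not details taken from the paper itself.

```python
import math
import random

def energy_gap(state, weights, bias, i):
    """Energy difference between unit i being true (1) and false (0),
    given the current states of all other units."""
    return bias[i] + sum(weights[i][j] * state[j]
                         for j in range(len(state)) if j != i)

def update_unit(state, weights, bias, i, temperature=1.0):
    """Stochastically set unit i true with probability sigmoid(gap / T).

    At equilibrium, the probability of a full combination of hypotheses
    is proportional to exp(-E/T): a simple function of how good that
    combination is, as the abstract puts it.
    """
    gap = energy_gap(state, weights, bias, i)
    p_true = 1.0 / (1.0 + math.exp(-gap / temperature))
    state[i] = 1 if random.random() < p_true else 0
    return state
```

Repeatedly applying `update_unit` to randomly chosen units plays the role of the iterative relaxation sweep described above, with stochastic binary decisions replacing real-valued probability updates.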
Context-Based Vision: Recognizing Objects Using Information From Both 2D And 3D Imagery
 IEEE PAMI, 1991
Abstract

Cited by 69 (1 self)
This paper describes results from an ongoing project concerned with recognizing objects in complex scene domains, and especially in the domain that includes the natural outdoor world. Traditional machine recognition paradigms assume either (1) that all objects of interest are definable by a relatively small number of explicit shape models, or (2) that all objects of interest have characteristic, locally measurable features. The failure of both assumptions in a complex domain such as the natural outdoor world has a dramatic impact on the form of an acceptable architecture for an object recognition system. In our work, we make the use of contextual information a central issue, and explicitly design a system to identify and use context as an integral part of recognition. In so doing, we provide a new paradigm for visual recognition that eliminates the traditional dependence on stored geometric models and universal image partitioning algorithms. This paradigm combines the results of many s...
Physics-Based Segmentation of Complex Objects Using Multiple Hypotheses of Image Formation
, 1997
Abstract

Cited by 15 (1 self)
... this paper, for governmental purposes, is acknowledged ... be described by a subspace of the general models; each ...
A methodology for the development of general knowledge-based vision systems
 In Proceedings of the IEEE Workshop on Principles of Knowledge-Based Systems, 1984
Abstract

Cited by 10 (1 self)
This excerpt is provided, in screen-viewable form, for personal use only by
Segmentation and Interpretation Using Multiple Physical Hypotheses of Image Formation
, 1996
Abstract

Cited by 3 (2 self)
One of the first, and most important, tasks in single image analysis is segmentation: finding groups of pixels in an image that "belong" together. A segmentation specifies regions of an image that we can reason about and analyze. Having an accurate segmentation is a prerequisite for vision tasks such as shape-from-shading. A general-purpose segmentation algorithm, however, does not currently exist. Furthermore, the output of many segmentation algorithms is simply a set of pixel groupings; no attempt is made to provide a physical description of or make a connection between the image regions and objects in the scene. Physics-based segmentation algorithms are based upon identifying coherent regions of an image according to a model of object appearance. These models have usually assumed that a scene contains a single material type, restricted forms of illumination, and uniformly colored objects. This work challenges these assumptions by considering multiple physical hypotheses for simple i...
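One of the simplifying assumptions the abstract mentions, uniformly colored objects, can be illustrated with a minimal coherence test: a pixel group is accepted as one region if its pixels are consistent with a single uniformly colored matte surface, i.e. chromaticity stays nearly constant while only brightness (shading) varies. The chromaticity model and the tolerance are assumptions made for this sketch, not details from the paper.

```python
def chromaticity(rgb):
    """Normalize an (r, g, b) triple so that pure brightness (shading)
    changes cancel out, leaving only the color direction."""
    r, g, b = rgb
    total = r + g + b
    if total == 0:
        return (0.0, 0.0)
    return (r / total, g / total)

def is_coherent_uniform_color(pixels, tol=0.02):
    """True if every pixel shares one chromaticity to within tol,
    i.e. the group fits the 'single uniformly colored surface' hypothesis."""
    chroma = [chromaticity(p) for p in pixels]
    mean_r = sum(c[0] for c in chroma) / len(chroma)
    mean_g = sum(c[1] for c in chroma) / len(chroma)
    return all(abs(c[0] - mean_r) <= tol and abs(c[1] - mean_g) <= tol
               for c in chroma)
```

A shaded red surface such as `[(200, 40, 40), (100, 20, 20), (50, 10, 10)]` passes the test despite large brightness variation, while a red pixel next to a green one fails it; testing several such hypotheses per region is the direction the abstract describes.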
Biophysics Department
Abstract
When a vision system creates an interpretation of some input data, it assigns truth values or probabilities to internal hypotheses about the world. We present a nondeterministic method for assigning truth values that avoids many of the problems encountered by existing relaxation methods. Instead of representing probabilities with real numbers, we use a more direct encoding in which the probability associated with a hypothesis is represented by the probability that it is in one of two states, true or false. We give a particular nondeterministic operator, based on statistical mechanics, for updating the truth values of hypotheses. The operator ensures that the probability of discovering a particular combination of hypotheses is a simple function of how good that combination is. We show that there is a simple relationship between this operator and Bayesian inference, and we describe a learning rule which allows a parallel system to converge on a set of weights that optimizes its perceptual inferences.
Introduction
One way of interpreting images is to formulate hypotheses about parts or aspects of the image and then decide which of these hypotheses are likely to be correct. The probability that each hypothesis is correct is determined partly by its fit to the image and partly by its fit to other hypotheses that are taken to be correct, so the truth value of an individual hypothesis cannot be decided in isolation. One method of searching for the most plausible combination of hypotheses is to use a relaxation process in which a probability is associated with each hypothesis, and the probabilities are then iteratively modified on the basis of the fit to the image and the known relationships between hypotheses. 
An attractive property of relaxation methods is that they can be implemented in parallel hardware, where one computational unit is used for each possible hypothesis and the interactions between hypotheses are implemented by direct hardware connections between the units. Many variations of the basic relaxation idea have been suggested. However, all the current methods suffer from one or more of the following problems: