Results 1–10 of 16
Embedding Gestalt Laws in Markov Random Fields: a theory for shape modeling and perceptual organization
, 1999
Abstract

Cited by 65 (9 self)
The goal of this paper is to study a mathematical framework of 2D object shape modeling and learning for middle-level vision problems, such as image segmentation and perceptual organization. For this purpose, we pursue generic shape models which characterize the most common features of 2D object shapes. In this paper, shape models are learned from observed natural shapes based on a minimax entropy learning theory (Zhu and Mumford 1997; Zhu, Wu and Mumford 1997) [31, 32]. The learned shape models are Gibbs distributions defined on Markov random fields (MRFs). The neighborhood structures of these MRFs correspond to Gestalt laws: colinearity, cocircularity, proximity, parallelism, and symmetry. Thus both contour-based and region-based features are accounted for. Stochastic Markov chain Monte Carlo (MCMC) algorithms are proposed for learning and model verification. Furthermore, this paper provides a quantitative measure for the so-called nonaccidental statistics, and thus justifies some empi...
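The MCMC machinery this abstract mentions can be illustrated with a toy Metropolis sampler for a Gibbs distribution p(x) ∝ exp(−βE(x)). The contour parameterization (radii of a closed polar curve) and the smoothness energy below are invented stand-ins for the paper's learned MRF potentials, not its actual models:

```python
import math
import random

def metropolis_gibbs(energy, x0, proposal, steps=20000, beta=5.0, seed=0):
    """Metropolis sampler for a Gibbs distribution p(x) ~ exp(-beta * E(x))."""
    rng = random.Random(seed)
    x, e = list(x0), energy(x0)
    for _ in range(steps):
        y = proposal(x, rng)
        ey = energy(y)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if ey <= e or rng.random() < math.exp(-beta * (ey - e)):
            x, e = y, ey
    return x, e

def smoothness_energy(r):
    # Toy stand-in for a learned shape potential: penalize second
    # differences of the radii of a closed polar contour.
    n = len(r)
    return sum((r[(i - 1) % n] - 2.0 * r[i] + r[(i + 1) % n]) ** 2 for i in range(n))

def perturb(r, rng):
    # Symmetric proposal: jitter one randomly chosen radius.
    y = list(r)
    i = rng.randrange(len(y))
    y[i] += rng.gauss(0.0, 0.05)
    return y

rough = [1.0 + 0.3 * (-1) ** i for i in range(16)]  # jagged initial contour
smooth, e = metropolis_gibbs(smoothness_energy, rough, perturb)
# After sampling, the contour's energy should be far below the jagged start.
print(e < smoothness_energy(rough))
```

Swapping `smoothness_energy` for a learned potential and `perturb` for contour-specific proposals recovers the general shape-sampling recipe the abstract describes.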
An algebra of human concept learning
 Journal of Mathematical Psychology
, 2006
Abstract

Cited by 15 (5 self)
An important element of learning from examples is the extraction of patterns and regularities from data. This paper investigates the structure of patterns in data defined over discrete features, i.e. features with two or more qualitatively distinct values. Any such pattern can be algebraically decomposed into a spectrum of component patterns, each of which is a simpler or more atomic “regularity.” Each component regularity involves a certain number of features, referred to as its degree. Regularities of lower degree represent simpler or more coarse patterns in the original pattern, while regularities of higher degree represent finer or more idiosyncratic patterns. The full spectral breakdown of a pattern into component regularities of minimal degree, referred to as its power series, expresses the original pattern in terms of the regular rules or patterns it obeys, amounting to a kind of “theory” of the pattern. The number of regularities at various degrees necessary to represent the pattern is tabulated in its power spectrum, which expresses how much of a pattern’s structure can be explained by regularities of various levels of complexity. A weighted mean of the pattern’s spectral power gives a useful numeric summary of its overall complexity, called its algebraic complexity. The basic theory of algebraic decomposition is extended in several ways, including algebraic accounts of the typicality of individual objects within concepts, and estimation of the power series from noisy data. Finally some relations between these algebraic quantities and empirical data are discussed.
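As a rough illustration of degree and power spectrum (not Feldman's exact algebra), one can read a degree-d regularity as an assignment to d features that no positive example exhibits, minimal in the sense that no lower-degree pattern already rules it out:

```python
from itertools import combinations, product

def matches(example, pat):
    # Does the example (a tuple of feature values) satisfy the pattern?
    return all(example[f] == v for f, v in pat.items())

def contains(pat, smaller):
    # Is the smaller pattern a sub-assignment of pat?
    return all(pat.get(f) == v for f, v in smaller.items())

def power_spectrum(positives, n_features):
    """Count minimal 'forbidden patterns' of each degree d: assignments
    to d features that no positive example exhibits and that are not
    already implied by a lower-degree forbidden pattern. (An illustrative
    reading of degree and spectrum, not the paper's exact construction.)"""
    forbidden, spectrum = [], {}
    for d in range(1, n_features + 1):
        count = 0
        for feats in combinations(range(n_features), d):
            for vals in product([0, 1], repeat=d):
                pat = dict(zip(feats, vals))
                if any(contains(pat, s) for s in forbidden):
                    continue  # a coarser regularity already rules this out
                if not any(matches(x, pat) for x in positives):
                    forbidden.append(pat)
                    count += 1
        spectrum[d] = count
    return spectrum

print(power_spectrum([(1, 1)], 2))          # → {1: 2, 2: 0}
print(power_spectrum([(1, 0), (0, 1)], 2))  # → {1: 0, 2: 2}
```

A conjunctive concept is fully captured at degree 1, while XOR yields regularities only at degree 2, matching the idea that higher degree means finer, more idiosyncratic structure.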
Identifying the perceptual dimensions of visual complexity in scenes
 Proceedings of the 26th Annual Meeting of the Cognitive Science Society
, 2004
Abstract

Cited by 12 (2 self)
Scenes are composed of numerous objects, textures and colors which are arranged in a variety of spatial layouts. This presents the question of how visual complexity is represented by a cognitive system. In this paper, we aim to study the representation of visual complexity for real-world scene images. Is visual complexity a perceptual property simple enough so that it can be compressed along a unique perceptual dimension? Or is visual complexity better represented by a multidimensional space? Thirty-four participants performed a hierarchical grouping task in which they divided scenes into successive groups of decreasing complexity, describing the criteria they used at each stage. Half of the participants were told that complexity was related to the structure of the image whereas the instructions in the other half were unspecified. Results are consistent with a multidimensional representation of visual complexity (quantity of objects, clutter, openness, symmetry, organization, variety of colors) with task constraints modulating the shape of the complexity space (e.g. the weight of a specific dimension).
Qualitative Probabilities for Image Interpretation
Abstract

Cited by 9 (1 self)
Two basic problems in image interpretation are: a) determining which interpretations are the most plausible among many possibilities; and b) controlling the search for plausible interpretations. We address these issues using a Bayesian approach, with the plausibility ordering and search pruning based on the posterior probabilities of interpretations. However, due to the need for detailed quantitative prior probabilities and the need to evaluate complex integrals over various conditional distributions, a full Bayesian approach is currently impractical except in tightly constrained domains. To circumvent these difficulties we introduce the notion of qualitative probabilistic analysis. In particular, given spatial and contrast resolution parameters, we consider only the asymptotic order of the posterior probability for any interpretation as these resolutions are made finer. We introduce this approach for a simple card-world domain, and present computational results for blocks-world ima...
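The asymptotic-order idea can be sketched as follows: if each interpretation's posterior scales as ε^k for resolution parameter ε, where k counts the coincidences the interpretation must assume, then as ε → 0 only the exponents matter and no numeric priors are needed. The interpretation names and exponent values below are hypothetical:

```python
def qualitative_preference(exponents):
    """Compare interpretations by the asymptotic order of their posterior,
    posterior(I) ~ eps**k[I] as the resolution eps -> 0: the smallest
    exponent dominates, so only the exponents need to be compared."""
    return min(exponents, key=exponents.get)

# Hypothetical coincidence counts for a T-junction in a line drawing:
# occlusion assumes no coincidence; an exact accidental alignment of two
# independent contours assumes two.
interps = {"occlusion": 0, "accidental alignment": 2}
print(qualitative_preference(interps))  # → occlusion
```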
Bias toward regular form in mental shape spaces
 Journal of Experimental Psychology: Human Perception & Performance
, 2000
Abstract

Cited by 6 (2 self)
The distribution of figural "goodness" in 2 mental shape spaces, the space of triangles and the space of quadrilaterals, was examined. In Experiment 1, participants were asked to rate the typicality of visually presented triangles and quadrilaterals (perceptual task). In Experiment 2, participants were asked to draw triangles and quadrilaterals by hand (production task). The rated typicality of a particular shape and the probability that that shape was generated by participants were each plotted as a function of shape parameters, yielding estimates of the subjective distribution of shape goodness in shape space. Compared with neutral distributions of random shapes in the same shape spaces, these distributions showed a marked bias toward regular forms (equilateral triangles and squares). Such psychologically modal shapes apparently represent ideal forms that maximize the perceptual preference for regularity and symmetry. Shape classification, like many classification tasks, can be regarded as a decision among somewhat fuzzy categories. In
Formation of visual “objects” in the early computation of spatial relations
Abstract

Cited by 5 (1 self)
Perceptual grouping is the process by which elements in the visual image are aggregated into larger and more complex structures, i.e., “objects.” This paper reports a study of the spatial factors and time-course of the development of objects over the course of the first few hundred milliseconds of visual processing. The methodology uses the now well-established idea of an “object benefit” for certain kinds of tasks (here, faster within-object than between-objects probe comparisons) to test what the visual system in fact treats as an object at each point during processing. The study tested line segment pairs in a wide variety of spatial configurations at a range of exposure times, in each case measuring the strength of perceptual grouping as reflected in the magnitude of the object benefit. Factors tested included nonaccidental properties such as collinearity, cotermination, and parallelism; contour relatability; Gestalt factors such as symmetry and skew symmetry, and several others, all tested at fine (25 msec) time-slices over the course of processing. The data provide detailed information about the comparative strength of these factors in inducing grouping at each point in processing. The result is a vivid picture of the chronology of object formation, as objects progressively coalesce, with fully bound visual objects completed by about 200 msec of processing. The organization of the initially inchoate visual field into coherent units or “objects,” called perceptual grouping or binding, is of fundamental importance in visual perception, influencing the perception of lightness (Adelson, 1993; Gilchrist, 1977), motion (Shimojo, Silverman, & Nakayama, 1988), and recognition of objects (Biederman, 1987). Much has been learned about the grouping factors originally identified by the Gestaltists, such as spatial
Perceptual Grouping by Selection of a Logically Minimal Model
, 2003
Abstract

Cited by 4 (2 self)
This paper presents a logic-based approach to grouping and perceptual organization, called Minimal Model theory, and presents efficient methods for computing interpretations in this framework. Grouping interpretations are first defined as logical structures, built out of atomic qualitative scene descriptors ("regularities") that are derived from considerations of nonaccidentalness. These interpretations can then be partially ordered by their degree of regularity or constraint (measured numerically by their logical depth). The Genericity Constraint (the principle that interpretations should minimize coincidences in the observed configuration) dictates that the preferred interpretation will be the minimum in this partial order, i.e. the interpretation with maximum depth. This maximum-depth interpretation, also called the minimal model or minimal interpretation, is in a sense the "simplest" (algebraically minimal) interpretation available of the image configuration. As a side-effect, the "most salient" or most structured part of the scene can be identified, as the maximum-depth subtree of the minimal model. An efficient (O(n )) method for computing the minimal interpretation is presented, along with examples. Computational experiments show that the algorithm performs well under a wide range of parameter settings.
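A minimal sketch of selecting the maximum-depth (minimal-model) interpretation, assuming interpretations are represented simply as sets of regularities ordered by strict containment; the paper's logical-depth measure and its efficient algorithm are richer than this:

```python
def minimal_model(interps):
    """Return the maximum-depth interpretation: one whose regularity set
    is not strictly contained in any other's, with ties broken by depth,
    read here simply as the number of regularities asserted. (A sketch,
    not the paper's algorithm.)"""
    maximal = [name for name in interps
               if not any(interps[other] > interps[name] for other in interps)]
    return max(maximal, key=lambda name: len(interps[name]))

# Hypothetical regularity sets for a pair of image contours:
scene = {
    "generic pair":   frozenset(),
    "parallel pair":  frozenset({"parallel"}),
    "symmetric pair": frozenset({"parallel", "mirror-symmetric"}),
}
print(minimal_model(scene))  # → symmetric pair
```

The most constrained interpretation consistent with the configuration wins, mirroring the Genericity Constraint's preference for minimizing coincidences.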
Towards the Integration of Perceptual Organization and Visual Attention: The Inferential Attentional Allocation Model
 Carleton
, 2001
Abstract

Cited by 2 (0 self)
Object-based models of visual attention purport to explain why it is easier to process information within one object or perceptual group than across two or more groups. Perceptual groups are generally defined in terms of Gestalt grouping principles. These models of attention have been used to explain the phenomenon of cognitive tunneling within Heads-Up Displays (HUDs), on the assumption that the symbology of a Heads-Up Display (HUD) in a cockpit forms a single perceptual group and the outside scene forms another. Despite extensive empirical support, object-based models have various shortcomings. In particular, the use of Gestalt grouping principles to define the notion of objects does not allow for an operational measure of what an object is to the visual system. Also, the Gestalt principles do not allow for a systematic distinction between spatial and object-based mechanisms of attention. Finally, it is generally assumed that Gestalt grouping occurs preattentively, whereas there is evidence that perceptual grouping requires attentional resources. The proposed line of research aims to develop an account of object-based attention that does not rely on these premises. Rather, it is assumed that the cost of dividing attention between
Formal Constraints on Cognitive Interpretations of Causal
 In IEEE Workshop on Architectures for Semiotic Modeling and Situation Analysis in Large Complex Systems
, 1995
Abstract

Cited by 1 (0 self)
Human observers are remarkably proficient at extracting the causal structure of both natural and artifactual worlds on the basis of extremely impoverished observations, but a rigorous theory of how they achieve this is elusive. This paper investigates the formal structure of this problem, collapsing the distinction between human and automatic inference about complex systems, and considering an abstract observer in an abstract world. We introduce a structure algebra, a reduced description logic that is designed to be biased towards the recovery of causal structure in much the same way as are human observers. We illustrate how the algebra captures human intuitions about the "natural" interpretation of certain canonical types of observed property covariation. Finally we propose a formal criterion for the adequacy of a given description language to capture algebraically the "true" causal structure of a particular closed world.
Simplicity Versus Likelihood Principles in Perception
Abstract
Discussions of the foundations of perceptual inference have often centered on 2 governing principles, the likelihood principle and the simplicity principle. Historically, these principles have usually been seen as opposed, but contemporary statistical (e.g., Bayesian) theory tends to see them as consistent, because for a variety of reasons simpler models (i.e., those with fewer dimensions or free parameters) make better predictors than more complex ones. In perception, many interpretation spaces are naturally hierarchical, meaning that they consist of a set of mutually embedded model classes of various levels of complexity, including simpler (lower dimensional) classes that are special cases of more complex ones. This article shows how such spaces can be regarded as algebraic structures, for example, as partial orders or lattices, with interpretations ordered in terms of dimensionality. The natural inference rule in such a space is a kind of simplicity rule: Among all interpretations qualitatively consistent with the image, draw the one that is lowest in the partial order, called the maximum-depth interpretation. This interpretation also maximizes the Bayesian posterior under certain simplifying assumptions, consistent with a unification of simplicity and likelihood principles. Moreover, the algebraic approach brings out the compositional structure inherent in such spaces, showing how perceptual interpretations are composed from a lexicon of primitive perceptual descriptors.
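A minimal sketch of the maximum-depth rule over a hierarchy of mutually embedded model classes ordered by dimensionality; the class names and dimension counts below are illustrative assumptions, not values from the article:

```python
def maximum_depth_interpretation(dims, consistent):
    """Among model classes qualitatively consistent with the image,
    return the one lowest in the partial order, i.e. with the fewest
    free parameters (dimensions)."""
    candidates = [c for c in dims if consistent(c)]
    return min(candidates, key=dims.get)

# Hypothetical nested quadrilateral classes with illustrative dimension
# counts: square ⊂ rectangle ⊂ parallelogram ⊂ general quadrilateral.
dims = {"square": 4, "rectangle": 5, "parallelogram": 6, "quadrilateral": 8}
observed_fits = {"square": False, "rectangle": True,
                 "parallelogram": True, "quadrilateral": True}
print(maximum_depth_interpretation(dims, observed_fits.__getitem__))  # → rectangle
```

The rule picks the simplest class the image does not rule out, which is how the simplicity preference can coincide with maximizing the Bayesian posterior: lower-dimensional classes concentrate their probability mass on fewer configurations.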