Results 1–10 of 145
Fast approximate energy minimization via graph cuts
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2001
Abstract

Cited by 2132 (62 self)
In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an αβ-swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second ...
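The expansion-move idea above can be illustrated on a toy problem. The paper computes the optimal α-expansion with a single min-cut; the sketch below instead brute-forces the best expansion over a tiny 1-D Potts energy, which is only feasible for a handful of pixels (the cost table and parameter values are hypothetical):

```python
import itertools

def energy(labels, data_cost, lam):
    # data term + Potts smoothness on a 1-D chain of pixels
    e = sum(data_cost[p][l] for p, l in enumerate(labels))
    e += lam * sum(labels[p] != labels[p + 1] for p in range(len(labels) - 1))
    return e

def best_alpha_expansion(labels, alpha, data_cost, lam):
    # Try every way of switching a subset of pixels to label alpha.
    # (The paper finds this optimum with one min-cut; brute force is
    # only viable for tiny problems and serves purely as illustration.)
    best, best_e = list(labels), energy(labels, data_cost, lam)
    for mask in itertools.product([False, True], repeat=len(labels)):
        cand = [alpha if m else l for m, l in zip(mask, labels)]
        e = energy(cand, data_cost, lam)
        if e < best_e:
            best, best_e = cand, e
    return best, best_e

def expansion_move_minimize(data_cost, num_labels, lam):
    # start from the per-pixel data-term minimizer, then cycle over labels
    labels = [min(range(num_labels), key=lambda l: d[l]) for d in data_cost]
    improved = True
    while improved:
        improved = False
        for alpha in range(num_labels):
            new, e = best_alpha_expansion(labels, alpha, data_cost, lam)
            if e < energy(labels, data_cost, lam):
                labels, improved = new, True
    return labels

# noisy 1-D "image": pixels prefer label 0 except one corrupted data term
data_cost = [[0, 2], [0, 2], [2, 0], [0, 2]]
print(expansion_move_minimize(data_cost, 2, lam=2.0))  # → [0, 0, 0, 0]
```

With the smoothness weight at 2, the single expansion of label 0 over the corrupted pixel lowers the energy from 4 to 2, after which no further move helps.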
On the Unification of Line Processes, Outlier Rejection, and Robust Statistics with Applications in Early Vision
, 1996
Abstract

Cited by 273 (9 self)
The modeling of spatial discontinuities for problems such as surface recovery, segmentation, image reconstruction, and optical flow has been intensely studied in computer vision. While "line-process" models of discontinuities have received a great deal of attention, there has been recent interest in the use of robust statistical techniques to account for discontinuities. This paper unifies the two approaches. To achieve this we generalize the notion of a "line process" to that of an analog "outlier process" and show how a problem formulated in terms of outlier processes can be viewed in terms of robust statistics. We also characterize a class of robust statistical problems for which an equivalent outlier-process formulation exists and give a straightforward method for converting a robust estimation problem into an outlier-process formulation. We show how prior assumptions about the spatial structure of outliers can be expressed as constraints on the recovered analog outlier processes and how traditional continuation methods can be extended to the explicit outlier-process formulation. These results indicate that the outlier-process approach provides a general framework which subsumes the traditional line-process approaches as well as a wide class of robust estimation problems. Examples in surface reconstruction, image segmentation, and optical flow are presented to illustrate the use of outlier processes and to show how the relationship between outlier processes and robust statistics can be exploited. An appendix provides a catalog of common robust error norms and their equivalent outlier-process formulations.
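One standard entry in such a catalog is the Lorentzian norm ρ(x) = log(1 + x²/2σ²), whose equivalent outlier-process penalty is Ψ(z) = z − 1 − log z. The sketch below numerically verifies the equivalence ρ(x) = min_z [ (x²/2σ²)·z + Ψ(z) ] (a minimal illustration in plain Python, not the paper's notation):

```python
import math

def lorentzian(x, sigma=1.0):
    # robust error norm: rho(x) = log(1 + x^2 / (2 sigma^2))
    return math.log(1.0 + x * x / (2.0 * sigma * sigma))

def outlier_process_objective(x, z, sigma=1.0):
    # quadratic term weighted by the analog outlier process z in (0, 1],
    # plus the penalty Psi(z) = z - 1 - log z that discourages z -> 0
    a = x * x / (2.0 * sigma * sigma)
    return a * z + (z - 1.0 - math.log(z))

def minimize_over_z(x, sigma=1.0, steps=200000):
    # brute-force the inner minimization over the outlier process
    return min(outlier_process_objective(x, k / steps, sigma)
               for k in range(1, steps + 1))

for x in [0.5, 1.0, 3.0]:
    print(f"x={x}: rho={lorentzian(x):.6f}  min_z={minimize_over_z(x):.6f}")
```

Setting the derivative to zero gives z* = 1/(1 + x²/2σ²): small residuals keep z near 1 (inlier), large residuals drive z toward 0 (outlier), and substituting z* recovers the Lorentzian exactly.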
Unsupervised Learning from Dyadic Data
, 1998
Abstract

Cited by 122 (12 self)
Dyadic data refers to a domain with two finite sets of objects in which observations are made for dyads, i.e., pairs with one element from either set. This includes event co-occurrences, histogram data, and single-stimulus preference data as special cases. Dyadic data arises naturally in many applications ranging from computational linguistics and information retrieval to preference analysis and computer vision. In this paper, we present a systematic, domain-independent framework for unsupervised learning from dyadic data by statistical mixture models. Our approach covers different models with flat and hierarchical latent class structures and unifies probabilistic modeling and structure discovery. Mixture models provide both a parsimonious yet flexible parameterization of probability distributions with good generalization performance on sparse data and structural information about the data-inherent grouping structure. We propose an annealed version of the standard Expectation-Maximization algorithm for model fitting which is empirically evaluated on a variety of data sets from different domains.
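The annealed EM idea can be sketched for the simplest flat latent-class model, P(x, y) = Σ_c P(c)P(x|c)P(y|c), fit to a hypothetical toy co-occurrence table; an inverse temperature β tempers the E-step posteriors and is raised to 1 over the schedule (the counts and schedule below are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical dyadic counts n(x, y) with two block-structured aspects
N = np.array([[8, 7, 0, 0],
              [9, 6, 1, 0],
              [0, 1, 7, 9],
              [0, 0, 8, 6]], dtype=float)

K = 2
X, Y = N.shape
p_c = np.full(K, 1.0 / K)
p_x = rng.dirichlet(np.ones(X), size=K)   # P(x | c)
p_y = rng.dirichlet(np.ones(Y), size=K)   # P(y | c)

def loglik():
    joint = np.einsum('c,cx,cy->xy', p_c, p_x, p_y)
    return float((N * np.log(joint + 1e-12)).sum())

history = []
for beta in [0.4, 0.7, 1.0, 1.0, 1.0, 1.0]:   # schedule ends at standard EM
    # E-step: tempered posterior P(c | x, y) ∝ (p_c p_x p_y)^beta
    post = np.einsum('c,cx,cy->cxy', p_c, p_x, p_y) ** beta
    post /= post.sum(axis=0, keepdims=True)
    # M-step: reestimate parameters from expected counts
    nc = np.einsum('cxy,xy->c', post, N)
    p_c = nc / nc.sum()
    p_x = np.einsum('cxy,xy->cx', post, N)
    p_x /= p_x.sum(axis=1, keepdims=True)
    p_y = np.einsum('cxy,xy->cy', post, N)
    p_y /= p_y.sum(axis=1, keepdims=True)
    history.append(loglik())
print(f"final log-likelihood: {history[-1]:.3f}")
```

At β < 1 the posteriors are smoothed, which reduces sensitivity to initialization; once β reaches 1 the updates are ordinary EM and the log-likelihood is non-decreasing.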
Unsupervised Texture Segmentation in a Deterministic Annealing Framework
, 1998
Abstract

Cited by 105 (9 self)
We present a novel optimization framework for unsupervised texture segmentation that relies on statistical tests as a measure of homogeneity. Texture segmentation is formulated as a data clustering problem based on sparse proximity data. Dissimilarities of pairs of textured regions are computed from a multi-scale Gabor filter image representation. We discuss and compare a class of clustering objective functions which is systematically derived from invariance principles. As a general optimization framework we propose deterministic annealing based on a mean-field approximation. The canonical way to derive clustering algorithms within this framework as well as an efficient implementation of mean-field annealing and the closely related Gibbs sampler are presented. We apply both annealing variants to Brodatz-like micro-texture mixtures and real-world images.
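Deterministic annealing is easiest to see in its central-clustering form. The paper works with pairwise proximity data under a mean-field approximation; the sketch below shows only the core mechanism on hypothetical 1-D data: Gibbs assignment probabilities at temperature T are iterated to a fixed point, and the initially coincident clusters split as T is lowered:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy 1-D data: two well-separated clusters (stand-ins for feature values)
x = np.concatenate([rng.normal(0.0, 0.3, 20), rng.normal(4.0, 0.3, 20)])

K = 2
centers = rng.normal(2.0, 0.1, K)           # nearly coincident at high T
for T in [8.0, 4.0, 2.0, 1.0, 0.5, 0.1, 0.01]:
    for _ in range(50):                     # mean-field fixed-point iterations
        d = (x[:, None] - centers[None, :]) ** 2
        # assignment probabilities at temperature T (shifted for stability)
        p = np.exp(-(d - d.min(axis=1, keepdims=True)) / T)
        p /= p.sum(axis=1, keepdims=True)
        # recompute centers as probability-weighted means
        centers = (p * x[:, None]).sum(axis=0) / p.sum(axis=0)
print(np.sort(centers).round(1))
```

At high T both centers sit at the global mean; below the critical temperature that symmetric fixed point becomes unstable and the centers separate toward the two cluster means, which is the phase-transition behavior annealing exploits to avoid poor local minima.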
Estimating Optical Flow in Segmented Images using Variable-order Parametric Models with Local Deformations
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1996
Abstract

Cited by 103 (4 self)
This paper presents a new model for estimating optical flow based on the motion of planar regions plus local deformations. The approach exploits brightness information to organize and constrain the interpretation of the motion by using segmented regions of piecewise smooth brightness to hypothesize planar regions in the scene. Parametric flow models are estimated in these regions in a two-step process which first computes a coarse fit and estimates the appropriate parameterization of the motion of the region (two, six, or eight parameters). The initial fit is refined using a generalization of the standard area-based regression approaches. Since the assumption of planarity is likely to be violated, we allow local deformations from the planar assumption in the same spirit as physically-based approaches which model shape using coarse parametric models plus local deformations. This parametric+deformation model exploits the strong constraints of parametric approaches while retaining the ada...
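The coarse parametric stage amounts to fitting a low-order flow model to a region by regression. A minimal sketch of the six-parameter (affine) case with ordinary least squares on synthetic data (the paper additionally considers two- and eight-parameter models and robust, area-based refinement; the region and parameter values here are hypothetical):

```python
import numpy as np

def fit_affine_flow(xs, ys, us, vs):
    # six-parameter model: u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y
    A = np.column_stack([np.ones_like(xs), xs, ys])
    au, *_ = np.linalg.lstsq(A, us, rcond=None)   # fit horizontal flow
    av, *_ = np.linalg.lstsq(A, vs, rcond=None)   # fit vertical flow
    return np.concatenate([au, av])

# synthetic planar region undergoing a small rotation plus translation
xs, ys = np.meshgrid(np.arange(8.0), np.arange(8.0))
xs, ys = xs.ravel(), ys.ravel()
true = np.array([1.0, 0.0, -0.1, 0.5, 0.1, 0.0])   # a0..a5
us = true[0] + true[1] * xs + true[2] * ys
vs = true[3] + true[4] * xs + true[5] * ys
est = fit_affine_flow(xs, ys, us, vs)
print(est.round(3))
```

On noise-free affine motion the regression recovers the parameters exactly; in practice the fit would be iterated with a robust error norm so that deviations from planarity are treated as outliers rather than dragging the estimate.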
Nonparametric similarity measures for unsupervised texture segmentation and image retrieval
, 1997
Abstract

Cited by 102 (3 self)
In this paper we propose and examine non-parametric statistical tests to define similarity and homogeneity measures for textures. The statistical tests are applied to the coefficients of images filtered by a multi-scale Gabor filter bank. We will demonstrate that these similarity measures are useful both for texture-based image retrieval and for unsupervised texture segmentation, and hence offer a unified approach to these closely related tasks. We present results on Brodatz-like micro-textures and a collection of real-world images.
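A classic non-parametric two-sample test usable as such a similarity measure is the Kolmogorov–Smirnov distance between empirical distributions of filter responses. In the sketch below, Gaussian samples stand in for Gabor coefficient distributions (the samples are hypothetical, not the paper's data):

```python
import random

def ks_statistic(a, b):
    # two-sample Kolmogorov-Smirnov distance: maximum vertical gap
    # between the two empirical CDFs, found by a sorted merge walk
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

random.seed(0)
same = [random.gauss(0, 1) for _ in range(500)]
also_same = [random.gauss(0, 1) for _ in range(500)]
shifted = [random.gauss(2, 1) for _ in range(500)]
print(ks_statistic(same, also_same))  # small: "homogeneous" responses
print(ks_statistic(same, shifted))    # large: dissimilar distributions
```

Because the statistic depends only on ranks, it needs no parametric model of the coefficient distributions, which is exactly what makes such tests attractive for texture homogeneity.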
Image segmentation based on oscillatory correlation
 Neural Computation
, 1997
Abstract

Cited by 98 (23 self)
We study image segmentation on the basis of locally excitatory globally inhibitory oscillator networks (LEGION), whereby the phases of oscillators encode the binding of pixels. We introduce a potential for each oscillator so that only those oscillators with strong connections from their neighborhood can develop high potentials. Based on the concept of potential, a solution to remove noisy regions in an image is proposed for LEGION, so that it suppresses the oscillators corresponding to noisy regions, without affecting those corresponding to major regions. We show analytically that the resulting oscillator network separates an image into several major regions, plus a background consisting of all noisy regions, and illustrate network properties by computer simulation. The network exhibits a natural capacity in segmenting images. The oscillatory dynamics leads to a computer algorithm, which is applied successfully to segmenting real gray-level images. A number of issues regarding biological plausibility and perceptual organization are discussed. We argue that LEGION provides a novel and effective framework for image segmentation and figure-ground segregation.
An Active Contour Model For Mapping The Cortex
 IEEE Transactions on Medical Imaging
, 1995
Abstract

Cited by 91 (15 self)
A new active contour model for finding and mapping the outer cortex in brain images is developed. A cross-section of the brain cortex is modeled as a ribbon, and a constant speed mapping of its spine is sought. A variational formulation, an associated force balance condition, and a numerical approach are proposed to achieve this goal. The primary difference between this formulation and that of snakes is in the specification of the external force acting on the active contour. A study of the uniqueness and fidelity of solutions is made through convexity and frequency domain analyses, and a criterion for selection of the regularization coefficient is developed. Examples demonstrating the performance of this method on simulated and real data are provided.
Non-Redundant Data Clustering
, 2004
Abstract

Cited by 90 (3 self)
Data clustering is a popular approach for automatically finding classes, concepts, or groups of patterns. In practice this discovery process should avoid redundancies with existing knowledge about class structures or groupings, and reveal novel, previously unknown aspects of the data. In order to deal with this problem, we present an extension of the information bottleneck framework, called coordinated conditional information bottleneck, which takes negative relevance information into account by maximizing a conditional mutual information score subject to constraints. Algorithmically, one can apply an alternating optimization scheme that can be used in conjunction with different types of numeric and non-numeric attributes. We present experimental results for applications in text mining and computer vision.
A Generic Grouping Algorithm and its Quantitative Analysis
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1998
Abstract

Cited by 63 (4 self)
This paper presents a generic method for perceptual grouping, and an analysis of its expected grouping quality. The grouping method is fairly general: it may be used for the grouping of various types of data features, and to incorporate different grouping cues, operating over feature sets of different sizes. The proposed method is divided into two parts: constructing a graph representation of the available perceptual grouping evidence, and then finding the "best" partition of the graph into groups. The first stage includes a cue enhancement procedure, which integrates the information available from multi-feature cues into very reliable bi-feature cues. Both stages are implemented using known statistical tools such as Wald's SPRT algorithm and the Maximum Likelihood criterion. The accompanying theoretical analysis of this grouping criterion quantifies intuitive expectations and predicts that the expected grouping quality increases with cue reliability. It also shows that investing more computational effort in the grouping algorithm leads to better grouping results. This analysis, which quantifies the grouping power of the Maximum Likelihood criterion, is independent of the grouping domain. To the best of our knowledge, such an analysis of a grouping process is given here for the first time. Three grouping algorithms, in three different domains, are synthesized as instances of the generic method. They demonstrate the applicability and generality of this grouping method. Keywords: Perceptual Grouping, Grouping Analysis, Graph Clustering, Maximum Likelihood, Wald's SPRT, Performance Prediction, Generic Grouping Algorithm.
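Wald's SPRT, used here inside the cue-enhancement stage, decides between two hypotheses from a stream of evidence with preset error bounds. A minimal generic sketch for a Bernoulli rate (the rates, bounds, and evidence stream below are hypothetical, not the paper's grouping cues):

```python
import math
import random

def sprt(samples, p0, p1, alpha=0.01, beta=0.01):
    # Wald's sequential probability ratio test for a Bernoulli rate:
    # H0: p = p0 vs H1: p = p1, with target error probabilities alpha, beta.
    lo = math.log(beta / (1 - alpha))        # accept-H0 threshold
    hi = math.log((1 - beta) / alpha)        # accept-H1 threshold
    llr = 0.0
    for n, s in enumerate(samples, 1):
        # accumulate the log-likelihood ratio of each observation
        llr += math.log(p1 / p0) if s else math.log((1 - p1) / (1 - p0))
        if llr >= hi:
            return "accept H1", n
        if llr <= lo:
            return "accept H0", n
    return "undecided", len(samples)

random.seed(0)
# evidence stream from a reliable grouping cue firing at rate 0.8
stream = [random.random() < 0.8 for _ in range(200)]
print(sprt(stream, p0=0.2, p1=0.8))
```

The appeal for grouping is the trade-off the paper analyzes: the test stops as soon as the accumulated evidence is decisive, so more reliable cues need fewer observations to reach a verdict at the same error bounds.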