Results 1–10 of 40
Mean shift: A robust approach toward feature space analysis
 In PAMI
, 2002
"... A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence ..."
Abstract

Cited by 1461 (34 self)
 Add to MetaCart
A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and the delineation of arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function, and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity-preserving smoothing and image segmentation, are described as applications. In these algorithms the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.
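The recursive procedure this abstract refers to can be sketched in a few lines. This is a hedged 1-D illustration of the Gaussian mean shift update, not the authors' implementation; the sample data, bandwidth, and tolerances below are invented for illustration:

```python
import math

def mean_shift_1d(points, x0, bandwidth=1.0, tol=1e-6, max_iter=500):
    """Iterate the Gaussian mean shift update from x0 until the shift is
    below tol; returns a stationary point (mode) of the kernel density
    estimate over `points`."""
    x = x0
    for _ in range(max_iter):
        # Gaussian kernel weights centred at the current estimate.
        w = [math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points]
        # The mean shift update: weighted mean of the sample points.
        x_new = sum(wi * p for wi, p in zip(w, points)) / sum(w)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Two well-separated groups of samples: starting near either group
# converges to that group's density mode.
data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
print(round(mean_shift_1d(data, 0.5, bandwidth=0.3), 2))  # converges near 1.0
print(round(mean_shift_1d(data, 5.5, bandwidth=0.3), 2))  # converges near 5.0
```

Clustering then amounts to grouping points by the mode their mean shift trajectory converges to, which is how arbitrarily shaped clusters fall out of the procedure.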
Support Vector Clustering
, 2001
"... We present a novel clustering method using the approach of support vector machines. Data points are mapped by means of a Gaussian kernel to a high dimensional feature space, where we search for the minimal enclosing sphere. This sphere, when mapped back to data space, can separate into several compo ..."
Abstract

Cited by 162 (1 self)
 Add to MetaCart
We present a novel clustering method using the approach of support vector machines. Data points are mapped by means of a Gaussian kernel to a high-dimensional feature space, where we search for the minimal enclosing sphere. This sphere, when mapped back to data space, can separate into several components, each enclosing a separate cluster of points. We present a simple algorithm for identifying these clusters. The width of the Gaussian kernel controls the scale at which the data is probed, while the soft margin constant helps cope with outliers and overlapping clusters. The structure of a dataset is explored by varying the two parameters, maintaining a minimal number of support vectors to assure smooth cluster boundaries. We demonstrate the performance of our algorithm on several datasets.
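The key quantity in this method is the feature-space distance from a mapped point to the sphere centre, which can be written entirely in terms of kernel evaluations. The sketch below uses uniform weights for the centre (the actual algorithm solves a quadratic programme for support-vector coefficients); data and kernel width are invented for illustration:

```python
import math

def gaussian_kernel(x, y, q=1.0):
    return math.exp(-q * (x - y) ** 2)

def kernel_radius(x, data, q=1.0):
    """Squared feature-space distance from phi(x) to the centroid of the
    mapped data, expanded via the kernel trick.  Uniform weights stand in
    for the QP-derived support-vector coefficients of the real method."""
    n = len(data)
    term1 = gaussian_kernel(x, x, q)  # K(x, x) = 1 for the Gaussian kernel
    term2 = -2.0 / n * sum(gaussian_kernel(x, xi, q) for xi in data)
    term3 = sum(gaussian_kernel(xi, xj, q)
                for xi in data for xj in data) / n ** 2
    return term1 + term2 + term3

data = [0.0, 0.1, 0.2, 3.0, 3.1, 3.2]
# Points in dense regions sit closer to the sphere centre than points in
# the low-density valley between the two groups.
print(kernel_radius(0.1, data, q=2.0) < kernel_radius(1.5, data, q=2.0))
```

The contours where this distance equals the sphere radius are exactly the cluster boundaries the abstract describes; increasing q tightens them until they split.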
Bayesian Approaches to Gaussian Mixture Modelling
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1998
"... A Bayesianbased methodology is presented which automatically penalises overcomplex models being fitted to unknown data. We show that, with a Gaussian mixture model, the approach is able to select an `optimal' number of components in the model and so partition data sets. The performance of the Baye ..."
Abstract

Cited by 73 (2 self)
 Add to MetaCart
A Bayesian-based methodology is presented which automatically penalises over-complex models being fitted to unknown data. We show that, with a Gaussian mixture model, the approach is able to select an 'optimal' number of components in the model and so partition data sets. The performance of the Bayesian method is compared to other methods of optimal model selection and found to give good results. The methods are tested on synthetic and real data sets.

Introduction: Scientific disciplines generate data. In the attempt to understand the patterns present in such data sets, methods which perform some form of unsupervised partitioning or modelling are particularly useful. Such an approach is only of use, however, if it offers a less complex representation of the data than the data set itself. This introduces an apparent conflict, as any model improves its fit to the data monotonically with increases in its complexity (the number of model parameters); a model as complex as the data...
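The trade-off described here, fit improving monotonically with complexity versus a penalty on parameters, can be illustrated with a complexity-penalised criterion. The sketch below uses the BIC as a rough stand-in for the paper's Bayesian penalty, with a tiny 1-D EM fit; all data, initialisation, and floors are invented for illustration:

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def fit_gmm_1d(data, k, iters=100):
    """Tiny 1-D EM fit of a k-component Gaussian mixture; returns the
    final log-likelihood.  Quantile initialisation; variances floored
    at 1e-3 to avoid degenerate components."""
    n = len(data)
    s = sorted(data)
    mean_all = sum(data) / n
    mu = [s[int((i + 0.5) * n / k)] for i in range(k)]
    var = [max(1e-3, sum((x - mean_all) ** 2 for x in data) / n)
           for _ in range(k)]
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[j] * normal_pdf(x, mu[j], var[j]) for j in range(k)]
            t = sum(p) + 1e-300
            resp.append([pj / t for pj in p])
        # M-step: re-estimate weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp) + 1e-12
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(1e-3, sum(r[j] * (x - mu[j]) ** 2
                                   for r, x in zip(resp, data)) / nj)
    return sum(math.log(1e-300 + sum(w[j] * normal_pdf(x, mu[j], var[j])
                                     for j in range(k))) for x in data)

def bic(loglik, k, n):
    # 3k - 1 free parameters: k means, k variances, k - 1 mixing weights.
    return (3 * k - 1) * math.log(n) - 2 * loglik

data = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]
scores = {k: bic(fit_gmm_1d(data, k), k, len(data)) for k in (1, 2, 3)}
print(min(scores, key=scores.get))  # the two-cluster data favours k = 2
```

Adding a third component keeps improving the likelihood, but the parameter penalty outweighs the gain, so the criterion selects two components, the behaviour the abstract attributes to the Bayesian penalty.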
Beyond tracking: modelling activity and understanding behaviour
 International Journal of Computer Vision
, 2006
"... In this work, we present a unified bottomup and topdown automatic model selection based approach for modelling complex activities of multiple objects in cluttered scenes. An activity of multiple objects is represented based on discrete scene events and their behaviours are modelled by reasoning ab ..."
Abstract

Cited by 48 (12 self)
 Add to MetaCart
In this work, we present a unified bottom-up and top-down automatic model selection based approach for modelling complex activities of multiple objects in cluttered scenes. An activity of multiple objects is represented based on discrete scene events, and their behaviours are modelled by reasoning about the temporal and causal correlations among different events. This is significantly different from the majority of the existing techniques, which are centred on object tracking followed by trajectory matching. In our approach, object-independent events are detected and classified by unsupervised clustering using Expectation-Maximisation (EM), with the number of event classes determined by automatic model selection based on Schwarz's Bayesian Information Criterion (BIC). Dynamic Probabilistic Networks (DPNs) are formulated for modelling the temporal and causal correlations among discrete events for robust and holistic scene-level behaviour interpretation. In particular, we developed a Dynamically Multi-Linked Hidden Markov Model (DML-HMM) based on the discovery of salient dynamic interlinks among multiple temporal processes corresponding to multiple event classes. A DML-HMM is built using BIC-based factorisation, resulting in its topology being intrinsically determined by the underlying causality and temporal order among events. Extensive experiments are conducted on modelling activities captured in different indoor and outdoor scenes.
Mode-finding for mixtures of Gaussian distributions
 Dept. of Computer Science, University of Sheffield
, 1999
"... I consider the problem of finding all the modes of a mixture of multivariate Gaussian distributions, which has applications in clustering and regression. I derive exact formulas for the gradient and Hessian and give a partial proof that the number of modes cannot be more than the number of component ..."
Abstract

Cited by 34 (8 self)
 Add to MetaCart
I consider the problem of finding all the modes of a mixture of multivariate Gaussian distributions, which has applications in clustering and regression. I derive exact formulas for the gradient and Hessian and give a partial proof that the number of modes cannot exceed the number of components, and that the modes are contained in the convex hull of the component centroids. Then, I develop two exhaustive mode search algorithms: one based on combined quadratic maximisation and gradient ascent, and the other based on a fixed-point iterative scheme. Appropriate values for the search control parameters are derived by taking into account theoretical results regarding the bounds for the gradient and Hessian of the mixture. The significance of the modes is quantified locally (for each mode) by error bars, or confidence intervals (estimated using the values of the Hessian at each mode), and globally by the sparseness of the mixture, measured by its differential entropy (estimated through bounds). I conclude with some reflections about bump-finding.
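The fixed-point scheme mentioned here iterates x ← Σ_j p(j|x) μ_j from every component centroid and merges nearby limits. This is a hedged 1-D, equal-variance sketch, not the paper's multivariate algorithm; all parameters below are invented for illustration:

```python
import math

def gmm_modes_1d(means, var, weights, tol=1e-8, max_iter=1000):
    """Exhaustive mode search for an equal-variance 1-D Gaussian mixture:
    run the fixed-point iteration x <- sum_j p(j|x) * mu_j from every
    component centroid, then merge limits that coincide."""
    modes = []
    for x in means:
        for _ in range(max_iter):
            # Posterior responsibilities p(j|x) up to normalisation.
            p = [w * math.exp(-0.5 * (x - m) ** 2 / var)
                 for w, m in zip(weights, means)]
            x_new = sum(pj * m for pj, m in zip(p, means)) / sum(p)
            if abs(x_new - x) < tol:
                break
            x = x_new
        if all(abs(x - m) > 1e-4 for m in modes):
            modes.append(x)
    return sorted(modes)

# Three components, but two overlap enough that their modes merge: the
# mixture has only two modes, consistent with modes <= components.
print(len(gmm_modes_1d([0.0, 0.3, 5.0], var=1.0, weights=[1/3, 1/3, 1/3])))
```

Since each limit is a convex combination of the centroids, every mode found this way lies inside their convex hull, matching the containment result stated in the abstract.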
A Formulation of Boundary Mesh Segmentation
, 2004
"... We present a formulation of boundary mesh segmentation as an optimization problem. Previous segmentation solutions are classified according to the different segmentation goals, the optimization criteria and the various algorithmic techniques used. We identify two primarily distinct types of mesh seg ..."
Abstract

Cited by 33 (0 self)
 Add to MetaCart
We present a formulation of boundary mesh segmentation as an optimization problem. Previous segmentation solutions are classified according to the different segmentation goals, the optimization criteria and the various algorithmic techniques used. We identify two primarily distinct types of mesh segmentation, namely parts segmentation and patch segmentation. We also define generic algorithms for the major techniques used for segmentation.
Gaussian mean shift is an EM algorithm
 IEEE Trans. on Pattern Analysis and Machine Intelligence
, 2005
"... The meanshift algorithm, based on ideas proposed by Fukunaga and Hostetler (1975), is a hillclimbing algorithm on the density defined by a finite mixture or a kernel density estimate. Meanshift can be used as a nonparametric clustering method and has attracted recent attention in computer vision ..."
Abstract

Cited by 24 (4 self)
 Add to MetaCart
The mean-shift algorithm, based on ideas proposed by Fukunaga and Hostetler (1975), is a hill-climbing algorithm on the density defined by a finite mixture or a kernel density estimate. Mean-shift can be used as a nonparametric clustering method and has attracted recent attention in computer vision applications such as image segmentation or tracking. We show that, when the kernel is Gaussian, mean-shift is an expectation-maximisation (EM) algorithm, and when the kernel is non-Gaussian, mean-shift is a generalised EM algorithm. This implies that mean-shift converges from almost any starting point and that, in general, its convergence is of linear order. For Gaussian mean-shift we show: (1) the rate of linear convergence approaches 0 (superlinear convergence) for very narrow or very wide kernels, but is often close to 1 (thus extremely slow) for intermediate widths, and exactly 1 (sublinear convergence) for widths at which modes merge; (2) the iterates approach the mode along the local principal component of the data points from the inside of the convex hull of the data points; (3) the convergence domains are nonconvex and can be disconnected and show fractal behaviour. We suggest ways of accelerating mean-shift based on the EM interpretation.
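The linear-order convergence claim can be observed numerically: the ratio of successive errors settles to a constant r in (0, 1). A minimal 1-D sketch (data and bandwidth invented; the EM reading is in the comment):

```python
import math

def gms_step(x, data, h):
    """One Gaussian mean-shift step, read as one EM step: the E-step
    computes posterior responsibilities of each data point for the
    current iterate, the M-step replaces it by their weighted mean."""
    w = [math.exp(-0.5 * ((x - p) / h) ** 2) for p in data]
    return sum(wi * p for wi, p in zip(w, data)) / sum(w)

data = [-1.0, 0.0, 1.0]
h = 0.8                       # intermediate width: expect a slow, linear rate
x = 0.3
for _ in range(2000):         # first locate the mode to machine precision
    x = gms_step(x, data, h)
mode = x
x, ratios = 0.3, []
for _ in range(30):           # then measure e_{k+1} / e_k along the way
    x_new = gms_step(x, data, h)
    if abs(x - mode) > 1e-12:
        ratios.append(abs(x_new - mode) / abs(x - mode))
    x = x_new
print(round(ratios[-1], 2))   # a constant strictly between 0 and 1
```

Narrowing or widening h moves this ratio toward 0, and widths near a mode merge push it toward 1, which is the width dependence the abstract describes.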
On the number of modes of a Gaussian mixture

, 2003
"... We consider a problem intimately related to the creation of maxima under Gaussian blurring: the number of modes of a Gaussian mixture in D dimensions. To our knowledge, a general answer to this question is not known. We conjecture that if the components of the mixture have the same covariance matr ..."
Abstract

Cited by 18 (5 self)
 Add to MetaCart
We consider a problem intimately related to the creation of maxima under Gaussian blurring: the number of modes of a Gaussian mixture in D dimensions. To our knowledge, a general answer to this question is not known. We conjecture that if the components of the mixture have the same covariance matrix (or the same covariance matrix up to a scaling factor), then the number of modes cannot exceed the number of components. We demonstrate ...
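The conjecture can be checked numerically in simple cases by evaluating the mixture density on a dense grid and counting local maxima. This is only a numerical illustration of the equal-covariance case in 1-D, not a proof; the component locations and grid are invented:

```python
import math

def mixture_density(x, means, var):
    """Equal-variance 1-D Gaussian mixture density (up to a constant)."""
    return sum(math.exp(-0.5 * (x - m) ** 2 / var) for m in means)

def count_modes(means, var, lo=-2.0, hi=8.0, steps=4000):
    """Count local maxima of the mixture by dense grid evaluation --
    a numerical check of modes <= components, not a proof."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ys = [mixture_density(x, means, var) for x in xs]
    return sum(1 for i in range(1, steps)
               if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])

# Two nearby components merge into a single mode, so three components
# yield only two modes, consistent with the conjecture.
print(count_modes([0.0, 0.5, 5.0], var=1.0))
```

With unequal covariances, by contrast, counterexamples with more modes than components are known, which is why the conjecture is restricted to the shared-covariance case.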
Self-Organised Clustering for Road Extraction in Classified Imagery
, 2001
"... The extraction of road networks from digital imagery is a fundamental image analysis operation. Common problems encountered in automated road extraction include high sensitivity to typical scene clutter in highresolution imagery, and Z. Z. inefficiency to meaningfully exploit multispectral imagery ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
The extraction of road networks from digital imagery is a fundamental image analysis operation. Common problems encountered in automated road extraction include high sensitivity to typical scene clutter in high-resolution imagery, and inefficiency in meaningfully exploiting multispectral imagery (MSI). With a ground sample distance (GSD) of less than 2 m per pixel, roads can be broadly described as elongated regions. We propose an approach of elongated region-based analysis for 2D road extraction from high-resolution imagery, which is suitable for MSI and is insensitive to conventional edge definition. A self-organising road map (SORM) algorithm is presented, inspired by a specialised variation of Kohonen's self-organising map (SOM) neural network algorithm. A spectrally classified high-resolution image is assumed to be the input for our analysis. Our approach proceeds by performing spatial cluster analysis as a mid-level processing technique. This allows us to improve tolerance to road clutter in high-resolution images, and to minimise the effect on road extraction of common classification errors. This approach is designed in consideration of the emerging trend towards high-resolution multispectral sensors. Preliminary results demonstrate robust road extraction ability due to the non-local approach, when presented with noisy input.
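The Kohonen update the SORM variant builds on can be sketched compactly: move the best-matching unit and its neighbours toward each sample while the learning rate and neighbourhood shrink. This is a generic 1-D SOM sketch, not the SORM algorithm itself; the "road" samples and schedule are invented for illustration:

```python
import math
import random

def train_som_1d(data, n_units=4, epochs=60, seed=1):
    """Minimal 1-D Kohonen SOM: for each sample, find the best-matching
    unit (BMU) and pull it and its index-neighbours toward the sample,
    with decaying learning rate and neighbourhood width."""
    rng = random.Random(seed)
    units = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    for t in range(epochs):
        lr = 0.5 * (1.0 - t / epochs)                        # learning rate
        sigma = max(0.5, n_units / 2 * (1.0 - t / epochs))   # neighbourhood
        for x in rng.sample(data, len(data)):                # shuffled pass
            bmu = min(range(n_units), key=lambda j: abs(units[j] - x))
            for j in range(n_units):
                h = math.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
                units[j] += lr * h * (x - units[j])
    return sorted(units)

# Samples along a "road" from 0 to 3: the units spread out to cover it.
road = [i / 10 for i in range(31)]
print(train_som_1d(road))
```

The neighbourhood term is what gives the map its spatial ordering, the property that makes a SOM-style clusterer a natural fit for tracing elongated road regions.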
A Support Vector Method for Clustering
 ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 13
, 2001
"... We present a novel method for clustering using the support vector machine approach. Data points are mapped to a high dimensional feature space, where support vectors are used to define a sphere enclosing them. The boundary of the sphere forms in data space a set of closed contours containing the ..."
Abstract

Cited by 15 (3 self)
 Add to MetaCart
We present a novel method for clustering using the support vector machine approach. Data points are mapped to a high-dimensional feature space, where support vectors are used to define a sphere enclosing them. The boundary of the sphere forms in data space a set of closed contours containing the data. Data points enclosed by each contour are defined as a cluster. As the width parameter of the Gaussian kernel is decreased, these contours fit the data more tightly and splitting of contours occurs. The algorithm works by separating clusters according to valleys in the underlying probability distribution, and thus clusters can take on arbitrary geometrical shapes. As in other SV algorithms, outliers can be dealt with by introducing a soft margin constant, leading to smoother cluster boundaries. The structure of the data is explored by varying the two parameters. We investigate the dependence of our method on these parameters and apply it to several data sets.
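Once the sphere is fixed, cluster labels come from an adjacency test: two points share a cluster if the segment between them never leaves the sphere. The sketch below uses a uniform-weight centre in place of the QP-derived support-vector coefficients, and invented 1-D data, so it is an illustration of the labelling idea rather than the full algorithm:

```python
import math

def k(x, y, q):
    """Gaussian kernel."""
    return math.exp(-q * (x - y) ** 2)

def sphere_dist2(x, data, q):
    """Squared feature-space distance to the centroid of the mapped data
    (uniform weights stand in for the support-vector coefficients)."""
    n = len(data)
    return (1.0
            - 2.0 / n * sum(k(x, xi, q) for xi in data)
            + sum(k(xi, xj, q) for xi in data for xj in data) / n ** 2)

def same_cluster(a, b, data, q, radius2, samples=20):
    """Adjacency test: a and b share a cluster iff every sampled point on
    the segment between them stays inside the sphere."""
    return all(sphere_dist2(a + (b - a) * i / samples, data, q) <= radius2
               for i in range(samples + 1))

data = [0.0, 0.1, 0.2, 3.0, 3.1, 3.2]
q = 2.0
radius2 = max(sphere_dist2(x, data, q) for x in data)  # contour level
print(same_cluster(0.0, 0.2, data, q, radius2))  # same dense region
print(same_cluster(0.0, 3.0, data, q, radius2))  # valley in between
```

The segment through the low-density valley leaves the sphere, so the two groups receive different labels; decreasing q below this setting would eventually merge them into one contour.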