Mean shift: A robust approach toward feature space analysis
In PAMI, 2002
Cited by 1487 (34 self)
A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function, and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity-preserving smoothing and image segmentation, are described as applications. In these algorithms the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.
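The recursive procedure the abstract describes can be sketched with a flat (uniform) kernel: repeatedly move a point to the mean of its neighbors within a bandwidth until it stops moving. The function name, bandwidth, and test data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mean_shift(points, x, bandwidth, n_iter=50, tol=1e-6):
    # Move x to the mean of all points within `bandwidth` of it,
    # repeating until convergence; x ends at a local density mode.
    x = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        dist = np.linalg.norm(points - x, axis=1)
        neighbors = points[dist < bandwidth]
        if len(neighbors) == 0:
            break
        new_x = neighbors.mean(axis=0)
        if np.linalg.norm(new_x - x) < tol:
            return new_x
        x = new_x
    return x

# Two well-separated 1-D clusters; a start near one is pulled to its mode.
rng = np.random.default_rng(0)
pts = np.concatenate([rng.normal(0.0, 0.3, (100, 1)),
                      rng.normal(5.0, 0.3, (100, 1))])
mode = mean_shift(pts, [0.8], bandwidth=1.0)
```

Starting points in different basins of attraction converge to different modes, which is how the paper's segmentation application groups pixels.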
Improved fast Gauss transform and efficient kernel density estimation
In ICCV, 2003
Cited by 105 (7 self)
Evaluating sums of multivariate Gaussians is a common computational task in computer vision and pattern recognition, including in the general and powerful kernel density estimation technique. The quadratic computational complexity of the summation is a significant barrier to the scalability of this algorithm to practical applications. The fast Gauss transform (FGT) has successfully accelerated kernel density estimation to linear running time for low-dimensional problems. Unfortunately, the cost of a direct extension of the FGT to higher-dimensional problems grows exponentially with dimension, making it impractical for dimensions above 3. We develop an improved fast Gauss transform to efficiently estimate sums of Gaussians in higher dimensions, where a new multivariate expansion scheme and an adaptive space subdivision technique dramatically improve the performance. The improved FGT has been applied to the mean shift algorithm, achieving linear computational complexity. Experimental results demonstrate the efficiency and effectiveness of our algorithm.
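The summation being accelerated is a direct sum of Gaussian kernels over all source–target pairs. A brute-force O(NM) reference evaluation (the quadratic baseline the improved FGT replaces; the function name and parameters are illustrative) can be sketched as:

```python
import numpy as np

def gauss_sum(sources, targets, h):
    # Direct evaluation of G(t_j) = sum_i exp(-||t_j - s_i||^2 / h^2).
    # The pairwise distance matrix makes the O(N*M) cost explicit;
    # this is the quadratic complexity the fast Gauss transform avoids.
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / h ** 2).sum(axis=1)

# N coincident sources contribute exactly N at a coincident target.
vals = gauss_sum(np.zeros((2, 3)), np.zeros((1, 3)), h=0.5)
```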
Point Matching under Large Image Deformations and Illumination Changes
In IEEE Trans. Pattern Anal. Machine Intell., 2004
Cited by 35 (6 self)
To solve the general point correspondence problem in which the underlying transformation between image patches is represented by a homography, a solution based on extensive use of first-order differential techniques is proposed. We integrate in a single robust M-estimation framework the traditional optical flow method and matching of local color distributions. These distributions are computed with spatially oriented kernels in the 5D joint spatial/color space. The estimation process is initiated at the third level of a Gaussian pyramid, uses only local information, and the illumination changes between the two images are also taken into account. Subpixel ...
Trimmed Least Squares Estimation in the Linear Model
In J. Amer. Statist. Assoc., 1980
Cited by 31 (0 self)
We consider two methods of defining a regression analogue to a trimmed mean. The first was suggested by Koenker and Bassett and uses their concept of regression quantiles. Its asymptotic behavior is completely analogous to that of a trimmed mean. The second method uses residuals from a preliminary estimator. Its asymptotic behavior depends heavily on the preliminary estimate; it behaves, in general, quite differently than the estimator proposed by Koenker and Bassett, and it can be rather inefficient at the normal model even if the percent trimming is small. However, if the preliminary estimator is the average of the two regression quantiles used with Koenker and Bassett's estimator, then the first and second methods are asymptotically equivalent for symmetric error distributions. Key words and phrases: regression analogue, trimmed mean, regression quantile, preliminary estimator, linear model, trimmed least squares.
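The second (residual-based) method described above can be sketched as: fit a preliminary estimator, drop the observations whose residuals are most extreme, and refit on the rest. The choice of OLS as the preliminary estimator and the `trim` fraction below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def trimmed_least_squares(X, y, trim=0.1):
    # Preliminary OLS fit, then drop observations whose residuals fall
    # outside the central [trim, 1 - trim] quantile range, then refit.
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta0
    lo, hi = np.quantile(resid, [trim, 1.0 - trim])
    keep = (resid >= lo) & (resid <= hi)
    beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta

# A clean line y = 1 + 2x with 5% gross outliers in the response.
rng = np.random.default_rng(1)
x1 = rng.uniform(0.0, 1.0, 200)
X = np.column_stack([np.ones(200), x1])
y = 1.0 + 2.0 * x1 + rng.normal(0.0, 0.1, 200)
y[:10] += 50.0                      # contaminate 10 of 200 responses
beta = trimmed_least_squares(X, y)
```

The trimmed fit recovers coefficients close to (1, 2) despite the contamination, whereas the preliminary OLS fit alone is pulled off by the outliers.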
Optimal Stack Filtering and the Estimation and Structural Approaches to Image Processing
1989
Cited by 30 (11 self)
Rank-order based filters such as stack filters, multilevel and multistage median filters, morphological filters, and order statistic filters have all proven to be very effective at enhancing and restoring images. Perhaps the primary reason for their success is that they can suppress noise without destroying important image details such as edges and lines. Two approaches have been used in the past to design rank-order based nonlinear filters to enhance or restore images. They may be called the structural approach and the estimation approach. The first approach requires structural descriptions of the image and the process which has altered it, while the second requires statistical descriptions. The many different classes of rank-order based filters that have been developed over the last few decades are reviewed in the context of these two approaches. One of these filter classes, stack filters, then becomes the focus of the rest of the paper. These filters, which are defined by a weak superposition property and an ordering property, contain all compositions of 2D rank-order operations. The recently developed theory of minimum mean absolute error (MMAE) stack filtering is reviewed and extended to two dimensions. Then, a theory of optimal stack filtering under structural constraints and goals is developed for the structural approach to image processing. These two optimal stack filtering theories are then combined into a single design theory for rank-order based filters.
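The edge-preserving behavior that motivates these filters is easiest to see with the simplest rank-order filter, a running median. The window size and test signal below are illustrative, not taken from the paper:

```python
import numpy as np

def median_filter_1d(signal, window=3):
    # Running median: a rank-order filter that removes isolated
    # impulses while leaving step edges essentially intact.
    pad = window // 2
    padded = np.pad(signal, pad, mode='edge')
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(signal))])

# A step edge plus one impulse: the impulse goes, the step survives.
x = np.array([0, 0, 9, 0, 5, 5, 5], dtype=float)
y = median_filter_1d(x)
```

A linear smoother of the same width would instead smear the impulse into its neighbors and blur the step, which is the contrast the abstract draws.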
Robust Regression for Data with Multiple Structures
In IEEE Conference on Computer Vision and Pattern Recognition, volume I, 2001
Cited by 20 (3 self)
In many vision problems (e.g., stereo, motion) multiple structures can occur in the data, in which case several instances of the same model need to be recovered from a single data set. However, once the measurement noise becomes significantly large relative to the separation between the structures, the robust statistical methods commonly used in the vision community tend to fail. In this paper, we show that all these techniques are special cases of the general class of M-estimators with auxiliary scale, and explain their failure in the presence of noisy multiple structures. To be able to cope with data containing multiple structures, the techniques innate to vision (Hough and RANSAC) should be combined with the robust methods customary in statistics. The implications of our analysis are illustrated by introducing a simple procedure for 2D multi-structured data that is problematic for all known current techniques.
Truncated product method for combining Pvalues
In Genetic Epidemiol., 2002
Cited by 12 (1 self)
We present a new procedure for combining p-values from a set of L hypothesis tests. Our procedure is to take the product of only those p-values less than some specified cutoff value and to evaluate the probability of such a product, or a smaller value, under the overall hypothesis that all L hypotheses are true. We give an explicit formulation for this p-value, and find by simulation that it can provide high power for detecting departures from the overall hypothesis. We extend the procedure to situations when tests are not independent. We present both real and simulated examples where the method is especially useful. These include exploratory analyses when L is large, such as genome-wide scans for marker-trait associations, and meta-analytic applications that combine information from published studies, with potential for dealing with the “publication bias” phenomenon. Once the overall hypothesis is rejected, an adjustment procedure with strong family-wise error protection is available for smaller subsets of hypotheses, down to the individual tests. Key words: meta-analysis, multiple tests, genome-wide scans, microarrays, Bonferroni.
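The procedure can be sketched as: form the truncated product W of the p-values at or below a cutoff tau, then evaluate P(W' <= W) under the overall null that all L p-values are independent Uniform(0,1). The paper gives an explicit formula for this probability; the Monte Carlo evaluation below is an illustrative stand-in, and all names and defaults are assumptions:

```python
import numpy as np

def truncated_product(pvals, tau):
    # W = product of the p-values at or below tau (1.0 if none qualify).
    p = np.asarray(pvals, dtype=float)
    sel = p[p <= tau]
    return float(np.prod(sel)) if sel.size else 1.0

def tpm_pvalue(pvals, tau=0.05, n_sim=100_000, seed=0):
    # Monte Carlo null distribution of W under L independent uniforms:
    # p-values above tau contribute a factor of 1 to the product.
    rng = np.random.default_rng(seed)
    L = len(pvals)
    w_obs = truncated_product(pvals, tau)
    u = rng.uniform(size=(n_sim, L))
    w_null = np.where(u <= tau, u, 1.0).prod(axis=1)
    return float((w_null <= w_obs).mean())
```

Unlike Fisher's plain product of all L p-values, only the small p-values enter W, so many unremarkable tests do not dilute a few strong signals.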
Hand Gesture Recognition within a Linguistics-Based Framework
In Proc. ECCV, 2004
Cited by 11 (1 self)
An approach to recognizing human hand gestures from a monocular temporal sequence of images is presented. Of particular concern is the representation and recognition of hand movements that are used in single-handed American Sign Language (ASL). The approach exploits previous linguistic analysis of manual languages that decomposes dynamic gestures into their static and dynamic components. The first level of decomposition is in terms of three sets of primitives: hand shape, location, and movement. Further levels of decomposition involve the lexical and sentence levels and are part of our plan for future work. We propose, and subsequently demonstrate, that given a monocular gesture sequence, kinematic features can be recovered from the apparent motion that provide distinctive signatures for 14 primitive movements of ASL. The approach has been implemented in software and evaluated on a database of 592 gesture sequences, with an overall recognition rate of 86.00% for fully automated processing and 97.13% for manually initialized processing.
Highly robust estimation of the autocovariance function
In J. Time Ser. Anal., 1998
Cited by 10 (6 self)
In this paper, the problem of the robustness of the sample autocovariance function is addressed. We propose a new autocovariance estimator, based on a highly robust estimator of scale. Its robustness properties are studied by means of the influence function and a new concept of temporal breakdown point. As the theoretical variance of the estimator does not have a closed form, we perform a simulation study. Situations with various sizes of outliers are tested. They confirm the robustness properties of the new estimator. An S-Plus function for the highly robust autocovariance estimator is made available on the Web at
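A scale-based autocovariance construction of this flavor uses the identity gamma(h) = (S(x_t + x_{t+h})^2 - S(x_t - x_{t+h})^2) / 4, where S is a robust scale estimator. The sketch below substitutes the MAD for the highly robust scale estimator the paper actually uses, so it is an illustration of the construction rather than the paper's estimator:

```python
import numpy as np

def mad_scale(x):
    # Median absolute deviation, scaled to be consistent for the
    # standard deviation at the normal distribution.
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def robust_autocov(x, lag):
    # gamma(lag) = (S(u + v)^2 - S(u - v)^2) / 4 with u = x_t, v = x_{t+lag};
    # S is a robust scale (MAD here, as a stand-in for the paper's choice).
    u, v = (x, x) if lag == 0 else (x[:-lag], x[lag:])
    return (mad_scale(u + v) ** 2 - mad_scale(u - v) ** 2) / 4.0
```

Because S only sees sums and differences of pairs through a robust scale, a few gross outliers cannot dominate the estimate the way they dominate the sample autocovariance.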
Selected Training Exemplars for Neural Network Learning
1994
Cited by 9 (0 self)
The dissertation of Mark Plutowski is approved, and it is acceptable in quality and form for publication on microfilm: Co-Chair Co-Chair