Results 1–10 of 33
Mean shift: A robust approach toward feature space analysis
In PAMI, 2002
Cited by 2349 (40 self)
A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity-preserving smoothing and image segmentation, are described as applications. In these algorithms the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.
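The recursive procedure this abstract describes can be sketched in a few lines; the following is a minimal flat-kernel mean shift in 2-D (function and parameter names are ours, not the paper's, and the paper's general kernel framework is reduced here to a uniform window):

```python
import math

def mean_shift(points, query, bandwidth=1.0, iters=100, tol=1e-6):
    """Move `query` to the mean of all points within `bandwidth` and
    repeat; the iterates climb the kernel-smoothed density and stop at
    a nearby mode (the stationary point the paper's convergence proof
    concerns)."""
    x, y = query
    for _ in range(iters):
        window = [(px, py) for (px, py) in points
                  if math.hypot(px - x, py - y) <= bandwidth]
        if not window:
            break  # no data within the bandwidth: nothing to shift toward
        nx = sum(px for px, _ in window) / len(window)
        ny = sum(py for _, py in window) / len(window)
        if math.hypot(nx - x, ny - y) < tol:
            return nx, ny  # shift vanished: stationary point reached
        x, y = nx, ny
    return x, y

# Two small clusters; a query near the first converges to its mode.
data = [(0.1, 0.0), (0.0, 0.2), (-0.1, 0.1), (0.2, -0.1),
        (5.0, 5.1), (5.1, 4.9), (4.9, 5.0)]
mode = mean_shift(data, (0.5, 0.5), bandwidth=1.0)
```

With the flat kernel, each iteration is exactly the sample mean over the window, so convergence here happens as soon as the window's membership stops changing.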
Constrained Hough Transforms for Curve Detection
Computer Vision and Image Understanding, 1998
Cited by 20 (5 self)
This paper describes techniques to perform fast and accurate curve detection using constrained Hough transforms, in which localization error can be propagated efficiently into the parameter space. We first review a formal definition of the Hough transform and modify it to allow the formal treatment of localization error. We then analyze current Hough transform techniques with respect to this definition. It is shown that the Hough transform can be subdivided into many small subproblems without a decrease in performance, where each subproblem is constrained to consider only those curves that pass through some subset of the edge pixels up to the localization error. This property allows us to accurately and efficiently propagate localization error into the parameter space such that curves are detected robustly without finding false positives. The use of randomization techniques yields an algorithm with a worst-case complexity of O(n), where n is the number of edge pixels in the image, if we are on...
Complete Line Segment Description using the Hough Transform
Image and Vision Computing, 1994
Cited by 16 (4 self)
The Hough transform is a robust method for detecting discontinuous patterns in noisy images. When it is applied to the detection of a straight line, represented by the normal parameters, the transform provides only the length of the normal and the angle it makes with the axis. The transform gives no information about the length or the end points of the line. A few authors have suggested algorithms for the determination of the length and the end points of a line. The suggested methods are iterative in nature and are highly compute-bound, thereby making them unsuitable for real-time applications. In this paper, we propose an efficient non-iterative algorithm to determine the coordinates of the end points, the length, and the normal parameters of a straight line using the Hough transform. The proposed algorithm is based on an analysis of the spread of votes in the accumulator array cells representing orientations which are different from that of the line under consideration. The algorith...
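For context, the standard (rho, theta) Hough voting that this paper extends can be sketched as below, with a naive endpoint recovery that simply scans the inliers of the winning line. This is not the paper's vote-spread analysis, only a baseline illustrating what information the accumulator holds (all names are ours):

```python
import math
from collections import defaultdict

def hough_line_with_endpoints(pixels, n_theta=180, rho_res=1.0):
    """Vote each edge pixel into (rho, theta) bins; the fullest bin is
    the dominant line. Keeping the voters per bin lets us recover the
    segment endpoints by sorting the inliers along the line direction."""
    acc = defaultdict(list)  # (rho_bin, theta_bin) -> voting pixels
    for x, y in pixels:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_res), t)].append((x, y))
    (rho_bin, t), inliers = max(acc.items(), key=lambda kv: len(kv[1]))
    theta = math.pi * t / n_theta
    # Project inliers onto the line direction (-sin, cos); the extremes
    # of the sorted order are the segment endpoints.
    inliers.sort(key=lambda p: -p[0] * math.sin(theta) + p[1] * math.cos(theta))
    return rho_bin * rho_res, theta, inliers[0], inliers[-1]

# A horizontal segment y = 3 for x in 2..8, plus one outlier pixel.
pts = [(x, 3) for x in range(2, 9)] + [(0, 0)]
rho, theta, p0, p1 = hough_line_with_endpoints(pts)
```

Note that storing voter lists per bin is memory-hungry; the paper's appeal is recovering the same endpoint information from the vote counts alone.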
Finding Lines under Bounded Error
Pattern Recognition, 1993
Cited by 14 (7 self)
A new algorithm for finding lines in images under a bounded-error noise model is described. The algorithm is based on a hierarchical and adaptive subdivision of the space of line parameters, but, unlike previous adaptive or hierarchical line finders based on the Hough transform, measures errors in image space and thereby guarantees that no solution satisfying the given error bounds will be lost. In addition, the algorithm can find interpretations of all the lines in the image that satisfy the constraint that each image feature supports at most one line hypothesis, a constraint that is often useful to impose in practice. The algorithm can be extended to compute the probabilistic Hough transform and the generalized Hough transform for a variety of statistical error models efficiently.
On Detecting Spatial Regularity in Noisy Images
Information Processing Letters, 1999
Cited by 5 (0 self)
Detecting spatial regularity in images arises in computer vision, scene analysis, military applications, and other areas. In this paper we present an O(n^{5/2}) algorithm that reports all maximal equally-spaced collinear subsets. The algorithm is robust in that it can tolerate noise or imprecision that may be inherent in the measuring process, where the error threshold is a user-specified parameter. Our method also generalizes to higher dimensions.
Keywords: Algorithms, combinatorial problems, computational geometry, pattern recognition.
1 Introduction
Spatial regularity detection is an important problem in a number of domains such as computer vision, scene analysis, and landmine detection from infrared terrain images [5]. This paper addresses the problem of recognizing equally-spaced collinear subsets of a given point set, where there may be imprecision in the input data. Kahng and Robins [5] gave an optimal O(n^2)-time algorithm for the exact version of this problem (i.e., where n...
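The problem statement can be made concrete with a brute-force sketch: try every ordered pair as the start and spacing of an arithmetic progression and extend it while the next point exists. This assumes exact coordinates (the tolerance handling is the paper's contribution) and runs in roughly cubic time, not the paper's O(n^{5/2}); all names are ours:

```python
def equally_spaced_runs(points):
    """Return a longest maximal equally-spaced collinear subset, found
    by exhaustive extension of every candidate (start, spacing) pair."""
    pts = set(points)
    best = []
    for p in points:
        for q in points:
            if p == q:
                continue
            dx, dy = q[0] - p[0], q[1] - p[1]
            # Only start a run at its first element, so runs are maximal.
            if (p[0] - dx, p[1] - dy) in pts:
                continue
            run = [p, q]
            nx, ny = q[0] + dx, q[1] + dy
            while (nx, ny) in pts:
                run.append((nx, ny))
                nx, ny = nx + dx, ny + dy
            if len(run) > len(best):
                best = run
    return best

points = [(0, 0), (1, 1), (2, 2), (3, 3), (5, 0), (7, 3)]
best = equally_spaced_runs(points)
```

The predecessor check is what makes each reported run maximal: a sub-run such as (1,1), (2,2), (3,3) is never emitted because its would-be predecessor (0,0) is present.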
On the grayscale inverse Hough transform
2000
Cited by 4 (0 self)
This paper proposes a grayscale inverse Hough transform (GIHT) algorithm which is combined with a modified grayscale Hough transform (GHT). Given only the data of the Hough transform (HT) space and the dimensions of the image, the GIHT algorithm reconstructs correctly the original grayscale image. As a first application, the GIHT is used for line detection and filtering according to conditions associated with the polar parameters, the size and the grayscale values of the lines. The main advantage of the GIHT is the determination of the image lines exactly as they appear, i.e. pixel by pixel and with the correct grayscale values. To avoid the quantization effects in the accumulator array of the GHT space, inversion conditions are defined which are associated only with the image size. The GIHT algorithm consists of two phases, which are the collection of grayscale information stored in the accumulator array and the extraction of the final image according to the filtering conditions. Experimental results confirm the efficiency of the proposed method. © 2000 Elsevier Science B.V. All rights reserved.
On the Inverse Hough Transform
In IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999
Cited by 4 (0 self)
In this paper, an Inverse Hough Transform algorithm is proposed. This algorithm reconstructs correctly the original image, using only the data of the Hough Transform space, and it is applicable to any binary image. As a first application, the Inverse Hough Transform algorithm is used for straight-line detection and filtering. The lines are detected not just as continuous straight lines, which is the case of the standard Hough Transform, but as they really appear in the original image, i.e., pixel by pixel. To avoid the quantization effects in the Hough Transform space, inversion conditions are defined, which are associated only with the dimensions of the images. Experimental results indicate that the Inverse Hough Transform algorithm is robust and accurate.
Index Terms: Hough Transform, edge extraction, line detection, nonlinear filtering.
Graphics recognition from binary images : One step or two steps
In International Conference on Pattern Recognition (ICPR), 2002
Cited by 3 (0 self)
Recognizing graphic objects from binary images is an important task in many real-life applications. Generally, there are two ways to do graphics recognition: one-step methods and two-step methods. The former recognizes graphic objects from binary images directly, while the latter consists of vectorization and post-processing. Neither of them handles all difficulties well. This paper first reviews popular graphics recognition methods to understand their advantages and disadvantages. Next, the two classes of methods are compared in two important aspects, time efficiency and graphics quality, and experimental results of a time-efficiency comparison of seven popular methods are reported. Finally, we propose a new hybrid graphics-recognition paradigm that integrates the advantages of both one-step and two-step methods and minimizes their disadvantages. The proposed paradigm is capable of recognizing straight lines, arcs, circles and curves efficiently, and is helpful for extracting text images in text-graphics touching cases.
Analytic Curve Detection from a Noisy Binary Edge Map using Genetic Algorithm
In Proc. 5th International Conference on Parallel Problem Solving from Nature (PPSN V), 1998
Cited by 3 (1 self)
Currently the Hough transform and its variants are the most common methods for detecting analytic curves from a binary edge image. However, these methods do not scale well when applied to complex noisy images where the amount of correct data is very small compared to the amount of incorrect data. We propose a Genetic Algorithm in combination with the Randomized Hough Transform, along with a different scoring function, to deal with such environments. This approach is also an improvement over random search and, in contrast to standard Hough transform algorithms, is not limited to simple curves like straight lines or circles.
1 Introduction
Extracting curves from a binary edge image is an important problem in computer vision and robotics. The Hough transform (HT) [7, 21] is recognized as a powerful tool to handle this. Although it gives good results in the presence of small amounts of noise and occlusion, it does not scale well when applied to complex, cluttered scenes with a lot of noise. In a study on the...
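The Randomized Hough Transform that this paper combines with a genetic algorithm can be sketched for the line case as follows: sample two edge pixels, compute the unique (rho, theta) of the line through them, and vote in a sparse accumulator. This is only the basic RHT idea, not the paper's GA-based method, and all names and parameters are ours:

```python
import math
import random
from collections import Counter

def randomized_hough_lines(pixels, n_samples=2000, rho_res=1.0,
                           theta_res=math.pi / 180, seed=0):
    """Each sampled pixel pair determines one line, so one vote lands
    in exactly one (rho, theta) bin; a dominant line accumulates votes
    quadratically faster than scattered noise pairs."""
    rng = random.Random(seed)
    acc = Counter()
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(pixels, 2)
        # Normal direction of the line through the two points, in [0, pi).
        theta = math.atan2(x1 - x2, y2 - y1) % math.pi
        rho = x1 * math.cos(theta) + y1 * math.sin(theta)
        acc[(round(rho / rho_res), round(theta / theta_res))] += 1
    (rho_bin, theta_bin), votes = acc.most_common(1)[0]
    return rho_bin * rho_res, theta_bin * theta_res, votes

# 21 pixels on the horizontal line y = 3, plus 4 noise pixels.
pixels = [(x, 3) for x in range(21)] + [(2, 9), (9, 12), (15, 8), (4, 14)]
rho, theta, votes = randomized_hough_lines(pixels)
```

The scaling problem the abstract describes is visible here: as the fraction of noise pixels grows, the probability of sampling two correct pixels shrinks quadratically, which is what motivates replacing blind sampling with a guided search.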
Evolutionary Tabu Search for Geometric Primitive Extraction
In "Soft Computing in Engineering Design and Manufacturing", 1998
Cited by 2 (2 self)
Many problems in computer vision can be formulated as optimization problems. Developing efficient global optimization techniques adapted to vision problems is becoming more and more important. In this paper, we present a geometric primitive extraction method, which plays a crucial role in content-based image retrieval and other vision problems. We formulate the problem as a cost function minimization problem and present a new optimization technique called Evolutionary Tabu Search (ETS). A genetic algorithm and a tabu search algorithm are combined in our method. Specifically, we incorporate the "survival of the strongest" idea of evolutionary algorithms into tabu search. In experiments, we use our method for shape extraction in images and compare it with three other global optimization methods: a genetic algorithm, simulated annealing, and tabu search. The results show that the new algorithm is a practical and effective global optimization method, which can yield good near-o...
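The tabu search component the abstract builds on can be sketched on a toy cost function; this is only the plain tabu step (move to the best non-tabu neighbor, even if it is worse, under a short-term memory), without the paper's evolutionary selection or its shape-extraction cost, and all names are ours:

```python
def tabu_search(cost, start, neighbors, iters=200, tabu_len=10):
    """Greedy moves over non-tabu neighbors; the tabu list forbids
    immediate backtracking, which lets the search climb out of a local
    basin while the best state seen so far is tracked separately."""
    current = start
    best = start
    tabu = [start]
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)  # may be worse than `current`
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)  # short-term memory: forget old moves
        if cost(current) < cost(best):
            best = current
    return best

# Toy cost with a local minimum at x = 0 (cost 5) and the global
# minimum at x = 10 (cost 0): the search must walk uphill to escape.
def cost(x):
    return min((x - 10) ** 2, x ** 2 + 5)

best = tabu_search(cost, start=0, neighbors=lambda x: [x + 1, x - 1])
```

A plain greedy descent started at 0 would stop immediately; the tabu memory forces the walk through the uphill region between the two basins, which is the behavior ETS keeps while adding population-based selection.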