Results 1-10 of 33
Building segmentation for densely built urban regions using aerial lidar data
2008
Abstract

Cited by 26 (0 self)
We present a novel building segmentation system for densely built areas, containing thousands of buildings per square kilometer. We employ solely sparse LIDAR (Light/Laser Detection & Ranging) 3D data, captured from an aerial platform, with resolution less than one point per square meter. The goal of our work is to create segmented and delineated buildings as well as structures on top of buildings without requiring scanning of the sides of buildings. Building segmentation is a critical component in many applications such as 3D visualization, robot navigation, and cartography. LIDAR has emerged in recent years as a more robust alternative to 2D imagery because it acquires 3D structure directly, without the shortcomings of stereo in untextured regions and at depth discontinuities. Our main technical contributions in this paper are: (i) a ground segmentation algorithm which can handle both rural regions and heavily urbanized areas, where the ground is 20% or less of the data; (ii) a building segmentation technique which is robust to buildings in close proximity to each other, sparse measurements, and nearby structured vegetation clutter; and (iii) an algorithm for estimating the orientation of a boundary contour of a building, based on minimizing the number of vertices in a rectilinear approximation to the building outline, which can cope with significant quantization noise in the outline measurements. We have applied the proposed building segmentation system to several urban regions with areas of hundreds of square kilometers each, obtaining average segmentation speeds of less than three minutes per km² on a standard Pentium processor. Extensive qualitative results obtained by overlaying the 3D segmented regions onto 2D imagery indicate accurate performance of our system.
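As a rough illustration of contribution (iii), the dominant orientation of a roughly rectilinear outline can be recovered by folding edge directions modulo 90 degrees. This toy sketch, using a made-up contour, stands in for the paper's vertex-minimizing formulation, which is not reproduced here:

```python
import numpy as np

def rectilinear_orientation(contour):
    """Estimate the dominant orientation (degrees, mod 90) of a roughly
    rectilinear closed outline: fold each edge direction modulo 90 degrees
    via the 4*angle circular mean, weighting every edge by its length.
    A toy stand-in for the paper's vertex-minimizing formulation."""
    d = np.diff(contour, axis=0)                 # edge vectors
    ang = np.arctan2(d[:, 1], d[:, 0])           # edge directions (radians)
    lens = np.hypot(d[:, 0], d[:, 1])            # edge lengths as weights
    m = np.angle((lens * np.exp(4j * ang)).sum()) / 4
    return np.degrees(m) % 90

# hypothetical L-shaped building outline, axis-aligned then rotated by 20 deg
pts = np.array([[0, 0], [4, 0], [4, 2], [2, 2], [2, 3], [0, 3], [0, 0]], float)
c, s = np.cos(np.deg2rad(20)), np.sin(np.deg2rad(20))
rot = pts @ np.array([[c, s], [-s, c]])          # rotate all vertices by +20 deg
print(round(rectilinear_orientation(rot), 3))    # 20.0
```

Weighting by edge length keeps short, noisy contour segments from dominating the estimate.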
High accuracy computation of rank-constrained fundamental matrix by efficient search
Proc. 10th Meeting Image Recog. Understand. (MIRU 2007)
2007
Abstract

Cited by 13 (7 self)
A new method is presented for computing the fundamental matrix from point correspondences: its singular value decomposition (SVD) is optimized by the Levenberg-Marquardt (LM) method. The search is initialized by optimal correction of the unconstrained ML estimate. There is no need for tentative 3D reconstruction. The accuracy achieves the theoretical bound (the KCR lower bound).
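For context, the simplest way to impose the rank-2 constraint on an unconstrained estimate is the Frobenius-nearest SVD correction sketched below; the paper's initialization uses an optimal (covariance-weighted) correction, which this sketch does not implement:

```python
import numpy as np

def nearest_rank2(F):
    """Project a 3x3 fundamental-matrix estimate onto the rank-2 manifold by
    zeroing its smallest singular value (Frobenius-nearest correction; the
    optimal correction additionally weights by the estimate's covariance,
    omitted here)."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                                   # enforce det(F) = 0
    return U @ np.diag(s) @ Vt

F = np.arange(9.0).reshape(3, 3) + np.eye(3)     # a generic full-rank matrix
F2 = nearest_rank2(F)
print(np.linalg.matrix_rank(F2))                 # 2
```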
Projection Based M-Estimators
2009
Abstract

Cited by 11 (3 self)
Random Sample Consensus (RANSAC) is the most widely used robust regression algorithm in computer vision. However, RANSAC has a few drawbacks which make it difficult to use in practical applications. Some of these problems have been addressed through improved sampling algorithms or better cost functions, but an important difficulty still remains: the algorithm is not user-independent, and requires knowledge of the scale of the inlier noise. We propose a new robust regression algorithm, the projection based M-estimator (pbM). The pbM algorithm is derived by building a connection to the theory of kernel density estimation, which leads to an improved cost function and better performance. Furthermore, pbM is user-independent and does not require any knowledge of the scale of the noise corrupting the inliers. We propose a general framework for the pbM algorithm which can handle heteroscedastic data and multiple linear constraints on each data point through the use of Grassmann manifold theory. The performance of pbM is compared with RANSAC and M-Estimator Sample Consensus (MSAC) on various real problems. It is shown that pbM gives better results than RANSAC and MSAC despite being user-independent.
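A minimal sketch of the pbM idea for 2-D line fitting, assuming Gaussian kernels and a rule-of-thumb bandwidth (the paper's heteroscedastic, Grassmann-manifold formulation is omitted): a hypothesized direction is scored by the mode height of a kernel density estimate over the projected points, so no user-supplied noise scale is needed.

```python
import numpy as np

def pbm_score(points, theta):
    """Score a hypothesized unit normal direction theta in the pbM spirit:
    project the 2-D points onto theta, build a Gaussian kernel density
    estimate of the 1-D projections with a data-driven bandwidth, and return
    (height of the density mode, location of the mode). The mode location
    plays the role of the intercept; no inlier noise scale is supplied."""
    z = points @ theta
    h = 1.06 * z.std() * len(z) ** -0.2          # rule-of-thumb bandwidth
    grid = np.linspace(z.min(), z.max(), 256)
    dens = np.exp(-0.5 * ((grid[:, None] - z[None, :]) / h) ** 2).sum(axis=1)
    dens /= len(z) * h * np.sqrt(2 * np.pi)      # normalized KDE
    k = dens.argmax()
    return dens[k], grid[k]

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
pts = np.c_[x, 2 * x + 1 + 0.01 * rng.standard_normal(100)]  # line y = 2x + 1
score_good, _ = pbm_score(pts, np.array([-2.0, 1.0]) / np.sqrt(5))  # true normal
score_bad, _ = pbm_score(pts, np.array([1.0, 0.0]))                 # wrong normal
print(score_good > score_bad)                    # True
```

The correct normal concentrates the projections into a sharp density peak, while a wrong direction spreads them out; comparing peak heights therefore ranks hypotheses without any scale parameter.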
Extended FNS for constrained parameter estimation
In: Proc. 10th Meeting Image Recog. Understand.
2007
Abstract

Cited by 9 (8 self)
We present a new method, called “EFNS” (“extended FNS”), for linearizable constrained maximum likelihood estimation. This complements the CFNS of Chojnacki et al. and is a true extension of the FNS of Chojnacki et al. to an arbitrary number of intrinsic constraints. Computing the fundamental matrix as an illustration, we demonstrate that CFNS does not necessarily converge to a correct solution, while EFNS converges to an optimal value which nearly satisfies the theoretical accuracy bound (the KCR lower bound).
Compact fundamental matrix computation
 Proc. 3rd Pacific Rim Symp. Image and Video Technology
2009
Abstract

Cited by 6 (4 self)
A very compact algorithm is presented for fundamental matrix computation from point correspondences over two images. The computation is based on the strict maximum likelihood (ML) principle, minimizing the reprojection error. The rank constraint is incorporated by the EFNS procedure. Although our algorithm produces the same solution as all existing ML-based methods, it is probably the most practical of all, being small and simple. By numerical experiments, we confirm that our algorithm behaves as expected.
Compact Algorithm for Strictly ML Ellipse Fitting
Abstract

Cited by 5 (4 self)
A very compact algorithm is presented for fitting an ellipse to points in images by maximum likelihood (ML) in the strict sense. Although our algorithm produces the same solution as existing ML-based methods, it is probably the simplest and the smallest of all. By numerical experiments, we show that the strict ML solution practically coincides with the Sampson solution.
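The Sampson solution referenced above minimizes a first-order approximation to the geometric distance; for a conic it takes the familiar form |F| / ||∇F||, sketched here for illustration:

```python
import numpy as np

def sampson_distance(theta, x, y):
    """First-order (Sampson) approximation to the geometric distance from
    (x, y) to the conic a x^2 + b x y + c y^2 + d x + e y + f = 0:
    |F| / ||grad F||, where F is the algebraic residual at (x, y)."""
    a, b, c, d, e, f = theta
    F = a*x*x + b*x*y + c*y*y + d*x + e*y + f
    Fx = 2*a*x + b*y + d                         # dF/dx
    Fy = b*x + 2*c*y + e                         # dF/dy
    return abs(F) / np.hypot(Fx, Fy)

circle = np.array([1.0, 0.0, 1.0, 0.0, 0.0, -1.0])   # unit circle
print(sampson_distance(circle, 1.01, 0.0))           # ~0.00995 (true distance 0.01)
print(sampson_distance(circle, 2.0, 0.0))            # 0.75 (true distance 1.0)
```

The approximation is excellent near the curve and degrades farther away, which is why it practically coincides with strict ML at realistic noise levels.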
Denoising manifold and non-manifold point clouds
 In British Machine Vision Conference
2007
Abstract

Cited by 5 (1 self)
The faithful reconstruction of 3D models from irregular and noisy point samples is a task central to many applications of computer vision and graphics. We present an approach to denoising that naturally handles intersections of manifolds, thus preserving high-frequency details without oversmoothing. This is accomplished through the use of a modified locally weighted regression algorithm that models a neighborhood of points as an implicit product of linear subspaces. By posing the problem as one of energy minimization subject to constraints on the coefficients of a higher-order polynomial, we can also incorporate anisotropic error models appropriate for data acquired with a range sensor. We demonstrate the effectiveness of our approach through preliminary results in denoising synthetic data in 2D and 3D domains.
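A single-subspace sketch of locally weighted regression denoising in 2D (the paper's product-of-subspaces model for manifold intersections and its anisotropic error model are omitted): each point is projected onto a line fitted to its Gaussian-weighted neighbourhood.

```python
import numpy as np

def denoise_2d(points, radius=0.3):
    """Move each 2-D point onto a line fitted by locally weighted regression
    to its Gaussian-weighted neighbourhood (single-subspace version; the
    paper models intersecting manifolds as an implicit product of subspaces,
    which this sketch omits)."""
    out = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        w = np.exp(-(d / radius) ** 2)           # neighbourhood weights
        mu = (w[:, None] * points).sum(0) / w.sum()
        X = points - mu
        C = (w[:, None] * X).T @ X               # weighted covariance
        t = np.linalg.eigh(C)[1][:, -1]          # dominant local direction
        out[i] = mu + ((p - mu) @ t) * t         # project onto the local line
    return out

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
noisy = np.c_[x, 0.5 * x + 0.05 * rng.standard_normal(200)]  # samples of y = x/2
clean = denoise_2d(noisy)
resid = np.abs(clean[:, 1] - 0.5 * clean[:, 0])  # deviation from the true line
print(resid.mean() < np.abs(noisy[:, 1] - 0.5 * noisy[:, 0]).mean())  # True
```

At an intersection of two curves this single-subspace fit would oversmooth; modeling the neighbourhood as a product of subspaces, as the paper does, is what preserves the crease.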
Highest Accuracy Fundamental Matrix Computation
Abstract

Cited by 4 (3 self)
We compare algorithms for fundamental matrix computation, which we classify into “a posteriori correction”, “internal access”, and “external access”. Through experimental comparison, we show that the 7-parameter Levenberg-Marquardt (LM) search and the extended FNS (EFNS) exhibit the best performance, and that additional bundle adjustment does not increase the accuracy to any noticeable degree.
Renormalization Returns: Hyper-renormalization and Its Applications
 ECCV 2012, Part III. LNCS
2012
Abstract

Cited by 3 (1 self)
The technique of “renormalization” for geometric estimation attracted much attention when it was proposed in the early 1990s for having higher accuracy than any other method then known. Later, it was replaced by minimization of the reprojection error. This paper points out that renormalization can be modified so that it outperforms reprojection error minimization. The key fact is that renormalization directly specifies equations to solve, just as the “estimation equation” approach in statistics does, rather than minimizing some cost. Exploiting this fact, we modify the problem so that the solution has zero bias up to high-order error terms; we call the resulting scheme hyper-renormalization. We apply it to ellipse fitting to demonstrate that it indeed surpasses reprojection error minimization. We conclude that it is the best method available today.
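For orientation, the non-iterative Taubin fit below illustrates the family of generalized-eigenproblem conic fits that renormalization belongs to; hyper-renormalization iterates a similar eigenproblem with a bias-corrected normalization matrix, which this sketch does not implement:

```python
import numpy as np

def taubin_fit(x, y):
    """Fit a conic theta = (a, b, c, d, e, f) to points by the Taubin method:
    minimize the algebraic residual normalized by its first-order gradient
    variance, i.e. solve M theta = lambda N theta for the smallest lambda.
    Hyper-renormalization iterates a similar eigenproblem with a
    bias-corrected N; this is the simpler one-shot version."""
    Xi = np.c_[x*x, x*y, y*y, x, y, np.ones_like(x)]
    M = Xi.T @ Xi / len(x)
    N = np.zeros((6, 6))
    for xi, yi in zip(x, y):
        g = np.array([[2*xi, yi, 0.0, 1.0, 0.0, 0.0],    # dF/dx in theta coords
                      [0.0, xi, 2*yi, 0.0, 1.0, 0.0]])   # dF/dy in theta coords
        N += g.T @ g
    N /= len(x)
    # largest eigenvalue of M^-1 N corresponds to the smallest lambda above
    w, v = np.linalg.eig(np.linalg.solve(M, N))
    theta = np.real(v[:, np.argmax(np.real(w))])
    return theta / np.linalg.norm(theta)

rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x = np.cos(t) + 0.01 * rng.standard_normal(200)  # noisy unit circle
y = np.sin(t) + 0.01 * rng.standard_normal(200)
theta = taubin_fit(x, y)
theta /= theta[0]                                # fix scale and sign
print(np.round(theta, 2))                        # approximately [1, 0, 1, 0, 0, -1]
```

Because the solution comes from an eigenproblem rather than a cost minimization, the normalization matrix N can be adjusted to cancel bias, which is the lever hyper-renormalization pulls.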