Results 1–10 of 113
Catadioptric camera calibration using geometric invariants
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004
Cited by 45 (7 self)
Abstract—Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. In this paper, we propose a novel method for the calibration of central catadioptric cameras using geometric invariants. Lines and spheres in space are all projected into conics in the catadioptric image plane. We prove that the projection of a line provides three invariants, whereas the projection of a sphere provides only two. From these invariants, constraint equations for the intrinsic parameters of the catadioptric camera are derived. There are therefore two variants of this novel method: the first uses projections of lines and the second uses projections of spheres. In general, the projections of two lines or of three spheres are sufficient to achieve catadioptric camera calibration. One important conclusion of this paper is that the method based on projections of spheres is more robust and more accurate than the one based on projections of lines. The performance of our method is demonstrated by the results of both simulations and experiments with real images. Index Terms—Camera calibration, catadioptric camera, geometric invariant, omnidirectional vision, panoramic vision.
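Calibration methods of this kind begin by fitting conics to the image projections of lines or sphere contours. A minimal sketch of that fitting step (our illustration, not the authors' implementation; `fit_conic` is a hypothetical helper), which recovers the six conic coefficients as the null vector of a design matrix:

```python
import numpy as np

def fit_conic(pts):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to image points, e.g. samples along a projected line or sphere contour."""
    x, y = pts[:, 0], pts[:, 1]
    # One row [x^2, xy, y^2, x, y, 1] per point; the conic coefficients are
    # the null vector, i.e. the singular vector of the smallest singular value.
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

# Sanity check: points on the unit circle x^2 + y^2 - 1 = 0.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
coef = fit_conic(pts)
coef /= coef[0]            # fix the scale so the x^2 coefficient is 1
print(np.round(coef, 6))   # close to [1, 0, 1, 0, 0, -1]
```

From several such conics, the paper's invariants then yield constraints on the intrinsics; the fit itself is the shared first step of both variants.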
Robust localization using an omnidirectional appearance-based subspace model of environment
Robotics and Autonomous Systems, 2003
Cited by 32 (1 self)
Appearance-based visual learning and recognition techniques based on models derived from a training set of 2D images are widely used in computer vision applications. In robotics, they have received most attention in visual servoing and navigation. In this paper we discuss a framework for visual self-localization of mobile robots using a parametric model built from panoramic snapshots of the environment. In particular, we propose solutions to the problems of robustness against occlusions and invariance to the rotation of the sensor. Our principal contribution is an “eigenspace of spinning-images”, i.e., a model of the environment which successfully exploits specific properties of panoramic images in order to efficiently calculate the optimal subspace, in the sense of principal component analysis (PCA), of a set of training snapshots without actually decomposing the covariance matrix. By integrating a robust recover-and-select algorithm for the computation of image parameters, we achieve reliable localization even when the input images are partly occluded or noisy. In this way, the robot is capable of localizing itself in realistic environments.
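The covariance-free PCA idea can be sketched as follows (our illustration, not the authors' spinning-images algorithm): when there are far fewer snapshots than pixels, the principal subspace falls out of a thin SVD of the data matrix itself, so the huge pixel-by-pixel covariance matrix is never formed:

```python
import numpy as np

def pca_subspace(X, k):
    """Top-k PCA basis of N images with D pixels each (X is N x D),
    computed from the SVD of the data matrix rather than by forming
    and decomposing the D x D covariance matrix."""
    Xc = X - X.mean(axis=0)                  # center the snapshots
    # Thin SVD: Xc = U S Vt; rows of Vt are the principal directions.
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k], S[:k]**2 / (len(X) - 1)   # basis and variances

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1000))              # 20 toy "panoramic" snapshots
basis, var = pca_subspace(X, 5)
coords = (X - X.mean(axis=0)) @ basis.T      # project into the subspace
print(basis.shape, coords.shape)             # (5, 1000) (20, 5)
```

Localization then amounts to comparing the low-dimensional coordinates of a new snapshot against those of the training set.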
Radon-based structure from motion without correspondences
In IEEE Conf. on Computer Vision and Pattern Recognition, 2005
Cited by 22 (4 self)
We present a novel approach for the estimation of 3D motion directly from two images using the Radon transform. We assume a similarity function defined on the cross-product of two images which assigns a weight to every feature pair. This similarity function is integrated over all feature pairs that satisfy the epipolar constraint. The integration is equivalent to filtering the similarity function with a Dirac function embedding the epipolar constraint. The result of this convolution is a function of the five unknown motion parameters with maxima at the positions of compatible rigid motions. The breakthrough is the realization that the Radon transform is a filtering operator: if we assume that images are defined on spheres and the epipolar constraint is a group action of two rotations on two spheres, then the Radon transform is a convolution/correlation integral. We propose a new algorithm to compute this integral from the spherical harmonics of the similarity and Dirac functions. The resulting resolution in the motion space depends on the bandwidth we keep from the spherical transform. The strength of the algorithm lies in avoiding a commitment to correspondences, making it robust to erroneous feature detection, outliers, and multiple motions. The algorithm has been tested on sequences of real omnidirectional images, where it outperforms correspondence-based structure from motion.
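The spectral trick at the heart of this approach is easiest to see in one dimension. A hedged sketch (a circle analogue of the paper's SO(3) correlation, not its actual implementation): correlating two signals is a pointwise product of their Fourier spectra, and the best alignment shows up as the argmax of the inverse transform, with no point correspondences needed:

```python
import numpy as np

# 1D analogue (on the circle) of correlation without correspondences:
# the circular cross-correlation of f and g has DFT conj(F) * G,
# so the best alignment is the argmax of one inverse FFT.
n = 256
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.exp(np.cos(theta)) + 0.5 * np.sin(3 * theta)   # toy "image" on S^1
shift = 40                                            # ground truth, in samples
g = np.roll(f, shift)                                 # rotated copy of f
corr = np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g)).real
print(int(np.argmax(corr)))                           # → 40
```

On the sphere the same product structure holds for the SO(3) Fourier coefficients, which is what makes the five-parameter search tractable.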
Image-based Visual Servoing with Central Catadioptric Camera
Int. J. Robot. Res.
Cited by 22 (3 self)
Abstract — This paper presents an epipolar-based visual servoing scheme for mobile robots equipped with a panoramic camera. The proposed visual servoing is based on the epipolar geometry and exploits the auto-epipolar property, a special configuration of the epipoles which occurs when the desired and current views differ by a pure translation. This occurrence can be detected by observing when the bi-osculating mirror conics co-intersect at the two epipoles. The auto-epipolar condition enables our controller to align the orientation of the current camera with that of the target. The translation is then performed by exploiting the epipoles. Simulated experiments and a Lyapunov-based stability analysis demonstrate the parametric robustness of the proposed method.
Properties of the catadioptric fundamental matrix
In: ECCV, 2002
Cited by 21 (3 self)
The geometry of two uncalibrated views obtained with a parabolic catadioptric device is the subject of this paper. We introduce the notion of circle space, a natural representation of line images, and the set of incidence-preserving transformations on this circle space, which turns out to equal the Lorentz group. In this space, there is a bilinear constraint on transformed image coordinates in two parabolic catadioptric views, involving what we call the catadioptric fundamental matrix. We prove that the angle between corresponding epipolar curves is preserved and that the transformed image of the absolute conic is in the kernel of that matrix, thus enabling Euclidean reconstruction from two views. We establish the necessary and sufficient conditions for a matrix to be a catadioptric fundamental matrix.
Rotation recovery from spherical images without correspondences
 IEEE Transactions on Pattern Analysis and Machine Intelligence
Cited by 21 (2 self)
This paper addresses the problem of rotation estimation directly from images defined on the sphere and without correspondence. The method is particularly useful for the alignment of large rotations and has potential impact on 3D shape alignment. The foundation of the method lies in the fact that the spherical harmonic coefficients undergo a unitary mapping when the original image is rotated. The correlation between two images is a function of rotations, and we show that it has an SO(3) Fourier transform equal to the pointwise product of the spherical harmonic coefficients of the original images. The resolution of the rotation space depends on the bandwidth we choose for the harmonic expansion, and the rotation estimate is found through a direct search in this 3D discretized space. A refinement of the rotation estimate can be obtained from the conservation of harmonic coefficients in the rotational shift theorem. A novel decoupling of the shift theorem with respect to the Euler angles is presented and exploited in an iterative scheme to refine the initial rotation estimates. Experiments show the suitability of the method for large rotations and its dependence on the bandwidth and the choice of spherical harmonic coefficients.
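The shift theorem underlying the refinement step can be sketched in its simplest case, a rotation about the z-axis (an assumption for illustration; the paper handles general rotations via the Euler-angle decoupling):

```python
import numpy as np

# Shift theorem on the sphere: rotating an image by alpha about the
# z-axis multiplies its degree-l, order-m spherical harmonic coefficient
# by exp(-1j*m*alpha). Along a circle of constant latitude, Y_l^m
# reduces to c * exp(1j*m*az) for a latitude-dependent constant c,
# so the phase factor, and hence alpha, can be read off directly.
m, alpha = 2, 0.7                           # order and ground-truth rotation
az = np.linspace(0, 2 * np.pi, 64, endpoint=False)
c = 0.33                                    # arbitrary nonzero constant
f = c * np.exp(1j * m * az)                 # azimuthal profile of Y_l^m
f_rot = c * np.exp(1j * m * (az - alpha))   # same ring after the rotation
phase = np.angle(f_rot / f)                 # constant, equal to -m*alpha
alpha_est = -phase.mean() / m
print(round(alpha_est, 6))                  # → 0.7
```

For general rotations the coefficients within each degree mix under a unitary Wigner-D matrix rather than a scalar phase, which is what the direct SO(3) search handles.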
Rotation estimation from spherical images
In Proc. Int. Conf. on Pattern Recognition, 2004
Cited by 20 (7 self)
Robotic navigation algorithms increasingly make use of the panoramic field of view provided by omnidirectional images to assist with localization tasks. Since the images taken by a particular class of omnidirectional sensors can be mapped to the sphere, the problem of attitude estimation arising from 3D rotations of the camera can be treated as a problem of estimating rotations between spherical images. Recently it has been shown that direct signal processing techniques are effective tools for handling rotations of the sphere, but they are limited when the signal is altered by larger rotations of omnidirectional cameras. We present an effective solution to the attitude estimation problem under large rotations. Our approach utilizes a shift theorem for the spherical Fourier transform to produce a solution in the spectral domain.
Image processing in catadioptric planes: spatiotemporal derivatives and optical flow computation
In IEEE Workshop on Omnidirectional Vision, 2002
Cited by 20 (0 self)
Images produced by catadioptric sensors contain a significant amount of radial distortion and variation in inherent scale. Blind application of conventional shift-invariant operators or optical flow estimators yields erroneous results. One could argue that, given a calibration of such a sensor, we would always be able to remove distortions and apply any operator in a local perspective plane. Beyond the inefficiency of such an approach, interpolation effects during warping have undesired consequences for filtering. In this paper, we propose to use the sphere as the underlying domain of image processing in central catadioptric systems. This does not mean that we warp the catadioptric image into a spherical image. Instead, we formulate all operations on the sphere but use the samples from the original catadioptric plane. As examples, we study convolution with the Gaussian and its derivatives as well as the computation of optical flow in image sequences acquired with a parabolic catadioptric sensor.
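One standard way to realize Gaussian smoothing on the sphere is in the spectral domain, where the heat kernel attenuates the degree-l harmonic coefficients by exp(-l(l+1)t). A hedged sketch (the helper name is ours, and it assumes the signal has already been expanded in spherical harmonics):

```python
import numpy as np

def heat_smooth_coeffs(coeffs, t):
    """Spherical 'Gaussian' (heat-kernel) smoothing in the spectral domain:
    degree-l harmonic coefficients are attenuated by exp(-l*(l+1)*t), the
    spherical analogue of a Gaussian transfer function.
    `coeffs[l]` holds the 2l+1 coefficients of degree l."""
    return [np.asarray(c) * np.exp(-l * (l + 1) * t)
            for l, c in enumerate(coeffs)]

# A toy spectrum up to degree 3, smoothed with diffusion time t = 0.1.
coeffs = [np.ones(2 * l + 1) for l in range(4)]
smoothed = heat_smooth_coeffs(coeffs, 0.1)
print([round(float(c[0]), 4) for c in smoothed])
# degree 0 untouched, higher degrees shrink: [1.0, 0.8187, 0.5488, 0.3012]
```

In the paper's setting the coefficients themselves are computed from samples taken on the original catadioptric plane, avoiding any warp to a spherical grid.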
Spherical Catadioptric Arrays: Construction, Multi-View Geometry, and Calibration
Cited by 19 (1 self)
This paper introduces a novel imaging system composed of an array of spherical mirrors and a single high-resolution digital camera. We describe the mechanical design and construction of a prototype, analyze the geometry of image formation, present a tailored calibration algorithm, and discuss the effect that design decisions had on the calibration routine. This system is presented as a unique platform for the development of efficient multi-view imaging algorithms which exploit the combined properties of camera arrays and non-central-projection catadioptric systems. Initial target applications include data acquisition for image-based rendering and 3D scene reconstruction. The main advantages of the proposed system include a relatively simple calibration procedure, a wide field of view, and a single imaging sensor, which eliminates the need for color calibration and guarantees time synchronization.
Applications of conformal geometric algebra in computer vision and graphics
6th International Workshop IWMM 2004, 2005
Cited by 16 (0 self)
Abstract. This paper introduces the mathematical framework of conformal geometric algebra (CGA) as a language for computer graphics and computer vision. Specifically, it discusses a new method for pose and position interpolation based on CGA which not only allows existing interpolation methods to be cleanly extended to pose and position interpolation, but also allows the extension to higher-dimensional spaces and to all conformal transforms (including dilations). In addition, we discuss a method of dealing with conics in CGA and the intersection and reflection of rays with such conic surfaces. Possible applications of these algorithms are also discussed.
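The conformal point embedding that CGA builds on can be sketched in plain coordinates (an illustrative implementation, not a full geometric algebra library; `conformal_point` and `cga_inner` are our names): Euclidean points map to null vectors of R^{4,1}, and distances are recovered directly from the inner product:

```python
import numpy as np

def conformal_point(p):
    """Embed a Euclidean point p in R^3 as a null vector of the conformal
    model R^{4,1}: P = p + (1/2)|p|^2 e_inf + e_0, stored here as
    (p1, p2, p3, coefficient of e_inf, coefficient of e_0)."""
    p = np.asarray(p, float)
    return np.concatenate([p, [0.5 * (p @ p), 1.0]])

def cga_inner(P, Q):
    """Inner product of R^{4,1} with e_inf . e_0 = -1:
    <P, Q> = p.q - p_inf*q_0 - p_0*q_inf."""
    return P[:3] @ Q[:3] - P[3] * Q[4] - P[4] * Q[3]

a = conformal_point([1.0, 2.0, 3.0])
b = conformal_point([4.0, 6.0, 3.0])
# Conformal points are null vectors: <P, P> = 0.
print(cga_inner(a, a))                  # → 0.0
# Distance falls out of the inner product: <P, Q> = -|p - q|^2 / 2.
print(np.sqrt(-2.0 * cga_inner(a, b)))  # → 5.0
```

It is this distance-as-inner-product structure that lets rotations, translations, and dilations all act linearly in the conformal model.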