Results 11 – 20 of 359
The dual-bootstrap iterative closest point algorithm with application to retinal image registration
 IEEE Trans. Med. Imag.
, 2003
"... Abstract—Motivated by the problem of retinal image registration, this paper introduces and analyzes a new registration algorithm called DualBootstrap Iterative Closest Point (DualBootstrap ICP). The approach is to start from one or more initial, loworder estimates that are only accurate in small ..."
Abstract

Cited by 57 (18 self)
Abstract—Motivated by the problem of retinal image registration, this paper introduces and analyzes a new registration algorithm called Dual-Bootstrap Iterative Closest Point (Dual-Bootstrap ICP). The approach is to start from one or more initial, low-order estimates that are only accurate in small image regions, called bootstrap regions. In each bootstrap region, the algorithm iteratively: 1) refines the transformation estimate using constraints only from within the bootstrap region; 2) expands the bootstrap region; and 3) tests to see if a higher-order transformation model can be used, stopping when the region expands to cover the overlap between images. Steps 2) and 3), the bootstrap steps, are governed by the covariance matrix of the estimated transformation. Estimation refinement [Step 1)] uses a novel robust version of the ICP algorithm. In registering retinal image pairs, Dual-Bootstrap ICP is initialized by automatically matching individual vascular landmarks, and it aligns images based on detected blood vessel centerlines. The resulting quadratic transformations are accurate to less than a pixel. On tests involving approximately 6000 image pairs, it successfully registered 99.5% of the pairs containing at least one common landmark, and 100% of the pairs containing at least one common landmark and at least 35% image overlap. Index Terms—Iterative closest point, medical imaging, registration, retinal imaging, robust estimation.
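The refinement step pairs each point with its current closest point and re-fits the transform under robust weights. A minimal 2-D sketch of one such robust ICP iteration, using a toy Tukey-style weighting and a rigid model (the paper itself uses quadratic transformations and covariance-driven region growth; everything below is an illustrative simplification, not the authors' code):

```python
import numpy as np

def robust_icp_step(model, scene, beta=1.0):
    """One ICP iteration with robust (Tukey-biweight-style) weights.

    model, scene: (N, 2) and (M, 2) arrays of 2-D points.
    Returns a rigid transform (R, t) fitted to the weighted
    closest-point correspondences. Toy sketch, not the paper's method.
    """
    # 1) closest-point correspondences (brute force)
    d = np.linalg.norm(model[:, None, :] - scene[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    matched = scene[idx]
    r = d[np.arange(len(model)), idx]

    # 2) robust weights: downweight residuals, zero beyond beta
    w = np.where(r < beta, (1 - (r / beta) ** 2) ** 2, 0.0)

    # 3) weighted least-squares rigid alignment (Kabsch)
    W = w.sum()
    mu_m = (w[:, None] * model).sum(0) / W
    mu_s = (w[:, None] * matched).sum(0) / W
    H = ((model - mu_m) * w[:, None]).T @ (matched - mu_s)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_s - R @ mu_m
    return R, t
```

In the full algorithm this step alternates with region expansion and model-order selection; here only the robustly weighted re-fit is shown.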
Structure and Motion from Uncalibrated Catadioptric Views
 In Proc. CVPR
, 2001
"... In this paper we present a new algorithm for structure from motion from point correspondences in images taken from uncalibrated catadioptric cameras with parabolic mirrors. We assume that the unknown intrinsic parameters are three: the combined focal length of the mirror and lens and the intersectio ..."
Abstract

Cited by 49 (5 self)
In this paper we present a new algorithm for structure from motion from point correspondences in images taken from uncalibrated catadioptric cameras with parabolic mirrors. We assume that there are three unknown intrinsic parameters: the combined focal length of the mirror and lens, and the intersection of the optical axis with the image. We introduce a new representation for images of points and lines in catadioptric images, which we call the circle space. This circle space includes imaginary circles, one of which is the image of the absolute conic. We formulate the epipolar constraint in this space and establish a new 4 × 4 catadioptric fundamental matrix. We show that the image of the absolute conic belongs to the kernel of this matrix. This enables us to prove that Euclidean reconstruction is feasible from two views with constant parameters and from three views with varying parameters. In both cases, it is one less than the number of views necessary with perspective cameras.
Camera network calibration from dynamic silhouettes
 in CVPR
, 2004
"... In this paper we present an automatic method for calibrating a network of cameras from only silhouettes. This is particularly useful for shapefromsilhouette or visualhull systems, as no additional data is needed for calibration. The key novel contribution of this work is an algorithm to robustly ..."
Abstract

Cited by 46 (5 self)
In this paper we present an automatic method for calibrating a network of cameras from only silhouettes. This is particularly useful for shape-from-silhouette or visual-hull systems, as no additional data is needed for calibration. The key novel contribution of this work is an algorithm to robustly compute the epipolar geometry from dynamic silhouettes. We use the fundamental matrices computed by this method to determine the projective reconstruction of the complete camera configuration. This is refined into a metric reconstruction using self-calibration. We validate our approach by calibrating a four-camera visual-hull system from archive data where the dynamic object is a moving person. Once the calibration parameters have been computed, we use a visual-hull algorithm to reconstruct the dynamic object from its silhouettes.
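The epipolar geometry recovered here is the same object that the classical point-based normalized 8-point algorithm estimates; the novelty of the paper is obtaining it from silhouette tangents instead of point matches. As background, a sketch of that point-based baseline (an illustration of the standard algorithm, not the silhouette method):

```python
import numpy as np

def eight_point_F(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix F
    satisfying x2_h^T F x1_h = 0 for corresponding points.

    x1, x2: (N, 2) corresponding image points, N >= 8.
    """
    def normalize(x):
        # translate centroid to origin, scale mean distance to sqrt(2)
        c = x.mean(0)
        s = np.sqrt(2) / np.linalg.norm(x - c, axis=1).mean()
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        xh = np.hstack([x, np.ones((len(x), 1))]) @ T.T
        return xh, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # each correspondence contributes one row of A f = 0, f = vec(F)
    A = np.einsum('ni,nj->nij', p2, p1).reshape(len(p1), 9)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # enforce the rank-2 constraint
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1  # undo normalization
```

The projective-reconstruction stage then factors camera matrices from such pairwise F estimates.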
Towards urban 3D reconstruction from video
 in 3DPVT
, 2006
"... The paper introduces a data collection system and a processing pipeline for automatic georegistered 3D reconstruction of urban scenes from video. The system collects multiple video streams, as well as GPS and INS measurements in order to place the reconstructed models in georegistered coordinates. ..."
Abstract

Cited by 44 (7 self)
The paper introduces a data collection system and a processing pipeline for automatic geo-registered 3D reconstruction of urban scenes from video. The system collects multiple video streams, as well as GPS and INS measurements, in order to place the reconstructed models in geo-registered coordinates. Besides high quality in terms of both geometry and appearance, we aim at real-time performance. Even though our processing pipeline is currently far from real-time, we select techniques and design processing modules that can achieve fast performance on multiple CPUs and GPUs, aiming at real-time performance in the near future. We present the main considerations in designing the system and the steps of the processing pipeline. We show results on real video sequences captured by our system.
Globally optimal estimates for geometric reconstruction problems
 In ICCV
, 2005
"... We introduce a framework for computing statistically optimal estimates of geometric reconstruction problems. While traditional algorithms often suffer from either local minima or nonoptimality or a combination of both we pursue the goal of achieving global solutions of the statistically optimal c ..."
Abstract

Cited by 43 (12 self)
We introduce a framework for computing statistically optimal estimates of geometric reconstruction problems. While traditional algorithms often suffer from local minima, non-optimality, or a combination of both, we pursue the goal of achieving global solutions of the statistically optimal cost function. Our approach is based on a hierarchy of convex relaxations to solve non-convex optimization problems with polynomials. These convex relaxations generate a monotone sequence of lower bounds, and we show how one can detect whether the global optimum is attained at a given relaxation. The technique is applied to a number of classical vision problems: triangulation, camera pose, homography estimation and, last but not least, epipolar geometry estimation. Experimental validation on both synthetic and real data is provided. In practice, only a few relaxations are needed for attaining the global optimum.
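For triangulation, the classical baseline such globally optimal methods improve on is linear (DLT) triangulation, which minimizes only an algebraic error rather than the statistically optimal reprojection cost. A minimal two-view sketch of that baseline (standard textbook construction, not the paper's relaxation machinery):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: (3, 4) camera matrices; x1, x2: (2,) image points.
    Each view contributes two rows of A X = 0; the solution is the
    null vector of A. Minimizes algebraic, not reprojection, error.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Under noise, this algebraic solution can differ from the reprojection-optimal point, which is exactly the gap the convex-relaxation hierarchy closes with a certificate of global optimality.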
Capturing and animating occluded cloth
 ACM Trans. on Graphics (Proc. of ACM SIGGRAPH)
, 2007
"... Figure 1: We reconstruct a stationary sleeve using thousands of markers to estimate the geometry (texture added with bump mapping). We capture the shape of moving cloth using a custom set of color markers printed on the surface of the cloth. The output is a sequence of triangle meshes with static co ..."
Abstract

Cited by 41 (1 self)
Figure 1: We reconstruct a stationary sleeve using thousands of markers to estimate the geometry (texture added with bump mapping). We capture the shape of moving cloth using a custom set of color markers printed on the surface of the cloth. The output is a sequence of triangle meshes with static connectivity and with detail at the scale of individual markers in both smooth and folded regions. We compute markers' coordinates in space using correspondence across multiple synchronized video cameras. Correspondence is determined from color information in small neighborhoods and refined using a novel strain pruning process. Final correspondence does not require neighborhood information. We use a novel data-driven hole-filling technique to fill occluded regions. Our results include several challenging examples: a wrinkled shirt sleeve, a dancing pair of pants, and a rag tossed onto a cup. Finally, we demonstrate that cloth capture is reusable by animating a pair of pants using human motion capture data.
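The strain pruning idea exploits the fact that cloth stretches very little, so a candidate correspondence that implies a large change in inter-marker distance is almost certainly wrong. A simplified sketch of that idea (the marker IDs, edge list, and one-shot flagging policy below are illustrative; the paper's process works iteratively):

```python
import numpy as np

def strain_prune(points3d, edges, rest_len, max_strain=0.1):
    """Discard reconstructed markers whose incident edges are
    overstretched: cloth strain is small, so large implied strain
    signals a bad correspondence. Simplified one-pass sketch.

    points3d: dict marker_id -> (3,) reconstructed position
    edges: list of (i, j) neighboring marker pairs on the cloth
    rest_len: dict (i, j) -> rest distance on the flat cloth
    """
    bad = set()
    for (i, j) in edges:
        p, q = points3d.get(i), points3d.get(j)
        if p is None or q is None:
            continue  # one endpoint was never matched
        strain = abs(np.linalg.norm(p - q) - rest_len[(i, j)]) / rest_len[(i, j)]
        if strain > max_strain:
            # flag both endpoints; a real system would iterate,
            # removing the worst offender and re-checking
            bad.update((i, j))
    return {k: v for k, v in points3d.items() if k not in bad}
```

Pruned markers then become candidates for the data-driven hole-filling step described in the abstract.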
Towards a View Invariant Gait Recognition Algorithm
 In Proc. IEEE Conf. on Advanced Video and Signal Based Surveillance
, 2003
"... Human gait is a spatiotemporal phenomenon and typifies the motion characteristics of an individual. The gait of a person is easily recognizable when extracted from a sideview of the person. Accordingly, gaitrecognition algorithms work best when presented with images where the person walks parallel ..."
Abstract

Cited by 36 (5 self)
Human gait is a spatio-temporal phenomenon and typifies the motion characteristics of an individual. The gait of a person is easily recognizable when extracted from a side view of the person. Accordingly, gait-recognition algorithms work best when presented with images where the person walks parallel to the camera (i.e., the image plane). However, it is not realistic to expect this assumption to hold in most real-life scenarios. Hence it is important to develop methods whereby the side view can be generated from any other arbitrary view in a simple, yet accurate, manner. That is the main theme of this paper. We show that if the person is far enough from the camera, it is possible to synthesize a side view (referred to as the canonical view) from any other arbitrary view using a single camera. Two methods are proposed for doing this: i) using the perspective projection model, and ii) using the optical-flow-based structure-from-motion equations. A simple camera calibration scheme for this method is also proposed. Examples of synthesized views are presented. Preliminary testing with gait-recognition algorithms gives encouraging results. A byproduct of this method is a simple algorithm for synthesizing novel views of a planar scene.
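The planar-scene view-synthesis byproduct reduces to estimating a homography between the observed view and the canonical view. A sketch of the standard DLT homography estimate from point correspondences (the classical construction, not the paper's specific calibration scheme):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst for a
    planar scene from >= 4 point correspondences, via the DLT.
    With H known, any point in the arbitrary view can be mapped
    into the canonical (side) view.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # cross-multiplied projection equations, two rows per point
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply H to a 2-D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Warping every pixel this way produces the synthesized canonical view of the planar region.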
Threading Fundamental Matrices
 IEEE Trans. on PAMI
, 1998
"... We present a new function that operates on Fundamental matrices across a sequence of views. The operation, we call "threading", connects two consecutive Fundamental matrices using the Trilinear tensor as the connecting thread. The threading operation guarantees that consecutive camera matrices are c ..."
Abstract

Cited by 35 (2 self)
We present a new function that operates on Fundamental matrices across a sequence of views. The operation, which we call "threading", connects two consecutive Fundamental matrices using the Trilinear tensor as the connecting thread. The threading operation guarantees that consecutive camera matrices are consistent with a unique 3D model, without ever recovering a 3D model. Applications include recovery of camera ego-motion from a sequence of views, image stabilization (plane stabilization) across a sequence, and multi-view image-based rendering.
Skeletal Parameter Estimation from Optical Motion Capture Data
, 2005
"... In this paper we present an algorithm for automatically estimating a subject’s skeletal structure from optical motion capture data. Our algorithm consists of a series of steps that cluster markers into segment groups, determine the topological connectivity between these groups, and locate the positi ..."
Abstract

Cited by 35 (0 self)
In this paper we present an algorithm for automatically estimating a subject's skeletal structure from optical motion capture data. Our algorithm consists of a series of steps that cluster markers into segment groups, determine the topological connectivity between these groups, and locate the positions of their connecting joints. Our problem formulation makes use of fundamental distance constraints that must hold for markers attached to an articulated structure, and we solve the resulting systems using a combination of spectral clustering and nonlinear optimization. We have tested our algorithms using data from both passive and active optical motion capture devices. Our results show that the system works reliably even with as few as one or two markers on each segment. For data recorded from human subjects, the system determines the correct topology and qualitatively accurate structure. Tests with a mechanical calibration linkage demonstrate errors in inferred segment lengths of only two percent on average. We discuss applications of our methods for commercial human figure animation, and for identifying human or animal subjects based on their motion independent of marker placement or feature selection.
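The distance-constraint idea can be sketched directly: markers on one rigid segment keep near-constant pairwise distances across frames, so thresholding the variance of pairwise distances yields a segment graph. This toy version takes connected components of that graph where the paper uses spectral clustering (the threshold and component-based grouping are illustrative assumptions):

```python
import numpy as np

def segment_markers(traj, var_tol=1e-3):
    """Group markers into rigid segments from capture trajectories.

    traj: (F, M, 3) marker positions over F frames.
    Low variance of a pairwise distance across frames suggests the
    two markers sit on the same rigid segment.
    """
    F, M, _ = traj.shape
    # pairwise distances per frame: (F, M, M)
    d = np.linalg.norm(traj[:, :, None, :] - traj[:, None, :, :], axis=3)
    var = d.var(axis=0)
    adj = var < var_tol  # near-constant distance -> same segment

    # connected components of the thresholded graph (simple DFS)
    labels = -np.ones(M, dtype=int)
    comp = 0
    for s in range(M):
        if labels[s] >= 0:
            continue
        stack = [s]
        labels[s] = comp
        while stack:
            u = stack.pop()
            for v in np.nonzero(adj[u])[0]:
                if labels[v] < 0:
                    labels[v] = comp
                    stack.append(v)
        comp += 1
    return labels
```

The subsequent steps in the paper then infer segment connectivity and joint positions from these groups.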
Self-calibration of a camera from video of a walking human
 In ICPR
, 2002
"... Analysis of human activity from a video camera is simplified by the knowledge of the camera’s intrinsic and extrinsic parameters. We describe a technique to estimate such parameters from image observations without requiring measurements of scene objects. We first develop a general technique for cali ..."
Abstract

Cited by 34 (2 self)
Analysis of human activity from a video camera is simplified by knowledge of the camera's intrinsic and extrinsic parameters. We describe a technique to estimate such parameters from image observations without requiring measurements of scene objects. We first develop a general technique for calibration using vanishing points and a vanishing line. We then describe a method for estimating the needed points and line by observing the motion of a human in the scene. Experimental results, including error estimates, are presented.
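A standard vanishing-point calibration fact underlies this kind of approach: with square pixels, zero skew, and a known principal point, two vanishing points of orthogonal scene directions determine the focal length, since orthogonality gives (v1 − pp)·(v2 − pp) + f² = 0. A minimal sketch of that single relation (the simplifying assumptions are stated in the docstring; the paper's full method recovers more parameters):

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, pp):
    """Focal length from two vanishing points of orthogonal scene
    directions, assuming square pixels, zero skew, and a known
    principal point pp. From (v1 - pp).(v2 - pp) + f^2 = 0, a
    valid pair must have a negative dot product.
    """
    d = np.dot(np.asarray(v1, float) - pp, np.asarray(v2, float) - pp)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(-d)
```

In the walking-human setting, the needed vanishing points and line come from tracked leg and torso directions over the gait cycle rather than from measured scene structure.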