Modeling and Rendering Architecture from Photographs
, 1999
Abstract

Cited by 905 (20 self)
Contents. This section of the course notes is organized as follows: 1. Introductory material for this section. This includes a brief overview of related and complementary material to photogrammetric modeling, such as structure from motion, stereo correspondence, shape from silhouettes, camera calibration, laser scanning, and image-based rendering. 2. A bibliography of related papers. 3. A reprint of: Paul E. Debevec, Camillo J. Taylor, and Jitendra Malik. Modeling and Rendering Architecture from Photographs. In SIGGRAPH 96, August 1996, pp. 11-20. 4. Notes on photogrammetric recovery of arches and surfaces of revolution written by George Borshukov. 5. Copies of the slides used for the presentation. More information can be found in [10], [5], and [13], available at: http://www.cs.berkeley.edu/debevec/Thesis 1 Introduction The creation of three-dimensional models of existing architectural scenes with the aid of the computer has been commonplace for some time, and the resulting models have been both entertaining virtual environments as well as valuable visualization tools. Large-scale efforts have pushed the campuses of I...
Plenoptic Modeling: An Image-Based Rendering System
, 1995
Abstract

Cited by 671 (18 self)
Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the “plenoptic function” of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
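The cylindrical projections the abstract refers to can be illustrated with a minimal sketch: mapping a 3D point to (azimuth, height) coordinates on a unit-radius cylinder around the viewpoint. The axis convention (Y up, Z forward) and the function name are assumptions made for illustration, not the paper's actual parameterization:

```python
import math

def cylindrical_project(X, Y, Z, r=1.0):
    """Project a 3D point onto a cylinder of radius r around the origin
    (axis assumed along Y). Returns (theta, v): the azimuth of the viewing
    ray and the height at which the ray meets the cylinder. Illustrative
    sketch only; axis conventions are assumptions."""
    theta = math.atan2(X, Z)   # azimuth of the ray from the origin
    rho = math.hypot(X, Z)     # horizontal distance to the point
    v = r * Y / rho            # similar triangles: height on the cylinder
    return theta, v
```

Panoramic (cylindrical) images sampled this way are a common storage format for plenoptic samples, since a single cylinder captures a full 360° of azimuth.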
A Robust Technique for Matching Two Uncalibrated Images Through the Recovery of the Unknown Epipolar Geometry
, 1994
Camera self-calibration: Theory and experiments
 In Proceedings of the European Conference on Computer Vision
, 1992
Determining the Epipolar Geometry and its Uncertainty: A Review
 International Journal of Computer Vision
, 1998
Abstract

Cited by 326 (7 self)
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if the images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
An Efficient Solution to the Five-Point Relative Pose Problem
, 2004
Abstract

Cited by 320 (11 self)
An efficient algorithmic solution to the classical five-point relative pose problem is presented. The problem is to find the possible solutions for relative camera pose between two calibrated views given five corresponding points. The algorithm consists of computing the coefficients of a tenth-degree polynomial in closed form and subsequently finding its roots. It is the first algorithm well suited for numerical implementation that also corresponds to the inherent complexity of the problem. We investigate the numerical precision of the algorithm. We also study its performance under noise in minimal as well as overdetermined cases. The performance is compared to that of the well-known 8- and 7-point methods and a 6-point scheme. The algorithm is used in a robust hypothesize-and-test framework to estimate structure and motion in real-time with low delay. The real-time system uses solely visual input and has been demonstrated at major conferences.
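The robust hypothesize-and-test framework mentioned above can be sketched generically: draw minimal samples, let a minimal solver propose one or more hypotheses (a five-point solver can return several roots), and keep the hypothesis with the most inliers. A schematic RANSAC-style loop with a pluggable solver; the function names and the line-fitting usage below are illustrative assumptions, and the five-point solver itself is beyond a short example:

```python
import random

def ransac(data, fit_minimal, residual, sample_size, threshold,
           iters=500, seed=0):
    """Generic hypothesize-and-test loop.

    fit_minimal(sample) -> list of candidate models (minimal solvers may
    return several); residual(model, datum) -> scalar error.
    Returns the model with the largest inlier count, and that count.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        sample = rng.sample(data, sample_size)          # hypothesize...
        for model in fit_minimal(sample):
            inliers = sum(1 for d in data
                          if residual(model, d) < threshold)  # ...and test
            if inliers > best_inliers:
                best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

Smaller minimal samples raise the chance of drawing an all-inlier sample, which is precisely why a five-point solver is preferable to 6-, 7-, or 8-point ones inside such a loop.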
Estimation of relative camera positions for uncalibrated cameras
, 1992
Abstract

Cited by 282 (22 self)
Abstract. This paper considers the determination of internal camera parameters from two views of a point set in three dimensions. A non-iterative algorithm is given for determining the focal lengths of the two cameras, as well as their relative placement, assuming all other internal camera parameters to be known. It is shown that this is all the information that may be deduced from a set of image correspondences.
The Fundamental matrix: theory, algorithms, and stability analysis
 International Journal of Computer Vision
, 1995
Abstract

Cited by 235 (14 self)
In this paper we analyze in some detail the geometry of a pair of cameras, i.e. a stereo rig. Contrary to what has been done in the past and is still done currently, for example in stereo or motion analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of the principal points, pixel aspect ratio, and focal lengths). This is important for two reasons. First, it is more realistic in applications where these parameters may vary according to the task (active vision). Second, the general case considered here captures all the relevant information that is necessary for establishing correspondences between two pairs of images. This information is fundamentally projective and is hidden in a confusing manner in the commonly used formalism of the Essential matrix introduced by Longuet-Higgins [40]. This paper clarifies the projective nature of the correspondence problem in stereo and shows that the epipolar geometry can be summarized in one 3×3 matrix...
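The relation between the calibrated (Essential) and uncalibrated (Fundamental) formalisms contrasted above is compact: with intrinsic matrices K1 and K2, F = K2^(-T) E K1^(-1), so the pixel-space constraint x2^T F x1 = 0 mirrors the normalized-ray constraint. A hedged one-liner (the helper name is an illustrative assumption, not from the paper):

```python
import numpy as np

def essential_to_fundamental(E, K1, K2):
    """Convert an essential matrix to the corresponding fundamental matrix
    given the two cameras' intrinsic matrices: F = K2^{-T} E K1^{-1}.
    Then x2^T F x1 = 0 in pixels iff xhat2^T E xhat1 = 0 in normalized rays,
    since x = K @ xhat. Illustrative helper, not the paper's code."""
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)
```

When the intrinsics are unknown, only F is observable from correspondences, which is the projective information the paper argues is the right object to work with.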
The development and comparison of robust methods for estimating the fundamental matrix
 International Journal of Computer Vision
, 1997
Abstract

Cited by 225 (10 self)
Abstract. This paper has two goals. The first is to develop a variety of robust methods for the computation of the Fundamental Matrix, the calibration-free representation of camera motion. The methods are drawn from the principal categories of robust estimators, viz. case deletion diagnostics, M-estimators and random sampling, and the paper develops the theory required to apply them to nonlinear orthogonal regression problems. Although a considerable amount of interest has focussed on the application of robust estimation in computer vision, the relative merits of the many individual methods are unknown, leaving the potential practitioner to guess at their value. The second goal is therefore to compare and judge the methods. Comparative tests are carried out using correspondences generated both synthetically in a statistically controlled fashion and from feature matching in real imagery. In contrast with previously reported methods the goodness of fit to the synthetic observations is judged not in terms of the fit to the observations per se but in terms of fit to the ground truth. A variety of error measures are examined. The experiments allow a statistically satisfying and quasi-optimal method to be synthesized, which is shown to be stable with up to 50 percent outlier contamination, and may still be used if there are more than 50 percent outliers. Performance bounds are established for the method, and a variety of robust methods to estimate the standard deviation of the error and covariance matrix of the parameters are examined. The results of the comparison have broad applicability to vision algorithms where the input data are corrupted not only by noise but also by gross outliers.
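One of the estimator families compared, M-estimation, is commonly implemented as iteratively reweighted least squares (IRLS). A minimal sketch with the Huber weight function on a generic linear problem stands in here for the paper's orthogonal-regression setting; this is an assumption-laden illustration, not the paper's method:

```python
import numpy as np

def huber_irls(A, b, delta=1.0, iters=50):
    """M-estimation of x in A x ~ b via IRLS with Huber weights:
    w(r) = 1 for |r| <= delta, delta/|r| otherwise, so large residuals
    (outliers) are downweighted instead of dominating the fit."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from ordinary LS
    for _ in range(iters):
        r = A @ x - b
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)                         # weighted least squares:
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x
```

Unlike random-sampling schemes, an M-estimator bounds the influence of outliers rather than rejecting them outright, which is one of the trade-offs the paper's comparison quantifies.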
The Computation of Optical Flow
, 1995
Abstract

Cited by 224 (10 self)
Two-dimensional image motion is the projection of the three-dimensional motion of objects, relative to a visual sensor, onto its image plane. Sequences of time-ordered images allow the estimation of projected two-dimensional image motion as either instantaneous image velocities or discrete image displacements. These are usually called the optical flow field or the image velocity field. Provided that optical flow is a reliable approximation to two-dimensional image motion, it may then be used to recover the three-dimensional motion of the visual sensor (to within a scale factor) and the three-dimensional surface structure (shape or relative depth) through assumptions concerning the structure of the optical flow field, the three-dimensional environment, and the motion of the sensor. Optical flow may also be used to perform motion detection, object segmentation, time-to-collision and focus of expansion calculations, motion-compensated encoding, and stereo disparity measurement. We investiga...
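The instantaneous-velocity formulation mentioned above rests on the brightness-constancy equation Ix·u + Iy·v + It ≈ 0. A minimal least-squares sketch in the Lucas-Kanade style, estimating a single global translation over the whole frame; an illustration under that simplifying assumption, not one of the implementations the survey compares:

```python
import numpy as np

def global_flow(I1, I2):
    """Least-squares estimate of one translational flow (u, v) between two
    frames from the brightness-constancy equation Ix*u + Iy*v + It = 0,
    stacked over every pixel. Valid only for small, uniform motion."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Iy, Ix = np.gradient(I1)     # spatial gradients (rows = y, cols = x)
    It = I2 - I1                 # temporal derivative
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    uv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return uv                    # (u, v) in pixels per frame
```

Real scenes need this solved per window or per pixel with regularization, which is where the many algorithms the survey evaluates diverge.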