Example-Based Photometric Stereo: Shape Reconstruction with General, Varying BRDFs (2005)

by A. Hertzmann, S. M. Seitz
Venue: IEEE T. Pattern Anal.
Results 1 - 10 of 88

3-D depth reconstruction from a single still image

by A. Saxena, S. H. Chung, A. Y. Ng, 2006
Abstract - Cited by 114 (17 self)
We consider the task of 3-d depth estimation from a single still image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured indoor and outdoor environments which include forests, sidewalks, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the value of the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a hierarchical, multiscale Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models the depths and the relation between depths at different points in the image. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps. We further propose a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.
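The supervised-learning setup the abstract describes can be illustrated with a deliberately simplified baseline. This is a sketch only, not the authors' hierarchical MRF: it fits a ridge regression from local patch features to depth, i.e. the kind of local-features-only predictor the abstract argues is insufficient without global context. All shapes and data below are made up for illustration.

```python
import numpy as np

# Toy sketch (NOT the paper's multiscale MRF): predict a per-patch depth
# value from local image features via ridge regression. Feature count,
# patch count, and the synthetic data are illustrative assumptions.

rng = np.random.default_rng(0)

n_patches, n_features = 500, 34        # e.g. texture-energy features
X = rng.normal(size=(n_patches, n_features))       # stand-in features
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.1 * rng.normal(size=n_patches)  # stand-in depths

lam = 1.0                               # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

y_hat = X @ w
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

A global model such as the paper's MRF would additionally couple neighboring patches, which this per-patch regression does not attempt.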

Citation Context

...from texture [14; 15; 16] generally assume uniform color and/or texture,¹ and hence would perform very poorly on the complex, unconstrained, highly textured images that we consider. Hertzmann et al. [17] reconstructed high quality 3-d models from several images, but t...

¹ Also, most of these algorithms assume Lambertian surfaces, which means the appearance of the surface does not change with viewpoint.

Shape and spatially-varying BRDFs from photometric stereo

by Dan B Goldman, Brian Curless, Aaron Hertzmann, Steven M. Seitz, 2004
Abstract - Cited by 104 (0 self)
Figure 1: From a) photographs of an object taken under varying illumination (one of ten photographs is shown here), we reconstruct b) its normals and materials, represented as c) a material weight map controlling a mixture of d,e) fundamental materials. Using this representation we can f) re-render the object under novel lighting. This paper describes a photometric stereo method designed for surfaces with spatially-varying BRDFs, including surfaces with both varying diffuse and specular properties. Our method builds on the observation that most objects are composed of a small number of fundamental materials. This approach recovers not only the shape but also material BRDFs and weight maps, yielding compelling results for a wide variety of objects. We also show examples of interactive lighting and editing operations made possible by our method.
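The material-mixture idea in this abstract can be sketched in a few lines: each pixel's reflectance is a convex combination of a small set of basis BRDFs, selected by a per-pixel weight map. A minimal sketch, assuming two hypothetical basis materials (Lambertian and Blinn-Phong) and made-up geometry; this is not the authors' recovered model:

```python
import numpy as np

# Sketch of the "fundamental materials" idea: pixel reflectance is a
# weighted mixture of a few basis BRDFs. The two basis materials and
# all parameter values below are illustrative assumptions.

def lambertian(n, l, v):
    return max(n @ l, 0.0)

def blinn_phong(n, l, v, shininess=50.0):
    h = (l + v) / np.linalg.norm(l + v)        # half vector
    return max(n @ h, 0.0) ** shininess

def shade(n, l, v, w):
    """Intensity at a pixel with basis weights w = (w_diffuse, w_specular)."""
    return w[0] * lambertian(n, l, v) + w[1] * blinn_phong(n, l, v)

n = np.array([0.0, 0.0, 1.0])                  # surface normal
l = np.array([0.0, 0.6, 0.8])                  # light direction (unit)
v = np.array([0.0, 0.0, 1.0])                  # view direction

matte  = shade(n, l, v, w=(1.0, 0.0))          # pure basis material 1
glossy = shade(n, l, v, w=(0.3, 0.7))          # per-pixel mixture
```

Estimating the weight maps and basis BRDFs jointly from photographs is the hard inverse problem the paper addresses; the forward mixture above is only the representation.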

Citation Context

...ture [12, 14, 15, 18, 27] to estimate material properties. On the other hand, when the BRDF is arbitrary but known or can be measured using reference objects, example-based photometric stereo methods [7, 19] enable reconstructing shape models. In this paper, we address the problem of computing both shape and spatially-varying BRDFs of objects using a novel photometric stereo approach. We seek to achieve ...

Evaluation of Stereo Matching Costs on Images with Radiometric Differences

by Heiko Hirschmüller, Daniel Scharstein - IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2009
Abstract - Cited by 71 (2 self)
Stereo correspondence methods rely on matching costs for computing the similarity of image locations. We evaluate the insensitivity of different costs for passive binocular stereo methods with respect to radiometric variations of the input images. We consider both pixel-based and window-based variants like the absolute difference, the sampling-insensitive absolute difference, and normalized cross correlation, as well as their zero-mean versions. We also consider filters like LoG, mean, and bilateral background subtraction (BilSub) and non-parametric measures like Rank, SoftRank, Census, and Ordinal. Finally, hierarchical mutual information (HMI) is considered as pixelwise cost. Using stereo datasets with ground-truth disparities taken under controlled changes of exposure and lighting, we evaluate the costs with a local, a semi-global, and a global stereo method. We measure the performance of all costs in the presence of simulated and real radiometric differences, including exposure differences, vignetting, varying lighting and noise. Overall, the ranking of methods across all datasets and experiments appears to be consistent. Among the best costs are BilSub, which performs consistently very well for low radiometric differences; HMI, which is slightly better as pixel-wise matching cost in some cases and for strong image noise; and Census, which showed the best and most robust overall performance.
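To make the non-parametric costs concrete, here is a minimal census-transform cost, one of the measures the evaluation found most robust. The window size, toy images, and simulated radiometric change are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Sketch of the census matching cost: each pixel becomes a bit string
# recording which neighbors are darker than it, and the cost is the
# Hamming distance between bit strings. Census is invariant to any
# monotonic intensity change, which is why it handles radiometric
# differences well. Toy 3x3 images are illustrative assumptions.

def census(img, r=1):
    """Census transform with a (2r+1)x(2r+1) window, packed into ints.
    np.roll wraps at the borders, so only interior pixels are valid."""
    out = np.zeros(img.shape, dtype=np.uint16)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.uint16)
    return out

def hamming(a, b):
    return bin(int(a) ^ int(b)).count("1")

left = np.array([[10, 20, 30],
                 [40, 50, 60],
                 [70, 80, 90]], dtype=np.float64)
right = left * 2.0 + 5.0          # simulated monotonic radiometric change

cl, cr = census(left), census(right)
cost = hamming(cl[1, 1], cr[1, 1])  # 0: ordering of neighbors unchanged
```

Pixel-wise costs like absolute difference would score this pair poorly, while the census cost is zero; that invariance is the property the evaluation measures across exposure and lighting changes.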

Dynamic Shape Capture using Multi-View Photometric Stereo

by Daniel Vlasic, Pieter Peers, Ilya Baran, Paul Debevec, Jovan Popović, Szymon Rusinkiewicz, Wojciech Matusik - In ACM Transactions on Graphics
Abstract - Cited by 50 (4 self)
Figure 1: Our system rapidly acquires images under varying illumination in order to compute photometric normals from multiple viewpoints. The normals are then used to reconstruct detailed mesh sequences of dynamic shapes such as human performers. We describe a system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views. The normal maps are captured using active shape-from-shading (photometric stereo), with a large lighting dome providing a series of novel spherical lighting configurations. To compensate for low-frequency deformation, we perform multi-view matching and thin-plate spline deformation on the initial surfaces obtained by integrating the normal maps. Next, the corrected meshes are merged into a single mesh using a volumetric method. The final output is a set of meshes, which were impossible to produce with previous methods. The meshes exhibit details on the order of a few millimeters, and represent the performance over human-size working volumes at a temporal resolution of 60 Hz.

Mesostructure from specularity

by Tongbo Chen, Michael Goesele, Hans-Peter Seidel - IN CVPR ’06: PROCEEDINGS OF THE 2006 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2006
Abstract - Cited by 47 (5 self)
We describe a simple and robust method for surface mesostructure acquisition. Our method builds on the observation that specular reflection is a reliable visual cue for surface mesostructure perception. In contrast to most photometric stereo methods, which take specularities as outliers and discard them, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects with a wide variety of reflection properties, including translucent, low albedo, and highly specular objects. We show results for a variety of objects including human skin, dried apricot, orange, jelly candy, black leather and dark chocolate.

Photometric stereo with non-parametric and spatially-varying reflectance

by Neil Alldrin, Todd Zickler, David Kriegman - In Proceedings of IEEE Computer Vision and Pattern Recognition (CVPR), 2008
Abstract - Cited by 47 (5 self)
We present a method for simultaneously recovering shape and spatially varying reflectance of a surface from photometric stereo images. The distinguishing feature of our approach is its generality; it does not rely on a specific parametric reflectance model and is therefore purely “data-driven”. This is achieved by employing novel bi-variate approximations of isotropic reflectance functions. By combining this new approximation with recent developments in photometric stereo, we are able to simultaneously estimate an independent surface normal at each point, a global set of non-parametric “basis material” BRDFs, and per-point material weights. Our experimental results validate the approach and demonstrate the utility of bi-variate reflectance functions for general non-parametric appearance capture.
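The bi-variate representation can be sketched as a 2D lookup table over the half-angle and difference-angle coordinates (theta_h, theta_d), a standard way of reducing an isotropic BRDF from 4D to 2D. The grid resolution and the toy analytic model used to fill the table are illustrative assumptions; the paper estimates such tables from image data rather than from a formula:

```python
import numpy as np

# Sketch of a bi-variate isotropic reflectance table: store values on a
# 2D grid over (theta_h, theta_d) instead of a full 4D BRDF. Grid size
# and the analytic fill model are illustrative assumptions only.

n_h, n_d = 16, 16
theta_h = np.linspace(0.0, np.pi / 2, n_h)   # half-angle axis
theta_d = np.linspace(0.0, np.pi / 2, n_d)   # difference-angle axis

# Fill from a toy model: diffuse term plus a half-angle lobe
# (this particular toy model happens not to depend on theta_d).
H, D = np.meshgrid(theta_h, theta_d, indexing="ij")
table = 0.3 + 0.7 * np.cos(H) ** 20

def lookup(th, td):
    """Nearest-neighbor lookup into the bi-variate table."""
    i = int(round(th / (np.pi / 2) * (n_h - 1)))
    j = int(round(td / (np.pi / 2) * (n_d - 1)))
    return table[i, j]

r = lookup(0.0, 0.3)   # theta_h = 0: near-mirror configuration, lobe peak
```

In the paper's setting, each entry of such a table is an unknown to be solved for jointly with the normals and per-point material weights.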

Citation Context

...o place reference objects in the scene that have similar reflectance to the test object. This method was used in early photometric stereo research [22] and was later reexamined by Hertzmann and Seitz [8, 9]. The basic idea is that the reference objects provide a direct measurement of the BRDFs in the scene, which is then matched to points on the test object. This works for arbitrary BRDFs, but requires ...

ShadowCuts: Photometric Stereo with Shadows

by Manmohan Chandraker
Abstract - Cited by 32 (0 self)
We present an algorithm for performing Lambertian photometric stereo in the presence of shadows. The algorithm has three novel features. First, a fast graph cuts based method is used to estimate per pixel light source visibility. Second, it allows images to be acquired with multiple illuminants, and there can be fewer images than light sources. This leads to better surface coverage and improves the reconstruction accuracy by enhancing the signal to noise ratio and the condition number of the light source matrix. The ability to use fewer images than light sources means that the imaging effort grows sublinearly with the number of light sources. Finally, the recovered shadow maps are combined with shading information to perform constrained surface normal integration. This reduces the low frequency bias inherent to the normal integration process and ensures that the recovered surface is consistent with the shadowing configuration. The algorithm works with as few as four light sources and four images. We report results for light source visibility detection and high quality surface reconstructions for synthetic and real datasets.
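The Lambertian model underlying this line of work reduces, per pixel, to a small linear system once shadowed observations are masked out. A minimal sketch of that core step, with made-up light directions and a synthetic pixel; the paper's graph-cut visibility estimation and constrained normal integration are not shown:

```python
import numpy as np

# Classical Lambertian photometric stereo at one pixel: with known light
# directions L (k x 3) and intensities I (k,), solve I = L @ (rho * n)
# in the least-squares sense, dropping shadowed samples. The lights and
# the synthetic pixel below are illustrative assumptions.

def photometric_stereo(L, I, lit):
    """Recover albedo rho and unit normal n from unshadowed samples."""
    g, *_ = np.linalg.lstsq(L[lit], I[lit], rcond=None)  # g = rho * n
    rho = np.linalg.norm(g)
    return rho, g / rho

L = np.array([[ 0.0, 0.0, 1.0],
              [ 0.8, 0.0, 0.6],
              [ 0.0, 0.8, 0.6],
              [-0.8, 0.0, 0.6]])
n_true = np.array([0.6, 0.0, 0.8])               # ground-truth normal
rho_true = 0.5
I = rho_true * np.clip(L @ n_true, 0.0, None)    # 4th light is shadowed

lit = I > 0.0                                    # per-pixel visibility mask
rho, n = photometric_stereo(L, I, lit)           # recovers rho_true, n_true
```

Estimating the `lit` mask itself is the hard part in real images; the paper's contribution is doing that robustly with graph cuts rather than the simple threshold used here.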

Citation Context

...etric stereo under complex illumination has been demonstrated in [4]. The assumption of Lambertian BRDF in photometric stereo has been relaxed in a few cases, notably example-based photometric stereo [11]. Deriving shape information from only shadows also has a long history including a series of works by Kender and his colleagues starting with [15] as well as Daum and Dudek [9]. Shadow carving [18] is...

Factored time-lapse video

by Kalyan Sunkavalli, Wojciech Matusik, Hanspeter Pfister, Szymon Rusinkiewicz - ACM Transactions on Graphics (Proc. SIGGRAPH), 2007
Abstract - Cited by 32 (1 self)
Figure 1: We decompose a time-lapse sequence of photographs (a) into sun, sky, shadow, and reflectance components. The representation permits re-rendering without shadows (b) and without skylight (c), or modifying the reflectance of surfaces in the scene (d). We describe a method for converting time-lapse photography captured with outdoor cameras into Factored Time-Lapse Video (FTLV): a video in which time appears to move faster (i.e., lapsing) and where data at each pixel has been factored into shadow, illumination, and reflectance components. The factorization allows a user to easily relight the scene, recover a portion of the scene geometry (normals), and to perform advanced image editing operations. Our method is easy to implement, robust, and provides a compact representation with good reconstruction characteristics. We show results using several publicly available time-lapse sequences.
CR Categories: I.4.8 [Image Processing and Computer Vision]: Scene Analysis—Time-varying Imagery; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture
Keywords: Image-based rendering and lighting, inverse problems, computational photography, reflectance

Confocal Stereo

by Samuel W. Hasinoff, Kiriakos N. Kutulakos, 2009
Abstract - Cited by 30 (4 self)
We present confocal stereo, a new method for computing 3D shape by controlling the focus and aperture of a lens. The method is specifically designed for reconstructing scenes with high geometric complexity or fine-scale texture. To achieve this, we introduce the confocal constancy property, which states that as the lens aperture varies, the pixel intensity of a visible in-focus scene point will vary in a scene-independent way that can be predicted by prior radiometric lens calibration. The only requirement is that incoming radiance within the cone subtended by the largest aperture is nearly constant. First, we develop a detailed lens model that factors out the distortions in high resolution SLR cameras (12MP or more) with large-aperture lenses (e.g., f1.2). This allows us to assemble an A × F aperture-focus image (AFI) for each pixel that collects the undistorted measurements over all A apertures and F focus settings. In the AFI representation, confocal constancy reduces to color comparisons within regions of the AFI, and leads to focus metrics that can be evaluated separately for each pixel. We propose two such metrics and present initial reconstruction results for complex scenes, as well as for a scene with known ground-truth shape.

Photometric Stereo for Outdoor Webcams

by Jens Ackermann, Fabian Langguth, Simon Fuhrmann, Michael Goesele (TU Darmstadt)
Abstract - Cited by 20 (0 self)
We present a photometric stereo technique that operates on time-lapse sequences captured by static outdoor webcams over the course of several months. Outdoor webcams produce a large set of uncontrolled images subject to varying lighting and weather conditions. We first automatically select a suitable subset of the captured frames for further processing, reducing the dataset size by several orders of magnitude. A camera calibration step is applied to recover the camera response function, the absolute camera orientation, and to compute the light directions for each image. Finally, we describe a new photometric stereo technique for non-Lambertian scenes and unknown light source intensities to recover normal maps and spatially varying materials of the scene.

Citation Context

...6, 20]. We use Hayakawa [11] to initially estimate relative light intensities under the assumption of a Lambertian scene. While all these techniques assume Lambertian reflectance, Hertzmann and Seitz [12] use example objects in the scene with known geometry to reconstruct objects with arbitrary, varying BRDFs. Ackermann et al. [1] combine this approach with multi-view stereo to replace the example obj...
