Results 1–10 of 36
Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection
, 1997
Cited by 1505 (18 self)
We develop a face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high-dimensional image space if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's Linear Discriminant and produces well-separated classes in a low-dimensional subspace even under severe variation in lighting and facial expressions. The Eigenface …
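The Fisher projection this abstract describes can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' code: the small regularization term is an assumption added so the within-class scatter is invertible (the paper instead applies PCA first to avoid a singular scatter matrix).

```python
import numpy as np

def fisher_projection(X, y, k):
    """Sketch of Fisher's Linear Discriminant: project samples (rows of X)
    onto the k directions maximizing between-class over within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    # Generalized eigenproblem Sb w = lambda Sw w; the 1e-6 ridge is an
    # assumption standing in for the paper's PCA pre-projection step.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    W = evecs.real[:, order[:k]]
    return X @ W, W
```

On face images, each row of `X` would be a flattened image and `y` the identity label; the classes land well separated in the k-dimensional projection.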
From Few to many: Illumination cone models for face recognition under variable lighting and pose
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2001
Cited by 433 (12 self)
We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled, and for each pose the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
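The recognition step — assigning a test image to the closest low-dimensional subspace approximating each person's illumination cone — can be sketched as below. This is a minimal illustration under simplifying assumptions: the paper estimates each basis from its generative model, whereas here the basis comes from a plain SVD of training images.

```python
import numpy as np

def subspace_basis(images, k):
    """Orthonormal basis (columns) for the k-dim subspace best fitting a
    person's training images (each image flattened into a column)."""
    A = np.column_stack([np.asarray(im).ravel().astype(float) for im in images])
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :k]

def classify(test_image, bases):
    """Assign the identity whose subspace is closest in Euclidean distance:
    dist(x, U) = ||x - U U^T x||, the residual after orthogonal projection."""
    x = np.asarray(test_image).ravel().astype(float)
    dists = [np.linalg.norm(x - U @ (U.T @ x)) for U in bases]
    return int(np.argmin(dists))
```

A test image lying near person i's subspace projects onto it almost exactly, so its residual (and hence its distance) is smallest for identity i.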
Numerical Shape from Shading and Occluding Boundaries
 Artificial Intelligence
, 1981
Cited by 191 (14 self)
An iterative method for computing shape from shading using occluding boundary information is proposed. Some applications of this method are shown. We employ the stereographic plane to express the orientations of surface patches, rather than the more commonly used gradient space. Use of the stereographic plane makes it possible to incorporate occluding boundary information, but forces us to employ a smoothness constraint different from the one previously proposed. The new constraint follows directly from a particular definition of surface smoothness. We solve the set of equations arising from the smoothness constraints and the image-irradiance equation iteratively, using occluding boundary information to supply boundary conditions. Good initial values are found at certain points to help reduce the number of iterations required to reach a reasonable solution. Numerical experiments show that the method is effective and robust. Finally, we analyze scanning electron microscope (SEM) pictures using this method. Other applications are also proposed.
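The stereographic representation preferred here over gradient space can be sketched directly (a minimal illustration, not the paper's code). The point it makes: an occluding-boundary normal, which has nz = 0 and lies at infinity in gradient space, maps to a finite point on the circle f² + g² = 4 in the stereographic plane, which is what lets boundary information supply usable boundary conditions.

```python
def stereographic(n):
    """Map a unit surface normal (nx, ny, nz), nz >= 0, to stereographic
    coordinates (f, g); projection is from the south pole (0, 0, -1)."""
    nx, ny, nz = n
    f = 2.0 * nx / (1.0 + nz)
    g = 2.0 * ny / (1.0 + nz)
    return f, g
```

For example, the frontal normal (0, 0, 1) maps to the origin, while any boundary normal such as (1, 0, 0) maps onto the finite circle of radius 2.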
Model-Based Recognition in Robot Vision
 ACM Computing Surveys
, 1986
Cited by 161 (0 self)
This paper presents a comparative study and survey of model-based object-recognition algorithms for robot vision. The goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. In one form this is commonly referred to as the “bin-picking” problem, in which the parts to be recognized are presented in a jumbled bin. The paper is organized according to 2D, 2½D, and 3D object representations, which are used as the basis for the recognition algorithms. Three …
Extended Gaussian Images
 Proceedings of the IEEE
, 1984
Cited by 148 (3 self)
This is a primer on extended Gaussian images. Extended Gaussian images are useful for representing the shapes of surfaces. They can be computed easily from:
1. needle maps obtained using photometric stereo, or
2. depth maps generated by ranging devices or stereo.
Importantly, they can also be determined simply from geometric models of the objects. Extended Gaussian images can be of use in at least two of the tasks facing a machine vision system.
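The core of an extended Gaussian image is a histogram over the unit sphere that accumulates surface area by normal direction. The sketch below is a deliberately crude version using only the 8 octants as bins; a real implementation would use a finer sphere tessellation (e.g., a subdivided icosahedron), and the binning scheme here is an assumption, not the memo's.

```python
import numpy as np

def extended_gaussian_image(normals, areas):
    """Crude EGI sketch: accumulate facet area into orientation bins keyed
    by the sign pattern of the unit normal (one bin per octant; axis-aligned
    normals get their own keys because sign(0) == 0)."""
    egi = {}
    for n, a in zip(normals, areas):
        key = tuple(np.sign(n).astype(int))
        egi[key] = egi.get(key, 0.0) + a
    return egi
```

For a unit cube, the six face normals land in six distinct bins with weight equal to each face's area — the EGI of a convex object determines it uniquely up to translation.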
(MIT AI Memo 740)
Surface Reflection: Physical and Geometrical Perspectives
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1991
Cited by 116 (26 self)
Machine vision can greatly benefit from the development of accurate reflectance models. There are two approaches to the study of reflection: physical and geometrical optics. While geometrical models may be construed as mere approximations to physical models, they possess simpler mathematical forms that often render them more usable than physical models. However, geometrical models are applicable only when the wavelength of incident light is small compared to the dimensions of the surface imperfections. Therefore, it is incorrect to use these models to interpret or predict reflections from smooth surfaces, and only physical models are capable of describing the underlying reflection mechanism.
Real-time tracking of image regions with changes in geometry and illumination
, 1996
Cited by 107 (8 self)
Historically, SSD or correlation-based visual tracking algorithms have been sensitive to changes in illumination and shading across the target region. This paper describes methods for implementing SSD tracking that is both insensitive to illumination variations and computationally efficient. We first describe a vector-space formulation of the tracking problem, showing how to recover geometric deformations. We then show that the same vector-space formulation can be used to account for changes in illumination. We combine geometry and illumination into an algorithm that tracks large image regions on live video sequences using no more computation than would be required to track with no accommodation for illumination changes. We present experimental results which compare the performance of SSD tracking with and without illumination compensation.
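The vector-space formulation can be sketched as one joint least-squares solve: the residual between the current window and the template is modeled as a linear combination of motion templates and an illumination basis, so compensating for illumination adds columns, not iterations. The function below is an illustrative sketch, not the paper's implementation; the names `M` (motion templates, e.g. image gradients times a warp Jacobian) and `B` (illumination basis images) are assumptions.

```python
import numpy as np

def solve_motion_and_illumination(I, I0, M, B):
    """One step of a vector-space SSD update (sketch): model
    I - I0 ≈ M @ mu + B @ lam and solve for both parameter sets at once."""
    A = np.hstack([M, B])
    theta, *_ = np.linalg.lstsq(A, I - I0, rcond=None)
    mu = theta[:M.shape[1]]   # geometric (motion) parameters
    lam = theta[M.shape[1]:]  # illumination coefficients
    return mu, lam
```

Because the combined system is solved in a single `lstsq`, adding illumination compensation costs essentially one wider linear solve per frame rather than a separate correction pass.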
Helmholtz Stereopsis: Exploiting Reciprocity for Surface Reconstruction
 International Journal of Computer Vision
, 2002
Cited by 99 (13 self)
We present a method, termed Helmholtz stereopsis, for reconstructing the geometry of objects from a collection of images. Unlike most existing methods for surface reconstruction (e.g., stereo vision, structure from motion, photometric stereo), Helmholtz stereopsis makes no assumptions about the nature of the bidirectional reflectance distribution functions (BRDFs) of objects. This new method of multinocular stereopsis exploits Helmholtz reciprocity by choosing pairs of light source and camera positions that guarantee that the ratio of the emitted radiance to the incident irradiance is the same for corresponding points in the two images. The method provides direct estimates of both depth and a field of surface normals, and consequently weds the advantages of both conventional and photometric stereopsis. Results from our implementations lend empirical support to our technique.
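The reciprocity cue can be written as a single linear constraint on the surface normal: for a reciprocal pair (camera and light swapped between positions o_l and o_r), the intensities i_l and i_r at a true surface point p satisfy w(p) · n = 0 with w = i_l v_l/d_l² − i_r v_r/d_r², for any BRDF. The sketch below evaluates that residual; it is an illustrative reconstruction of the matching cue, not the paper's full reconstruction pipeline.

```python
import numpy as np

def helmholtz_constraint(i_l, i_r, p, o_l, o_r, n):
    """Residual of the Helmholtz reciprocity constraint w(p) . n, where
    w = i_l * v_l / d_l^2 - i_r * v_r / d_r^2, with v_l, v_r the unit
    vectors from p toward o_l, o_r and d_l, d_r the distances. Vanishes
    at the true point/normal regardless of the surface BRDF."""
    v_l = o_l - p
    d_l = np.linalg.norm(v_l)
    v_l = v_l / d_l
    v_r = o_r - p
    d_r = np.linalg.norm(v_r)
    v_r = v_r / d_r
    w = i_l * v_l / d_l**2 - i_r * v_r / d_r**2
    return float(w @ n)
```

Simulating a Lambertian point (i_l proportional to (n·v_r)/d_r², i_r to (n·v_l)/d_l², same albedo by reciprocity) makes the residual vanish to numerical precision.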
Shape and spatially-varying BRDFs from photometric stereo
, 2004
Cited by 74 (0 self)
[Figure 1: from (a) photographs of an object taken under varying illumination (one of ten shown), the method reconstructs (b) its normals and materials, represented as (c) a material weight map controlling a mixture of (d, e) fundamental materials; (f) the object re-rendered under novel lighting.]
This paper describes a photometric stereo method designed for surfaces with spatially-varying BRDFs, including surfaces with both varying diffuse and specular properties. Our method builds on the observation that most objects are composed of a small number of fundamental materials. This approach recovers not only the shape but also material BRDFs and weight maps, yielding compelling results for a wide variety of objects. We also show examples of interactive lighting and editing operations made possible by our method.
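The diffuse baseline this method generalizes is classic Lambertian photometric stereo, which is simple enough to sketch per pixel. This sketch covers only that baseline, not the paper's spatially-varying, specular-capable approach.

```python
import numpy as np

def lambertian_photometric_stereo(I, L):
    """Classic Lambertian photometric stereo for one pixel: given k observed
    intensities I (shape (k,)) under k known distant light directions L
    (shape (k, 3), unit rows), solve I = L @ b in least squares; then
    albedo = |b| and normal = b / |b|."""
    b, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(b)
    normal = b / albedo
    return albedo, normal
```

With three or more non-coplanar lights the system is well posed; the paper's contribution is extending this recovery to mixtures of a few fundamental (possibly specular) materials with per-pixel weights.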