Results 1-10 of 106
The Design and Use of Steerable Filters
 IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991
Abstract

Cited by 1079 (11 self)
Oriented filters are useful in many early vision and image processing tasks. One often needs to apply the same filter, rotated to different angles under adaptive control, or wishes to calculate the filter response at various orientations. We present an efficient architecture to synthesize filters of arbitrary orientations from linear combinations of basis filters, allowing one to adaptively "steer" a filter to any orientation, and to determine analytically the filter output as a function of orientation.
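By way of illustration, the synthesis-from-basis-filters idea can be sketched for the simplest steerable family, the first derivative of a Gaussian (function names, filter size, and sigma here are illustrative choices, not taken from the paper):

```python
import numpy as np

def gaussian_deriv_basis(size=21, sigma=3.0):
    """Basis filters: x- and y-derivatives of a 2-D Gaussian."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return -x * g, -y * g  # G1 at 0 degrees and at 90 degrees

def steer(g0, g90, theta):
    """Synthesize the first-derivative filter at angle theta (radians)
    as a linear combination of the two basis filters."""
    return np.cos(theta) * g0 + np.sin(theta) * g90
```

Because steering is linear, the same combination applies to filter responses: convolve the image once with each basis filter, then form cos(theta)*R0 + sin(theta)*R90 per pixel to obtain the response at any orientation without re-filtering.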
Height and gradient from shading
 International Journal of Computer Vision, 1990
Abstract

Cited by 136 (1 self)
Abstract: The method described here for recovering the shape of a surface from a shaded image can deal with complex, wrinkled surfaces. Integrability can be enforced easily because both surface height and gradient are represented (a gradient field is integrable if it is the gradient of some surface height function). The robustness of the method stems in part from linearization of the reflectance map about the current estimate of the surface orientation at each picture cell (the reflectance map gives the dependence of scene radiance on surface orientation). The new scheme can find an exact solution of a given shape-from-shading problem even though a regularizing term is included. The reason is that the penalty term is needed only to stabilize the iterative scheme when it is far from the correct solution; it can be turned off as the solution is approached. This is a reflection of the fact that shape-from-shading problems are not ill-posed when boundary conditions are available, or when the image contains singular points. This paper includes a review of previous work on shape from shading and photoclinometry. Novel features of the new scheme are introduced one at a time to make it easier to see what each contributes. Included is a discussion of implementation details that are important if exact algebraic solutions of synthetic shape-from-shading problems are to be obtained. The hope is that better performance on synthetic data will lead to better performance on real data.
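The linearization step the abstract mentions can be written out. With R(p, q) the reflectance map and (p0, q0) the current gradient estimate at a picture cell, a first-order Taylor expansion (a standard sketch, not quoted from the paper) is:

```latex
R(p,q) \approx R(p_0,q_0)
  + (p - p_0)\,\frac{\partial R}{\partial p}(p_0,q_0)
  + (q - q_0)\,\frac{\partial R}{\partial q}(p_0,q_0)
```

Substituting this into the image irradiance equation I = R(p, q) makes each iteration's update linear in (p, q), which is what lets the scheme converge to an exact solution once the penalty term is switched off.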
Statistical Approach to Shape from Shading: Reconstruction of 3D Face Surfaces from Single 2D Images
 Neural Computation, 1997
Abstract

Cited by 116 (0 self)
The human visual system is proficient in perceiving three-dimensional shape from the shading patterns in a two-dimensional image. How it does this is not well understood and continues to be a question of fundamental and practical interest. In this paper we present a new quantitative approach to shape-from-shading that may provide some answers. We suggest that the brain, through evolution or prior experience, has discovered that objects can be classified into lower-dimensional object classes as to their shape. Extraction of shape from shading is then equivalent to the much simpler problem of parameter estimation in a low-dimensional space. We carry out this proposal for an important class of 3D objects: human heads. From an ensemble of several hundred laser-scanned 3D heads, we use principal component analysis to derive a low-dimensional parameterization of head shape space. An algorithm for solving shape-from-shading using this representation is presented. It works well even on real im...
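The parameterization the abstract describes can be sketched with a standard PCA; the data layout and function names below are assumptions for illustration, not the authors' code:

```python
import numpy as np

def head_space(scans, k=10):
    """Low-dimensional head-shape space via PCA.
    scans: (n_heads, n_points) matrix, one flattened laser scan per row."""
    mean = scans.mean(axis=0)
    # after centering, the right singular vectors are the principal shapes
    _, _, vt = np.linalg.svd(scans - mean, full_matrices=False)
    return mean, vt[:k]                 # mean head + k "eigenheads"

def reconstruct(mean, basis, coeffs):
    """A head is the mean plus a linear combination of eigenheads."""
    return mean + coeffs @ basis
```

Shape-from-shading then becomes estimation of the k coefficients passed to `reconstruct`, rather than of a height value at every pixel.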
Ordinal structure in the visual perception and cognition of smoothly curved surfaces
 Psychological Review, 1989
Abstract

Cited by 56 (9 self)
In theoretical analyses of visual form perception, it is often assumed that the three-dimensional structures of smoothly curved surfaces are perceptually represented as point-by-point mappings of metric depth and/or orientation relative to the observer. This article describes an alternative theory in which it is argued that our visual knowledge of smoothly curved surfaces can also be defined in terms of local, nonmetric order relations. A fundamental prediction of this analysis is that relative depth judgments between any two surface regions should be dramatically influenced by the monotonicity of depth change (or lack of it) along the intervening portions of the surface by which they are separated. This prediction is confirmed in a series of experiments using surfaces depicted with either shading or texture. Additional experiments are reported, moreover, that demonstrate that smooth occlusion contours are a primary source of information about the ordinal structure of a surface and that the depth extrema in between contours can be optically specified by differences in luminance at the points of occlusion. For many higher organisms, including humans, a primary source of knowledge about objects and events in the surrounding environment is provided by vision. Because of the ecological
Shape from shading: a wellposed problem
, 2004
Abstract

Cited by 50 (4 self)
Shape From Shading is known to be an ill-posed problem. We show in this paper that if we model the problem in a different way than is usually done, more precisely by taking into account the 1/r² attenuation term of the illumination, Shape From Shading becomes completely well-posed. Thus the shading allows one to recover (almost) any surface from only one image (of this surface) without any additional data (in particular, without knowledge of the heights of the solution at the local intensity “minima”, contrary to [6, 23, 8, 25, 12]) and without regularity assumptions (contrary to [17, 10], for example). More precisely, we formulate the problem as that of solving a new Partial Differential Equation (PDE), we develop a complete mathematical study of this equation, and we design a new provably convergent numerical method. Finally, we present results of our new Shape From Shading method on various synthetic and real images.
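The modelling change the abstract refers to can be sketched for a Lambertian surface lit by a nearby point source. The notation here is illustrative, not the paper's exact model, which carries additional geometric factors:

```latex
I(x) \;=\; \frac{\cos\theta(x)}{r(x)^{2}}
```

where θ(x) is the angle between the surface normal and the light direction and r(x) is the distance from the source to the surface point. It is the 1/r² factor, absent from the classical eikonal-type formulations, that breaks the concave/convex ambiguity at singular points and makes the resulting PDE well-posed.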
On 3D Surface Reconstruction Using Shape from Shadows
, 1998
Abstract

Cited by 49 (0 self)
In this paper we discuss new results on the Shape From Darkness problem: using the motion of cast shadows to recover scene structure. Our approach is based on collecting a set of images from a fixed viewpoint as a known light source moves "across the sky". Previously published solutions to this problem have performed the reconstruction only for cross sections of the scene. In this paper, we present a reconstruction algorithm and discuss the reconstruction of an entire 3D scene under various light source trajectories. We also consider the constraints on reconstruction. We conclude with experimental results that illustrate the convergence properties of the solution process and its robustness properties.

I. Introduction

In this paper, we consider surface reconstruction from shadow information. That is, we use the shape and geometric properties of observed shadows to infer the shape of the surfaces casting the shadows as well as of those the shadows are cast upon. This problem is somet...
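A toy one-dimensional version of the cast-shadow constraint the abstract builds on (the height profile, light model, and function name are illustrative assumptions, not the paper's formulation):

```python
from math import tan, radians

def shadowed(h, tan_alpha):
    """For a 1-D height profile h (unit sample spacing) lit from the left
    at elevation angle alpha, mark each sample lying in cast shadow.
    Sample j is dark iff some earlier sample i rises above the light ray
    through j, i.e. (h[i] - h[j]) / (j - i) > tan(alpha)."""
    return [
        any((h[i] - h[j]) > tan_alpha * (j - i) for i in range(j))
        for j in range(len(h))
    ]
```

Inverting this test is the reconstruction problem: each observed shadow boundary pins down a height difference between a blocking point and the point it shades, and sweeping the source "across the sky" accumulates such constraints over the whole scene.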
Analysis of Shape from Shading Techniques
 Proc. IEEE CVPR, 1994
Abstract

Cited by 45 (0 self)
Since the first shape-from-shading technique was developed by Horn in the early 1970s, different approaches have been continuously emerging in the past two decades. Some of them improve existing techniques, while others are completely new approaches. However, there is no literature on the comparison and performance analysis of these techniques. This is exactly what is addressed in this paper.
Bayesian Decision Theory and Psychophysics
 In Perception as Bayesian Inference, 1994
Abstract

Cited by 44 (2 self)
We argue that Bayesian decision theory provides a good theoretical framework for visual perception. Such a theory involves a likelihood function specifying how the scene generates the image(s), a prior assumption about the scene, and a decision rule to determine the scene interpretation. This is illustrated by describing Bayesian theories for individual visual cues and showing that perceptual biases found in psychophysical experiments can be interpreted as biases towards prior assumptions made by the visual system. We then describe the implications of this framework for the integration of different cues. We argue that the dependence of cues on prior assumptions means that care must be taken to model these dependencies during integration. This suggests that a number of proposed schemes for cue integration, which only allow weak interaction between cues, are not adequate and instead stronger coupling is often required. These theories require the choice of decision rules and we argue that...
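For a single cue, the framework reduces to a familiar closed form. A minimal sketch, assuming a Gaussian likelihood and a Gaussian prior over a scalar scene parameter (the function name and parameters are illustrative, not from the chapter):

```python
def map_estimate(cue, cue_var, prior_mean, prior_var):
    """MAP estimate of a scene parameter given a Gaussian likelihood
    centered on the measured cue and a Gaussian prior.  The result is a
    reliability-weighted average, biased toward the prior, which is the
    kind of bias read out of psychophysical experiments."""
    w = prior_var / (prior_var + cue_var)   # weight on the cue
    return w * cue + (1.0 - w) * prior_mean
```

As the cue becomes noisier (`cue_var` grows), the estimate is pulled toward `prior_mean`, exactly the pattern of perceptual bias the authors interpret as evidence for prior assumptions in the visual system.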
Sensitivity to threedimensional orientation in visual search
 Psychological Science, 1990
Abstract

Cited by 44 (9 self)
Abstract—Previous theories of early vision have assumed that visual search is based on simple two-dimensional aspects of an image, such as the orientation of edges and lines. It is shown here that search can also be based on the three-dimensional orientation of objects in the corresponding scene, provided that these objects are simple convex blocks. Direct comparison shows that image-based and scene-based orientation are similar in their ability to facilitate search. These findings support the hypothesis that scene-based properties are represented at preattentive levels in early vision. Visual search is a powerful tool for investigating the representations and processes at the earliest stages of human vision. In this task, observers try to determine as rapidly as possible whether a given target item is present or absent in a display. If the time to detect the target is relatively independent of the number of other items present, the display is considered to contain a distinctive visual feature. Features found in this way (e.g., orientation, color, motion) are taken to be the primitive elements of the visual system. The most comprehensive theories of visual search (Beck, 1982; Julesz, 1984; Treisman, 1986) hypothesize the existence of two visual subsystems. A preattentive system detects features in parallel across the visual field. Spatial relations between features are not registered at this stage. These can only be determined by an attentive system that serially inspects each collection of features in the image. Recent findings, however, have argued for more sophisticated preattentive processes. For example, numerous reports show features to