## A HOLISTIC APPROACH TO STRUCTURE FROM MOTION (2006)

### Citations

10580 | Introduction to Algorithms
- Cormen, Leiserson, et al.
- 1990
Citation Context ...ed approach implementing bottom-up merging. Our algorithm is derived from [73], where it is used for color segmentation. The algorithm is closely related to Kruskal’s minimum spanning tree algorithm for graphs [74]. The initial graph is constructed as follows: each color patch set V k (the patches matched across different frames) denotes a vertex in the graph, and each neighboring pair of such vertices (U, V )... |
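The excerpt relates the merging algorithm to Kruskal's minimum spanning tree construction. As an illustrative sketch only (not the dissertation's actual merging criterion or edge weights), Kruskal with union-find can be written as:

```python
def kruskal(num_vertices, edges):
    """edges: list of (weight, u, v) tuples; returns the MST edge list."""
    parent = list(range(num_vertices))

    def find(x):
        # Find the component root, compressing the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):        # process edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # only merge distinct components
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```

Bottom-up merging of patch sets follows the same pattern: sort candidate merges by a dissimilarity weight and greedily union components that are still distinct.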

3193 | A Wavelet Tour of Signal Processing
- Mallat
- 1999
Citation Context ... 5.4.1 Introduction to PR filter banks Before presenting our algorithm, we first give a brief introduction to 2-channel PR (perfect reconstruction) filter banks, also called wavelet filter banks (see [100] for more details). A two-channel filter bank consists of two parts: an analysis filter bank and a synthesis filter bank. In our case, the signal x̃ is first convolved with a low-pass filter ℓ and a h... |
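The two-channel PR structure described here can be illustrated with the simplest wavelet filter bank, the Haar pair. This is a generic sketch under that assumption, not the specific filters used in the dissertation:

```python
import math

def haar_analysis(x):
    # Pairwise Haar analysis: a low-pass (average) and a high-pass (difference)
    # channel, each downsampled by 2. Assumes an even-length input.
    s = 1 / math.sqrt(2)
    low = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    high = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return low, high

def haar_synthesis(low, high):
    # Upsample and recombine; for the Haar pair this inverts the analysis
    # exactly, which is what "perfect reconstruction" means.
    s = 1 / math.sqrt(2)
    x = []
    for a, d in zip(low, high):
        x.append(s * (a + d))
        x.append(s * (a - d))
    return x
```

Running a signal through analysis then synthesis returns it unchanged (up to floating point), while the two subband signals are half-length, exactly the structure the super-resolution analysis in the dissertation builds on.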

2957 | The fractal geometry of nature
- Mandelbrot
- 1983
Citation Context ... classical geometry. Fractal geometry was developed, which provides a general framework for studying irregular sets as well as regular sets. The term "fractal" was coined in 1975 by Benoit Mandelbrot [86], meaning "broken" or "fractured". Mathematically, a fractal is a geometric object whose Hausdorff dimension is greater than its topological dimension. Fractal properties include scale independence, self... |
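The Hausdorff dimension mentioned above is commonly estimated in practice by box counting: count the occupied boxes N(ε) at several scales ε and fit the slope of log N(ε) against log(1/ε). A minimal sketch of that generic estimator (illustrative only; not the MFS computation from the dissertation):

```python
import math

def box_counting_dimension(points, scales):
    """Estimate the box-counting (fractal) dimension of a 2D point set.

    points: list of (x, y) pairs in the unit square; scales: box side lengths.
    Returns the least-squares slope of log N(eps) versus log(1/eps).
    """
    samples = []
    for eps in scales:
        # Quantize each point to a box index; a set keeps occupied boxes once.
        occupied = {(int(x / eps), int(y / eps)) for x, y in points}
        samples.append((math.log(1 / eps), math.log(len(occupied))))
    n = len(samples)
    mx = sum(a for a, _ in samples) / n
    my = sum(b for _, b in samples) / n
    return (sum((a - mx) * (b - my) for a, b in samples)
            / sum((a - mx) ** 2 for a, _ in samples))
```

A filled patch of points yields a slope near 2 and a straight line of points a slope near 1, matching the intuition that the dimension measures how the set fills space across scales.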

1714 | Robot Vision
- Horn
- 1986
Citation Context ... and motion for general scenes does not seem to be possible. The continuous techniques use video as input, that is, sequences of images with small changes in viewing geometry between consecutive views [11, 12, 13]. From the time-varying image brightness pattern, the image motion between pairs of views, or at least the image motion field of intensity gradients, can be estimated rather easily, although not very... |

1544 | A taxonomy and evaluation of dense two-frame stereo correspondence algorithms
- Scharstein, Szeliski
- 2002
Citation Context ...ter we will examine the computations that allow us to recover the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. The main reason... |

1092 | Shape and motion from image streams under orthography: a factorization method
- Tomasi, Kanade
- 1992
Citation Context ...verview In this chapter we will examine the computations that allow us to recover the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. Th... |

935 | Efficient graph-based image segmentation
- Felzenszwalb, Huttenlocher
- 2004
Citation Context ...mage. However, for the purpose of ego-motion estimation, this does not matter. 3.5.2 Detailed algorithm We take a graph-based approach implementing bottom-up merging. Our algorithm is derived from [73], where it is used for color segmentation. The algorithm is closely related to Kruskal’s minimum spanning tree algorithm for graphs [74]. The initial graph is constructed as follows: each color patch set V k (t... |

833 | Techniques in Fractal Geometry
- Falconer
- 1997
Citation Context ...ation leads to an MFS which is robust to illumination changes, as well as better discrimination. It is also important to point out that there exists an efficient and robust algorithm for computing it [88]. 4.3 MFS for textures The MFS is the vector of the fractal dimensions of some categorization of the image points. Using the idea of the local density function in the categorization has the advantage... |

653 | Measurement Error Models
- Fuller
- 1987
Citation Context ...he inverse of a matrix, the solution of the LS estimator xLS is characterized by xLS = (A^T A)^{-1} A^T b. (2.4) However, it is well known that under noisy conditions this estimator generally is biased [42, 43]. What does this mean? Consider a problem for which you have a set of noisy measurements and you make an estimate. Then you choose another set of measurements and make another estimate. Continue many... |
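The bias the excerpt refers to is the errors-in-variables effect: when noise enters the data matrix A rather than b, the LS estimate does not average out to the true value over repeated experiments. A scalar simulation of this generic effect (the noise levels here are illustrative choices, not values from the dissertation):

```python
import random

def ls_estimate(a_obs, b):
    # Scalar least squares: argmin_x of sum_i (a_i * x - b_i)^2.
    return sum(ai * bi for ai, bi in zip(a_obs, b)) / sum(ai * ai for ai in a_obs)

random.seed(0)
x_true = 2.0
estimates = []
for _ in range(200):                          # repeat the experiment many times
    a = [random.gauss(0.0, 1.0) for _ in range(500)]
    b = [x_true * ai for ai in a]             # exact model b = A x
    a_noisy = [ai + random.gauss(0.0, 0.5) for ai in a]  # noise in A, not in b
    estimates.append(ls_estimate(a_noisy, b))
mean_estimate = sum(estimates) / len(estimates)
# Averaging many estimates does NOT recover x_true = 2.0: the estimator is
# attenuated toward zero, by roughly sigma_a^2 / (sigma_a^2 + sigma_n^2).
```

With these variances the attenuation factor is 1/1.25, so the average settles near 1.6 no matter how many experiments are run, which is exactly the "bias" the excerpt describes.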

632 | Multiple View Geometry
- Hartley, Zisserman
- 2001
Citation Context ...aches, the discrete and the continuous ones, with major advances in different areas of application. The so-called discrete techniques use views of the scene which are significantly separated in space [5, 6, 7, 8] and require corresponding image features in the different views, usually salient points [9] and lines [10]. Using the correspondences, the discrete rigid displacement of the camera between the views... |

559 | Three-Dimensional Computer Vision
- Faugeras
- 1993
Citation Context ...aches, the discrete and the continuous ones, with major advances in different areas of application. The so-called discrete techniques use views of the scene which are significantly separated in space [5, 6, 7, 8] and require corresponding image features in the different views, usually salient points [9] and lines [10]. Using the correspondences, the discrete rigid displacement of the camera between the views... |

420 | Limits on superresolution and how to break them
- Baker, Kanade
- 2002
Citation Context ...from a(z)x(z) and b(z)x(z) exactly. This observation is not an accident. It actually holds true for general blurring kernels, as we will show next. We follow Baker’s modeling of the blurring kernel H ([99]). The blurring kernel (point spread function) can be decomposed into two components: H = Ω ∗ C, where Ω(X) models the blurring caused by the optics and C(X) models the spatial integration perform... |
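The decomposition H = Ω ∗ C rests on the associativity of convolution: blurring with the optics and then with the sensor integration is the same as blurring once with their convolution. A small 1D check (the kernel values below are hypothetical, chosen only for illustration):

```python
def conv(a, b):
    """Full 1D discrete convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

signal = [0.0, 1.0, 3.0, 2.0, 0.0, 4.0]
omega = [0.25, 0.5, 0.25]        # stand-in for the optical blur Omega
c = [1 / 3, 1 / 3, 1 / 3]        # stand-in for the sensor spatial integration C

h = conv(omega, c)               # combined kernel H = Omega * C
two_step = conv(conv(signal, omega), c)
one_step = conv(signal, h)
# Both pipelines produce the same blurred signal (up to floating point).
```

This is why the two physical blur sources can be analyzed as a single kernel H in the super-resolution model.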

398 | The Geometry of Multiple Images
- Faugeras, Luong, et al.
- 2001
Citation Context ...aches, the discrete and the continuous ones, with major advances in different areas of application. The so-called discrete techniques use views of the scene which are significantly separated in space [5, 6, 7, 8] and require corresponding image features in the different views, usually salient points [9] and lines [10]. Using the correspondences, the discrete rigid displacement of the camera between the views... |

395 | Uniqueness and estimation of three-dimensional motion parameters of rigid objects with curved surfaces
- Tsai, Huang
- 1984
Citation Context ...he so-called discrete techniques use views of the scene which are significantly separated in space [5, 6, 7, 8] and require corresponding image features in the different views, usually salient points [9] and lines [10]. Using the correspondences, the discrete rigid displacement of the camera between the views is computed. Then, the intersection of viewing rays provides the structure. This approach h... |

287 | Statistical Optimization for Geometric Computation: Theory and Practice
- Kanatani
- 1996
Citation Context ...he 3D motion and the structure, we can segment the scene and apply our methods to larger amounts of data. A number of previous studies have analyzed the statistics of visual processes. In particular, [45] discussed bias for some visual recovery processes. A few studies analyzed the statistics of structure from motion [46, 19, 12, 47]. However, these analyses stayed at the general level of parameter es... |

258 | Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage
- Chambolle, DeVore, et al.
- 1998
Citation Context ...tive back-projection method with a fast and optimal regularization criterion in each iteration step. Wavelet theory has previously been used for image de-noising and de-blurring from static images ([95, 96]). However, it has not been studied much with respect to the super-resolution problem. In recent work wavelet theory has been applied to this problem [97], but only for the purpose of speeding up the... |

255 | Geometric Invariance in Computer Vision
- Mundy, Zisserman
- 1992
Citation Context ...on algebraic and geometric invariants. Great importance has been given to the study of quantities which are invariant to the viewpoint from which the image is taken. A number of projective invariants [76] have been found, which are defined on feature sets of points, lines and planar curves [75], and they have been used for object recognition and calibration. However, none of these descriptors provi... |

253 | Numerical shape from shading and occluding boundaries
- Ikeuchi, Horn
- 1981
Citation Context ...t allow us to recover the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. The main reason is that we have to segment the scene while... |

251 | Optimization Algorithms and Consistent Approximations
- Polak
- 1997
Citation Context ... squares minimization with quadratic constraints. There are many standard algorithms dealing with such types of minimization. We used the Mukai-Polak version of the Augmented Lagrangian method (ALS) ([72]), which guarantees super-linear convergence. After having obtained the rpf from (3.12), we need to estimate ‖npf‖, the length of the npf, in order to solve subsequently for translation and rotation... |

224 | Motion analysis for image enhancement: resolution, occlusion and transparency, JVCIP
- Irani, Peleg
- 1993
Citation Context ...e critical for the success of the super-resolution reconstruction. There is an extensive literature on the image alignment problem. Many researchers take a flow-based approach to image alignment ([89, 90, 91, 92]). Here we adapt the algorithm described in Chapter 3 to the case of large image displacement to provide a homography-based alignment of multiple frames. This is described in Section 5.6. The remained... |

210 | Texture Classification by Wavelet Packet Signatures
- Laine, Fan
- 1993
Citation Context ...tion. However, none of these descriptors provides a high-level characterization of textures. Numerous texture descriptors have been proposed. Most of them are either statistics-based or filter-based ([77, 78, 79, 80]), which makes them sensitive to changes in viewpoint. Lazebnik, Schmid and Ponce ([3]) proposed a texture representation which is invariant to view-point changes in a weak form (it is locally invaria... |

183 | What does the occluding contour tell us about solid shape
- Koenderink
- 1984
Citation Context ...the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. The main reason is that we have to segment the scene while we recover it. It is c... |

181 | An Invitation to 3-D Vision
- Ma, Soatto, et al.
- 2004

181 | Theory of reconstruction from image motion
- Maybank
- 1992
Citation Context ... and motion for general scenes does not seem to be possible. The continuous techniques use video as input, that is, sequences of images with small changes in viewing geometry between consecutive views [11, 12, 13]. From the time-varying image brightness pattern, the image motion between pairs of views, or at least the image motion field of intensity gradients, can be estimated rather easily, although not very... |

165 | Subspace methods for recovering rigid motion: I. Algorithm and implementation
- Heeger, Jepson
- 1992
Citation Context ...stems use image motion, there is evidence for the success of such an approach in real world environments. Thus, naturally, image motion has been used in navigation tasks, such as 3D motion estimation [14], tracking [15, 16], and segmentation and obstacle avoidance for robotic systems [17, 18]. Despite all the tremendous progress, neither image motion by itself nor correspondence by itself is sufficient to... |

157 | Bundle adjustment—a modern synthesis, in Vision Algorithms: Theory and Practice
- Triggs, McLauchlan, et al.
Citation Context ...on framework. The multiple view constraints defined on point correspondences are well understood [10, 54, 55, 56]. Nowadays most point correspondence methods employ the technique of bundle adjustment [57] to refine 3D structure and viewing parameters. Oliensis et al. [58, 59] proposed algorithms which first eliminate the rotational components and then decompose the residual correspondences into struc... |

129 | Classifying images of materials: Achieving viewpoint and illumination independence
- Varma, Zisserman
- 2002
Citation Context ...ed as a fixed size random subset of the class, and all remaining images are... (Figure 4.17: Classification rate vs. number of training samples. Three methods are compared: the MFS method, the (H+L)(S+R) method in [3] and the VZ-Joint method in [4]. (a) Classification rate for the best class. (b) Mean classification rate for all 25 classes. (c) Classification rate for the worst class.) |

129 | Robust dynamic motion estimation over time.
- Black, Anandan
- 1991
Citation Context ...components and then decompose the residual correspondences into structure and translation. A number of studies considered the estimation from multiple flow fields assuming continuity in the motion [60, 61, 62, 63]. [64] and [65] developed algorithms using line and point correspondences for the reconstruction of scenes with planar objects. The first ones to present a subspace constraint on homographies of plane... |

127 | Structure from motion using line correspondences
- Spetsakis, Aloimonos
- 1990
Citation Context ...iscrete techniques use views of the scene which are significantly separated in space [5, 6, 7, 8] and require corresponding image features in the different views, usually salient points [9] and lines [10]. Using the correspondences, the discrete rigid displacement of the camera between the views is computed. Then, the intersection of viewing rays provides the structure. This approach has been shown t... |

113 | Determining Shape and Reflectance of Hybrid Surfaces by Photometric Sampling
- Nayar, Ikeuchi, et al.
- 1990
Citation Context ...t allow us to recover the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. The main reason is that we have to segment the scene while... |

104 | Motion deblurring and superresolution from an image sequence
- Bascle, Blake, et al.
- 1996
Citation Context ...e critical for the success of the super-resolution reconstruction. There is an extensive literature on the image alignment problem. Many researchers take a flow-based approach to image alignment ([89, 90, 91, 92]). Here we adapt the algorithm described in Chapter 3 to the case of large image displacement to provide a homography-based alignment of multiple frames. This is described in Section 5.6. The remained... |

103 | A sparse texture representation using affine-invariant regions.
- Lazebnik
- 2003
Citation Context ...4.15 Four samples each of the 25 texture classes in Ponce’s data sets. 4.16 Retrieval curves for the Ponce database by our method and the methods in [3]. 4.17 Classification rate vs. number of training samples. Three methods are compared: the MFS method, the (H+L)(S+R) method in [3... |

95 | Automatic reconstruction of piecewise planar models from multiple views
- Baillard, Zisserman
- 1999
Citation Context ...n decompose the residual correspondences into structure and translation. A number of studies considered the estimation from multiple flow fields assuming continuity in the motion [60, 61, 62, 63]. [64] and [65] developed algorithms using line and point correspondences for the reconstruction of scenes with planar objects. The first ones to present a subspace constraint on homographies of planes in m... |

89 | Multiple resolution texture analysis and classification
- Peleg, Naor, et al.
- 1984
Citation Context ...ditional texture retrieval and classification on standard texture data sets, but by using much lower dimensional feature vectors. Fractal geometry has been used before in the description of textures ([81, 82]) and texture segmentation ([83, 84, 85]). However, the invariance of the fractal dimension to bi-Lipschitz maps has not been utilized in the vision community. Furthermore, existing approaches either... |

88 | High-resolution image reconstruction from lower-resolution image sequences and space-varying image restoration
- Tekalp, Ozkan, et al.
- 1992
Citation Context ...toration. Despite the difficulties caused by the ill-posedness of super-resolution image restoration, researchers have made great progress toward stable algorithms. Iterative back-projection methods ([89, 93]) have been shown to be effective for high-resolution image reconstruction. It is known, however, that the de-blurring process, which is part of this approach, makes it very sensitive to noise. Thu... |

87 | Texture gradient as a depth cue
- Bajcsy, Lieberman
- 1976
Citation Context ...amine the computations that allow us to recover the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. The main reason is that we have to segmen... |

80 | Qualitative detection of motion by a moving observer
- Nelson
- 1991
Citation Context ...real world environments. Thus, naturally, image motion has been used in navigation tasks, such as 3D motion estimation [14], tracking [15, 16], and segmentation and obstacle avoidance for robotic systems [17, 18]. Despite all the tremendous progress, neither image motion by itself nor correspondence by itself is sufficient to develop accurate human-like model building capabilities. It is clear that dense corr... |

75 | Review: Geometric invariants and object recognition
- Weiss
- 1993
Citation Context ...iant to environmental changes, such as changes in view-point, illumination and geometry of the underlying surface. In the Computer Vision literature, the search for invariance started in the nineties [75], when researchers dug up the mathematical literature on algebraic and geometric invariants. Great importance has been given to the study of quantities which are invariant to the viewpoint from which t... |

65 | Shape from texture
- Aloimonos, Swain
- 1988
Citation Context ...amine the computations that allow us to recover the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. The main reason is that we have to segmen... |

60 | Space perception in pictures
- Doorn, Wagemans, et al.
- 2011
Citation Context ... and even when the 3D viewing geometry is estimated correctly, the shape often is estimated incorrectly. It is known also in the psychophysical literature that human shape estimation is not veridical [31, 33]. For a variety of conditions and from a number of cues there is an underestimation of slant. Planar surface patches estimated from texture [34, 27], contour [35], stereopsis [1, 36], and motion of... |

56 | Detecting moving objects
- Thompson, Pong
- 1990
Citation Context ...real world environments. Thus, naturally, image motion has been used in navigation tasks, such as 3D motion estimation [14], tracking [15, 16], and segmentation and obstacle avoidance for robotic systems [17, 18]. Despite all the tremendous progress, neither image motion by itself nor correspondence by itself is sufficient to develop accurate human-like model building capabilities. It is clear that dense corr... |

54 | A multi-frame approach to visual motion perception
- Spetsakis, Aloimonos
- 1991
Citation Context ...verview In this chapter we will examine the computations that allow us to recover the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. Th... |

53 | Extended fractal analysis for texture classification and segmentation
- Kaplan
- 1999
Citation Context ...ification on standard texture data sets, but by using much lower dimensional feature vectors. Fractal geometry has been used before in the description of textures ([81, 82]) and texture segmentation ([83, 84, 85]). However, the invariance of the fractal dimension to bi-Lipschitz maps has not been utilized in the vision community. Furthermore, existing approaches either simply compute a single fractal dimensio... |

52 | Unsupervised texture segmentation of images using tuned matched Gabor filters
- Teuner, Pichler, et al.
- 1995
Citation Context ...tion. However, none of these descriptors provides a high-level characterization of textures. Numerous texture descriptors have been proposed. Most of them are either statistics-based or filter-based ([77, 78, 79, 80]), which makes them sensitive to changes in viewpoint. Lazebnik, Schmid and Ponce ([3]) proposed a texture representation which is invariant to view-point changes in a weak form (it is locally invaria... |

44 | Combining intensity and motion for incremental segmentation and tracking,
- Black
- 1992
Citation Context ...components and then decompose the residual correspondences into structure and translation. A number of studies considered the estimation from multiple flow fields assuming continuity in the motion [60, 61, 62, 63]. [64] and [65] developed algorithms using line and point correspondences for the reconstruction of scenes with planar objects. The first ones to present a subspace constraint on homographies of plane... |

42 | Structure from motion using sequential monte carlo methods
- Qian, Chellappa
Citation Context ...components and then decompose the residual correspondences into structure and translation. A number of studies considered the estimation from multiple flow fields assuming continuity in the motion [60, 61, 62, 63]. [64] and [65] developed algorithms using line and point correspondences for the reconstruction of scenes with planar objects. The first ones to present a subspace constraint on homographies of plane... |

42 | A Markov random field model-based approach to unsupervised texture segmentation using local and global spatial statistics
- Kervrann, Heitz
Citation Context ...tion. However, none of these descriptors provides a high-level characterization of textures. Numerous texture descriptors have been proposed. Most of them are either statistics-based or filter-based ([77, 78, 79, 80]), which makes them sensitive to changes in viewpoint. Lazebnik, Schmid and Ponce ([3]) proposed a texture representation which is invariant to view-point changes in a weak form (it is locally invaria... |

39 | Real-time visual servoing
- Allen, Yoshimi, et al.
- 1991
Citation Context ... motion, there is evidence for the success of such an approach in real world environments. Thus, naturally, image motion has been used in navigation tasks, such as 3D motion estimation [14], tracking [15, 16], and segmentation and obstacle avoidance for robotic systems [17, 18]. Despite all the tremendous progress, neither image motion by itself nor correspondence by itself is sufficient to develop accurate h... |

38 | On the trilinear tensor of three perspective views and its underlying geometry
- Shashua, Werman
Citation Context ... allow for robust estimation of structure and motion. We have the trilinear constraint resulting in 27 and the quadrilinear constraint resulting in 64 parameters, whose estimations are very sensitive [10, 54, 55, 56]. In this chapter we propose to combine multiple motion fields, not through depth or inverse depth values, but through 3D shape. The 3D shape of a scene patch is described by the surface normal of the... |

38 | Is super-resolution with optical flow feasible?
- Zhao, Sawhney
- 2002
Citation Context ...e critical for the success of the super-resolution reconstruction. There is an extensive literature on the image alignment problem. Many researchers take a flow-based approach to image alignment ([89, 90, 91, 92]). Here we adapt the algorithm described in Chapter 3 to the case of large image displacement to provide a homography-based alignment of multiple frames. This is described in Section 5.6. The remained... |

37 | Analytical results on error sensitivity of motion estimation from two views
- Daniilidis, Nagel
- 1990
Citation Context ...e is not feasible on the basis of two frames or consecutive frames with small baseline displacement only. On one hand, many researchers showed that camera translation is confused with camera rotation [19, 20, 14, 21]. Furthermore, the recovered scene structure is very sensitive to errors in camera motion estimation. On the other hand, our studies (Chapter 1) on the visual estimation process, through which humans pe... |

35 | Dual Computation of Projective Shape and Camera Positions from Multiple Images
- Carlsson, Weinshall
- 1998
Citation Context ... allow for robust estimation of structure and motion. We have the trilinear constraint resulting in 27 and the quadrilinear constraint resulting in 64 parameters, whose estimations are very sensitive [10, 54, 55, 56]. In this chapter we propose to combine multiple motion fields, not through depth or inverse depth values, but through 3D shape. The 3D shape of a scene patch is described by the surface normal of the... |

34 | Texture classification using windowed Fourier filters
- Azencott, Wang, et al.
- 1997

33 | Shape from texture and contour by weak isotropy
- Garding
- 1993
Citation Context ...amine the computations that allow us to recover the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. The main reason is that we have to segmen... |

32 | Multi-frame estimation of planar motion
- Zelnik, Irani
- 2000
Citation Context ... In the sequel, [67] presented a subspace constraint on the relative homographies of pairs of planes across the different views. Most closely related to our work is the study of Zelnik-Manor and Irani [2], which introduced a subspace constraint on image motion. The approach assumes that differential motion between a reference frame and any other frame at the same scene points can be obtained. Clearly,... |

32 | Surface orientation from texture: ideal observers, generic observers and the information content of texture cues
- Knill
- 1998

31 | Optical flow estimation and the interaction between measurement errors at adjacent pixel positions
- Nagel
- 1995
Citation Context ...ities. Furthermore, the modeling of the scene as consisting of planar patches is an approximation to the actual surface of the scene. Sensor noise may be considered i.i.d. and is easier to deal with ([51, 52]). But other errors could be more significant, and they are more elaborate, making the statistics rather complicated. It is too difficult to estimate the statistics of the combined noise, which is nec... |

31 | A wavelet-based interpolation-restoration method for super-resolution
- Nguyen, Milanfar
- 2000
Citation Context ...ising and de-blurring from static images ([95, 96]). However, it has not been studied much with respect to the super-resolution problem. In recent work wavelet theory has been applied to this problem [97], but only for the purpose of speeding up the computation. Our contribution lies in an analysis that reveals the relationship between the inherent structure of super-resolution reconstruction and the... |

30 | Automatic 3D modelling of architecture
- Dick, Torr, et al.
- 2000
Citation Context ...se the residual correspondences into structure and translation. A number of studies considered the estimation from multiple flow fields assuming continuity in the motion [60, 61, 62, 63]. [64] and [65] developed algorithms using line and point correspondences for the reconstruction of scenes with planar objects. The first ones to present a subspace constraint on homographies of planes in multiple v... |

29 | A multi-frame structure-from-motion algorithm under perspective projection
- Oliensis
- 1999
Citation Context ...ondences are well understood [10, 54, 55, 56]. Nowadays most point correspondence methods employ the technique of bundle adjustment [57] to refine 3D structure and viewing parameters. Oliensis et al. [58, 59] proposed algorithms which first eliminate the rotational components and then decompose the residual correspondences into structure and translation. A number of studies considered the estimation f... |

27 | Observability of 3D motion
- Fermüller, Aloimonos

26 | A nonlinear method for estimating the projective geometry of 3 views
- Faugeras, Papadopoulo
- 1998
Citation Context ... allow for robust estimation of structure and motion. We have the trilinear constraint resulting in 27 and the quadrilinear constraint resulting in 64 parameters, whose estimations are very sensitive [10, 54, 55, 56]. In this chapter we propose to combine multiple motion fields, not through depth or inverse depth values, but through 3D shape. The 3D shape of a scene patch is described by the surface normal of the... |

25 | Bayesian structure from motion
- Forsyth, Ioffe, et al.
- 1999

25 | Robust shift and add approach to super-resolution
- Farsiu, Robinson, et al.
- 2003
Citation Context ...deal with the noise issue. However, these methods either are very sensitive to the assumed noise model (Tikhonov regularization) or are computationally expensive (Total-Variation regularization). See [94] for more details. Our contributions to the reconstruction process are two-fold. First, we model the image formation procedure from the point of view of filter bank theory. Then based on this new... |
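The Tikhonov regularization mentioned in this excerpt has a closed form that fits in a few lines. The sketch below is a generic illustration of the technique, not the thesis's implementation; the function name and test matrices are invented for the example.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve argmin_x ||A x - b||^2 + lam * ||x||^2 via the
    regularized normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Larger `lam` trades data fidelity for a smaller-norm, noise-suppressed solution, which is why the result depends strongly on the assumed noise level.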

21 | Multi-view subspace constraints on homographies
- Zelnik-Manor, Irani
- 1999
Citation Context ...pondences for the reconstruction of scenes with planar objects. The first ones to present a subspace constraint on homographies of planes in multiple views were Shashua and Avidan [66]. In the sequel [67] presented a subspace constraint on the relative homographies of pairs of planes across the different views. Most closely related to our work is the study of Zelnik-Manor and Irani [2], which introduc... |

21 | A projective invariant for texture, in
- Xu, Hui, et al.
- 2006
Citation Context ...ion information. Let's check first some typical texture. A typical grass texture image from [87] is shown in Fig. 4.2 (a). [Figure 4.2: (a) Original texture image. (b) 3D surface visualization of the image with D = 2.79.] If we represent the image as a 3D surface in Fig. 4.2 (b) by taking the image intensity as the height, it is easy to see that such a surface is a highly irregular surface. An... |
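As a rough illustration of measuring the irregularity of such an intensity surface, here is a minimal differential box-counting sketch; the function name and box sizes are chosen for the example, and the thesis's actual fractal-dimension estimator may differ.

```python
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16)):
    """Estimate the fractal dimension of an image viewed as a 3D surface
    (intensity = height) via differential box counting."""
    img = np.asarray(img, dtype=float)
    n = img.shape[0]
    counts = []
    for s in sizes:
        # box height scaled so the intensity range maps onto the grid
        h = s * img.max() / n if img.max() > 0 else 1.0
        total = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                block = img[i:i + s, j:j + s]
                total += int(np.floor(block.max() / h) - np.floor(block.min() / h)) + 1
        counts.append(total)
    # slope of log(count) versus log(1/s) estimates the dimension
    d, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return d
```

For a perfectly flat image the fitted slope is 2, the topological dimension of a plane; rougher textures push the estimate toward 3, as with the D = 2.79 surface above.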

19 | Anisotropies in the perception of stereoscopic surfaces: The role of orientation disparity.
- Cagenello, Rogers
- 1993
Citation Context ...the anisotropy in the perception of slanted (or tilted) planes. A surface slanted about the horizontal axis is estimated much easier and more accurately than a surface slanted about the vertical axis [48, 1, 49]. In both cases there is an underestimation of slant, but it is much larger for slant about the vertical. [48] argued that this effect is due to orientation disparity, which generally (assuming the te... |

19 | Cue conflict and stereoscopic surface slant about horizontal and vertical axes
- Ryan, Gillam
- 1994
Citation Context ...the anisotropy in the perception of slanted (or tilted) planes. A surface slanted about the horizontal axis is estimated much easier and more accurately than a surface slanted about the vertical axis [48, 1, 49]. In both cases there is an underestimation of slant, but it is much larger for slant about the vertical. [48] argued that this effect is due to orientation disparity, which generally (assuming the te... |

19 | The rank 4 constraint in multiple (≥ 3) view geometry
- Shashua, Avidan
- 1996
Citation Context ...ine and point correspondences for the reconstruction of scenes with planar objects. The first ones to present a subspace constraint on homographies of planes in multiple views were Shashua and Avidan [66]. In the sequel [67] presented a subspace constraint on the relative homographies of pairs of planes across the different views. Most closely related to our work is the study of Zelnik-Manor and Irani... |
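The rank-4 constraint of [66] can be checked numerically: for a fixed camera pair, the homography induced by a plane with normal n_j has the form A + b n_jᵀ up to scale, so the vectorized homographies of arbitrarily many planes span at most a 4-dimensional subspace. A small sketch under these assumptions (random A, b, and normals, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))  # fixed part shared by all planes (rotation-like term)
b = rng.normal(size=3)       # translation direction of the camera pair

# homographies induced by 10 planes with random normals n_j
H = [A + np.outer(b, rng.normal(size=3)) for _ in range(10)]
M = np.stack([h.ravel() for h in H])  # 10 x 9, one vectorized homography per row

rank = np.linalg.matrix_rank(M)  # at most 4, despite 10 rows in a 9-dim space
```

Each row is vec(A) plus a vector from the 3-dimensional subspace {vec(b nᵀ)}, so the stack generically has rank exactly 4 no matter how many planes are added.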

18 | Uncertainty in visual processes predicts geometrical optical illusions.
- Fermüller, Malm
- 2004
Citation Context ...this chapter we are asking whether there are computational reasons for the mis-estimation. In previous work it has been shown that there is a statistical problem with the estimation of image features [38, 39]. Here we extend these concepts to the visual shape recovery processes. We show that there is bias and thus consistent erroneous mis-estimation in the estimation of shape. The underlying cause is the ... |

17 | Effects of errors in the viewing geometry on shape estimation
- Cheong, Fermüller, et al.
- 1998
Citation Context ...ent that attempting to pick a single solution in this valley is futile. Such valleys are ubiquitous, and if we pick an erroneous motion estimate, this results in the estimation of distorted structure [70]. Figure 3.3: The motion valley is the area of smallest values on the error surface in the 2D space of translational directions. The error is found by computing for each translation the optimal rot... |

16 | 3-D interpretation of optical flow by renormalization
- Kanatani
- 1993
Citation Context ... and motion for general scenes does not seem to be possible. The continuous techniques use as input video, that is sequences of images with small changes in viewing geometry between consecutive views [11, 12, 13]. From the time-varying image brightness pattern the image motion between pairs of views, or at least the image motion field of intensity gradients can be estimated rather easily, although not very ... |

15 | Mechanisms underlying the anisotropy of stereoscopic tilt perception
- Mitchison, McKee
- 1990
Citation Context ... 2.9 Measurements from the three subjects for a plane slanted about the horizontal axis and textured with lines of 45° orientation. 2.10 Slant perceived by one of the subjects in [1] for 45° oriented texture lines. 2.11 A plane with a texture of two orientations Ld1 and Ld2 is imaged under motion. σ1 an... |

15 | Algorithm for analysing optical flow based on the least-squares method
- Maybank
- 1986
Citation Context ...e is not feasible on the basis of two frames or consecutive frames with small baseline displacement only. On one hand, many researchers showed that camera translation is confused with camera rotation [19, 20, 14, 21]. Furthermore, the recovered scene structure is very sensitive to errors in camera motion estimation. On the other hand, our studies (Chapter 1) on visual estimation process, through which humans pe... |

15 | The visual perception of surface orientation from optical motion
- Todd, Perotti
- 1999
Citation Context ...onditions and from a number of cues there is an underestimation of slant. Planar surface patches estimated from texture [34, 27], contour [35], stereopsis [1, 36], and motion of various parameters [37] have been found to be estimated with smaller slant, that is, closer in orientation to a front-parallel plane than they are. In this chapter we are asking whether there are computational reasons for t... |

15 | The Ouchi illusion as an artifact of biased flow estimation
- Fermüller, Pless, et al.
- 2000
Citation Context ...this chapter we are asking whether there are computational reasons for the mis-estimation. In previous work it has been shown that there is a statistical problem with the estimation of image features [38, 39]. Here we extend these concepts to the visual shape recovery processes. We show that there is bias and thus consistent erroneous mis-estimation in the estimation of shape. The underlying cause is the ... |

14 | Statistical error propagation in 3d modeling from monocular video
- Chowdhury, Chellappa
- 2003
Citation Context ...revious studies have analyzed the statistics of visual processes. In particular, [45] discussed bias for some visual recovery processes. A few studies analyzed the statistics of structure from motion [46, 19, 12, 47]. However, these analyses stayed at the general level of parameter estimation; no one has shown before the effects on the estimated shape. 2.2 Shape from multiple views The 3D shape of a surface pa... |

14 | Wavelet-based Fractal Signature Analysis for Automatic Target
- Espinal, Jawerth, et al.
- 1998
Citation Context ...ditional texture retrieval and classification on standard texture data sets, but by using much lower dimensional feature vectors. Fractal geometry has been used before in the description of textures ([81, 82]) and texture segmentation ([83, 84, 85]). However, the invariance of the fractal dimension to bi-Lipschitz maps has not been utilized in the vision community. Furthermore, existing approaches either ... |

11 | Visual slant underestimation: A general model. Perception
- Perrone
- 1982
Citation Context ...pe estimation is not veridical [31, 33]. For a variety of conditions and from a number of cues there is an underestimation of slant. Planar surface patches estimated from texture [34, 27], contour [35], stereopsis [1, 36], and motion of various parameters [37] have been found to be estimated with smaller slant, that is, closer in orientation to a front-parallel plane than they are. In this chapter ... |

11 | Structure from planar motions with small baselines
- Vidal, Oliensis
- 2002
Citation Context ...ondences are well understood [10, 54, 55, 56]. Nowadays most point correspondence methods employ the technique of bundle adjustment [57] to refine 3D structure and viewing parameters. Oliensis et al. [58, 59] proposed algorithms, which first eliminate the rotational components and then decompose the residual correspondences into structure and translation. A number of studies considered the estimation f... |

10 | The perception of depth and slant from texture in three-dimensional scenes.
- Andersen, Braunstein, et al.
- 1998
Citation Context ...ure that human shape estimation is not veridical [31, 33]. For a variety of conditions and from a number of cues there is an underestimation of slant. Planar surface patches estimated from texture [34, 27], contour [35], stereopsis [1, 36], and motion of various parameters [37] have been found to be estimated with smaller slant, that is, closer in orientation to a front-parallel plane than they are. In... |

10 | Wavelet Deblurring Algorithms for Spatially Varying Blur from High-resolution
- Chan, Chan, et al.
- 2003
Citation Context ...tive back-projection method with a fast and optimal regularization criterion in each iteration step. Wavelet theory has previously been used for image de-noising and de-blurring from static images ([95, 96]). However, it has not been studied much with respect to the super-resolution problem. In recent work wavelet theory has been applied to this problem [97], but only for the purpose of speeding up the ... |

8 | Errors-in-variables modeling in optical flow estimation
- Ng, Solo
- 2001
Citation Context ...mple i.i.d. additive noise. It is correlated, and this would cause further problems for TLS, causing convergence problems for the corresponding nonlinear non-convex objective function to be minimized [50]. TLS is attractive in the sense that it has an obvious geometrical explanation, but it does not appear advantageous for the vision applications discussed. Its improvement over usual least squares in ... |

7 | Multifractal characterization of texture-based segmentation
- Conci, Monteiro
- 2000
Citation Context ...ification on standard texture data sets, but by using much lower dimensional feature vectors. Fractal geometry has been used before in the description of textures ([81, 82]) and texture segmentation ([83, 84, 85]). However, the invariance of the fractal dimension to bi-Lipschitz maps has not been utilized in the vision community. Furthermore, existing approaches either simply compute a single fractal dimensio... |

6 | Eye-in-hand robotic tasks in uncalibrated environments
- Smith, Brandt, et al.
- 1997
Citation Context ... motion, there is evidence for the success of such an approach in real world environments. Thus, naturally, image motion has been used in navigation tasks, such as 3D motion estimation [14], tracking [15, 16], segmentation and obstacle avoidance for robotic systems [17, 18]. Despite all the tremendous progress, neither image motion by itself nor correspondence by itself is sufficient to develop accurate h... |

5 | Stochastic perturbation theory
- Stewart
- 1990
Citation Context ...ce σ². Then under quite general conditions the expected value E(x_LS) of the estimate amounts to [42] E(x_LS) = x′ − σ²(lim_{n→∞}(1/n)A′ᵀA′)⁻¹x′, (2.5) which implies that x_LS is asymptotically biased [43, 44]. The bias here is σ²(lim_{n→∞}(1/n)A′ᵀA′)⁻¹x′. Please note, the bias does not depend on n, the number of measurements, which only shows up for the purpose of normalization, because A′ᵀA′ is proportiona... |
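The attenuation predicted by (2.5) is easy to reproduce in simulation. The sketch below is a hypothetical 1D setup (variable names invented for the example): it regresses exact observations on a noise-corrupted regressor and compares the least-squares estimate with the asymptotic value x′(1 − σ²/(σ_a² + σ²)) implied by the formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x_true = 2.0                                  # true parameter x'
a = rng.normal(size=n)                        # noise-free regressor, variance 1
b = a * x_true                                # exact observations
a_noisy = a + rng.normal(scale=0.5, size=n)   # regressor noise, sigma^2 = 0.25

x_ls = (a_noisy @ b) / (a_noisy @ a_noisy)    # 1D least squares on noisy data
x_asym = x_true * 1.0 / (1.0 + 0.25)          # predicted limit: 2 * 0.8 = 1.6
```

The estimate converges to 1.6 rather than 2.0 regardless of how many measurements are taken, matching the statement that the bias does not shrink with n.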

5 | Moving 2D patterns capture the perceived direction of both lower and higher spatial frequencies
- Yo, Wilson
- 1992
Citation Context ...°. The denser pattern appears to be estimated closer to the veridical than the sparser pattern. Many optical illusions weaken after extended viewing, in particular when subjects are asked to fixate ([53]). In these cases, we can assume that the noise parameters stay fixed, and the visual system can reasonably well estimate them. We can draw conclusions by varying the covariance (of the estimator) in ... |

5 | Motion segmentation: a synergistic approach
- Brodsky, Fermüller, et al.
- 1999
Citation Context ...i + 1)) + (I_{x_i}y_i − I_{y_i}x_i)). Our method starts with 3D motion estimates from individual flow fields. In principle, many of the algorithms from the literature could be employed. We used the algorithm in [68], which is based on the equations above. Consider a segmentation of the scene into P planar patches V¹, V², ..., Vᴾ. Combining equations (3.5) for all the patches we obtain an over-determined bilin... |

5 | A 3D shape constraint on video
- Ji, Fermüller
- 2006
Citation Context ...he motion valley of solutions. Each translation direction in the motion valley, along with its best corresponding rotation and structure will agree with the observed noisy flow field. Figure 3.3 from [69] shows error functions (residual of the minimization) plotted on the 2D spherical surface. The best solutions lie in the bright area of the surface. The error function makes it evident that attempting... |

4 | Noise causes slant under-estimation in stereo and motion”, Vision Research
- Ji, Fermüller, et al.
Citation Context ...with the empirical findings. In particular, we show in this chapter that in the case of shape from motion for many 3D motions and for shape from stereo the bias causes an underestimation of slant. In [40, 41], we have demonstrated that bias also causes underestimation in shape from texture. Thus, we find that one of the reasons for inaccuracy in shape estimation is systematic estimation error, i.e. bias,... |

4 | Are multifractal multipermuted multinomial measures good enough for unsupervised image segmentation
- Kam, Blanc-Talon
- 2000
Citation Context ...ification on standard texture data sets, but by using much lower dimensional feature vectors. Fractal geometry has been used before in the description of textures ([81, 82]) and texture segmentation ([83, 84, 85]). However, the invariance of the fractal dimension to bi-Lipschitz maps has not been utilized in the vision community. Furthermore, existing approaches either simply compute a single fractal dimensio... |

3 | Global surface reconstruction by purposive viewpoint control
- Kutulakos, Dyer
- 1995
Citation Context ...the 3D shape of scene surfaces. These computations are often referred to as shape from X, because cues such as motion [9, 22, 23], stereo [24], texture [25, 26, 27, 28], shading [29, 30] and contours [31, 32] encode information from which the shape of scene surfaces can be obtained. The recovery of 3D shape is difficult. The main reason is that we have to segment the scene while we recover it. It is c... |

3 | A ground plane preference for stereoscopic slant
- Goutcher, Mamassian
- 2002
Citation Context ...not veridical [31, 33]. For a variety of conditions and from a number of cues there is an underestimation of slant. Planar surface patches estimated from texture [34, 27], contour [35], stereopsis [1, 36], and motion of various parameters [37] have been found to be estimated with smaller slant, that is, closer in orientation to a front-parallel plane than they are. In this chapter we are asking whethe... |

3 | The Total Least
- Huffel, Vandewalle
- 1991
Citation Context ...he inverse of a matrix, the solution of the LS estimator x_LS is characterized by x_LS = (AᵀA)⁻¹Aᵀb. (2.4) However, it is well known that under noisy conditions this estimator generally is biased [42, 43]. What does this mean? Consider a problem for which you have a set of noisy measurements and you make an estimate. Then you choose another set of measurements and make another estimate. Continue many ... |

3 | Errors-in-variables modeling in optical flow estimation
- Lydia, Victor
- 2001
Citation Context ...ities. Furthermore, the modeling of the scene as consisting of planar patches is an approximation to the actual surface of the scene. Sensor noise may be considered i.i.d. and is easier to deal with ([51, 52]). But other errors could be more significant, and they are more elaborate, making the statistics rather complicated. It is too difficult to estimate the statistics of the combined noise, which is nec... |

3 | Wavelet-based super-resolution reconstruction: Theory and Algorithm
- Ji, Fermüller
- 2006
Citation Context ...us techniques from wavelet theory in the iterations of the reconstruction. 5.2 Formulation of high-to-low image formation We first formulate the high-to-low image formation process in the same way as [98] did. To simplify the exposition, in the following we only discuss 1D signals with resolution enhancement by a factor 2. Later, without much difficulty, the analysis will be extended to the 2D case wi... |
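In the spirit of that formulation, the 1D high-to-low step (blur, then decimate by a factor of 2) can be sketched as below. The averaging kernel in the usage note is an assumed example, not the filter used in [98].

```python
import numpy as np

def high_to_low(x, blur):
    """Form a low-res signal from a high-res one: convolve with a
    blur kernel, then keep every other sample (decimation by 2)."""
    y = np.convolve(x, blur, mode="same")  # sensor/optical blur
    return y[::2]
```

For example, with the binomial kernel [0.25, 0.5, 0.25] an 8-sample ramp collapses to 4 samples; super-resolution reconstruction amounts to inverting exactly this analysis step.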

2 | Is structure from motion worth pursuing
- Tomasi, Zhang
- 1995
Citation Context ...revious studies have analyzed the statistics of visual processes. In particular, [45] discussed bias for some visual recovery processes. A few studies analyzed the statistics of structure from motion [46, 19, 12, 47]. However, these analyses stayed at the general level of parameter estimation; no one has shown before the effects on the estimated shape. 2.2 Shape from multiple views The 3D shape of a surface pa... |

2 | Integration of motion fields through shape
- Ji, Fermüller
- 2005
Citation Context ... min_{t_f,ω_f} min_{‖r_f^p‖=1} Σ_{p,f} ‖W_f^p(t_f, ω_f) r_f^p‖², subject to rank(M(r_f^p)) = 3. (3.11) We adopt the two-step optimization for the estimation of motion and structure from multiple frames presented in [71]. Step 2, that is, the estimation of t_f and ω_f remains the same; given n_f^p = ‖n_f^p‖ r_f^p we minimize (3.10) using least squares. However, Step 1, the estimation of n_f^p is rather different and more diff... |

2 | Restoration of a single super-resolution image from several blurred, noisy and undersampled measured images - Elad, Feuer - 1997 |

1 | Bias in shape estimation
- Ji, Fermüller
- 2004
Citation Context ...with the empirical findings. In particular, we show in this chapter that in the case of shape from motion for many 3D motions and for shape from stereo the bias causes an underestimation of slant. In [40, 41], we have demonstrated that bias also causes underestimation in shape from texture. Thus, we find that one of the reasons for inaccuracy in shape estimation is systematic estimation error, i.e. bias,... |