## Multi-Frame Optical Flow Estimation Using Subspace Constraints (1999)


### Download Links

- [www.wisdom.weizmann.ac.il]
- [ftp.wisdom.weizmann.ac.il]
- DBLP

### Other Repositories/Bibliography

Citations: 85 (2 self)

### BibTeX

```bibtex
@MISC{Irani99multi-frameoptical,
  author = {Michal Irani},
  title  = {Multi-Frame Optical Flow Estimation Using Subspace Constraints},
  year   = {1999}
}
```


### Abstract

We show that the set of all flow-fields in a sequence of frames imaging a rigid scene resides in a low-dimensional linear subspace. Based on this observation, we develop a method for simultaneous estimation of optical flow across multiple frames, which uses these subspace constraints. The multi-frame subspace constraints are strong constraints, and replace commonly used heuristic constraints, such as spatial or temporal smoothness. The subspace constraints are geometrically meaningful, and are not violated at depth discontinuities, or when the camera motion changes abruptly. Furthermore, we show that the subspace constraints on flow-fields apply for a variety of imaging models, scene models, and motion models. Hence, the presented approach for constrained multi-frame flow estimation is general. Moreover, our approach does not require prior knowledge of the underlying world or camera model. Although linear subspace constraints have been used successfully in the past for recovering 3D information (e.g., [18]), it has been assumed that 2D correspondences are given. However, correspondence estimation is a fundamental problem in motion analysis. In this paper, we use multi-frame subspace constraints to constrain the 2D correspondence estimation process itself, and not for 3D recovery.
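The abstract's central claim can be illustrated numerically. The sketch below (a hypothetical setup, not the paper's algorithm) generates the flow fields induced by randomly perturbed affine cameras viewing a rigid point set, stacks them into one matrix, and checks that its rank stays bounded by a small constant regardless of the number of frames or points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N rigid scene points viewed by affine cameras over F frames.
N, F = 200, 10
P = rng.standard_normal((4, N))        # homogeneous scene points [X; Y; Z; 1]
P[3] = 1.0

ref_cam = rng.standard_normal((2, 4))  # 2x4 affine camera of the reference frame
x_ref = ref_cam @ P                    # 2xN image positions in the reference frame

# Stack the u- and v-components of all F flow fields into one 2F x N matrix.
rows = []
for _ in range(F):
    cam = ref_cam + 0.1 * rng.standard_normal((2, 4))  # arbitrary camera change
    rows.append(cam @ P - x_ref)       # 2xN flow field w.r.t. the reference frame
M = np.vstack(rows)                    # shape (2F, N) = (20, 200)

# Each flow row equals (cam - ref_cam) @ P, i.e. a linear combination of the
# 4 rows of P, so rank(M) <= 4 no matter how many frames or points there are.
rank = np.linalg.matrix_rank(M)
print(M.shape, rank)
```

The same low-rank structure holds for the other imaging and motion models the abstract mentions, with different (but still small) rank bounds.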

### Citations

2180 | An iterative image registration technique with an application to stereo vision
- Lucas, Kanade
- 1981
Citation Context: ...ne which is next described in Sect. 3.2, form the basis for our direct multi-point multi-frame algorithm, which is described in Sect. 4. 3.2 The Generalized Lucas & Kanade Constraint Lucas and Kanade [14] extended the pixel-based brightness constancy constraints of Eq. (5) to a local region-based constraint, by assuming a uniform displacement in very small windows (typically 3×3 or 5×5)...

1897 | Determining optical flow
- Horn, Schunck
- 1981
Citation Context: ...rmation (this is known as the "aperture problem"), and the optical flow estimates obtained are hence noisy and/or partial. To overcome this problem, spatial smoothness constraints are employed (e.g., [10, 1, 15]). However, these smoothness constraints are heuristic, and are violated especially at depth discontinuities. For a review and comparison of several of these optical flow techniques see [2]. Temporal ...

1135 | Performance of optical flow techniques
- Barron, Fleet, et al.
- 1994
Citation Context: ...., [10, 1, 15]). However, these smoothness constraints are heuristic, and are violated especially at depth discontinuities. For a review and comparison of several of these optical flow techniques see [2]. Temporal smoothness constraints have also been introduced [5]. These, however, are violated when the camera motion changes abruptly. Other methods overcome the aperture problem by applying global mo...

945 | Shape and motion from image streams under orthography: a factorization method
- Tomasi, Kanade
- 1992
Citation Context: ...ur approach does not require prior knowledge of the underlying world or camera model. Although linear subspace constraints have been used successfully in the past for recovering 3D information (e.g., [18]), it has been assumed that 2D correspondences are given. However, correspondence estimation is a fundamental problem in motion analysis. In this paper, we use multi-frame subspace constraints to cons...

605 | Hierarchical model-based motion estimation
- Bergen, Anandan, et al.
- 1992
Citation Context: ...hness constraints have also been introduced [5]. These, however, are violated when the camera motion changes abruptly. Other methods overcome the aperture problem by applying global model constraints [7, 8, 3, 11, 17, 6, 4]. This allows the use of large analysis windows (often the entire image), which do not suffer from lack of local information. These techniques, however, assume an a-priori restricted model of the worl...

563 | The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields
- Black, Anandan
- 1996
Citation Context: ...hness constraints have also been introduced [5]. These, however, are violated when the camera motion changes abruptly. Other methods overcome the aperture problem by applying global model constraints [7, 8, 3, 11, 17, 6, 4]. This allows the use of large analysis windows (often the entire image), which do not suffer from lack of local information. These techniques, however, assume an a-priori restricted model of the worl...

479 | A computational framework and an algorithm for the measurement of visual motion
- Anandan
- 1989
Citation Context: ...rmation (this is known as the "aperture problem"), and the optical flow estimates obtained are hence noisy and/or partial. To overcome this problem, spatial smoothness constraints are employed (e.g., [10, 1, 15]). However, these smoothness constraints are heuristic, and are violated especially at depth discontinuities. For a review and comparison of several of these optical flow techniques see [2]. Temporal ...

322 | The interpretation of a moving retinal image
- Longuet-Higgins, Prazdny
- 1980
Citation Context: ...ion). This model is valid when the field of view is very small, and the depth fluctuations in the scene are small relative to the overall depth. (ii) when an instantaneous motion model is used (e.g., [13]). This model is valid when the camera rotation is small and the forward translation is small relative to the depth. The instantaneous model is a good approximation of the motion over short video segm...

229 | Computing occluding and transparent motions
- Irani, Rousso, et al.
- 1994
Citation Context: ...hness constraints have also been introduced [5]. These, however, are violated when the camera motion changes abruptly. Other methods overcome the aperture problem by applying global model constraints [7, 8, 3, 11, 17, 6, 4]. This allows the use of large analysis windows (often the entire image), which do not suffer from lack of local information. These techniques, however, assume an a-priori restricted model of the worl...

149 | Subspace methods for recovering rigid motion I: Algorithm and implementation
- Heeger, Jepson
- 1992
Citation Context: ...at depth discontinuities or when camera-motion changes abruptly. Linear subspace constraints have been used successfully in the past for recovering 3D information from known 2D correspondences (e.g., [18, 9]). In contrast, we use multi-frame linear subspace constraints to constrain the 2D correspondence estimation process itself, without recovering any 3D information. Furthermore, we show that for a vari...

127 | A three-frame algorithm for estimating two-component image motion
- Bergen, Burt, et al.
- 1992

115 | Displacement Vectors Derived from Second Order Intensity Variations
- Nagel
- 1983
Citation Context: ...rmation (this is known as the "aperture problem"), and the optical flow estimates obtained are hence noisy and/or partial. To overcome this problem, spatial smoothness constraints are employed (e.g., [10, 1, 15]). However, these smoothness constraints are heuristic, and are violated especially at depth discontinuities. For a review and comparison of several of these optical flow techniques see [2]. Temporal ...

112 | Robust dynamic motion estimation over time
- Black, Anandan
- 1991
Citation Context: ...istic, and are violated especially at depth discontinuities. For a review and comparison of several of these optical flow techniques see [2]. Temporal smoothness constraints have also been introduced [5]. These, however, are violated when the camera motion changes abruptly. Other methods overcome the aperture problem by applying global model constraints [7, 8, 3, 11, 17, 6, 4]. This allows the use of...

111 | Geometric motion segmentation and model selection, Phil. Trans.
- Torr
- 1998
Citation Context: ...analysis in this section is used only for deriving the upper bounds on the ranks of these matrices. It can be shown that the collection of all points across all views lie in a low-dimensional variety [19]. Under full perspective projection and discrete views, this variety is non-linear. However, there are two cases in which this variety is linear: (i) when an "affine" camera [16] is used (i.e., weak-p...


69 | Affine Analysis of Image Sequences
- Shapiro
- 1995
Citation Context: ...w-dimensional variety [19]. Under full perspective projection and discrete views, this variety is non-linear. However, there are two cases in which this variety is linear: (i) when an "affine" camera [16] is used (i.e., weak-perspective, or orthographic projection). This model is valid when the field of view is very small, and the depth fluctuations in the scene are small relative to the overall depth...


51 | Direct multi-resolution estimation of ego-motion and structure from motion
- Hanna
- 1991

37 | Model-based brightness constraints: on direct estimation of structure and motion
- Stein, Shashua

32 | Multi-Frame Correspondence Estimation Using Subspace Constraints
- Irani
- 2002
Citation Context: ...ell as for a planar scene. Due to lack of space, we detail the rank derivation only for one case, and provide only the final derived ranks for the other cases. The omitted derivations can be found in [12]. A 3D scene point (X_i, Y_i, Z_i) is observed at pixel (x_i, y_i) in the reference frame I. Let t_j = (t_j^X, t_j^Y, t_j^Z) denote the camera translation between frame I and frame I_j, an...

10 | Combining stereo and motion for direct estimation of scene structure
- Hanna, Okamoto
- 1993