Results 1 - 10 of 110
Shape and motion from image streams under orthography: a factorization method
 International Journal of Computer Vision, 1992
Abstract

Cited by 894 (37 self)
Inferring scene geometry and camera motion from a stream of images is possible in principle, but is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that can overcome this difficulty by recovering shape and motion under orthography without computing depth as an intermediate step. An image stream can be represented by the 2F × P measurement matrix of the image coordinates of P points tracked through F frames. We show that under orthographic projection this matrix is of rank 3. Based on this observation, the factorization method uses the singular-value decomposition technique to factor the measurement matrix into two matrices which represent object shape and camera rotation respectively. Two of the three translation components are computed in a preprocessing stage. The method can also handle and obtain a full solution from a partially filled-in measurement matrix that may result from occlusions or tracking failures. The method gives accurate results, and does not introduce smoothing in either shape or motion. We demonstrate this with a series of experiments on laboratory and outdoor image streams, with and without occlusions.
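The rank-3 observation translates directly into a few lines of linear algebra. Below is a minimal sketch in NumPy on synthetic noise-free data; the function and variable names are mine, the row-mean subtraction stands in for the translation preprocessing the abstract mentions, and the two factors are recovered only up to an invertible 3 × 3 transform (the paper's metric-upgrade step is omitted).

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a 2F x P measurement matrix into a motion
    factor (2F x 3) and a shape factor (3 x P), up to a 3x3 ambiguity."""
    # Registration: subtracting each row's mean removes the image-plane
    # translation, leaving a matrix of rank <= 3 under orthography.
    W0 = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])          # motion (rotation) factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # shape factor
    return M, S

# Synthetic stream: P = 20 points seen by F = 5 orthographic cameras.
rng = np.random.default_rng(0)
S_true = rng.standard_normal((3, 20))
cams = rng.standard_normal((10, 3))        # 2 rows per frame
W = cams @ S_true
M, S = factorize(W)
print(np.linalg.matrix_rank(W))            # rank 3, as the abstract claims
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))
```

With noisy measurements the same truncated SVD gives the best rank-3 fit in the least-squares sense, which is what makes the method robust in practice.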
Shape and motion from image streams: a factorization method
 International Journal of Computer Vision, 1991
Abstract

Cited by 185 (12 self)
The factorization method described in this series of reports requires an algorithm to track the motion of features in an image stream. Given the small interframe displacement made possible by the factorization approach, the best tracking method turns out to be the one proposed by Lucas and Kanade in 1981. The method defines the measure of match between fixed-size feature windows in the past and current frame as the sum of squared intensity differences over the windows. The displacement is then defined as the one that minimizes this sum. For small motions, a linearization of the image intensities leads to a Newton-Raphson style minimization. In this report, after rederiving the method in a physically intuitive way, we answer the crucial question of how to choose the feature windows that are best suited for tracking. Our selection criterion is based directly on the definition of the tracking algorithm, and expresses how well a feature can be tracked. As a result, the criterion is optimal by construction. We show by experiment that the performance of both the selection and the tracking algorithm are adequate for our factorization method, and we address the issue of how to detect occlusions. In the conclusion, we point out specific open questions for future research.
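Both the Newton-Raphson tracking step and the selection criterion reduce to the same 2 × 2 gradient matrix. The sketch below is mine, not the report's code: it handles only a pure-translation model over a whole patch, and `np.gradient`'s central differences stand in for whatever derivative filters the report uses.

```python
import numpy as np

def lk_translation(I, J):
    """One Newton-Raphson step for the displacement d minimizing
    sum((J(x) - I(x + d))^2), after linearizing the intensities."""
    gy, gx = np.gradient(I)                     # gradients (row, col)
    e = J - I                                   # intensity difference
    G = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = np.array([np.sum(gx * e), np.sum(gy * e)])
    return np.linalg.solve(G, b)                # estimated (dx, dy)

def trackability(patch):
    """Selection score: the smaller eigenvalue of the same matrix G.
    It is large only when the window has gradients in two independent
    directions, i.e. when G is well conditioned for solving."""
    gy, gx = np.gradient(patch)
    G = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return np.linalg.eigvalsh(G)[0]

# A smooth synthetic patch shifted by a known sub-pixel displacement.
r, c = np.mgrid[0:50, 0:50].astype(float)
f = lambda r, c: np.sin(0.3 * c) * np.cos(0.2 * r)
I = f(r, c)
J = f(r + 0.2, c + 0.1)                         # true d = (0.1, 0.2)
print(np.round(lk_translation(I, J), 2))
```

A textureless window makes G singular (the aperture problem), which is exactly why the criterion rejects it: `trackability(np.zeros((15, 15)))` is zero.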
Improvements to Graph Coloring Register Allocation
 ACM Transactions on Programming Languages and Systems, 1994
Abstract

Cited by 173 (8 self)
This paper describes both the techniques themselves and our experience building and using register allocators that incorporate them. It provides a detailed description of optimistic coloring and rematerialization. It presents experimental data to show the performance of several versions of the register allocator on a suite of FORTRAN programs. It discusses several insights that we discovered only after repeated implementation of these allocators. Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors - compilers, optimization. General terms: Languages. Additional Key Words and Phrases: Register allocation, code generation, graph coloring. 1. INTRODUCTION The relationship between runtime performance and effective use of a machine's register set is well understood. In a compiler, the process of deciding which values to keep in registers at each point in the generated code is called register allocation. Value...
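The optimistic-coloring idea the paper details can be sketched in a few lines (names and data structures are mine; a real allocator adds spill costs, coalescing, and the rematerialization the paper describes): simplify the interference graph Chaitin-style, but push a high-degree spill candidate anyway and spill it only if no color survives when it is popped.

```python
def color(interference, k):
    """Briggs-style optimistic coloring sketch: simplify the interference
    graph by removing nodes, then pop them back and assign one of k colors.
    A node of degree >= k is removed optimistically rather than spilled
    immediately; it is spilled only if no color is free when popped."""
    graph = {v: set(ns) for v, ns in interference.items()}
    stack = []
    while graph:
        # Prefer a trivially colorable node (degree < k).
        v = next((v for v in graph if len(graph[v]) < k), None)
        if v is None:
            # None exists: optimistically push a max-degree spill candidate.
            v = max(graph, key=lambda u: len(graph[u]))
        stack.append((v, graph.pop(v)))      # remember remaining neighbors
        for ns in graph.values():
            ns.discard(v)
    colors, spilled = {}, []
    for v, ns in reversed(stack):
        used = {colors[n] for n in ns if n in colors}
        free = [c for c in range(k) if c not in used]
        if free:
            colors[v] = free[0]
        else:
            spilled.append(v)                # spill only as a last resort
    return colors, spilled

# A 4-cycle is 2-colorable even though every node has degree 2 (= k):
# pessimistic coloring would spill here, the optimistic pass does not.
cycle = {'a': 'bd', 'b': 'ac', 'c': 'bd', 'd': 'ac'}
print(color(cycle, k=2))
```

On an odd cycle with k = 2 the optimism fails for exactly one node, which is then spilled, matching the intended behavior.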
On the Early History of the Singular Value Decomposition
, 1992
Abstract

Cited by 82 (1 self)
This paper surveys the contributions of five mathematicians - Eugenio Beltrami (1835-1899), Camille Jordan (1838-1921), James Joseph Sylvester (1814-1897), Erhard Schmidt (1876-1959), and Hermann Weyl (1885-1955) - who were responsible for establishing the existence of the singular value decomposition and developing its theory.
SVDPACKC (Version 1.0) User's Guide
, 1993
Abstract

Cited by 63 (4 self)
SVDPACKC comprises four numerical (iterative) methods for computing the singular value decomposition (SVD) of large sparse matrices using ANSI C. This software package implements Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right singular vectors) for large sparse matrices. The package has been ported to a variety of machines ranging from supercomputers to workstations: CRAY Y-MP, IBM RS/6000-550, DEC 5000-100, HP 9000-750, SPARCstation 2, and Macintosh II/fx. This document (i) explains each algorithm in some detail, (ii) explains the input parameters for each program, (iii) explains how to compile/execute each program, and (iv) illustrates the performance of each method when we compute lower-rank approximations to sparse term-document matrices from information retrieval applications. A user-friendly software interface to the package for UNIX-based systems and the Macintosh II/fx is als...
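SVDPACKC's Lanczos and subspace-iteration routines are beyond a short sketch, but the simplest member of the same iterative family, power iteration on AᵀA, already shows why such methods suit large sparse matrices: they touch A only through matrix-vector products. The function below is my illustration, not SVDPACKC's API.

```python
import numpy as np

def largest_triplet(A, iters=500, seed=0):
    """Dominant singular triplet (sigma_1, u_1, v_1) via power iteration
    on A^T A. Only products with A and A^T are needed, which is what
    makes this family of methods practical for large sparse matrices."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)        # one step of iteration on A^T A
        v /= np.linalg.norm(v)
    sigma = np.linalg.norm(A @ v)
    return sigma, (A @ v) / sigma, v

# Spectrum 5 > 3 > 1, so the iteration converges quickly to sigma_1 = 5.
A = np.diag([5.0, 3.0, 1.0])
sigma, u, v = largest_triplet(A)
print(round(sigma, 8))
```

Lanczos methods accelerate exactly this recurrence by keeping an orthogonal basis of all the iterates, which is why they recover several of the largest triplets at once.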
Finding Approximate POMDP Solutions Through Belief Compression
, 2003
Abstract

Cited by 62 (2 self)
Standard value function approaches to finding policies for Partially Observable Markov Decision Processes (POMDPs) are generally considered to be intractable for large models. The intractability of these algorithms is to a large extent a consequence of computing an exact, optimal policy over the entire belief space. However, in real-world POMDP problems, computing the optimal policy for the full belief space is often unnecessary for good control even for problems with complicated policy classes. The beliefs experienced by the controller often lie near a structured, low-dimensional manifold embedded in the high-dimensional belief space. Finding a good approximation to the optimal value function for only this manifold can be much easier than computing the full value function. We introduce a new method for solving large-scale POMDPs by reducing the dimensionality of the belief space. We use Exponential family Principal Components Analysis (Collins, Dasgupta, & Schapire, 2002) to represent sparse, high-dimensional belief spaces using low-dimensional sets of learned features of the belief state. We then plan only in terms of the low-dimensional belief features. By planning in this low-dimensional space, we can find policies for POMDP models that are orders of magnitude larger than models that can be handled by conventional techniques. We demonstrate the use of this algorithm on a synthetic problem and on mobile robot navigation tasks.
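The pipeline the abstract describes, collect beliefs, learn a low-dimensional basis, represent each belief by a few features, can be sketched with plain linear PCA in place of the paper's Exponential-family PCA. E-PCA respects the geometry of the probability simplex; the ordinary PCA below is only a stand-in to show the shape of the computation, and all names and sizes are mine.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_beliefs, n_feat = 100, 500, 3

# Simulated beliefs near a low-dimensional family: every belief is a
# mixture of three fixed anchor distributions, so the 100-dimensional
# points really live on a low-dimensional patch of the simplex.
anchors = rng.dirichlet(np.ones(n_states), size=3)      # 3 x 100
weights = rng.dirichlet(np.ones(3), size=n_beliefs)     # 500 x 3
B = weights @ anchors                                   # 500 x 100 beliefs

# Learn a basis from the sampled beliefs; keep n_feat features each.
mean = B.mean(axis=0)
_, _, Vt = np.linalg.svd(B - mean, full_matrices=False)
features = (B - mean) @ Vt[:n_feat].T                   # 500 x 3 codes
B_hat = features @ Vt[:n_feat] + mean                   # reconstruction

# Planning would now operate on `features` instead of 100-dim beliefs.
print(B_hat.shape, np.abs(B - B_hat).max() < 1e-10)
```

Because the simulated beliefs really do lie on a low-dimensional manifold, three features reconstruct them essentially exactly, which is the situation the paper argues holds approximately for real controllers.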
An SVD-Based Watermarking Scheme for Protecting Rightful Ownership
 IEEE Transactions on Multimedia, 2002
Abstract

Cited by 60 (0 self)
Digital watermarking has been proposed as a solution to the problem of copyright protection of multimedia documents in networked environments. There are two important issues that watermarking algorithms need to address. First, watermarking schemes are required to provide trustworthy evidence for protecting rightful ownership; second, good watermarking schemes should satisfy the requirement of robustness and resist distortions due to common image manipulations (such as filtering, compression, etc.). In this paper, we propose a novel watermarking algorithm based on singular value decomposition (SVD). Analysis and experimental results show that the new watermarking method performs well in both security and robustness.
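A minimal sketch of the SVD-domain idea, using a toy scheme of my own rather than the paper's: add the watermark to the host's singular values and read it back by differencing spectra. The paper's actual algorithm is more involved (it applies a further SVD to the perturbed singular-value matrix and keeps side information for detection); this variant only illustrates why the singular spectrum is an attractive embedding domain, since it is stable under common distortions.

```python
import numpy as np

ALPHA = 0.05  # embedding strength (hypothetical value)

def embed(host, w):
    """Perturb the host's singular values by ALPHA * w (toy variant of
    embedding in the SVD domain; not the paper's full scheme)."""
    U, s, Vt = np.linalg.svd(host, full_matrices=False)
    return U @ np.diag(s + ALPHA * w) @ Vt

def extract(marked, host):
    """Non-blind extraction: difference the two singular spectra."""
    s_m = np.linalg.svd(marked, compute_uv=False)
    s_h = np.linalg.svd(host, compute_uv=False)
    return (s_m - s_h) / ALPHA

# Host with well-separated singular values, so the small perturbation
# cannot reorder them; watermark entries lie in [0, 1).
rng = np.random.default_rng(2)
Q1, _ = np.linalg.qr(rng.standard_normal((8, 8)))
Q2, _ = np.linalg.qr(rng.standard_normal((8, 8)))
host = Q1 @ np.diag(np.linspace(10.0, 1.0, 8)) @ Q2.T
w = rng.random(8)
print(np.allclose(extract(embed(host, w), host), w, atol=1e-8))
```

The separation requirement matters: if ALPHA times a watermark entry exceeded a gap between adjacent singular values, the sorted spectrum of the marked image would permute and extraction would mix components.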
Rank Degeneracy and Least Squares Problems
, 1976
Abstract

Cited by 41 (2 self)
This paper is concerned with least squares problems when the least squares matrix A is near a matrix that is not of full rank. A definition of numerical rank is given. It is shown that under certain conditions when A has numerical rank r there is a distinguished r-dimensional subspace of the column space of A that is insensitive to how it is approximated by r independent columns of A. The consequences of this fact for the least squares problem are examined. Algorithms are described for approximating the stable part of the column space of A. 1. Introduction In this paper we shall be concerned with the following problem. Let A be an m × n matrix with m ≥ n, and suppose A is near (in a sense to be made precise later) a matrix B whose rank is less than n. Can one find a set of linearly independent columns of A that span a good approximation to the column space of B? The solution of this problem is important in a number of applications. In this paper we shall be chiefly interested in...
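The notion of numerical rank in play can be made concrete with a short NumPy sketch (the function name and tolerance are mine): count the singular values of A above a tolerance reflecting the uncertainty in A, since everything below it is indistinguishable from a perturbation.

```python
import numpy as np

def numerical_rank(A, tol):
    """Number of singular values of A exceeding tol: A lies within
    spectral-norm distance ~tol of a matrix of exactly this rank."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol))

# A is "near a matrix that is not of full rank": a rank-1 outer
# product plus noise far below the chosen tolerance.
rng = np.random.default_rng(3)
A = np.outer(rng.standard_normal(6), rng.standard_normal(4))
A_noisy = A + 1e-8 * rng.standard_normal((6, 4))
print(numerical_rank(A_noisy, tol=1e-6))   # 1: the meaningful rank
print(numerical_rank(A_noisy, tol=0.0))    # 4: exact rank, set by the noise
```

The gap between the two answers is the paper's point: the exact rank of a perturbed matrix is meaningless, while the count above a sensible tolerance is stable.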
An improved algorithm for computing the singular value decomposition
 ACM Trans. Math. Software, 1982
Abstract

Cited by 38 (0 self)
The most well-known and widely used algorithm for computing the Singular Value Decomposition (SVD) A = UΣV^T of an m × n rectangular matrix A is the Golub-Reinsch algorithm (GR-SVD). In this paper, an improved version of the original GR-SVD algorithm is presented. The new algorithm works best for matrices with m >> n, but is more efficient even when m is only slightly greater than n (usually when m ≈ 2n) and in some cases can achieve as much as 50 percent savings. If the matrix U is explicitly desired, then n² extra storage locations are required, but otherwise no extra storage is needed. The two main modifications are: (1) first triangularizing A by Householder transformations before bidiagonalizing it (this idea seems to be widely known among some researchers in the field, but as far as can be determined, neither a detailed analysis nor an implementation has been published before), and (2) accumulating the left Givens transformations in GR-SVD on an n × n array instead of on an m × n array. A PFORT-verified FORTRAN implementation is included. Comparisons with the EISPACK SVD routine are given.
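The first modification is easy to demonstrate end to end in NumPy (names are mine; `np.linalg.qr` applies Householder reflections via LAPACK, matching the triangularization step the abstract describes): factor the tall matrix once as A = QR, run the standard SVD on the small n × n factor R, and fold Q back into U. The saving comes from running the iterative SVD phase on an n × n matrix instead of an m × n one.

```python
import numpy as np

def qr_first_svd(A):
    """SVD of a tall matrix A (m >> n) computed by first reducing it to
    an n x n triangular factor: A = Q R and R = Ur diag(s) Vt, hence
    A = (Q @ Ur) diag(s) Vt is a valid SVD of A."""
    Q, R = np.linalg.qr(A)           # Householder QR, reduced form
    Ur, s, Vt = np.linalg.svd(R)     # iterative phase on n x n only
    return Q @ Ur, s, Vt

rng = np.random.default_rng(4)
A = rng.standard_normal((200, 5))    # the favorable m >> n case
U, s, Vt = qr_first_svd(A)
print(np.allclose((U * s) @ Vt, A))                        # valid factorization
print(np.allclose(s, np.linalg.svd(A, compute_uv=False)))  # same spectrum
```

Since Q and Ur both have orthonormal columns, so does their product, and the singular values of A and R coincide, which is why the shortcut is exact rather than an approximation.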