Results 1–10 of 1,158
A Fast Marching Level Set Method for Monotonically Advancing Fronts
 PROC. NAT. ACAD. SCI
, 1995
Abstract

Cited by 617 (22 self)
We present a fast marching level set method for monotonically advancing fronts, which leads to an extremely fast scheme for solving the Eikonal equation. Level set methods are numerical techniques for computing the position of propagating fronts. They rely on an initial value partial differential equation for a propagating level set function, and use techniques borrowed from hyperbolic conservation laws. Topological changes, corner and cusp development, and accurate determination of geometric properties such as curvature and normal direction are naturally obtained in this setting. In this paper, we describe a particular case of such methods for interfaces whose speed depends only on local position. The technique works by coupling work on entropy conditions for interface motion, the theory of viscosity solutions for Hamilton-Jacobi equations and fast adaptive narrow band level set methods. The technique is applicable to a variety of problems, including shape-from-shading problems, lithog...
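The speed of the scheme comes from ordering grid updates with a heap, much as in Dijkstra's algorithm, combined with an upwind finite-difference solve at each cell. Below is a minimal sketch for the isotropic 2-D case; the function name, unit grid spacing, and first-order upwind update are illustrative assumptions, not the authors' code:

```python
import heapq
import math

def fast_march(speed, sources):
    """Solve |grad T| * F = 1 on a unit grid by a fast-marching sweep.
    speed: 2-D list of positive speeds F; sources: list of (i, j) seed cells."""
    n, m = len(speed), len(speed[0])
    T = [[math.inf] * m for _ in range(n)]
    frozen = [[False] * m for _ in range(n)]
    heap = []
    for (i, j) in sources:
        T[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # Upwind update: use the smallest neighbour value in each axis.
        tx = min([T[k][j] for k in (i - 1, i + 1) if 0 <= k < n], default=math.inf)
        ty = min([T[i][k] for k in (j - 1, j + 1) if 0 <= k < m], default=math.inf)
        h = 1.0 / speed[i][j]
        if abs(tx - ty) >= h:
            return min(tx, ty) + h          # one-sided (single-axis) update
        # Two-sided update: larger root of (t - tx)^2 + (t - ty)^2 = h^2.
        return 0.5 * (tx + ty + math.sqrt(2 * h * h - (tx - ty) ** 2))

    while heap:
        t, i, j = heapq.heappop(heap)
        if frozen[i][j]:
            continue                        # stale heap entry
        frozen[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m and not frozen[a][b]:
                t_new = update(a, b)
                if t_new < T[a][b]:
                    T[a][b] = t_new
                    heapq.heappush(heap, (t_new, a, b))
    return T
```

With unit speed everywhere this computes a first-order approximation of the distance map from the seed cells, one heap pop per frozen cell.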
Large steps in cloth simulation
 SIGGRAPH 98 Conference Proceedings
, 1998
Abstract

Cited by 578 (5 self)
The bottleneck in most cloth simulation systems is that time steps must be small to avoid numerical instability. This paper describes a cloth simulation system that can stably take large time steps. The simulation system couples a new technique for enforcing constraints on individual cloth particles with an implicit integration method. The simulator models cloth as a triangular mesh, with internal cloth forces derived using a simple continuum formulation that supports modeling operations such as local anisotropic stretch or compression; a unified treatment of damping forces is included as well. The implicit integration method generates a large, unbanded sparse linear system at each time step which is solved using a modified conjugate gradient method that simultaneously enforces particles' constraints. The constraints are always maintained exactly, independent of the number of conjugate gradient iterations, which is typically small. The resulting simulation system is significantly faster than previous accounts of cloth simulation systems in the literature. Keywords: Cloth, simulation, constraints, implicit integration, physically-based modeling.
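The stability gain from implicit integration can be seen even in one degree of freedom. The sketch below applies backward Euler to a single damped spring, a drastic simplification of the paper's triangular-mesh system; the function and parameter names are illustrative:

```python
def backward_euler_spring(x0, v0, k, c, m, dt, steps):
    """Backward (implicit) Euler for m*x'' = -k*x - c*x'.
    Stable at large dt where explicit (forward) Euler would blow up."""
    x, v = x0, v0
    for _ in range(steps):
        # Implicit system: v1 = v + dt*(-k*x1 - c*v1)/m,  x1 = x + dt*v1.
        # Substituting x1 gives a scalar linear equation for v1.
        v = (v - dt * k * x / m) / (1 + dt * c / m + dt * dt * k / m)
        x = x + dt * v
    return x, v
```

With k/m = 100 and dt = 0.5 the explicit method diverges, while this iteration damps the oscillation; in the paper's setting the same substitution produces a large sparse linear system per step instead of a scalar one.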
Fast Multiresolution Image Querying
, 1995
Abstract

Cited by 313 (4 self)
We present a method for searching in an image database using a query image that is similar to the intended target. The query image may be a hand-drawn sketch or a (potentially low-quality) scan of the image to be retrieved. Our searching algorithm makes use of multiresolution wavelet decompositions of the query and database images. The coefficients of these decompositions are distilled into small "signatures" for each image. We introduce an "image querying metric" that operates on these signatures. This metric essentially compares how many significant wavelet coefficients the query has in common with potential targets. The metric includes parameters that can be tuned, using a statistical analysis, to accommodate the kinds of image distortions found in different types of image queries. The resulting algorithm is simple, requires very little storage overhead for the database of signatures, and is fast enough to be performed on a database of 20,000 images at interactive rates (on standard...
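A one-dimensional sketch of the signature idea, assuming a standard Haar decomposition and keeping only the signs and positions of the largest-magnitude coefficients; the paper works on 2-D image wavelets and adds statistically tuned weights, which are omitted here:

```python
import heapq

def haar_1d(signal):
    """Full Haar decomposition of a signal whose length is a power of two."""
    data = list(signal)
    n = len(data)
    while n > 1:
        half = n // 2
        nxt = data[:]
        for i in range(half):
            nxt[i] = (data[2 * i] + data[2 * i + 1]) / 2        # averages
            nxt[half + i] = (data[2 * i] - data[2 * i + 1]) / 2  # details
        data = nxt
        n = half
    return data

def signature(signal, m):
    """Keep positions and signs of the m largest-magnitude detail coefficients
    (index 0, the overall average, is excluded)."""
    coeffs = haar_1d(signal)
    top = heapq.nlargest(m, range(1, len(coeffs)), key=lambda i: abs(coeffs[i]))
    return {i: (1 if coeffs[i] > 0 else -1) for i in top}

def score(sig_q, sig_t):
    """Count significant coefficients the query shares (same position, same sign)."""
    return sum(1 for i, s in sig_q.items() if sig_t.get(i) == s)
```

A signal scores maximally against itself, while a mirrored signal flips coefficient signs and scores low, which is the essence of the querying metric.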
Methods for dealing with reaction time outliers
 Psychological Bulletin
, 1993
Abstract

Cited by 252 (6 self)
The effect of outliers on reaction time analyses is evaluated. The first section assesses the power of different methods of minimizing the effect of outliers on analysis of variance (ANOVA) and makes recommendations about the use of transformations and cutoffs. The second section examines the effect of outliers and cutoffs on different measures of location, spread, and shape and concludes using quantitative examples that robust measures are much less affected by outliers and cutoffs than measures based on moments. The third section examines fitting explicit distribution functions as a way of recovering means and standard deviations and concludes that unless fitting the distribution function is used as a model of distribution shape, the method is probably not worth routine use. Almost everyone who has analyzed reaction time data has been faced with the problem of what to do with outlier response times. Outliers are response times generated by processes that are not the ones being studied. The processes that generate outliers can be fast guesses, guesses that are based on the subject's estimate of the usual time to respond, multiple runs of the process that is actually under study, the subject's inattention, or
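The contrast between moment-based and robust measures of location is easy to demonstrate numerically; the reaction times below are hypothetical:

```python
import statistics

# Hypothetical reaction times in ms; the 3000 ms trial is an outlier
# (e.g. a lapse of attention), not part of the process under study.
reaction_times = [420, 455, 470, 490, 510, 530, 545, 3000]
clean = reaction_times[:-1]

# Moment-based location: one outlier drags the mean by hundreds of ms.
mean_shift = statistics.mean(reaction_times) - statistics.mean(clean)

# Robust location: the median barely moves.
median_shift = statistics.median(reaction_times) - statistics.median(clean)
```

Here the single outlier shifts the mean by over 300 ms but the median by only 10 ms, the pattern the second section quantifies across measures of location, spread, and shape.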
A Pyramid Approach to Sub-Pixel Registration Based on Intensity
, 1998
Abstract

Cited by 226 (18 self)
We present an automatic sub-pixel registration algorithm that minimizes the mean square intensity difference between a reference and a test data set, which can be either images (2-D) or volumes (3-D). It uses an explicit spline representation of the images in conjunction with spline processing, and is based on a coarse-to-fine iterative strategy (pyramid approach). The minimization is performed according to a new variation (ML*) of the Marquardt-Levenberg algorithm for nonlinear least-squares optimization. The geometric deformation model is a global 3-D affine transformation that can be optionally restricted to rigid-body motion (rotation and translation), combined with isometric scaling. It also includes an optional adjustment of image contrast differences. We obtain excellent results for the registration of intramodality Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI) data. We conclude that the multiresolution refinement strategy is more robust than a comparable single-stage method, being less likely to be trapped into a false local optimum. In addition, our improved version of the Marquardt-Levenberg algorithm is faster.
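The coarse-to-fine strategy can be sketched in one dimension with a pure integer translation; `register` and the exhaustive local search below are illustrative stand-ins for the paper's Marquardt-Levenberg optimization over affine parameters:

```python
def downsample(sig):
    """Halve the resolution by averaging adjacent samples."""
    return [(a + b) / 2 for a, b in zip(sig[0::2], sig[1::2])]

def best_shift(ref, test, centre, radius):
    """Integer shift minimising the mean-square difference, searched near centre."""
    def mse(s):
        pairs = [(ref[i], test[i + s]) for i in range(len(ref)) if 0 <= i + s < len(test)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(centre - radius, centre + radius + 1), key=mse)

def register(ref, test, levels=3):
    """Coarse-to-fine estimation of the translation between two signals."""
    pyramid = [(ref, test)]
    for _ in range(levels - 1):
        r, t = pyramid[-1]
        pyramid.append((downsample(r), downsample(t)))
    shift = 0
    for r, t in reversed(pyramid):   # coarsest level first
        shift *= 2                   # a coarse shift doubles at the next finer level
        shift = best_shift(r, t, shift, 2)
    return shift
```

Each fine level only refines the coarse estimate within a small window, which is what makes the multiresolution strategy both fast and less likely to land in a false local optimum.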
Recovery of Parametric Models from Range Images: The Case for Superquadrics with Global Deformations
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1990
Abstract

Cited by 220 (6 self)
In this paper, we introduce a method for recovery of compact volumetric models for single part objects. To solve the shape recovery problem in isolation from segmentation, we assume that only a single object is present in the scene at a time. Although we made this simplification to break up the problem, this assumption is still valid for some restricted environments [30]. We show that the shape of those objects can be recovered subject to the model's internal constraints. In this work we use a particular example of compact volumetric models: superquadric primitives with parametric deformations. We introduce a least-squares minimization method to recover model and deformation parameters using range data as the input. Range data enables us to study shape recovery independent of different passive techniques of obtaining depth data, such as depth from stereo, depth from focus, or depth from motion. The fitting function which we minimize is a cost or energy function whose value depends on the distance of points from the model's surface and on the overall size of the model. We show that the solution space, which can have more than one "deep" minimum or acceptable solution and many shallow local minima, can be searched efficiently with a gradient descent method. By using a stochastic technique, the procedure can escape from shallow local minima, and a particular solution among several acceptable solutions can be reached by searching in a constrained parameter subspace. The paper is organized as follows. Section II is on parametric models in computer vision, focusing on comparison of generalized cylinders and superquadrics. Section III explains superquadric models in detail. Section IV is about recovery of nondeformed superquadric models, and Section V is on recovery of defo...
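The shape of the fitting procedure, gradient descent on a least-squares cost measuring the distance of data points from the model surface, can be shown with a toy one-parameter model; fitting a circle's radius stands in for the superquadric parameters, and all names are illustrative:

```python
import math

def fit_radius(points, lr=0.1, steps=200):
    """Gradient descent on cost = sum((dist_i - r)^2): fit the radius of a
    circle centred at the origin to 2-D points (a toy stand-in for fitting
    superquadric model parameters to range data)."""
    r = 1.0
    for _ in range(steps):
        # d(cost)/dr = -2 * sum(dist_i - r)
        grad = -2 * sum(math.hypot(x, y) - r for x, y in points)
        r -= lr * grad / len(points)
    return r
```

With one parameter the cost is convex; the paper's multi-parameter superquadric cost has many shallow local minima, which is why a stochastic escape mechanism is added on top of the descent.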
Efficient BackProp
, 1998
Abstract

Cited by 209 (31 self)
The convergence of backpropagation learning is analyzed so as to explain common phenomena observed by practitioners. Many undesirable behaviors of backprop can be avoided with tricks that are rarely exposed in serious technical publications. This paper gives some of those tricks, and offers explanations of why they work. Many authors have suggested that second-order optimization methods are advantageous for neural net training. It is shown that most "classical" second-order methods are impractical for large neural networks. A few methods are proposed that do not have these limitations.
1 Introduction
Backpropagation is a very popular neural network learning algorithm because it is conceptually simple, computationally efficient, and because it often works. However, getting it to work well, and sometimes to work at all, can seem more of an art than a science. Designing and training a network using backprop requires making many seemingly arbitrary choices such as the number ...
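One widely known trick from this line of work is shifting each input feature to zero mean and unit variance before training, which conditions the error surface for gradient descent. A small sketch; the function name and the zero-variance guard are assumptions:

```python
import math

def standardize(rows):
    """Shift every input feature (column) to zero mean and unit variance.
    Features with zero variance are left unscaled (guard: std -> 1.0)."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((v - m) ** 2 for v in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in rows]
```

Without this step, features on very different scales give the loss surface elongated valleys that force a small learning rate on every weight.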
Implementation of a Portable Nested Data-Parallel Language
 Journal of Parallel and Distributed Computing
, 1994
Abstract

Cited by 203 (28 self)
This paper gives an overview of the implementation of Nesl, a portable nested data-parallel language. This language and its implementation are the first to fully support nested data structures as well as nested data-parallel function calls. These features allow the concise description of parallel algorithms on irregular data, such as sparse matrices and graphs. In addition, they maintain the advantages of data-parallel languages: a simple programming model and portability. The current Nesl implementation is based on an intermediate language called Vcode and a library of vector routines called Cvl. It runs on the Connection Machine CM-2, the Cray Y-MP C90, and serial machines. We compare initial benchmark results of Nesl with those of machine-specific code on these machines for three algorithms: least-squares line fitting, median finding, and a sparse matrix-vector product. These results show that Nesl's performance is competitive with that of machine-specific codes for regular dense da...
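The nested data-parallel pattern the abstract describes, an outer parallel map over irregular-length rows with an inner reduction inside each, is exactly what makes a sparse matrix-vector product concise. A serial Python rendering of the pattern (not NESL syntax):

```python
def sparse_matvec(rows, x):
    """Nested data-parallel pattern in serial form: an outer map over rows of
    irregular length, each an inner map-reduce over (column, value) pairs.
    rows: list of lists of (j, value); x: dense vector."""
    return [sum(v * x[j] for j, v in row) for row in rows]
```

Both comprehension levels are independent over their elements, which is what a nested data-parallel implementation can flatten and execute in parallel despite the irregular row lengths.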
Advances in Domain Independent Linear Text Segmentation
, 2000
Abstract

Cited by 182 (1 self)
This paper describes a method for linear text segmentation which is twice as accurate and over seven times as fast as the state-of-the-art (Reynar, 1998). Inter-sentence similarity is replaced by rank in the local context. Boundary locations are discovered by divisive clustering.
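The ranking step can be sketched as replacing each inter-sentence similarity by the fraction of values in a local window that it exceeds; details such as the window shape here are assumptions, not the paper's exact scheme:

```python
def rank_transform(sim, radius):
    """Replace each similarity value by its local rank: the fraction of
    neighbouring values (within `radius` positions) that it exceeds."""
    out = []
    for i, v in enumerate(sim):
        lo, hi = max(0, i - radius), min(len(sim), i + radius + 1)
        neighbours = [sim[j] for j in range(lo, hi) if j != i]
        out.append(sum(1 for u in neighbours if v > u) / len(neighbours))
    return out
```

The transform discards absolute similarity magnitudes, which are unreliable for short text units, and keeps only their local ordering.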
Multiresolution Curves
, 1994
Abstract

Cited by 174 (5 self)
We describe a multiresolution curve representation, based on wavelets, that conveniently supports a variety of operations: smoothing a curve; editing the overall form of a curve while preserving its details; and approximating a curve within any given error tolerance for scan conversion. We present methods to support continuous levels of smoothing as well as direct manipulation of an arbitrary portion of the curve; the control points, as well as the discrete nature of the underlying hierarchical representation, can be hidden from the user. The multiresolution representation requires no extra storage beyond that of the original control points, and the algorithms using the representation are both simple and fast.
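The no-extra-storage property follows from the decomposition being a change of basis: n control points become n/2 coarse points plus n/2 detail coefficients, and the split inverts exactly. A Haar-style sketch with scalar coordinates for brevity (the paper uses B-spline wavelets; this simpler filter is for illustration):

```python
def decompose(points):
    """One level of a Haar-style multiresolution split (even point count):
    coarse averages plus details, same total count as the input."""
    coarse = [(a + b) / 2 for a, b in zip(points[0::2], points[1::2])]
    detail = [(a - b) / 2 for a, b in zip(points[0::2], points[1::2])]
    return coarse, detail

def reconstruct(coarse, detail):
    """Exact inverse of decompose: (c + d, c - d) restores each point pair."""
    points = []
    for c, d in zip(coarse, detail):
        points += [c + d, c - d]
    return points
```

Editing the coarse points changes the overall form while the stored details preserve the fine character of the curve, and smoothing amounts to attenuating or dropping detail levels.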