Results 1–10 of 232
Accelerated training of conditional random fields with stochastic gradient methods. In ICML, 2006.
Cited by 95 (4 self)
Abstract
We apply Stochastic Meta-Descent (SMD), a stochastic gradient optimization method with gain vector adaptation, to the training of Conditional Random Fields (CRFs). On several large data sets, the resulting optimizer converges to the same quality of solution over an order of magnitude faster than limited-memory BFGS, the leading method reported to date. We report results for both exact and inexact inference techniques.
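The per-parameter gain adaptation at the heart of SMD can be illustrated on a toy quadratic. This is a minimal sketch, not the paper's CRF trainer: the objective, the meta-learning rate `mu`, the decay `lam`, and the sign conventions (which vary across presentations of SMD) are illustrative assumptions.

```python
import numpy as np

# Toy quadratic objective f(w) = 0.5 w^T A w, gradient A w, Hessian A.
A = np.diag([1.0, 10.0])          # deliberately ill-conditioned
grad = lambda w: A @ w
hess_vec = lambda w, v: A @ v     # Hessian-vector product (cheap in general)

w = np.array([5.0, 5.0])
eta = np.full(2, 0.05)            # per-parameter gains
v = np.zeros(2)                   # sensitivity of w to the log-gains
mu, lam = 0.001, 0.99             # meta-learning rate and decay (illustrative)

for _ in range(200):
    g = grad(w)
    # Gain adaptation: grow a gain when recent steps and the current
    # gradient point the same way, shrink it otherwise (clipped below).
    eta = eta * np.maximum(0.5, 1.0 - mu * v * g)
    # Auxiliary recursion propagating the dependence of w on the gains.
    v = lam * v - eta * (g + lam * hess_vec(w, v))
    w = w - eta * g               # gradient step with adapted gains

final_loss = float(0.5 * w @ A @ w)
print(final_loss)
```

On this quadratic the adapted gains let both the well- and ill-conditioned directions converge without hand-tuning a single global step size.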
Fluid Control Using the Adjoint Method. ACM Trans. Graph. (SIGGRAPH Proc.), 2004.
Cited by 70 (1 self)
Abstract
We describe a novel method for controlling physics-based fluid simulations through gradient-based nonlinear optimization. Using a technique known as the adjoint method, derivatives can be computed efficiently, even for large 3D simulations with millions of control parameters. In addition, we introduce the first method for the full control of free-surface liquids. We show how to compute adjoint derivatives through each step of the simulation, including the fast marching algorithm, and describe a new set of control parameters specifically designed for liquids.
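The core mechanism — one forward simulation plus one backward (adjoint) sweep yields the derivative with respect to every control parameter at once — can be sketched on a toy one-dimensional time-stepped system. The dynamics, objective, and step count below are invented for illustration; they stand in for the paper's fluid solver.

```python
import numpy as np

dt, T = 0.1, 50
u = 0.3 * np.ones(T)              # one control parameter per time step
target = 1.0

def forward(u):
    # Explicit Euler steps of x' = -x^3 + u(t), storing all states.
    xs = [0.0]
    for t in range(T):
        xs.append(xs[-1] + dt * (-xs[-1] ** 3 + u[t]))
    return xs

def gradient_adjoint(u):
    xs = forward(u)
    J = 0.5 * (xs[-1] - target) ** 2
    lam = xs[-1] - target          # adjoint variable dJ/dx at final time
    g = np.zeros(T)
    for t in reversed(range(T)):
        g[t] = lam * dt                            # dJ/du_t through step t
        lam = lam * (1.0 - 3.0 * dt * xs[t] ** 2)  # propagate adjoint backward
    return J, g

J, g = gradient_adjoint(u)

# Sanity check one component against a forward finite difference.
eps = 1e-6
u2 = u.copy(); u2[10] += eps
fd = (0.5 * (forward(u2)[-1] - target) ** 2 - J) / eps
print(abs(fd - g[10]))
```

The cost of the backward sweep is comparable to one forward simulation, regardless of how many control parameters there are — the property that makes million-parameter fluid control feasible.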
Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems, 2002.
What color is your Jacobian? Graph coloring for computing derivatives. SIAM Rev., 2005.
Cited by 39 (7 self)
Abstract
Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex-coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix-estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.
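The basic idea — partition the columns into structurally orthogonal groups so that one finite-difference evaluation recovers an entire group — can be sketched with a greedy grouping on a small tridiagonal Jacobian. The test function and the greedy order are illustrative; the paper studies the same partition problem through its distance-2 coloring formulation.

```python
import numpy as np

n = 5
# Sparsity pattern of a tridiagonal Jacobian (True = structurally nonzero).
pattern = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 1

# Greedy grouping: two columns may share a group only if no row
# has a nonzero in both (structural orthogonality).
groups = []
for j in range(n):
    for grp in groups:
        if not any((pattern[:, j] & pattern[:, k]).any() for k in grp):
            grp.append(j)
            break
    else:
        groups.append([j])
num_groups = len(groups)

def f(x):
    y = x ** 2
    y[1:] += 0.5 * x[:-1]   # subdiagonal coupling
    y[:-1] += 0.5 * x[1:]   # superdiagonal coupling
    return y

x = np.linspace(1.0, 2.0, n)
eps, fx = 1e-7, f(x)
J = np.zeros((n, n))
for grp in groups:
    d = np.zeros(n)
    d[grp] = 1.0
    jd = (f(x + eps * d) - fx) / eps   # one compressed difference per group
    for j in grp:
        rows = pattern[:, j]
        J[rows, j] = jd[rows]          # group mates are zero in these rows

J_exact = np.diag(2 * x) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
max_err = np.max(np.abs(J - J_exact))
print(num_groups, max_err)
```

A tridiagonal Jacobian needs only three groups (hence three extra function evaluations) instead of one per column, and the saving grows with the matrix dimension.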
Reduced order modeling of the Upper Tropical Pacific ocean model using proper orthogonal decomposition. Computers and Mathematics with Applications, 2006.
Cited by 37 (18 self)
Abstract
The proper orthogonal decomposition (POD) has been shown to be an efficient model-reduction technique for simulating physical processes governed by partial differential equations. In this paper we make an initial effort to investigate problems related to POD reduced-order modeling of a large-scale upper ocean circulation in the tropical Pacific domain. We constructed different POD models with different choices of snapshots and different numbers of POD basis functions. The results from these different POD models are compared with those of the original model. The main findings are: (1) the large-scale seasonal variability of the tropical Pacific obtained by the original model can be captured well by a low-dimensional system of order 22, constructed from 20 snapshots and 7 leading POD basis functions; (2) the RMS error in the upper ocean layer thickness for the POD model of order 22 is less than 1 m, and its correlation with the original model is around 0.99; (3) the modes that capture 99% of the energy are necessary to construct POD models that yield high accuracy.
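The construction the authors use — extract a POD basis from snapshots and project onto the leading modes — can be sketched with an SVD of a synthetic snapshot matrix. The synthetic data and dimensions are invented; only the 99%-energy criterion mirrors the abstract.

```python
import numpy as np

# Synthetic "snapshots": 200-dimensional states sampled at 20 times,
# secretly generated by 3 spatial modes with periodic time coefficients.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 20)
spatial_modes = rng.standard_normal((200, 3))
snapshots = spatial_modes @ np.vstack([np.sin(t), np.cos(t), np.sin(2 * t)])

# POD basis = left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99) + 1)   # smallest basis with 99% energy
basis = U[:, :r]

# Reduced-order reconstruction of the snapshots by Galerkin projection.
recon = basis @ (basis.T @ snapshots)
rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
print(r, rel_err)
```

Because the synthetic data has exactly three underlying modes, the 99%-energy criterion selects a basis of size three and the projection error is at rounding level; for real ocean model output the singular values decay gradually and the basis size trades accuracy against cost.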
The dynamics of legged locomotion: Models, analyses, and challenges. SIAM Review, 2006.
Cited by 30 (2 self)
Abstract
Cheetahs and beetles run, dolphins and salmon swim, and bees and birds fly with grace and economy surpassing our technology. Evolution has shaped the breathtaking abilities of animals, leaving us the challenge of reconstructing their targets of control and mechanisms of dexterity. In this review we explore a corner of this fascinating world. We describe mathematical models for legged animal locomotion, focusing on rapidly running insects, and highlighting achievements and challenges that remain. Newtonian body-limb dynamics are most naturally formulated as piecewise-holonomic rigid body mechanical systems, whose constraints change as legs touch down or lift off. Central pattern generators and proprioceptive sensing require models of spiking neurons, and simplified phase oscillator descriptions of ensembles of them. A full neuromechanical model of a running animal requires integration of these elements, along with proprioceptive feedback and models of goal-oriented sensing, planning and learning. We outline relevant background material from neurobiology and biomechanics, explain key properties of the hybrid dynamical systems that underlie legged locomotion models, and provide numerous examples of such models, from the simplest, completely soluble 'peg-leg walker' to complex neuromuscular subsystems that are yet to be assembled into models of behaving animals.
Error Estimations For Indirect Measurements: Randomized Vs. Deterministic Algorithms For "Black-Box" Programs. In Handbook on Randomized Computing, Kluwer, 2001.
Cited by 29 (13 self)
Abstract
In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure a distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we can measure them indirectly: by first measuring some related quantities x_1, ..., x_n, and then by using the known relation between the x_i and y to reconstruct the value of the desired quantity y. In practice, it is often very important to estimate the error of the resulting indirect measurement. In this paper, we describe and compare different deterministic and randomized algorithms for solving this problem in the situation when a program for transforming the estimates x̃_1, ..., x̃_n for the x_i into an estimate for y is only available as a black box (with no source code at hand). We consider this problem in two settings: statistical, when the measurement errors Δx_i = x̃_i − x_i are independent...
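A minimal randomized version of this error estimation, for the statistical setting with independent Gaussian errors, simply re-runs the black box on perturbed inputs. The black-box function and the error magnitudes below are invented for illustration; the comparison with linearized propagation stands in for the deterministic alternative.

```python
import numpy as np

# A "black box": we may only evaluate it, not inspect its source.
def blackbox(x1, x2, x3):
    return x1 * x2 + np.sin(x3)

est = np.array([2.0, 3.0, 0.5])        # measured estimates of x1..x3
sigma = np.array([0.01, 0.02, 0.005])  # std. deviations of measurement errors

# Randomized (Monte Carlo) propagation: perturb inputs, re-run, take std.
rng = np.random.default_rng(1)
N = 20000
samples = est + sigma * rng.standard_normal((N, 3))
ys = blackbox(samples[:, 0], samples[:, 1], samples[:, 2])
mc_std = ys.std()

# Linearized propagation for comparison: sigma_y^2 = sum (df/dxi)^2 sigma_i^2
# (here the partial derivatives are known analytically).
grad = np.array([est[1], est[0], np.cos(est[2])])
lin_std = float(np.sqrt(np.sum((grad * sigma) ** 2)))
print(mc_std, lin_std)
```

The Monte Carlo estimate needs only black-box evaluations and agrees with the linearized formula when errors are small, while its cost is independent of the number of inputs n — the trade-off the paper analyzes.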
Model-Based Hand Tracking with Texture, Shading and Self-occlusions, 2008.
Cited by 28 (3 self)
Abstract
A novel model-based approach to 3D hand tracking from monocular video is presented. The 3D hand pose, the hand texture and the illuminant are dynamically estimated through minimization of an objective function. Derived from an inverse problem formulation, the objective function enables explicit use of texture temporal continuity and shading information, while handling important self-occlusions and time-varying illumination. The minimization is done efficiently using a quasi-Newton method, for which we propose a rigorous derivation of the objective function gradient. Particular attention is given to terms related to the change of visibility near self-occlusion boundaries that are neglected in existing formulations. In doing so we introduce new occlusion forces and show that using all gradient terms greatly improves the performance of the method. Experimental results demonstrate the potential of the formulation.
Newton’s method with deflation for isolated singularities of polynomial systems. Theor. Comp. Sci. 359.
Cited by 28 (10 self)
Abstract
We present a modification of Newton’s method to restore quadratic convergence for isolated singular solutions of polynomial systems. Our method is symbolic-numeric: we produce a new polynomial system which has the original multiple solution as a regular root. We show that the number of deflation stages is bounded by the multiplicity of the isolated root. Our implementation performs well on a large class of applications. 2000 Mathematics Subject Classification. Primary 65H10. Secondary 14Q99, 68W30. Key words and phrases. Newton’s method, deflation, numerical homotopy algorithms, symbolic-numeric computations.
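A one-dimensional textbook analogue shows why deflation helps: plain Newton converges only linearly at a double root, while Newton applied to g = f/f' (the classical scalar deflation, not the paper's multiplier-based symbolic-numeric construction) restores fast convergence to the same root.

```python
# f has a double root at x* = 1 and a simple root at x = -2.
def f(x):   return (x - 1.0) ** 2 * (x + 2.0)
def df(x):  return 2.0 * (x - 1.0) * (x + 2.0) + (x - 1.0) ** 2
def d2f(x): return 2.0 * (x + 2.0) + 4.0 * (x - 1.0)

def newton(x, step, iters=8):
    for _ in range(iters):
        if f(x) == 0.0:          # already an exact root; stop
            break
        x = x - step(x)
    return x

# Plain Newton: the error roughly halves per step at a double root.
plain = newton(2.0, lambda x: f(x) / df(x))

# Deflated: Newton on g = f/f', which has a simple root at x*.
# g' = 1 - f*f''/f'^2, so the Newton step for g is (f/f') / (1 - f*f''/f'^2).
deflated = newton(2.0, lambda x: (f(x) / df(x))
                                 / (1.0 - f(x) * d2f(x) / df(x) ** 2))

plain_err = abs(plain - 1.0)
deflated_err = abs(deflated - 1.0)
print(plain_err, deflated_err)
```

After eight iterations from the same start, plain Newton is still about 5e-3 away from the double root while the deflated iteration has converged to rounding level; the paper generalizes this idea to polynomial systems with roots of arbitrary multiplicity.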
A New Cauchy-Based Black-Box Technique for Uncertainty in Risk Analysis. In Risk Analysis, Reliability Engineering and Systems Safety, 2002.
Cited by 25 (13 self)
Abstract
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general formalism for handling different types of uncertainty, and to describe a new black-box technique for processing this type of uncertainty.
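The Cauchy-based idea exploits a stability property: if input errors are simulated as Cauchy deviates with scales Δ_i, the output error of a (linearized) black box is again Cauchy, with scale equal to the interval half-width Σ|∂y/∂x_i|·Δ_i. The sketch below is built on illustrative assumptions — a secretly linear black box, and the output scale recovered by maximum likelihood via bisection — rather than on the paper's exact procedure.

```python
import numpy as np

# Black box, only evaluable; secretly linear, so the exact interval
# half-width of the output is sum(|c_i| * Delta_i).
c = np.array([2.0, -1.0, 0.5])
blackbox = lambda x: float(c @ x)

x0 = np.array([1.0, 2.0, 3.0])
Delta = np.array([0.1, 0.05, 0.2])   # interval half-widths of the inputs
exact = float(np.abs(c) @ Delta)     # ground truth for this linear case

rng = np.random.default_rng(2)
N = 4000
y0 = blackbox(x0)
# Simulated errors: Cauchy deviates with scales Delta_i.
d = Delta * rng.standard_cauchy((N, 3))
dy = np.array([blackbox(x0 + d[k]) for k in range(N)]) - y0

# dy is Cauchy-distributed with the unknown scale s; recover s by maximum
# likelihood: bisection on the monotone equation sum 1/(1+(dy/s)^2) = N/2.
lo, hi = 1e-6, float(np.max(np.abs(dy)))
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.sum(1.0 / (1.0 + (dy / mid) ** 2)) < N / 2:
        lo = mid     # s too small
    else:
        hi = mid
est = 0.5 * (lo + hi)
print(est, exact)
```

Unlike plain interval arithmetic, the number of black-box calls here is governed by the desired statistical accuracy, not by the number of inputs.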