Results 21 – 30 of 1,104
RSVM: Reduced support vector machines
Data Mining Institute, Computer Sciences Department, University of Wisconsin, 2001
Cited by 125 (16 self)
Abstract: An algorithm is proposed which generates a nonlinear kernel-based separating surface that requires as little as 1% of a large dataset for its explicit evaluation. To generate this nonlinear surface, the entire dataset is used as a constraint in an optimization problem with very few variables corresponding to the 1% …
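The reduced-kernel idea can be illustrated in miniature. This hypothetical toy uses a Gaussian kernel and plain least squares rather than the paper's smooth SVM formulation; all data rows constrain the fit, but only coefficients for a small random subset are solved for:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = np.sign(X[:, 0] * X[:, 1])          # toy nonlinear (XOR-like) labels

m = 10                                   # reduced set: 1% of the data
Xbar = X[rng.choice(len(X), m, replace=False)]

# All 1000 points enter the fit, but only m coefficients are unknowns.
K = rbf(X, Xbar)                         # 1000 x 10 reduced kernel matrix
u, *_ = np.linalg.lstsq(K, y, rcond=None)
accuracy = (np.sign(K @ u) == y).mean()
print(accuracy)
```

The point of the reduction is visible in the shapes: the kernel matrix is tall and thin (n × m instead of n × n), so the optimization touches only m variables.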
Autocalibration from planar scenes
European Conference on Computer Vision, 1998
Cited by 124 (2 self)
This paper describes a theory and a practical algorithm for the autocalibration of a moving projective camera, from views of a planar scene. The unknown camera calibration, and (up to scale) the unknown scene geometry and camera motion, are recovered from the hypothesis that the camera’s internal parameters remain constant during the motion. This work extends the various existing methods for non-planar autocalibration to a practically common situation in which it is not possible to bootstrap the calibration from an intermediate projective reconstruction. It also extends Hartley’s method for the internal calibration of a rotating camera, to allow camera translation and to provide 3D as well as calibration information. The basic constraint is that the projections of orthogonal direction vectors (points at infinity) in the plane must be orthogonal in the calibrated camera frame of each image. Abstractly, since the two circular points of the 3D plane (representing its Euclidean structure) lie on the 3D absolute conic, their projections into each image must lie on the absolute conic’s image (representing the camera calibration). The resulting numerical algorithm optimizes this constraint over all circular points and projective calibration parameters, using the inter-image homographies as a projective scene representation.
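The circular-point constraint described above can be written compactly. This uses standard notation for the image of the absolute conic, not copied from the paper itself:

```latex
% H_i : world-plane-to-image homography for image i.
% Circular points of the plane, projected into image i:
x_i^{\pm} \;=\; H_i \, (1,\ \pm i,\ 0)^{\top}
% They must lie on the image of the absolute conic
% \omega_i \simeq (K_i K_i^{\top})^{-1}, giving two real
% scalar constraints per image:
\left( x_i^{\pm} \right)^{\top} \omega_i \, x_i^{\pm} \;=\; 0
```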
Optimization by direct search: New perspectives on some classical and modern methods
SIAM Review, 2003
Cited by 123 (13 self)
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
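A minimal compass (coordinate) search conveys the flavor of the derivative-free methods surveyed: probe along each coordinate direction, shrink the step when no probe improves. This is a toy instance, not any specific algorithm from the review:

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Try +/- step along each coordinate axis; on failure halve the step.
    No derivatives of f are ever used."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            trial = x + step * d
            ft = f(trial)
            if ft < fx:                 # accept the first improving probe
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step /= 2                   # no direction helped: refine the mesh
            if step < tol:
                break
    return x, fx

x, fx = compass_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
print(x)  # close to the minimizer [1, -2]
```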
Through-the-Lens Camera Control
1992
Cited by 122 (6 self)
In this paper we introduce through-the-lens camera control, a body of techniques that permit a user to manipulate a virtual camera by controlling and constraining features in the image seen through its lens. Rather than solving for camera parameters directly, constrained optimization is used to compute their time derivatives based on desired changes in user-defined controls. This effectively permits new controls to be defined independent of the underlying parameterization. The controls can also serve as constraints, maintaining their values as others are changed. We describe the techniques in general and work through a detailed example of a specific camera model. Our implementation demonstrates a gallery of useful controls and constraints and provides some examples of how these may be used in composing images and animations.
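The "solve for time derivatives of camera parameters, not the parameters themselves" step can be sketched as a damped pseudoinverse solve. The Jacobian and control values below are hypothetical, and this least-norm step is only the linear-algebra core that such constrained control builds on, not the paper's full formulation:

```python
import numpy as np

def camera_step(J, dh, damping=1e-6):
    """Least-norm dq solving J dq = dh: J is the Jacobian of image-space
    controls h with respect to camera parameters q, dh the desired control
    velocities. Damping keeps the solve well-posed near singular J."""
    JJt = J @ J.T + damping * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, dh)

J = np.array([[1.0, 0.0,  0.5],
              [0.0, 1.0, -0.5]])     # 2 controls, 3 camera parameters
dh = np.array([0.1, -0.2])           # desired change in the controls
dq = camera_step(J, dh)
print(J @ dq)                        # reproduces dh (to damping accuracy)
```

Because there are more parameters than controls, infinitely many `dq` satisfy the constraint; the pseudoinverse picks the smallest one, which is why the controls stay independent of the underlying parameterization.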
Algorithms and applications for approximate nonnegative matrix factorization
Computational Statistics and Data Analysis, 2006
Cited by 117 (6 self)
In this paper we discuss the development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts is discussed, along with opportunities for future work on modifying NMF algorithms for large-scale and time-varying datasets. Key words: nonnegative matrix factorization, text mining, spectral data analysis, email surveillance, conjugate gradient, constrained least squares.
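A baseline NMF can be sketched with the classic Lee–Seung multiplicative updates. This is the plain unconstrained variant, not the hybrid sparsity/smoothness-constrained methods the paper analyzes:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Approximate V ~ W @ H with nonnegative factors via multiplicative
    updates; eps guards against division by zero."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, H fixed
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf(V, rank=5)
err = np.linalg.norm(V - W @ H)
print(err)   # Frobenius reconstruction error after 200 sweeps
```

Each update is guaranteed not to increase the Frobenius objective, and the factors stay elementwise nonnegative by construction, which is what makes the outputs interpretable as additive parts.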
Calibration as Parameter Estimation in Sensor Networks
2002
Cited by 116 (7 self)
We describe an ad hoc localization system for sensor networks and explain why traditional calibration methods are inadequate for this system. Building upon previous work, we frame calibration as a parameter estimation problem; we parameterize each device and choose the values of those parameters that optimize the overall system performance. This method reduces our average error from 74.6% without calibration to 10.1%. We propose ways to expand this technique to a method of autocalibration for localization, as well as to other sensor network applications.
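Calibration-as-parameter-estimation can be shown in miniature. The linear gain/offset sensor model and all numbers below are hypothetical; the paper's devices and error model differ, but the pattern is the same: pick the per-device parameters that minimize the overall prediction error:

```python
import numpy as np

# Hypothetical model: device i reports r = gain_i * d + offset_i
# for a true stimulus d, plus noise.
rng = np.random.default_rng(0)
gain_true = np.array([1.10, 0.90, 1.05])
off_true = np.array([0.20, -0.10, 0.05])
d = rng.uniform(1.0, 10.0, size=50)                    # true stimuli
R = (gain_true[:, None] * d + off_true[:, None]
     + rng.normal(0.0, 0.01, size=(3, 50)))            # noisy readings

# Estimate each device's (gain, offset) by least squares: the parameter
# values that minimize the total squared prediction error.
A = np.c_[d, np.ones_like(d)]                          # 50 x 2 design matrix
params, *_ = np.linalg.lstsq(A, R.T, rcond=None)       # 2 x 3 solution
gain_est, off_est = params
print(gain_est, off_est)
```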
Sequential Quadratic Programming
1995
Cited by 115 (2 self)
this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can
Las Vegas algorithms for linear and integer programming when the dimension is small
J. ACM, 1995
Cited by 105 (2 self)
Abstract. This paper gives an algorithm for solving linear programming problems. For a problem with n constraints and d variables, the algorithm requires an expected O(d²n) + (log n)O(d)^{d/2+O(1)} + O(d⁴√n log n) arithmetic operations, as n → ∞. The constant factors do not depend on d. Also, an algorithm is given for integer linear programming. Let p bound the number of bits required to specify the rational numbers defining an input constraint or the objective function vector. Let n and d be as before. Then, the algorithm requires expected O(2^d dn + 8^d √n ln n) + d^{O(d)} p ln n operations on numbers with O(dp) bits, as n → ∞, where the constant factors do not depend on d or p. The expectations are with respect to the random choices made by the algorithms, and the bounds hold for any given input. The technique can be extended to other convex programming problems. For example, an algorithm for finding the smallest sphere enclosing a set of n points in E^d has the same time bound.
Penalized regression: the bridge versus the lasso
Journal of Computational and Graphical Statistics, 1998
Cited by 105 (2 self)
Bridge regression, a special family of penalized regressions with penalty function ∑_j |β_j|^γ for γ ≥ 1, is considered. A general approach to solving for the bridge estimator is developed. A new algorithm for the lasso (γ = 1) is obtained by studying the structure of the bridge estimators. The shrinkage parameter γ and the tuning parameter λ are selected via generalized cross-validation (GCV). Comparison between the bridge model (γ ≥ 1) and several other shrinkage models, namely ordinary least squares regression (λ = 0), the lasso (γ = 1), and ridge regression (γ = 2), is made through a simulation study. It is shown that bridge regression performs well compared to the lasso and ridge regression. These methods are demonstrated through an analysis of prostate cancer data. Some computational advantages and limitations are discussed.
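The penalty family and its special cases can be written down directly. This is a small numeric sketch with illustrative data; only the ridge member (γ = 2), which has a closed form, is actually solved:

```python
import numpy as np

def bridge_objective(beta, X, y, lam, gamma):
    """Residual sum of squares plus the bridge penalty lam * sum |beta_j|^gamma.
    gamma = 1 gives the lasso, gamma = 2 ridge, lam = 0 ordinary least squares."""
    r = y - X @ beta
    return r @ r + lam * np.sum(np.abs(beta) ** gamma)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([2.0, 0.0, -1.0]) + rng.normal(0.0, 0.1, size=40)

# The gamma = 2 member minimizes the objective in closed form:
# beta = (X'X + lam I)^{-1} X'y.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(beta_ridge)   # shrunk toward zero relative to [2, 0, -1]
```

For γ < 2 no closed form exists, which is why the paper develops a general algorithm for the whole family and a specialized one for the non-differentiable lasso case γ = 1.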
A trust region method based on interior point techniques for nonlinear programming
Mathematical Programming, 1996
Cited by 105 (18 self)
Jorge Nocedal. An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second-order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, SQP iteration, barrier method, trust region method.
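The "sequence of barrier problems" idea alone can be shown on a one-dimensional toy. This illustrates only the log-barrier homotopy (Newton on f(x) − μ log x with μ → 0), not the paper's trust-region SQP machinery:

```python
# Toy: minimize f(x) = (x - 2)^2 subject to x >= 0 via a sequence of
# unconstrained barrier problems  f(x) - mu * log(x),  mu -> 0.
def newton_barrier(mu, x, iters=50):
    """Newton's method on the barrier objective for a fixed mu."""
    for _ in range(iters):
        g = 2.0 * (x - 2.0) - mu / x       # gradient of barrier objective
        h = 2.0 + mu / x ** 2              # Hessian (always positive here)
        x = max(x - g / h, 1e-12)          # keep the iterate strictly feasible
    return x

x = 1.0                                    # strictly feasible start
for mu in [1.0, 0.1, 0.01, 0.001]:         # warm-start each barrier problem
    x = newton_barrier(mu, x)
print(x)   # approaches 2, the constrained minimizer
```

Each barrier minimizer lies at 1 + √(1 + μ/2), so the iterates trace the central path toward the solution as μ shrinks; the paper's contribution is making this robust for general nonlinear constraints via trust regions.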