Results 1 - 10 of 24,649
Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods
Advances in Large Margin Classifiers, 1999. Cited by 1051 (0 self).
"... The output of a classifier should be a calibrated posterior probability to enable post-processing. Standard SVMs do not provide such probabilities. One method to create probabilities is to directly train a kernel classifier with a logit link function and a regularized maximum likelihood score. However ..."
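For orientation, a minimal sketch of the sigmoid fit this abstract refers to: map SVM decision values f to probabilities via P(y=1|f) = 1/(1+exp(A*f+B)) fit by maximum likelihood. This is our own numpy/scipy rendering, not Platt's reference code; the paper additionally smooths the 0/1 targets, which we omit.

import numpy as np
from scipy.optimize import minimize

def fit_platt(scores, labels):
    """scores: SVM decision values; labels: 0/1 targets. Returns (A, B)."""
    def nll(params):
        A, B = params
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        eps = 1e-12  # guard against log(0)
        return -np.sum(labels * np.log(p + eps)
                       + (1 - labels) * np.log(1 - p + eps))
    # A < 0 initially so larger scores mean higher P(y=1)
    return minimize(nll, x0=[-1.0, 0.0]).x

def platt_prob(scores, A, B):
    return 1.0 / (1.0 + np.exp(A * scores + B))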
Regularization and variable selection via the Elastic Net
J. R. Stat. Soc. Ser. B, 2005. Cited by 973 (11 self).
"... We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where ..."
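A quick illustration of the method and its grouping effect, using scikit-learn's ElasticNet on synthetic data of our own choosing (alpha scales the total penalty, l1_ratio trades off the L1 and L2 parts):

import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
# Two highly correlated predictors: the elastic net tends to keep both,
# where the lasso would arbitrarily pick one.
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=100)
y = 3 * X[:, 0] + 3 * X[:, 1] + rng.normal(size=100)

# sklearn objective: 1/(2n)||y-Xw||^2 + alpha*l1_ratio*||w||_1
#                    + 0.5*alpha*(1-l1_ratio)*||w||_2^2
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print(model.coef_[:3])  # both correlated features get similar nonzero weights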
Regularization paths for generalized linear models via coordinate descent
2009. Cited by 724 (15 self).
"... We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems, while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods."
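A sketch of the core ingredient, cyclical coordinate descent for the lasso with the closed-form soft-threshold update per coefficient. This is a hypothetical minimal implementation, not the glmnet code; a regularization path is obtained by re-running it over a decreasing sequence of lam values with warm starts.

import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Minimizes (1/2n)||y - X b||^2 + lam * ||b||_1 by cycling over coords."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual without j
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta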
Manifold regularization: A geometric framework for learning from labeled and unlabeled examples
Journal of Machine Learning Research, 2006. Cited by 578 (16 self).
"... We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods including Support Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize properties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theoretical basis for the algorithms. As a result (in contrast to purely ..."
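A minimal sketch of one instance of this framework, Laplacian Regularized Least Squares: fit f = K a over labeled plus unlabeled points with an ambient RKHS penalty and an extra graph-Laplacian penalty. The Gaussian kernel, the graph construction, and all parameter values below are our own illustrative choices.

import numpy as np

def lap_rls(X, y_labeled, n_labeled, gamma_A=1e-2, gamma_I=1e-2, sigma=1.0):
    """X: all points, first n_labeled rows labeled. Returns predictions at X."""
    n = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / (2 * sigma**2))        # kernel matrix
    W = np.exp(-D2 / (2 * sigma**2))        # graph weights (same kernel here)
    L = np.diag(W.sum(1)) - W               # unnormalized graph Laplacian
    J = np.zeros((n, n))
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    y = np.zeros(n)
    y[:n_labeled] = y_labeled
    # closed-form coefficients: (J K + gA*l*I + gI*(l/n^2)*L K)^-1 y
    a = np.linalg.solve(J @ K + gamma_A * n_labeled * np.eye(n)
                        + gamma_I * (n_labeled / n**2) * (L @ K), y)
    return K @ a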
The Regularization Method for an Obstacle Problem
1994. Cited by 7 (3 self).
"... We give a relatively complete analysis for the regularization method, which is usually used in solving non-differentiable minimization problems. The model problem considered in the paper is an obstacle problem. In addition to the usual convergence result and a-priori error estimates, we provide a- ..."
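A loose sketch of the penalty/regularization idea on a 1D obstacle problem: minimize (1/2)∫|u'|^2 - ∫fu subject to u >= psi, with the constraint replaced by a smooth penalty (1/(2*eps))*max(psi-u, 0)^2 and the penalized system solved by an active-set iteration. The discretization, load, and obstacle are our own toy choices; the paper's analysis and error estimates are not reproduced.

import numpy as np

n, eps = 99, 1e-6
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = -10.0 * np.ones(n)                    # downward load, u(0)=u(1)=0
psi = -0.05 - 0.4 * (x - 0.5) ** 2        # obstacle below the membrane
# tridiagonal discrete -Laplacian
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
u = np.zeros(n)
for _ in range(50):                       # active-set iteration on -u'' - (1/eps)max(psi-u,0) = f
    act = (psi - u > 0).astype(float)     # where the penalty is active
    M = A + np.diag(act / eps)
    u_new = np.linalg.solve(M, f + (act / eps) * psi)
    if np.max(np.abs(u_new - u)) < 1e-10:
        break
    u = u_new
print(np.min(u - psi))                    # violation is O(eps), as expected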
Arnoldi-Tikhonov regularization methods
2008. Cited by 11 (8 self).
"... Tikhonov regularization for large-scale linear ill-posed problems is commonly implemented by determining a partial Lanczos bidiagonalization of the matrix of the given system of equations. This paper explores the possibility of instead computing a partial Arnoldi decomposition of the given matrix. Computed examples illustrate that this approach may require fewer matrix-vector product evaluations and, therefore, less arithmetic work. Moreover, the proposed range-restricted Arnoldi-Tikhonov regularization method does not require the adjoint matrix and, hence, is convenient to use for problems ..."
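A minimal sketch of the idea: build an Arnoldi decomposition A V_k = V_{k+1} H_k starting from b (no transpose of A needed), then solve the small projected Tikhonov problem min ||H_k y - ||b|| e_1||^2 + lam ||y||^2. Breakdown handling is omitted, and the paper's range-restricted variant (which starts the Krylov space from A b) is not shown.

import numpy as np

def arnoldi_tikhonov(A, b, k=10, lam=1e-2):
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):                    # Arnoldi with modified Gram-Schmidt
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = np.linalg.norm(b)
    # Tikhonov on the small (k+1) x k projected problem, via normal equations
    y = np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ e1)
    return V[:, :k] @ y                   # regularized approximate solution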
Regularization Methods in Dynamic MRI
1999. Cited by 1 (1 self).
"... In this work we consider an inverse ill-posed problem coming from the area of dynamic Magnetic Resonance Imaging (MRI), where high resolution images must be reconstructed from incomplete data sets collected in the Fourier domain. The RIGR (Reduced-encoding Imaging by Generalized-series Reconstruction) method used leads to ill-conditioned linear systems with a noisy right-hand side. We analyze the behaviour of three regularization methods, the Truncated Singular Value Decomposition, the Tikhonov regularization method and the Conjugate Gradients, together with some methods for the choice ..."
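For reference, two of the compared methods written as spectral filters on an ill-conditioned system A x = b, in a generic numpy form of our own (the CG variant, which regularizes by early stopping, is not shown):

import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

def tikhonov_solve(A, b, lam):
    """Tikhonov: minimizes ||Ax-b||^2 + lam^2 ||x||^2 via filter factors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + lam**2)            # damps small singular values
    return Vt.T @ (filt * (U.T @ b))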
SEQUENTIAL REGULARIZATION METHODS FOR NONLINEAR
"... Sequential regularization methods relate to a combination of stabilization methods and the usual penalty method for differential equations with algebraic equality constraints. This paper extends an earlier work [SIAM J. Numer. Anal., 33 (1996), pp. 1921-1940] to nonlinear problems and to differential ..."
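A very loose sketch of the spirit of the approach on a toy constrained ODE x' = f(x) with g(x) = 0: instead of one huge penalty, integrate a moderately penalized ODE several times, carrying a multiplier estimate from sweep to sweep. The circle constraint, the multiplier update, and all parameters below are our own illustrative choices, not the paper's scheme.

import numpy as np

def g(x):  return x[0]**2 + x[1]**2 - 1.0          # algebraic constraint
def G(x):  return np.array([2 * x[0], 2 * x[1]])   # its gradient
def f(x):  return np.array([-x[1], x[0]])          # tangential drift

def srm(x0, T=10.0, h=1e-3, eps=0.1, sweeps=3):
    lam_path = None
    for _ in range(sweeps):                        # sequential sweeps
        x, traj, lams = np.array(x0, float), [], []
        for i in range(int(T / h)):
            lam_prev = 0.0 if lam_path is None else lam_path[i]
            lam = lam_prev - g(x) / eps            # multiplier update
            x = x + h * (f(x) + lam * G(x))        # forward Euler step
            traj.append(x.copy())
            lams.append(lam)
        lam_path = lams
    return np.array(traj)

traj = srm([1.1, 0.0])                             # starts off the circle
print(abs(g(traj[-1])))                            # drifts toward the constraint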
Regularization methods for semidefinite programming
SIAM Journal on Optimization, 2009. Cited by 44 (7 self).
"... We introduce a new class of algorithms for solving linear semidefinite programming (SDP) problems. Our approach is based on classical tools from convex optimization such as quadratic regularization and augmented Lagrangian techniques. We study the theoretical properties and we show that practical im ..."
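A sketch of the core operation such quadratic-regularization / augmented Lagrangian SDP methods apply at every iteration: projecting a symmetric matrix onto the positive semidefinite cone via an eigendecomposition. The method itself then alternates this projection with multiplier updates, which we do not reproduce here.

import numpy as np

def project_psd(M):
    """Nearest (Frobenius-norm) positive semidefinite matrix to symmetric M."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.maximum(w, 0.0)) @ Q.T

M = np.array([[1.0, 2.0], [2.0, -3.0]])
X = project_psd(M)
print(np.linalg.eigvalsh(X))   # all eigenvalues now >= 0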
A Bayesian Framework for the Analysis of Microarray Expression Data: Regularized t-Test and Statistical Inferences of Gene Changes
Bioinformatics, 2001. Cited by 491 (6 self).
"... Motivation: DNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory ..."
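A sketch of a regularized t-statistic in the spirit of this paper: with few replicates per gene, shrink each gene's variance toward a background estimate sigma0_sq (e.g., pooled from genes with similar mean expression) weighted by nu0 pseudo-observations. The weighting and the degrees-of-freedom convention below follow one common Baldi-Long-style form; exact conventions vary.

import numpy as np
from scipy import stats

def regularized_t(a, b, sigma0_sq, nu0=10):
    """Two-sample regularized t-test; a, b are per-gene replicate arrays."""
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1), b.var(ddof=1)
    # posterior-mean variances: background and observed variance, mixed
    va_reg = (nu0 * sigma0_sq + (na - 1) * va) / (nu0 + na - 2)
    vb_reg = (nu0 * sigma0_sq + (nb - 1) * vb) / (nu0 + nb - 2)
    t = (a.mean() - b.mean()) / np.sqrt(va_reg / na + vb_reg / nb)
    df = na + nb - 2 + 2 * nu0     # inflated df; one common convention
    return t, 2 * stats.t.sf(abs(t), df)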