Results

### On reoptimizing multi-class classifiers

DOI 10.1007/s10994-008-5056-8

, 2008

Abstract Significant changes in the instance distribution or associated cost function of a learning problem require one to reoptimize a previously-learned classifier to work under new conditions. We study the problem of reoptimizing a multi-class classifier based on its ROC hypersurface and a matrix describing the costs of each type of prediction error. For a binary classifier, it is straightforward to find an optimal operating point based on its ROC curve and the relative cost of true positive to false positive error. However, the corresponding multiclass problem (finding an optimal operating point based on a ROC hypersurface and cost matrix) is more challenging and until now, it was unknown whether an efficient algorithm existed that found an optimal solution. We answer this question by first proving that the decision version of this problem is NP-complete. As a complementary positive result, we give an algorithm that finds an optimal solution in polynomial time if the number of classes n is a constant. We also present several heuristics for this problem, including linear, nonlinear, and quadratic programming formulations, genetic algorithms, and a customized algorithm. Empirical results suggest that under both uniform and non-uniform cost models, simple greedy methods outperform more sophisticated methods. Preliminary results appeared in Deng et al. (2006). Editor: Tom Fawcett.
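The binary case the abstract contrasts with is simple enough to sketch. As a rough illustration (not taken from the paper; the ROC points, costs, and class prior below are made-up test data), the optimal operating point on a binary ROC curve just minimizes expected cost over the curve's points:

```python
import numpy as np

def optimal_operating_point(fpr, tpr, c_fp, c_fn, p_pos):
    """Pick the ROC operating point minimizing expected cost:
    cost_i = c_fp * fpr_i * (1 - p_pos) + c_fn * (1 - tpr_i) * p_pos."""
    fpr, tpr = np.asarray(fpr, float), np.asarray(tpr, float)
    cost = c_fp * fpr * (1.0 - p_pos) + c_fn * (1.0 - tpr) * p_pos
    i = int(np.argmin(cost))
    return i, float(cost[i])

# Toy ROC curve, equal priors, unit costs for both error types
i, c = optimal_operating_point([0, .1, .4, 1], [0, .7, .9, 1],
                               c_fp=1.0, c_fn=1.0, p_pos=0.5)
```

For a multi-class problem the analogous search is over an ROC hypersurface against a full cost matrix, which is exactly the problem the paper proves NP-complete in general.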

### Convex Optimization for the Design of Learning Machines

Abstract. This paper reviews the recent surge of interest in convex optimization in the context of pattern recognition and machine learning. The main thesis of this paper is that the design of task-specific learning machines is aided substantially by using a convex optimization solver as a back-end to implement the task, liberating the designer from the concern of designing and analyzing an ad hoc algorithm. The aim of this paper is twofold: (i) it phrases the contributions of this ESANN 2007 special session in a broader context, and (ii) it provides a road-map to published results in this context.
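One concrete instance of the solver-as-back-end thesis (our illustration, not one of the paper's examples): least-absolute-deviations regression can be specified as a linear program and handed to a stock LP solver, with no custom algorithm written. The data below is a made-up example with one large outlier:

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(A, b):
    """Least absolute deviations via an LP back-end:
    minimize sum_i t_i  subject to  -t <= A x - b <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])  # objective: sum of t
    I = np.eye(m)
    #  A x - b <= t   ->  [ A, -I] [x; t] <=  b
    # -(A x - b) <= t ->  [-A, -I] [x; t] <= -b
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * m)
    return res.x[:n]

# Five points on the line y = t, except one gross outlier at t = 4
A = np.column_stack([np.ones(5), np.arange(5.0)])
b = np.array([0.0, 1.0, 2.0, 3.0, 100.0])
x = lad_fit(A, b)
```

Unlike least squares, the L1 fit ignores the outlier and recovers intercept 0, slope 1; the "design" consisted entirely of writing down the LP.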

### cvx Users’ Guide for cvx version 1.22

, 2012

1.1 What is cvx? (p. 4)
1.2 What is disciplined convex programming? (p. 5)
1.3 About this version (p. 5)

### Robust counterparts of . . .

Of interest here are linear data fitting problems with uncertain data which lie in a given uncertainty set. A robust counterpart of such a problem may be interpreted as the problem of finding a solution which is best over all possible perturbations of the data which lie in the set. In particular, robust counterparts of total least squares problems have been studied and good algorithms are available. The purpose of this paper is to consider robust counterparts of the problems considered as errors-in-variables problems, when it is appropriate to work directly with the uncertain variable values. It is shown how the original problems can be replaced by convex optimization problems in fewer variables for which standard software may be applied.
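For the spectral-norm-bounded uncertainty sets typical of this literature, the worst case has a closed form: max over ||ΔA||₂ ≤ ρ of ||(A+ΔA)x − b||₂ equals ||Ax−b||₂ + ρ||x||₂, attained by a rank-one perturbation aligned with the residual. A small numerical check of that identity (our illustration; A, b, x, and ρ are arbitrary test data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
x = rng.standard_normal(3)
rho = 0.5

# Worst-case perturbation: dA = rho * u v^T with u = r/||r||, v = x/||x||
r = A @ x - b
u = r / np.linalg.norm(r)
v = x / np.linalg.norm(x)
dA = rho * np.outer(u, v)          # spectral norm exactly rho

worst = np.linalg.norm((A + dA) @ x - b)
formula = np.linalg.norm(r) + rho * np.linalg.norm(x)
```

The identity is what lets the robust counterpart be rewritten as the convex problem min ||Ax−b||₂ + ρ||x||₂, i.e. a regularized least-squares problem in the original variables.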

### Real-Time Convex Optimization . . . -- Recent advances that make it easier to design and implement algorithms

, 2010

Convex optimization has been used in signal processing for a long time to choose coefficients for use in fast (linear) algorithms, such as in filter or array design; more recently, it has been used to carry out (nonlinear) processing on the signal itself. Examples of the latter case include total variation denoising, compressed sensing, fault detection, and image classification. In both scenarios, the optimization is carried out on time scales of seconds or minutes and without strict time constraints. Convex optimization has traditionally been considered computationally expensive, so its use has been limited to applications where plenty of time is available. Such restrictions are no longer justified. The combination of dramatically increased computing power, modern algorithms, and new coding approaches has delivered an enormous speed increase, which makes it possible to solve modest-sized convex optimization problems on microsecond or millisecond time scales and with strict deadlines. This enables real-time convex optimization in signal processing.

### An Algorithm for Unconstrained Quadratically Penalized Convex Optimization

A descent algorithm, “Quasi-Quadratic Minimization with Memory” (QQMM), is proposed for unconstrained minimization of the sum, F, of a non-negative convex function, V, and a quadratic form. Such problems come up in regularized estimation in machine learning and statistics. In addition to values of F, QQMM requires the (sub)gradient of V. Two features of QQMM help keep low the number of evaluations of the objective function it needs. First, QQMM provides good control over stopping the iterative search. This feature makes QQMM well adapted to statistical problems because in such problems the objective function is based on random data and therefore stopping early is sensible. Secondly, QQMM uses a complex method for determining trial minimizers of F. After a description of the problem and algorithm a simulation study comparing QQMM to the popular BFGS optimization algorithm is described. The simulation study and other experiments suggest that QQMM is generally substantially faster than BFGS in the problem domain for which it was designed. A QQMM-BFGS hybrid is also generally substantially faster than BFGS but does better than QQMM when QQMM is very slow.
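The problem class QQMM targets, F = V + quadratic with V non-negative and convex, and the BFGS baseline it is compared against, can both be reproduced with standard tools. A generic sketch (not the paper's experimental setup; the logistic loss and ridge term below are our stand-ins for V and the quadratic):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))       # feature matrix
y = np.sign(rng.standard_normal(50))   # +/-1 labels
lam = 0.1

def F(x):
    """F(x) = V(x) + quadratic: logistic loss (non-negative, convex)
    plus a ridge penalty lam * ||x||^2."""
    z = -y * (A @ x)
    return np.logaddexp(0.0, z).sum() + lam * x @ x

res = minimize(F, np.zeros(5), method="BFGS")
```

In this regularized-estimation setting the paper's point about stopping applies: since F is built from random data, iterating to machine precision buys little, which is one reason a method with cheap, well-controlled early stopping can beat BFGS on function-evaluation counts.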

### Optimisation of contribution of candidate parents to maximise genetic gain and restricting inbreeding using semidefinite programming

, 2006

(Open Access publication)

### Composite Self-Concordant Minimization

We propose a variable metric framework for minimizing the sum of a self-concordant function and a possibly non-smooth convex function endowed with a computable proximal operator. We theoretically establish the convergence of our framework without relying on the usual Lipschitz gradient assumption on the smooth part. An important highlight of our work is a new set of analytic step-size selection and correction procedures based on the structure of the problem. We describe concrete algorithmic instances of our framework for several interesting large-scale applications and demonstrate them numerically on both synthetic and real data.
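The basic scheme underlying such composite frameworks is proximal gradient descent: a gradient step on the smooth part followed by the proximal operator of the non-smooth part. A minimal fixed-step sketch for the lasso instance (this is plain ISTA with a Lipschitz step size, not the paper's variable-metric, self-concordant machinery; the data in the usage line is a toy example):

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Proximal gradient for  min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        z = x - g / L                      # forward (gradient) step
        # backward step: prox of (lam/L)*||.||_1 is soft-thresholding
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

x = ista(np.eye(3), np.array([3.0, 0.05, -2.0]), lam=0.1)
```

The paper's contribution is precisely to replace the global Lipschitz step 1/L above, which need not even exist for self-concordant smooth parts, with analytic step-size selection and correction rules.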

### Automatic Code Generation for Real-Time Convex Optimization

This chapter, from a 2009 volume, concerns the use of convex optimization in real-time embedded systems, in areas such as signal processing, automatic control, real-time estimation, real-time resource allocation and decision making, and fast automated trading. By ‘embedded’ we mean that the optimization algorithm is part of a larger, fully automated system that executes automatically with newly arriving data or changing conditions, and without any human intervention or action. By ‘real-time’ we mean that the optimization algorithm executes much faster than a typical or generic method with a human in the loop, in times measured in milliseconds or microseconds for small and medium-size problems, and (a few) seconds for larger problems. In real-time embedded convex optimization the same optimization problem is solved many times, with different data, often with a hard real-time deadline. In this chapter we propose an automatic code generation system for real-time embedded convex optimization. Such a system scans a description of the problem family, and performs much of the analysis ...
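The "analyze the problem family once, solve instances fast" structure that code generation exploits can be imitated by hand for the simplest case, an equality-constrained QP: factor the KKT system offline, then each new data instance costs only a back-substitution. A toy sketch of the idea (our illustration with made-up problem data, not the chapter's generated code):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Problem family: minimize 0.5*x'Px + q'x  s.t.  Cx = d,
# where P and C are fixed and (q, d) change per instance.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
C = np.array([[1.0, 1.0]])
K = np.block([[P, C.T],
              [C, np.zeros((1, 1))]])   # KKT matrix, shared by all instances
factor = lu_factor(K)                   # "offline" phase: factor once

def solve_instance(q, d):
    """'Online' phase: one back-substitution per new (q, d)."""
    sol = lu_solve(factor, np.concatenate([-q, d]))
    return sol[:2]                      # primal variables (dual multiplier dropped)
```

Real generated solvers handle inequality constraints via interior-point iterations, but the division of labor is the same: the expensive structural analysis happens once, before any data arrives.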