Results 1–10 of 33
Graph implementations for nonsmooth convex programs
 Recent Advances in Learning and Control, Lecture Notes in Control and Information Sciences
, 2008
Abstract

Cited by 248 (7 self)
Summary. We describe graph implementations, a generic method for representing a convex function via its epigraph, described in a disciplined convex programming framework. This simple and natural idea allows a very wide variety of smooth and nonsmooth convex programs to be easily specified and efficiently solved, using interior-point methods for smooth or cone convex programs. Key words: Convex optimization, nonsmooth optimization, disciplined convex programming, optimization modeling languages, semidefinite programming
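As a concrete illustration of the epigraph idea (a toy sketch, not the graph-implementation machinery itself): the nonsmooth function f(x) = |x - c| can be encoded as the smallest t satisfying t >= x - c and t >= c - x, so minimizing f reduces to minimizing t over the epigraph. The function name and grid below are illustrative.

```python
# Epigraph encoding of f(x) = |x - c|: minimize t subject to
# t >= x - c and t >= c - x. For each x the tightest feasible t
# is max(x - c, c - x); a brute-force grid search recovers the minimum.

def min_abs_via_epigraph(c, grid):
    """Minimize t over the epigraph of |x - c|, scanning x on a grid."""
    best = None
    for x in grid:
        t = max(x - c, c - x)  # smallest t feasible for this x
        if best is None or t < best[1]:
            best = (x, t)
    return best

x_star, t_star = min_abs_via_epigraph(3.0, [i * 0.1 for i in range(0, 61)])
print(x_star, t_star)  # minimizer near x = 3, objective near 0
```

The epigraph form is linear in (x, t), which is exactly what lets nonsmooth pieces be handed to cone solvers.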
An approximate dynamic programming approach to network revenue management with customer choice
 Transportation Science, 43:381–394
, 2009
Abstract

Cited by 30 (1 self)
We consider a network revenue management problem where customers choose among open fare products according to some prespecified choice model. Starting with a Markov decision process (MDP) formulation, we approximate the value function with an affine function of the state vector. We show that the resulting problem provides a tighter bound for the MDP value than the choice-based linear program proposed by Gallego et al. (2004) and Liu and van Ryzin (2007). We develop a column generation algorithm to solve the problem for a multinomial logit choice model with disjoint consideration sets. We also derive a bound as a byproduct of a decomposition heuristic. Our numerical study shows the policies from our solution approach can significantly outperform heuristics from the choice-based linear program. While a substantial amount of research has been done on methods for solving the network revenue management problem, much less work has been done on the version where customers choose among available network products. Usually, when airlines open up a menu of fares for a given set of flights, customers will make substitutions among those available, or purchase nothing. Although incorporating customer choice is important in practice, methodologically it is …
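A minimal sketch of the multinomial logit (MNL) choice probabilities that the column-generation approach builds on; the fare names, utilities, and outside-option weight v0 below are illustrative assumptions, not values from the paper.

```python
# MNL choice model: given the set of open fares, the probability of
# buying fare j is exp(u_j) / (v0 + sum_k exp(u_k)), where v0 weights
# the no-purchase (outside) option.
import math

def mnl_probs(open_fares, utilities, v0=1.0):
    """Purchase probabilities over the open fares, plus no-purchase."""
    weights = {j: math.exp(utilities[j]) for j in open_fares}
    denom = v0 + sum(weights.values())
    probs = {j: w / denom for j, w in weights.items()}
    probs["no_purchase"] = v0 / denom
    return probs

p = mnl_probs(["Y", "M"], {"Y": 1.0, "M": 0.5})
print(p)  # probabilities sum to 1
```

Closing a fare shifts its demand partly to the remaining fares and partly to no-purchase, which is the substitution effect the abstract refers to.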
Reformulations in Mathematical Programming: A Computational Approach
Abstract

Cited by 24 (19 self)
Summary. Mathematical programming is a language for describing optimization problems; it is based on parameters, decision variables, and objective function(s) subject to various types of constraints. The present treatment is concerned with the case when the objective(s) and constraints are algebraic mathematical expressions of the parameters and decision variables, and therefore excludes optimization of black-box functions. A reformulation of a mathematical program P is a mathematical program Q obtained from P via symbolic transformations applied to the sets of variables, objectives, and constraints. We present a survey of existing reformulations interpreted along these lines, some example applications, and describe the implementation of a software framework for reformulation and optimization.
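One standard symbolic reformulation of the kind this survey covers, sketched in plain Python: linearizing a product of binary variables (the helper name feasible_z is ours, chosen for illustration).

```python
# Classic exact linearization: z = x * y for binary x, y is replaced by
# z <= x, z <= y, z >= x + y - 1, z >= 0. Brute-force check that the
# linearized constraints admit exactly z = x * y on every binary input.
from itertools import product

def feasible_z(x, y):
    """Binary z values satisfying the linearized constraints."""
    return [z for z in (0, 1) if z <= x and z <= y and z >= x + y - 1]

for x, y in product((0, 1), repeat=2):
    assert feasible_z(x, y) == [x * y]
print("linearization is exact on binaries")
```

This is exactly the "symbolic transformation applied to the sets of variables and constraints" the summary describes: the nonlinear product disappears, at the cost of one extra variable and three linear constraints.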
REFORMULATIONS IN MATHEMATICAL PROGRAMMING: DEFINITIONS AND SYSTEMATICS
, 2008
Abstract

Cited by 23 (17 self)
A reformulation of a mathematical program is a formulation which shares some properties with, but is in some sense better than, the original program. Reformulations are important with respect to the choice and efficiency of solution algorithms; furthermore, it is desirable that reformulations can be carried out automatically. Reformulation techniques are very common in mathematical programming, but interestingly they have never been studied under a common framework. This paper attempts to take some steps in this direction. We define a framework for storing and manipulating mathematical programming formulations and give several fundamental definitions categorizing reformulations into essentially four types (opt-reformulations, narrowings, relaxations, and approximations). We establish some theoretical results and give reformulation examples for each type.
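To make the "relaxation" type concrete, a toy sketch with invented data (not from the paper): dropping integrality in a 0/1 knapsack yields a tractable problem whose optimum bounds the true one.

```python
# Relaxation example: the continuous (fractional) knapsack upper-bounds
# the 0/1 knapsack. Values, weights, and capacity are illustrative.
from itertools import product

values, weights, cap = [60, 100, 120], [10, 20, 30], 50

# exact 0/1 optimum by enumeration
int_opt = max(
    sum(v * x for v, x in zip(values, xs))
    for xs in product((0, 1), repeat=3)
    if sum(w * x for w, x in zip(weights, xs)) <= cap
)

# fractional relaxation: greedy by value density is optimal
order = sorted(range(3), key=lambda i: values[i] / weights[i], reverse=True)
room, relax_opt = cap, 0.0
for i in order:
    take = min(1.0, room / weights[i])
    relax_opt += take * values[i]
    room -= take * weights[i]

print(int_opt, relax_opt)  # relaxation value >= integer optimum
```

A narrowing or approximation would be sketched analogously: the point is only that each type trades exactness for a property (bound, size, solvability) that helps the solver.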
Improved total variation-type regularization using higher-order edge detectors
 SIAM Journal on Imaging Sciences
Abstract

Cited by 10 (1 self)
Abstract. We present a novel deconvolution approach to accurately restore piecewise smooth signals from blurred data. The first stage uses Higher Order Total Variation restorations to obtain an estimate of the location of jump discontinuities from the blurred data. In the second stage the estimated jump locations are used to determine the local orders of a Variable Order Total Variation restoration. The method replaces the first-order derivative approximation used in standard Total Variation by a variable-order derivative operator. Smooth segments as well as jump discontinuities are restored, while the staircase effect typical of standard first-order Total Variation regularization is avoided. Compared to first-order Total Variation, signal restorations are more accurate representations of the true signal, as measured in a relative ℓ2 norm. The method can also be used to obtain an accurate estimate of the locations and sizes of the true jump discontinuities. The approach is independent of the algorithm used for the standard Total Variation problem and is, consequently, readily incorporated in existing Total Variation restoration codes.
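The intuition behind variable-order regularization can be sketched with discrete first- and second-order total variation; the signals below are illustrative, not the paper's test cases.

```python
# Discrete TV of order 1 (sum of |first differences|) vs order 2
# (sum of |second differences|). A linear ramp is heavily penalized by
# TV1 but not by TV2 — penalizing TV1 on smooth segments is what
# produces the staircase effect that higher-order terms avoid.

def tv1(x):
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv2(x):
    return sum(abs(x[i + 2] - 2 * x[i + 1] + x[i]) for i in range(len(x) - 2))

ramp = [0.1 * i for i in range(10)]   # smooth (linear) segment
step = [0.0] * 5 + [1.0] * 5          # jump discontinuity

print(tv1(ramp), tv2(ramp))  # ramp: TV1 large, TV2 ~ 0
print(tv1(step), tv2(step))  # jump: penalized by both orders
```

This is why the method switches the local derivative order: order 1 near detected jumps, higher order on the smooth segments in between.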
Composite Self-Concordant Minimization
Abstract

Cited by 6 (5 self)
We propose a variable metric framework for minimizing the sum of a self-concordant function and a possibly nonsmooth convex function endowed with a computable proximal operator. We theoretically establish the convergence of our framework without relying on the usual Lipschitz gradient assumption on the smooth part. An important highlight of our work is a new set of analytic step-size selection and correction procedures based on the structure of the problem. We describe concrete algorithmic instances of our framework for several interesting large-scale applications and demonstrate them numerically on both synthetic and real data.
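A minimal reminder of what "a computable proximal operator" means, using the classic soft-thresholding prox for the ℓ1 norm; the step size, data, and the plain proximal-gradient iteration below are illustrative (the paper's method uses a variable metric and self-concordant analysis, not this fixed-step scheme).

```python
# prox of t*|.| is soft-thresholding; one proximal-gradient step on
# f(x) = 0.5*(x - b)^2 + lam*|x| combines a gradient step on the
# smooth part with the prox of the nonsmooth part.

def soft_threshold(v, t):
    """prox_{t*|.|}(v) = sign(v) * max(|v| - t, 0)"""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def prox_grad_step(x, b, lam, eta):
    grad = x - b  # gradient of the smooth part 0.5*(x - b)^2
    return soft_threshold(x - eta * grad, eta * lam)

x = 0.0
for _ in range(200):
    x = prox_grad_step(x, b=2.0, lam=0.5, eta=0.5)
print(x)  # converges to the shrunken solution b - lam = 1.5
```

The prox call is the only place the nonsmooth term enters, which is what makes such composite schemes practical at scale.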
On Reoptimizing Multi-Class Classifiers
, 2006
Abstract

Cited by 6 (0 self)
Significant changes in the instance distribution or associated cost function of a learning problem require one to reoptimize a previously learned classifier to work under new conditions. We study the problem of reoptimizing a multi-class classifier based on its ROC hypersurface and a matrix describing the costs of each type of prediction error. For a binary classifier, it is straightforward to find an optimal operating point based on its ROC curve and the relative cost of true positive to false positive error. However, the corresponding multi-class problem (finding an optimal operating point based on an ROC hypersurface and cost matrix) is more challenging, and until now it was unknown whether an efficient algorithm existed that found an optimal solution. We answer this question by first proving that the decision version of this problem is NP-complete. As a complementary positive result, we give an algorithm that finds an optimal solution in polynomial time if the number of classes n is a constant. We also present several heuristics for this problem, including linear, nonlinear, and quadratic programming formulations, genetic algorithms, and a customized algorithm. Empirical results suggest that under uniform costs several methods exhibit significant improvements, while genetic algorithms and margin maximization quadratic programs fare the best under nonuniform cost models.
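For the binary case mentioned above, picking an operating point from an ROC curve and costs is indeed straightforward; the ROC points, class prior, and costs below are invented for illustration.

```python
# Binary operating-point selection: among the achievable (FPR, TPR)
# points on the ROC curve, choose the one minimizing expected cost
# given the positive-class prior and the two error costs.

def best_operating_point(roc_points, p_pos, cost_fp, cost_fn):
    """roc_points: list of (fpr, tpr) pairs; returns the cheapest one."""
    def expected_cost(fpr, tpr):
        return (1 - p_pos) * fpr * cost_fp + p_pos * (1 - tpr) * cost_fn
    return min(roc_points, key=lambda pt: expected_cost(*pt))

roc = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.9), (0.6, 0.97), (1.0, 1.0)]
print(best_operating_point(roc, p_pos=0.5, cost_fp=1.0, cost_fn=1.0))
```

The multi-class analogue replaces the curve with a hypersurface and the two costs with a full cost matrix, which is where the NP-completeness result bites.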
D4L: Decentralized Dynamic Discriminative Dictionary Learning
, 2015
Abstract

Cited by 6 (1 self)
We consider discriminative dictionary learning in a distributed online setting, where a network of agents aims to learn a common set of dictionary elements of a feature space and model parameters while sequentially receiving observations. We formulate this problem as a distributed stochastic program with a nonconvex objective and present a block variant of the Arrow–Hurwicz saddle point algorithm to solve it. Lagrange multipliers penalize the discrepancy between neighboring nodes' models, so that only neighboring nodes exchange model information. We show that decisions made with this saddle point algorithm asymptotically achieve a first-order stationarity condition on average. The learning rate depends on the signal source, network, and discriminative task. We illustrate the algorithm's performance in an online multi-agent setting for a collaborative image classification task and show that practical performance is comparable to the centralized case. Moreover, in the multi-class setting, the proposed framework empirically allows nodes to make global inferences despite only observing distinct subsets of the feature space. We apply the proposed method to a mobile robotic team performing collaborative navigability assessment in an unknown environment, demonstrating the proposed algorithm's utility in a field setting.
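A scalar sketch of an Arrow–Hurwicz-style primal-dual (saddle point) iteration, on a toy constrained problem rather than the paper's dictionary-learning objective or its block variant: min x² subject to x ≥ 1, with Lagrangian L(x, λ) = x² + λ(1 − x). The step size is illustrative.

```python
# Primal gradient descent on L in x, projected dual gradient ascent in
# lambda. The saddle point of L(x, lam) = x^2 + lam*(1 - x) over x and
# lam >= 0 is (x, lam) = (1, 2).

eta = 0.05
x, lam = 0.0, 0.0
for _ in range(2000):
    x -= eta * (2 * x - lam)              # primal descent step
    lam = max(0.0, lam + eta * (1 - x))   # dual ascent, projected to lam >= 0
print(x, lam)  # approaches (1, 2)
```

In the distributed setting the multipliers play the same role, except they couple the models of neighboring nodes rather than enforcing an explicit constraint.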
A primal-dual algorithmic framework for constrained convex minimization, arXiv preprint arXiv:1406.5403
, 2014
Abstract

Cited by 3 (2 self)
We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, augmented Lagrangian methods, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates.
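Since the framework subsumes ADMM as a special case, here is a minimal scalar ADMM sketch under invented data (a toy instance, not the paper's algorithm): min (x − 3)² subject to x ∈ [0, 1], split as f(x) = (x − 3)², g(z) = indicator of [0, 1], with the consensus constraint x = z.

```python
# Scaled-form ADMM on f(x) + g(z) s.t. x = z. The x-update is the
# closed-form minimizer of (x - 3)^2 + (rho/2)*(x - z + u)^2, the
# z-update is projection onto [0, 1], and u accumulates the residual.

rho = 1.0
x = z = u = 0.0
for _ in range(100):
    x = (2 * 3 + rho * (z - u)) / (2 + rho)  # closed-form x-update
    z = min(1.0, max(0.0, x + u))            # projection onto [0, 1]
    u += x - z                               # scaled dual update
print(x, z, u)  # approaches x = z = 1 with scaled multiplier u = 4
```

Each piece (smoothing center, dual update, splitting) corresponds to one of the framework's design choices; swapping them recovers the other special cases the abstract lists.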
An algorithm for direct identification of passive transfer matrices with positive real fractions via convex programming
 Int. J. Numer. Modelling: Electron. Networks, Devices and Fields
Abstract

Cited by 2 (0 self)
The paper presents a new algorithm for the identification of a positive real rational transfer matrix of a multi-input multi-output system from frequency-domain data samples. It is based on the combination of least-squares pole identification by the Vector Fitting algorithm and residue identification based on frequency-independent passivity constraints by convex programming. Such an approach enables the identification of a priori guaranteed passive lumped models, and so avoids the passivity check and subsequent (perturbative) passivity enforcement required by most other available algorithms. As a case study, the algorithm is successfully applied to the macromodeling of a twisted cable pair, and the results are compared with a passive identification performed by an algorithm based on quadratic programming.
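The frequency-domain passivity condition being enforced can be sketched for a scalar transfer function: positive realness requires, among other conditions, Re H(jω) ≥ 0 at the sampled frequencies. The transfer functions below are illustrative, not from the paper.

```python
# Sampled check of the real-part condition for positive realness of a
# scalar rational H(s): Re H(j*omega) >= 0 on a frequency grid.

def positive_real_on_samples(H, omegas, tol=0.0):
    return all(H(1j * w).real >= -tol for w in omegas)

def H_passive(s):
    return (s + 2) / (s + 1)   # Re H(jw) = (2 + w^2)/(1 + w^2) > 0

def H_active(s):
    return (s - 2) / (s + 1)   # Re H(jw) = (w^2 - 2)/(1 + w^2) < 0 for small w

omegas = [0.01 * k for k in range(1, 1001)]
print(positive_real_on_samples(H_passive, omegas))  # True
print(positive_real_on_samples(H_active, omegas))   # False
```

The point of the paper's formulation is that constraints of this kind become frequency-independent convex constraints on the residues, so the fitted model is passive by construction rather than checked after the fact.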