Results 1–10 of 105
Large margin methods for structured and interdependent output variables
 JOURNAL OF MACHINE LEARNING RESEARCH, 2005
Abstract

Cited by 399 (11 self)
Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary issue of designing classification algorithms that can deal with more complex outputs, such as trees, sequences, or sets. More generally, we consider problems involving multiple dependent output variables, structured output spaces, and classification problems with class attributes. In order to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation. While this leads to a quadratic program with a potentially prohibitive, i.e., exponential, number of constraints, we present a cutting plane algorithm that solves the optimization problem in polynomial time for a large class of problems. The proposed method has important applications in areas such as computational biology, natural language processing, information retrieval/extraction, and optical character recognition. Experiments from various domains involving different types of output spaces emphasize the breadth and generality of our approach.
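The cutting plane idea in this abstract — solve a restricted problem, find the most violated constraint, add it to a working set, repeat — can be sketched on a toy problem (the one-dimensional objective and constraint family below are illustrative, not from the paper):

```python
# Cutting-plane sketch (toy): minimize 0.5*x^2 subject to a large family of
# constraints x >= c_i, of which only a handful are active at the optimum.
# The loop mirrors the paper's idea: repeatedly add the most violated
# constraint and re-solve the restricted problem.

def most_violated(x, cs):
    # the constraint x >= c is violated by the amount c - x
    c = max(cs, key=lambda c: c - x)
    return c, c - x

def cutting_plane(cs, eps=1e-9):
    work = []          # working set of constraints
    x = 0.0            # unconstrained minimizer of 0.5*x^2
    while True:
        c, viol = most_violated(x, cs)
        if viol <= eps:
            return x, work
        work.append(c)
        x = max(0.0, max(work))   # closed-form solve of the restricted 1-D QP

cs = [i / 1000.0 for i in range(1000)]   # 1000 constraints x >= c_i
x, work = cutting_plane(cs)
# only one of the 1000 constraints ever enters the working set
```

Despite the large constraint family, the algorithm terminates after adding a single cut, which is the intuition behind polynomial-time behavior on exponentially many constraints.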
Variable Neighborhood Search
1997
Abstract

Cited by 242 (24 self)
Variable neighborhood search (VNS) is a recent metaheuristic for solving combinatorial and global optimization problems whose basic idea is systematic change of neighborhood within a local search. In this survey paper we present basic rules of VNS and some of its extensions. Moreover, applications are briefly summarized. They comprise heuristic solution of a variety of optimization problems, ways to accelerate exact algorithms and to analyze heuristic solution processes, as well as computer-assisted discovery of conjectures in graph theory.
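A minimal sketch of the basic VNS scheme described here — shake in the k-th neighborhood, run a local search, then either move and restart with k = 1 or enlarge the neighborhood — on an invented integer toy problem with many local optima:

```python
import random

# Toy objective on integers in [0, 100]: the x % 4 penalty creates a local
# optimum at every multiple of 4, which traps a plain +/-1 local search.
def f(x):
    return (x - 37) ** 2 + 100 * (x % 4)

def local_search(x):
    while True:
        best = min((max(0, x - 1), x, min(100, x + 1)), key=f)
        if best == x:
            return x
        x = best

def vns(x, k_max=5, shakes=200, seed=0):
    rng = random.Random(seed)
    for _ in range(shakes):
        k = 1
        while k <= k_max:
            shaken = max(0, min(100, x + rng.randint(-k, k)))  # shake in N_k
            cand = local_search(shaken)
            if f(cand) < f(x):
                x, k = cand, 1        # improvement: move, restart with N_1
            else:
                k += 1                # no improvement: try a larger neighborhood
    return x

x = vns(95)
# x is a local optimum (a multiple of 4) strictly better than the start
```

Plain local search from 95 stalls at the first multiple of 4 it reaches; the systematic neighborhood changes let VNS keep escaping to better local optima.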
Uncertain convex programs: Randomized solutions and confidence levels
 MATH. PROGRAM., SER. A, 2004
Abstract

Cited by 70 (9 self)
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm where one seeks a solution which simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately, however, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative ‘randomized’ or ‘scenario’ approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem obtained by taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is an efficient and explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
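The scenario idea can be seen on a hypothetical one-dimensional uncertain program (the uniform uncertainty model below is illustrative): minimize x subject to x ≥ δ for δ ~ Uniform(0, 1). The robust solution is x = 1; the scenario solution enforces only N sampled instances, and the measure of violated constraints shrinks as N grows:

```python
import random

def scenario_solution(n, rng):
    # minimize x subject to the n sampled constraints x >= delta_i:
    # the optimum is simply the largest sampled delta
    samples = [rng.random() for _ in range(n)]
    return max(samples)

rng = random.Random(1)
for n in (10, 100, 1000):
    x = scenario_solution(n, rng)
    # measure of violated constraints is P(delta > x) = 1 - x,
    # with expected value 1/(n+1) under this uniform model
    print(n, 1 - x)
```

The violated measure decreases roughly like 1/N here, a simple instance of the paper's general bound.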
Bundle-Based Relaxation Methods for Multicommodity Capacitated Fixed-Charge Network Design
1999
Abstract

Cited by 48 (27 self)
To efficiently derive bounds for large-scale instances of the capacitated fixed-charge network design problem, Lagrangian relaxations appear promising. This paper presents the results of comprehensive experiments aimed at calibrating and comparing bundle and subgradient methods applied to the optimization of Lagrangian duals arising from two Lagrangian relaxations. This study substantiates the fact that bundle methods appear superior to subgradient approaches because they converge faster and are more robust relative to different relaxations, problem characteristics, and selection of the initial parameter values. It also demonstrates that effective lower bounds may be computed efficiently for large-scale instances of the capacitated fixed-charge network design problem. Indeed, in a fraction of the time required by a standard simplex approach to solve the linear programming relaxation, the methods we present attain very high quality solutions.
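For orientation, a bare-bones version of the subgradient baseline that bundle methods are compared against, maximizing a toy Lagrangian dual (the piecewise-linear function below is illustrative, not one of the paper's relaxations):

```python
# Plain subgradient ascent on a toy concave, piecewise-linear dual
# theta(lam) = min(lam, 2 - lam), whose maximizer is lam = 1.

def theta(lam):
    return min(lam, 2.0 - lam)

def subgradient(lam):
    return 1.0 if lam < 2.0 - lam else -1.0   # slope of the active piece

lam = 0.0
for k in range(1, 2001):
    lam += subgradient(lam) / k     # diminishing step sizes 1/k
print(lam)                           # oscillates toward the maximizer lam = 1
```

The slow, oscillatory convergence visible here is exactly what bundle methods improve on by accumulating past subgradients into a model of the dual function.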
Call center staffing with simulation and cutting plane methods
 Annals of Operations Research 127, 2004
Abstract

Cited by 42 (2 self)
We present an iterative cutting plane method for minimizing staffing costs in a service system subject to satisfying acceptable service level requirements over multiple time periods. We assume that the service level cannot be easily computed, and instead is evaluated using simulation. The simulation uses the method of common random numbers, so that the same sequence of random phenomena is observed when evaluating different staffing plans. In other words, we solve a sample average approximation problem. We establish convergence of the cutting plane method on a given sample average approximation. We also establish both the convergence and the rate of convergence of solutions to the sample average approximation to solutions of the original problem as the sample size increases. The cutting plane method relies on the service level functions being concave in the number of servers. We show how to verify this requirement as our algorithm proceeds. A numerical example showcases the properties of our method, and sheds light on when the concavity requirement can be expected to hold.
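A toy illustration of two ingredients above — common random numbers and checking concavity of a simulated service level in the number of servers. The demand model and all numbers are invented for illustration:

```python
import random

# Estimate a service level g(s) = E[min(s, D)] / E[D] (the fraction of a
# random demand D covered by s servers) using the SAME demand samples for
# every staffing level s, then check the discrete concavity that the
# cutting-plane method relies on.

rng = random.Random(42)
demands = [rng.randint(1, 20) for _ in range(10000)]   # common random numbers

def service_level(s):
    covered = sum(min(s, d) for d in demands)
    return covered / sum(demands)

g = [service_level(s) for s in range(0, 22)]
diffs = [g[s + 1] - g[s] for s in range(21)]
concave = all(diffs[i] >= diffs[i + 1] - 1e-12 for i in range(len(diffs) - 1))
print(concave)   # True: min(s, d) is concave in s, so the sample average is too
```

Because every s is evaluated on the same demand draws, the differences g(s+1) − g(s) are exactly the fraction of demands exceeding s, so the estimated curve is concave rather than merely concave in expectation.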
FIR Filter Design via Spectral Factorization and Convex Optimization
1997
Abstract

Cited by 37 (6 self)
We consider the design of finite impulse response (FIR) filters subject to upper and lower bounds on the frequency response magnitude. The associated optimization problems, with the filter coefficients as the variables and the frequency response bounds as constraints, are in general nonconvex. Using a change of variables and spectral factorization, we can pose such problems as linear or nonlinear convex optimization problems. As a result we can solve them efficiently (and globally) by recently developed interior-point methods. We describe applications to filter and equalizer design, and the related problem of antenna array weight design.
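The change of variables rests on the identity |H(ω)|² = R(ω), where R(ω) = r₀ + 2Σₜ rₜ cos(ωt) is linear in the filter's autocorrelation coefficients rₜ, so magnitude bounds that are nonconvex in the taps h become linear inequalities in r. A quick numerical check of the identity with arbitrary (illustrative) taps:

```python
import cmath, math

# For FIR taps h, |H(w)|^2 equals R(w) = r_0 + 2*sum_t r_t*cos(w*t), where
# r_t = sum_k h_k*h_{k+t} is the autocorrelation of h. The taps below are
# arbitrary; the identity holds for any real FIR filter.

h = [0.2, 0.5, 0.3, -0.1]                        # example FIR taps (assumed)
n = len(h)
r = [sum(h[k] * h[k + t] for k in range(n - t)) for t in range(n)]

def H(w):                                        # frequency response
    return sum(h[k] * cmath.exp(-1j * w * k) for k in range(n))

def R(w):                                        # linear in the r_t variables
    return r[0] + 2 * sum(r[t] * math.cos(w * t) for t in range(1, n))

for w in (0.0, 0.7, 1.9, math.pi):
    assert abs(abs(H(w)) ** 2 - R(w)) < 1e-12
print("identity |H(w)|^2 == R(w) holds")
```

Optimizing over r (with R(ω) ≥ 0) and then recovering h by spectral factorization is what turns the nonconvex magnitude design into a convex program.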
A Bundle-Type Dual-Ascent Approach to Linear Multicommodity Min-Cost Flow Problems
1999
Abstract

Cited by 33 (15 self)
... Min-Cost Flow problem, where the mutual capacity constraints are dualized and the resulting Lagrangean Dual is solved with a dual-ascent algorithm belonging to the class of Bundle methods. Although decomposition approaches to block-structured Linear Programs have been reported not to be competitive with general-purpose software, our extensive computational comparison shows that, when carefully implemented, a decomposition algorithm can outperform several other approaches, especially on problems where the number of commodities is “large” with respect to the size of the graph. Our specialized Bundle algorithm is characterized by a new heuristic for the trust region parameter handling, and embeds a specialized Quadratic Program solver that allows the efficient implementation of strategies for reducing the number of active Lagrangean variables. We also exploit the structural properties of the single-commodity Min-Cost Flow subproblems to reduce the overall computational cost. The proposed approach can be easily extended to handle variants of the problem.
Structured and Simultaneous Lyapunov Functions for System Stability Problems
2001
Abstract

Cited by 33 (4 self)
It is shown that many system stability and robustness problems can be reduced to the question of when there is a quadratic Lyapunov function of a certain structure which establishes stability of ẋ = Ax for some appropriate A. The existence of such a Lyapunov function can be determined by solving a convex program. We present several numerical methods for these optimization problems. A simple numerical example is given.
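For the smallest nontrivial case the reduction can be carried out by hand: with a 2×2 example matrix (chosen here purely for illustration), the Lyapunov equation AᵀP + PA = −I is three linear equations in the entries of the symmetric P defining V(x) = xᵀPx:

```python
# A = [[0, 1], [-2, -3]] is Hurwitz (eigenvalues -1 and -2); we look for
# symmetric P = [[p, q], [q, r]] solving A'P + PA = -I.
a, b, c, d = 0.0, 1.0, -2.0, -3.0
# Entry-wise, A'P + PA = -I reads:
#   2*(a*p + c*q)         = -1    (1,1)
#   b*p + (a + d)*q + c*r = 0     (1,2)
#   2*(b*q + d*r)         = -1    (2,2)
q = -1.0 / (2.0 * c)                      # from (1,1), since a = 0 here
r = -(0.5 + b * q) / d                    # from (2,2)
p = -((a + d) * q + c * r) / b            # from (1,2)
# Sylvester test: P > 0 iff p > 0 and det P = p*r - q^2 > 0
print(p, q, r, p > 0 and p * r - q * q > 0)   # 1.25 0.25 0.25 True
```

Since the solved P is positive definite, V(x) = xᵀPx certifies stability; for the structured and simultaneous versions discussed in the abstract, the same feasibility question is posed over several such matrix equations at once and handed to a convex solver.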
Characterization and Computation of Optimal Distributions for Channel Coding
 IEEE Trans. Inform. Theory, 2004
Abstract

Cited by 29 (3 self)
This paper concerns the structure of optimal codes for stochastic channel models. An investigation of an associated dual convex program reveals that the optimal distribution in channel coding is typically discrete. Based on this observation we obtain the following theoretical conclusions, as well as new algorithms for constructing capacity-achieving distributions: (i) Under general conditions, for low SNR the optimal random code is defined by a distribution whose magnitude is binary. (ii) Simple discrete approximations can nearly reach capacity even in cases where the optimal distribution is known to be absolutely continuous with respect to Lebesgue measure. (iii) A new class of algorithms is introduced, based on the cutting-plane method, to generate discrete distributions that are optimal within a prescribed class. Keywords: Information theory; channel coding; fading channels.
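The paper's algorithms are cutting-plane based; for orientation, the classic Blahut–Arimoto iteration below computes a capacity-achieving input distribution for a small discrete memoryless channel (the binary symmetric channel and its numbers are a textbook example, not taken from the paper):

```python
import math

# Blahut-Arimoto iteration for channel capacity. P is the transition
# matrix p(y|x); here a binary symmetric channel with crossover 0.1,
# whose capacity is 1 - H(0.1) ~= 0.531 bits with uniform input.

P = [[0.9, 0.1], [0.1, 0.9]]

def blahut_arimoto(P, iters=200):
    nx, ny = len(P), len(P[0])
    p = [1.0 / nx] * nx                  # input distribution, start uniform
    for _ in range(iters):
        q = [sum(p[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        w = [p[x] * math.exp(sum(P[x][y] * math.log(P[x][y] / q[y])
                                 for y in range(ny) if P[x][y] > 0))
             for x in range(nx)]
        s = sum(w)
        p = [v / s for v in w]           # reweight toward higher divergence
    # mutual information (nats -> bits) at the final distribution
    q = [sum(p[x] * P[x][y] for x in range(nx)) for y in range(ny)]
    cap = sum(p[x] * P[x][y] * math.log(P[x][y] / q[y])
              for x in range(nx) for y in range(ny) if P[x][y] > 0)
    return p, cap / math.log(2)

p, cap = blahut_arimoto(P)
print([round(v, 3) for v in p], round(cap, 3))   # [0.5, 0.5] 0.531
```

Unlike this fixed-alphabet iteration, the paper's cutting-plane algorithms also choose where to place the discrete mass points, which is what makes them suitable for continuous-input channels.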