Results 1–10 of 142
Large margin methods for structured and interdependent output variables
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2005
Abstract

Cited by 612 (12 self)
Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary issue of designing classification algorithms that can deal with more complex outputs, such as trees, sequences, or sets. More generally, we consider problems involving multiple dependent output variables, structured output spaces, and classification problems with class attributes. In order to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation. While this leads to a quadratic program with a potentially prohibitive, i.e. exponential, number of constraints, we present a cutting plane algorithm that solves the optimization problem in polynomial time for a large class of problems. The proposed method has important applications in areas such as computational biology, natural language processing, information retrieval/extraction, and optical character recognition. Experiments from various domains involving different types of output spaces emphasize the breadth and generality of our approach.
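The constraint-generation loop in this abstract can be sketched on a toy multiclass SVM: a separation oracle finds each example's most violated label constraint, and a quadratic program is re-solved over the working set only. This is an illustrative sketch (0/1 loss, a block joint feature map, SciPy's SLSQP as the QP solver), not the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def psi(x, y, n_cls):
    """Joint feature map: copy x into the block for class y."""
    f = np.zeros(n_cls * len(x))
    f[y * len(x):(y + 1) * len(x)] = x
    return f

def solve_restricted(X, Y, work, n_cls, C):
    """QP over only the working-set constraints (variables: w and slacks)."""
    n, d = X.shape
    dim = n_cls * d
    cons = [{'type': 'ineq', 'fun': lambda z: z[dim:]}]       # slacks >= 0
    for i in range(n):
        for y in work[i]:                                     # margin constraints
            dpsi = psi(X[i], Y[i], n_cls) - psi(X[i], y, n_cls)
            cons.append({'type': 'ineq',
                         'fun': lambda z, dpsi=dpsi, i=i:
                             z[:dim] @ dpsi - 1.0 + z[dim + i]})
    obj = lambda z: 0.5 * z[:dim] @ z[:dim] + C * z[dim:].sum()
    res = minimize(obj, np.zeros(dim + n), constraints=cons, method='SLSQP')
    return res.x[:dim], res.x[dim:]

def train(X, Y, n_cls, C=10.0, eps=1e-3, rounds=20):
    n, d = X.shape
    work = [set() for _ in range(n)]
    w, xi = np.zeros(n_cls * d), np.zeros(n)
    for _ in range(rounds):
        added = 0
        for i in range(n):
            # separation oracle: label with the largest margin violation
            viol = [(1.0 if y != Y[i] else 0.0)
                    - w @ (psi(X[i], Y[i], n_cls) - psi(X[i], y, n_cls))
                    for y in range(n_cls)]
            ybar = int(np.argmax(viol))
            if viol[ybar] > xi[i] + eps and ybar not in work[i]:
                work[i].add(ybar)
                added += 1
        if added == 0:
            break                       # all constraints satisfied to eps
        w, xi = solve_restricted(X, Y, work, n_cls, C)
    return w

def predict(w, x, n_cls):
    return int(np.argmax([w @ psi(x, y, n_cls) for y in range(n_cls)]))

X = np.array([[1.0, 0.0], [0.0, 1.0]])  # two toy separable examples
Y = [0, 1]
w = train(X, Y, n_cls=2)
```

The point of the loop is that only a few of the exponentially many constraints are ever materialized; the oracle guarantees the rest are satisfied to within eps.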
Variable Neighborhood Search
, 1997
Abstract

Cited by 342 (26 self)
Variable neighborhood search (VNS) is a recent metaheuristic for solving combinatorial and global optimization problems whose basic idea is systematic change of neighborhood within a local search. In this survey paper we present basic rules of VNS and some of its extensions. Moreover, applications are briefly summarized. They comprise heuristic solution of a variety of optimization problems, ways to accelerate exact algorithms and to analyze heuristic solution processes, as well as computer-assisted discovery of conjectures in graph theory.
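The basic VNS rules summarized in this abstract reduce to a short loop: shake in neighborhood N_k, run local search, move and reset k on improvement, otherwise widen the neighborhood. A minimal sketch on a toy bit-vector objective (the neighborhood structures and objective are illustrative choices):

```python
import random

def vns(f, x0, k_max=3, max_iter=100, seed=0):
    rng = random.Random(seed)

    def shake(x, k):                    # N_k: flip k random positions
        y = list(x)
        for i in rng.sample(range(len(y)), k):
            y[i] ^= 1
        return y

    def local_search(x):                # repeated improving single-bit flips
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                y = list(x)
                y[i] ^= 1
                if f(y) < f(x):
                    x, improved = y, True
        return x

    x = local_search(list(x0))
    for _ in range(max_iter):
        k = 1
        while k <= k_max:
            xp = local_search(shake(x, k))
            if f(xp) < f(x):            # improvement: move, restart from N_1
                x, k = xp, 1
            else:                       # no improvement: widen the neighborhood
                k += 1
    return x

target = [1, 0, 1, 1, 0]                # hidden optimum of the toy objective
f = lambda x: sum(a != b for a, b in zip(x, target))
best = vns(f, [0] * 5)
```

The systematic k = 1, ..., k_max escalation is what distinguishes VNS from plain restarted local search: escapes from local optima are attempted with progressively larger perturbations.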
Uncertain convex programs: Randomized solutions and confidence levels
 MATH. PROGRAM., SER. A (2004)
, 2004
Abstract

Cited by 110 (12 self)
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm where one seeks a solution which simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately, however, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative ‘randomized’ or ‘scenario’ approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem resulting from taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is to provide an efficient and explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
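The scenario approach in this abstract is mechanically simple: replace the uncertain constraint a(δ)·x ≤ 1 ("for all δ") by N instances sampled from the uncertainty distribution and solve the resulting ordinary convex program. A minimal LP sketch (the coefficient distribution is an illustrative assumption):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 200
A = rng.uniform(0.5, 1.5, size=(N, 2))   # N sampled constraint coefficients
b = np.ones(N)

# maximize x1 + x2  <=>  minimize -(x1 + x2), over the sampled constraints only
res = linprog(c=[-1.0, -1.0], A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
x_star = res.x
# x_star satisfies every sampled constraint; the paper's bound controls the
# probability mass of unseen constraint instances it may violate, which
# shrinks as N grows
```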
Call center staffing with simulation and cutting plane method
 ANNALS OF OPERATIONS RESEARCH
, 2004
Abstract

Cited by 54 (2 self)
We present an iterative cutting plane method for minimizing staffing costs in a service system subject to satisfying acceptable service level requirements over multiple time periods. We assume that the service level cannot be easily computed, and instead is evaluated using simulation. The simulation uses the method of common random numbers, so that the same sequence of random phenomena is observed when evaluating different staffing plans. In other words, we solve a sample average approximation problem. We establish convergence of the cutting plane method on a given sample average approximation. We also establish both the convergence and the rate of convergence of the solutions to the sample average approximation to solutions of the original problem as the sample size increases. The cutting plane method relies on the service level functions being concave in the number of servers. We show how to verify this requirement as our algorithm proceeds. A numerical example showcases the properties of our method, and sheds light on when the concavity requirement can be expected to hold.
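The common-random-numbers device mentioned in this abstract is easy to demonstrate: when comparing two service levels, reusing the same simulated arrival stream makes the estimated performance gap far less noisy than using independent streams. A toy single-server Lindley recursion (not the paper's multi-period call-center simulator):

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_wait(service_rate, interarrivals):
    """Average waiting time via the Lindley recursion, deterministic service."""
    wait, total = 0.0, 0.0
    for a in interarrivals:
        wait = max(0.0, wait + 1.0 / service_rate - a)
        total += wait
    return total / len(interarrivals)

crn_diffs, indep_diffs = [], []
for _ in range(200):
    arr1 = rng.exponential(1.0, 500)
    arr2 = rng.exponential(1.0, 500)
    crn_diffs.append(avg_wait(1.5, arr1) - avg_wait(2.0, arr1))    # shared stream
    indep_diffs.append(avg_wait(1.5, arr1) - avg_wait(2.0, arr2))  # separate streams
# the CRN estimate of the gap has much smaller variance
```

Both estimators are unbiased for the same gap; CRN wins because the two evaluations on a shared stream are strongly positively correlated, so their difference has reduced variance.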
Bundle-Based Relaxation Methods for Multicommodity Capacitated Fixed-Charge Network Design
, 1999
Abstract

Cited by 50 (26 self)
To efficiently derive bounds for large-scale instances of the capacitated fixed-charge network design problem, Lagrangian relaxations appear promising. This paper presents the results of comprehensive experiments aimed at calibrating and comparing bundle and subgradient methods applied to the optimization of Lagrangian duals arising from two Lagrangian relaxations. This study substantiates the fact that bundle methods appear superior to subgradient approaches because they converge faster and are more robust relative to different relaxations, problem characteristics, and selection of the initial parameter values. It also demonstrates that effective lower bounds may be computed efficiently for large-scale instances of the capacitated fixed-charge network design problem. Indeed, in a fraction of the time required by a standard simplex approach to solve the linear programming relaxation, the methods we present attain very high quality solutions.
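The subgradient baseline this paper compares against can be sketched on a tiny Lagrangian dual (an illustrative toy problem, not the network design relaxation itself). Primal: min x1 + 2·x2 subject to x1 + x2 ≥ 2, 0 ≤ x1, x2 ≤ 3; dualizing the coupling constraint with multiplier λ ≥ 0 gives a piecewise-linear concave dual maximized by subgradient ascent:

```python
import numpy as np

# g(lam) = min_{0<=x<=3} (1-lam)*x1 + (2-lam)*x2 + 2*lam

def inner_min(lam):
    x1 = 3.0 if 1.0 - lam < 0 else 0.0   # minimize each separable term
    x2 = 3.0 if 2.0 - lam < 0 else 0.0
    return x1, x2

def dual_value(lam):
    x1, x2 = inner_min(lam)
    return x1 + 2 * x2 + lam * (2 - x1 - x2)

lam, best = 0.0, -np.inf
for k in range(1, 501):
    x1, x2 = inner_min(lam)
    subgrad = 2 - x1 - x2                # a subgradient of g at lam
    lam = max(0.0, lam + subgrad / k)    # diminishing step, project to lam >= 0
    best = max(best, dual_value(lam))
# best approaches the optimal dual (= primal) value 2
```

A bundle method would instead keep all past (λ, g(λ), subgradient) triples and maximize a stabilized cutting-plane model of g, which is the source of the faster, more robust convergence reported in the abstract.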
Structured and Simultaneous Lyapunov Functions for System Stability Problems
, 2001
Abstract

Cited by 46 (4 self)
It is shown that many system stability and robustness problems can be reduced to the question of when there is a quadratic Lyapunov function of a certain structure which establishes stability of ẋ = Ax for some appropriate A. The existence of such a Lyapunov function can be determined by solving a convex program. We present several numerical methods for these optimization problems. A simple numerical example is given.
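The simplest special case of this reduction is a single, unstructured quadratic Lyapunov function: stability of ẋ = Ax is certified by solving AᵀP + PA = −Q for a positive definite P. The structured and simultaneous versions in the paper need a genuine convex program (e.g. an LMI solver), but the special case needs only a linear solve:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # stable: eigenvalues -1 and -2
Q = np.eye(2)
# solve_continuous_lyapunov(a, q) solves a @ x + x @ a.conj().T = q,
# so pass a = A^T and q = -Q to obtain A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)
# a positive definite P certifies asymptotic stability of xdot = A x
```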
Performance Evaluation and Policy Selection in Multiclass Networks
, 2002
Abstract

Cited by 46 (26 self)
This paper concerns modelling and policy synthesis for regulation of multiclass queueing networks. A 2-parameter network model is introduced to allow independent modelling of variability and mean processing rates, while maintaining simplicity of the model. Policy synthesis is based on consideration of more tractable workload models, and then translating a policy from this abstraction to the discrete network of interest. Translation is made possible through the use of safety stocks that maintain feasibility of workload trajectories. This is a well-known approach in the queueing theory literature, and may be viewed as a generic approach to avoid deadlock in a discrete-event dynamical system. Simulation is used to evaluate a given policy, and to tune safety-stock levels. These simulations are accelerated through a variance reduction technique that incorporates stochastic approximation to tune the variance reduction. The search for appropriate safety-stock levels is coordinated through a cutting plane algorithm. Both the policy synthesis and the simulation acceleration rely heavily on the development of approximations to the value function through fluid model considerations.
Characterization and Computation of Optimal Distributions for Channel Coding
 IEEE Trans. Inform. Theory
, 2004
Abstract

Cited by 45 (3 self)
This paper concerns the structure of optimal codes for stochastic channel models. An investigation of an associated dual convex program reveals that the optimal distribution in channel coding is typically discrete. Based on this observation we obtain the following theoretical conclusions, as well as new algorithms for constructing capacity-achieving distributions: (i) Under general conditions, for low SNR the optimal random code is defined by a distribution whose magnitude is binary. (ii) Simple discrete approximations can nearly reach capacity even in cases where the optimal distribution is known to be absolutely continuous with respect to Lebesgue measure. (iii) A new class of algorithms is introduced, based on the cutting-plane method, to generate discrete distributions that are optimal within a prescribed class. Keywords: Information theory; channel coding; fading channels.
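For discrete channels, the optimal input distribution can already be computed by the classic Blahut–Arimoto iteration; the paper's cutting-plane algorithms target the harder continuous-input settings. A standard Blahut–Arimoto sketch as a baseline (not the paper's method), checked against the binary symmetric channel:

```python
import numpy as np

def blahut_arimoto(W, n_iter=200):
    """W[x, y] = P(y | x); returns (capacity in bits, optimal input dist)."""
    m = W.shape[0]
    p = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        q = p @ W                                  # induced output distribution
        ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
        d = np.sum(W * np.log2(ratio), axis=1)     # D(W(.|x) || q), in bits
        p *= np.exp2(d)                            # multiplicative update
        p /= p.sum()
    q = p @ W
    ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
    d = np.sum(W * np.log2(ratio), axis=1)
    return float(p @ d), p

W = np.array([[0.9, 0.1],
              [0.1, 0.9]])                         # binary symmetric channel
cap, p_opt = blahut_arimoto(W)                     # cap = 1 - H2(0.1) bits
```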
FIR Filter Design via Spectral Factorization and Convex Optimization
, 1997
Abstract

Cited by 45 (6 self)
We consider the design of finite impulse response (FIR) filters subject to upper and lower bounds on the frequency response magnitude. The associated optimization problems, with the filter coefficients as the variables and the frequency response bounds as constraints, are in general nonconvex. Using a change of variables and spectral factorization, we can pose such problems as linear or nonlinear convex optimization problems. As a result we can solve them efficiently (and globally) by recently developed interior-point methods. We describe applications to filter and equalizer design, and the related problem of antenna array weight design.
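The convexifying change of variables can be checked numerically: with r[k] = Σₙ h[n]·h[n+k] (the filter's autocorrelation), the squared magnitude response is |H(e^{jw})|² = r[0] + 2·Σ_{k≥1} r[k]·cos(kw), which is linear in r, so magnitude bounds become linear constraints. A sketch of the identity only, not a complete design routine:

```python
import numpy as np

h = np.array([0.2, 0.5, 0.3])            # an arbitrary example FIR filter
n = len(h)
r = np.array([np.dot(h[:n - k], h[k:]) for k in range(n)])   # autocorrelation

w = np.linspace(0.0, np.pi, 64)          # frequency grid
H = h @ np.exp(-1j * np.outer(np.arange(n), w))              # H(e^{jw})
mag2 = np.abs(H) ** 2                    # nonconvex in h ...
lin = r[0] + 2.0 * sum(r[k] * np.cos(k * w) for k in range(1, n))
# ... but identical to an expression linear in r
```

Spectral factorization is what recovers a valid h from an optimal r at the end of the design, since not every r corresponds to a realizable filter.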