Results 1 – 10 of 2,109,477
Global Optimization with Polynomials and the Problem of Moments
SIAM Journal on Optimization, 2001
"... We consider the problem of finding the unconstrained global minimum of a real-valued polynomial p(x) : R^n → R, as well as the global minimum of p(x) in a compact set K defined by polynomial inequalities. It is shown that this problem reduces to solving an (often finite) sequence of convex linear matrix inequality (LMI) problems. A notion of Karush-Kuhn-Tucker polynomials is introduced in a global optimality condition. Some illustrative examples are provided. ..."
Cited by 562 (47 self)
Optimization Flow Control, I: Basic Algorithm and Convergence
IEEE/ACM Transactions on Networking, 1999
"... We propose an optimization approach to flow control where the objective is to maximize the aggregate source utility over their transmission rates. We view network links and sources as processors of a distributed computation system that solve the dual problem using a gradient projection algorithm. ..."
Cited by 688 (64 self)
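The dual gradient-projection scheme this abstract describes can be sketched on a hypothetical single-link instance with logarithmic utilities (the utility choice, parameter values, and function names below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical single-link instance of dual gradient projection for
# flow control: each source s picks the rate maximizing
# U_s(x) - p * x with U_s(x) = log(x), i.e. x_s(p) = 1 / p, while the
# link adjusts its price p to match aggregate demand to capacity c.
def flow_control(num_sources=3, capacity=6.0, step=0.05, iters=5000):
    p = 1.0                                      # link price (dual variable)
    for _ in range(iters):
        rates = np.full(num_sources, 1.0 / p)    # sources' optimal responses
        excess = rates.sum() - capacity          # aggregate demand - capacity
        p = max(p + step * excess, 1e-9)         # projected dual ascent step
    return p, rates

price, rates = flow_control()
# At equilibrium the price clears the link: the rates sum to the capacity.
```

With log utilities the fixed point is p = num_sources / capacity, so each source settles at an equal share of the link.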
Optimal approximation by piecewise smooth functions and associated variational problems
Communications on Pure and Applied Mathematics, 1989
"... Mumford, David Bryant, and Jayant Shah. 1989. Optimal approximations by piecewise smooth functions and associated variational problems. ..."
Cited by 1285 (14 self)
No Free Lunch Theorems for Optimization
, 1997
"... A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance ..."
Cited by 927 (10 self)
A first-order primal-dual algorithm for convex problems with applications to imaging
, 2010
"... In this paper we study a first-order primal-dual algorithm for convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions, which is optimal for the complete class of non-smooth problems we are considering in this paper ..."
Cited by 427 (20 self)
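A first-order primal-dual iteration of the kind this abstract describes can be sketched on a toy problem. Everything below (the objective, the parameter values, the function names) is an illustrative assumption, not the paper's setup; the step-size condition tau * sigma * ||K||^2 <= 1 is the standard one for this family of methods:

```python
import numpy as np

# Minimal primal-dual sketch for the toy saddle-point reformulation of
#     min_x ||K x - b||_1 + (lam / 2) ||x||^2
# using alternating proximal steps on the dual and primal variables.
def primal_dual(K, b, lam=0.1, iters=3000):
    m, n = K.shape
    L = np.linalg.norm(K, 2)          # operator norm ||K||
    tau = sigma = 0.9 / L             # step sizes with tau * sigma * L^2 < 1
    x = np.zeros(n)
    xbar = x.copy()
    y = np.zeros(m)
    for _ in range(iters):
        # dual step: prox of sigma * F* for F(z) = ||z - b||_1
        y = np.clip(y + sigma * (K @ xbar - b), -1.0, 1.0)
        # primal step: prox of tau * G for G(x) = (lam / 2) ||x||^2
        x_new = (x - tau * (K.T @ y)) / (1.0 + tau * lam)
        xbar = 2.0 * x_new - x        # over-relaxation (theta = 1)
        x = x_new
    return x

x = primal_dual(np.eye(2), np.array([2.0, -1.0]))
# For K = I and lam * |b_i| <= 1 the minimizer of this toy problem is x = b.
```

The dual prox uses that the convex conjugate of ||z - b||_1 is <y, b> plus the indicator of the box ||y||_inf <= 1, which reduces to a shift followed by a clip.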
Some optimal inapproximability results
, 2002
"... We prove optimal, up to an arbitrary ε > 0, inapproximability results for Max-Ek-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p, and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously; in particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex Cover. Warning: essentially this paper has been published in JACM and is subject to copyright restrictions; in particular, it is for personal use only. ..."
Cited by 748 (11 self)
Learnability in Optimality Theory
, 1995
"... In this article we show how Optimality Theory yields a highly general Constraint Demotion principle for grammar learning. The resulting learning procedure specifically exploits the grammatical structure of Optimality Theory, independent of the content of substantive constraints defining any given grammatical module. We decompose the learning problem and present formal results for a central subproblem, deducing the constraint ranking particular to a target language, given structural descriptions of positive examples. The structure imposed on the space of possible grammars by Optimality Theory allows ..."
Cited by 526 (34 self)
Closed-form Dual Optimization Problem Equivalent Computationally Efficient Closed-Form Dual Problem
"... Characterizing neural spiking activity as a function of environmental stimuli, and intrinsic effects such as a neuron's own spiking history and concurrent ensemble activity, is important in neuroscience. Such a characterization is complex and there is increasing need for a broad ... Nonparametric methods are attractive due to fewer assumptions, but very few efficient methods for these scenarios are known. We propose a computationally efficient maximum-likelihood estimation methodology under mild smoothness assumptions. It relies on convex optimization and admits complexity reduction ..."
A Limited Memory Algorithm for Bound Constrained Optimization
SIAM Journal on Scientific Computing, 1994
"... An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based ..."
Cited by 554 (9 self)
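This 1994 paper describes the limited-memory bound-constrained algorithm exposed in SciPy as `method="L-BFGS-B"`. A toy run (the objective, starting point, and bounds below are our own example values) minimizes the Rosenbrock function subject to simple bounds:

```python
import numpy as np
from scipy.optimize import minimize

# Bound-constrained minimization of the Rosenbrock function with
# SciPy's L-BFGS-B interface; the bound x[0] <= 0.8 cuts off the
# unconstrained minimum at (1, 1), so the solver stops on that bound.
def rosenbrock(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

res = minimize(rosenbrock,
               x0=np.array([0.0, 0.0]),
               method="L-BFGS-B",
               bounds=[(-0.5, 0.8), (-0.5, 2.0)])
# With x[0] pinned at 0.8, the free coordinate settles at x[1] = 0.8**2.
```

The gradient-projection step in the algorithm identifies the active bound, so the returned solution sits exactly on x[0] = 0.8.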
SNOPT: An SQP Algorithm For Large-Scale Constrained Optimization
, 2002
"... Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives ..."
Cited by 579 (23 self)