Results 1–10 of 31
SOSTOOLS: Sum of squares optimization toolbox for MATLAB, 2004.
"... Version 2.00 ..."
Complete search in continuous global optimization and constraint satisfaction. Acta Numerica 13, 2004.
A tutorial on sum of squares techniques for system analysis. In Proceedings of the American Control Conference, 2005.
Cited by 14 (1 self).
Abstract — This tutorial is about new system analysis techniques that were developed in the past few years based on the sum of squares decomposition. We will present stability and robust stability analysis tools for different classes of systems: systems described by nonlinear ordinary differential equations or differential-algebraic equations, hybrid systems with nonlinear subsystems and/or nonlinear switching surfaces, and time-delay systems described by nonlinear functional differential equations. We will also discuss how different analysis questions, such as model validation and safety verification, can be answered for uncertain nonlinear and hybrid systems.
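The core object behind these techniques is the sum of squares (SOS) certificate: a polynomial p is SOS exactly when p(x) = z(x)^T Q z(x) for a monomial vector z and a positive semidefinite Gram matrix Q, a condition SOSTOOLS-style toolboxes search for via semidefinite programming. As a minimal illustration (not from the tutorial, and with the polynomial and certificate hand-picked rather than solver-found), the sketch below verifies such a decomposition directly:

```python
# Verify the SOS identity p(x) = x^4 + 2x^2 + 1 = z(x)^T Q z(x),
# with z(x) = [1, x, x^2] and Q = v v^T for v = (1, 0, 1).
# Q is PSD by construction (rank-1), so p(x) = (v . z(x))^2 = (1 + x^2)^2.

v = (1.0, 0.0, 1.0)
Q = [[vi * vj for vj in v] for vi in v]   # rank-1 PSD Gram matrix

def p(x):
    return x**4 + 2 * x**2 + 1

def gram_form(x):
    z = (1.0, x, x**2)
    return sum(Q[i][j] * z[i] * z[j] for i in range(3) for j in range(3))

# Spot-check the identity at a few sample points.
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(p(x) - gram_form(x)) < 1e-9
print("p certified SOS: p(x) = (1 + x^2)^2 >= 0 for all x")
```

In practice the toolboxes treat the Gram matrix entries as decision variables and let an SDP solver find a PSD Q, rather than exhibiting one by hand.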
Approximate dynamic programming via iterated Bellman inequalities, 2010.
Cited by 11 (4 self).
In this paper we introduce new methods for finding functions that lower bound the value function of a stochastic control problem, using an iterated form of the Bellman inequality. Our method is based on solving linear or semidefinite programs, and produces both a bound on the optimal objective and a suboptimal policy that appears to work very well. These results extend and improve bounds obtained by the authors in a previous paper using a single Bellman inequality condition. We describe the methods in a general setting, and show how they can be applied in specific cases including the finite-state case, constrained linear quadratic control, switched affine control, and multiperiod portfolio investment.
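The property underlying the approach is that, in a minimization setting, any function V satisfying the Bellman inequality V <= T V (with T the Bellman operator) lower-bounds the optimal value function V*. The paper searches for the best such V via LP/SDP; the sketch below only verifies the lower-bounding property on a hand-built two-state example (the MDP and the candidate V_hat are illustrative assumptions, not from the paper):

```python
# Bellman-inequality lower bound on a 2-state, 2-action deterministic MDP:
# if V_hat <= T V_hat pointwise, then V_hat <= V* (minimization setting).

gamma = 0.9
cost = [[1.0, 2.0], [0.0, 0.5]]   # cost[s][a]
nxt  = [[0, 1], [1, 0]]           # nxt[s][a]: deterministic successor state

def bellman(V):
    return [min(cost[s][a] + gamma * V[nxt[s][a]] for a in range(2))
            for s in range(2)]

# Value iteration for the true optimal value function V*.
V = [0.0, 0.0]
for _ in range(500):
    V = bellman(V)

V_hat = [1.5, 0.0]                # candidate satisfying the Bellman inequality
TV_hat = bellman(V_hat)
assert all(V_hat[s] <= TV_hat[s] + 1e-12 for s in range(2))  # V_hat <= T V_hat
assert all(V_hat[s] <= V[s] + 1e-9 for s in range(2))        # hence V_hat <= V*
print("V* ~=", [round(v, 4) for v in V], " certified lower bound:", V_hat)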
Providing a basin of attraction to a target region by computation of Lyapunov-like functions. In IEEE Int. Conf. on Computational Cybernetics, 2006.
Cited by 10 (5 self).
Abstract — In this paper, we present a method for computing a basin of attraction to a target region for nonlinear ordinary differential equations. This basin of attraction is ensured by a Lyapunov-like polynomial function that we compute using an interval-based branch-and-relax algorithm. This algorithm relaxes the necessary conditions on the coefficients of the Lyapunov-like function to a system of linear interval inequalities that can then be solved exactly, and iteratively reduces the relaxation error by recursively decomposing the state space into hyperrectangles. Tests on an implementation are promising.
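The branch-and-relax idea can be seen in miniature on a one-dimensional certification task: bound the Lyapunov derivative over a box with crude interval arithmetic, and bisect any box where the bound is too coarse to certify negativity. The system, function, and box below are hand-picked for illustration and are not the paper's algorithm, which additionally searches for the Lyapunov-like coefficients:

```python
# Certify dV/dt < 0 on [0.1, 1] for V(x) = x^2 and dynamics
# x' = f(x) = -x + 0.1*x^3, so dV/dt = 2*x*f(x) = -2*x^2 + 0.2*x^4.
# On a box [l, u] with 0 < l <= u, a crude interval upper bound is
# -2*l**2 + 0.2*u**4; bisect whenever that bound fails to be negative.

def upper_bound(l, u):
    return -2.0 * l**2 + 0.2 * u**4

def certify_negative(l, u, depth=0, max_depth=40):
    if upper_bound(l, u) < 0.0:
        return True                  # negativity certified on this whole box
    if depth >= max_depth:
        return False                 # relaxation too coarse; give up
    m = 0.5 * (l + u)                # branch: bisect the box
    return (certify_negative(l, m, depth + 1, max_depth) and
            certify_negative(m, u, depth + 1, max_depth))

print("dV/dt < 0 certified on [0.1, 1]:", certify_negative(0.1, 1.0))
```

The recursion terminates because the interval bound becomes exact as boxes shrink, mirroring how the paper's recursive hyperrectangle decomposition drives the relaxation error to zero.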
On parameter-dependent Lyapunov functions for robust stability of linear systems.
Cited by 7 (0 self).
Abstract — For a linear system affected by real parametric uncertainty, this paper focuses on robust stability analysis via quadratic-in-the-state Lyapunov functions polynomially dependent on the parameters. The contribution is twofold. First, if n denotes the system order and m the number of parameters, it is shown that it is enough to seek a parameter-dependent Lyapunov function of given degree 2nm in the parameters. Second, it is shown that robust stability can be assessed by globally minimizing a multivariate scalar polynomial related to this Lyapunov matrix. A hierarchy of LMI relaxations is proposed to solve this problem numerically, yielding simultaneously upper and lower bounds on the global minimum with a guarantee of asymptotic convergence.
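The object being certified here is the parameterized Lyapunov inequality A(t)^T P(t) + P(t) A(t) < 0. The sketch below checks the degree-zero special case (a single constant P) on a grid of parameter values; gridding is only a sanity check, whereas the paper's LMI hierarchy gives actual certificates. The matrices A0, A1, and P are hand-picked assumptions (P solves A0^T P + P A0 = -I):

```python
# Grid-based sanity check of A(t)^T P + P A(t) < 0 for t in [0, 1],
# with A(t) = A0 + t*A1 and a single quadratic Lyapunov function x^T P x.

A0 = [[0.0, 1.0], [-2.0, -3.0]]          # stable nominal system
A1 = [[0.0, 0.0], [0.0, -1.0]]           # parametric perturbation
P  = [[1.25, 0.25], [0.25, 0.25]]        # solves A0^T P + P A0 = -I

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def is_neg_def(M):
    # symmetric 2x2 M < 0 iff -M has positive leading principal minors
    N = [[-M[i][j] for j in range(2)] for i in range(2)]
    return N[0][0] > 0 and N[0][0] * N[1][1] - N[0][1] * N[1][0] > 0

ok = True
for k in range(101):
    t = k / 100.0
    A = mat_add(A0, [[t * A1[i][j] for j in range(2)] for i in range(2)])
    M = mat_add(mat_mul(transpose(A), P), mat_mul(P, A))
    ok = ok and is_neg_def(M)
print("A(t)^T P + P A(t) < 0 at all sampled t in [0, 1]:", ok)
```

When no constant P works over the whole parameter range, the paper's result says it suffices to search over P(t) of degree 2nm in the parameters, which is what the LMI relaxations do.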
Analysis and design of optimization algorithms via integral quadratic constraints. arXiv preprint, 2014.
Cited by 4 (2 self).
This manuscript develops a new framework to analyze and design iterative optimization algorithms, built on the notion of Integral Quadratic Constraints (IQCs) from robust control theory. IQCs provide sufficient conditions for the stability of complicated interconnected systems, and these conditions can be checked by semidefinite programming. We discuss how to adapt IQC theory to study optimization algorithms, proving new inequalities about convex functions and providing a version of IQC theory adapted for use by optimization researchers. Using these inequalities, we derive numerical upper bounds on convergence rates for the Gradient method, the Heavy-ball method, Nesterov's accelerated method, and related variants by solving small, simple semidefinite programs. We also briefly show how these techniques can be used to search for optimization algorithms with desired performance characteristics, establishing a new methodology for algorithm design.
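For intuition about the rates these SDPs produce, the quadratic special case has a closed form: for objectives with curvature in [m, L], gradient descent x+ = x - a*grad f(x) contracts with factor max(|1 - a*m|, |1 - a*L|), minimized at a = 2/(m+L) where it equals (k-1)/(k+1) for condition number k = L/m. This is an elementary check, not the IQC machinery itself, which handles general smooth strongly convex functions:

```python
# Worst-case contraction factor of gradient descent on quadratics with
# curvature in [m, L]: rho(a) = max(|1 - a*m|, |1 - a*L|).

m, L = 1.0, 10.0

def rho(a):
    return max(abs(1 - a * m), abs(1 - a * L))

best_a = 2.0 / (m + L)                  # optimal step size for quadratics
kappa = L / m
print("rate at a = 2/(m+L):", rho(best_a))
print("theory (k-1)/(k+1):", (kappa - 1) / (kappa + 1))
```

The IQC approach recovers numbers like this by solving a small semidefinite feasibility problem per candidate rate, and extends to momentum methods where no such simple closed form is available.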
Positivstellensatz Certificates for Non-Feasibility of Connectivity Graphs in Multi-Agent Coordination. 16th IFAC World Congress, 2005.
Cited by 4 (3 self).
In this paper, we discuss how to obtain certificates for the non-feasibility of connectivity graphs arising from multi-agent formations. We summarize some previous work on connectivity graphs and their feasibility. Next, we introduce the Positivstellensatz and show how it can be used to better understand the space of all connectivity graphs for a fixed number of vertices, which had previously been understood only as a subspace of the space of all graphs. We study one particular class of graphs and use our methodology to prove further results on the feasibility of connectivity graphs.
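A Positivstellensatz certificate of infeasibility is an algebraic identity: for a set {x : p(x) >= 0}, finding sums of squares s0, s1 with s0 + s1*p = -1 proves the set is empty, since at any feasible point the left side would be nonnegative. The toy example below (a one-variable constraint, not the paper's graph constraints) verifies such an identity by coefficient arithmetic:

```python
# Toy Positivstellensatz infeasibility certificate for {x : p(x) >= 0}
# with p(x) = -1 - x^2: the identity s0 + s1*p = -1 with s0 = x^2 and
# s1 = 1 (both sums of squares) proves the set is empty.
# Polynomials are coefficient lists [c0, c1, c2, ...].

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

p  = [-1.0, 0.0, -1.0]   # -1 - x^2
s0 = [0.0, 0.0, 1.0]     # x^2  (SOS)
s1 = [1.0]               # 1    (SOS)

cert = poly_add(s0, poly_mul(s1, p))
print("s0 + s1*p =", cert)   # constant -1: {p >= 0} is infeasible
```

For connectivity graphs, the constraints p are the polynomial inequality descriptions of agent positions, and the SOS multipliers are found by semidefinite programming rather than by inspection.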
An alternative approach for nonlinear optimal control problems based on the method of moments. Computational Optimization and Applications.
Cited by 4 (1 self).
We propose an alternative method for effectively computing the solution of nonlinear, fixed-terminal-time optimal control problems given in Lagrange, Bolza, or Mayer form. This method works well when the nonlinearities in the control variable can be expressed as polynomials. The essence of this proposal is the transformation of a nonlinear, nonconvex optimal control problem into an equivalent optimal control problem with linear and convex structure. The method is based on global optimization of polynomials by the method of moments. With this method we can determine either the existence or the absence of minimizers. In addition, we can calculate generalized solutions when the original problem lacks minimizers. We also present numerical schemes to solve several examples arising in science and technology.
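The convexification at the heart of the method of moments is that min_x p(x) equals the minimum of E_mu[p] over probability measures mu, and E_mu[p] is *linear* in the moments of mu. The tiny example below (chosen for illustration, not from the paper) also shows a "generalized" minimizer: a mixture of Diracs that attains the optimum without being a point:

```python
# Method-of-moments view of min_x p(x) for p(x) = x^4 - 2*x^2:
# E_mu[p] = m4 - 2*m2 is linear in the moments (m2, m4) of mu.
# The minimum -1 is attained by the Dirac at x = 1, and equally by the
# mixture (delta_{-1} + delta_{+1})/2, whose moments coincide --
# a generalized (measure-valued) minimizer.

def p(x):
    return x**4 - 2 * x**2

def expected_p(m2, m4):
    return m4 - 2 * m2               # E[p] as a linear function of moments

dirac_plus1 = (1.0, 1.0)             # moments (m2, m4) of delta at x = 1
mixture     = (1.0, 1.0)             # moments of (delta_{-1} + delta_{+1})/2

print("pointwise minimum p(1) =", p(1.0))
print("E[p] Dirac:", expected_p(*dirac_plus1),
      " mixture:", expected_p(*mixture))
```

In the control setting, the same move replaces a nonconvex polynomial dependence on the control with a linear dependence on control moments, which is what makes the relaxed problem convex and makes measure-valued (relaxed) controls natural solutions when classical minimizers do not exist.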