Results 1–3 of 3
Verifying nonlinear real formulas via sums of squares
 Theorem Proving in Higher Order Logics (TPHOLs 2007), volume 4732 of Lecture Notes in Computer Science, 2007
Cited by 19 (2 self)
Abstract. Techniques based on sums of squares appear promising as a general approach to the universal theory of reals with addition and multiplication, i.e. verifying Boolean combinations of equations and inequalities. A particularly attractive feature is that suitable ‘sum of squares’ certificates can be found by sophisticated numerical methods such as semidefinite programming, yet the actual verification of the resulting proof is straightforward even in a highly foundational theorem prover. We will describe our experience with an implementation in HOL Light, noting some successes as well as difficulties. We also describe a new approach to the univariate case that can handle some otherwise difficult examples.

1 Verifying nonlinear formulas over the reals

Over the real numbers, there are algorithms that can in principle perform quantifier elimination from arbitrary first-order formulas built up using addition, multiplication and the usual equality and inequality predicates. A classic example of such a quantifier elimination equivalence is the criterion for a quadratic equation to have a real root:

∀a b c. (∃x. ax² + bx + c = 0) ⇔ (a = 0 ∧ (b = 0 ⇒ c = 0)) ∨ (a ≠ 0 ∧ b² ≥ 4ac)
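The quantifier-elimination equivalence quoted in the abstract can be sanity-checked numerically. The sketch below is my own illustration (it uses NumPy's numerical root finder, not the paper's HOL Light machinery or sum-of-squares certificates): it compares the existential left-hand side against the quantifier-free right-hand side on random integer coefficients.

```python
import numpy as np

def has_real_root(a, b, c):
    """Left-hand side: does a*x^2 + b*x + c = 0 have a real solution x?"""
    if a == 0 and b == 0:
        return c == 0           # constant equation: solvable iff c = 0
    coeffs = [a, b, c] if a != 0 else [b, c]
    # Tolerance absorbs the tiny imaginary parts np.roots can return
    # for (near-)repeated roots.
    return any(abs(r.imag) < 1e-6 for r in np.roots(coeffs))

def criterion(a, b, c):
    """Right-hand side: the quantifier-free equivalent from the abstract."""
    return (a == 0 and (b != 0 or c == 0)) or (a != 0 and b * b >= 4 * a * c)

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c = rng.integers(-5, 6, size=3)
    assert has_real_root(a, b, c) == criterion(a, b, c)
print("equivalence holds on all samples")
```

This is of course only random testing over a finite sample; the point of the paper is that the equivalence (and similar nonlinear facts) can be *proved*, with the semidefinite-programming step producing a certificate that a foundational prover re-checks exactly.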
Control of Systems with Repeated Scalar Nonlinearities
 IEEE Conf. Decision and Control, 1998
Cited by 4 (2 self)
A class of discrete-time nonlinear systems is studied which is described by a standard linear state equation except that each state component is subject to an identical odd 1-Lipschitz nonlinearity. This may represent a class of recurrent neural networks. The emphasis is on the development of an approach that makes the best use of the fact that the nonlinearity on each state component is the same. It is shown that the origin of such a system is globally stable, and an upper bound on the ℓ2-to-ℓ2 induced gain can be deduced, if a quadratic Lyapunov/storage function exists whose Hessian matrix is positive definite diagonally dominant. This reduces to finding such a matrix to satisfy appropriate Linear Matrix Inequalities, which is shown to be computationally tractable. In synthesis, it is assumed that the controller is also a nonlinear system of this form and has the same nonlinearity as the plant does. With appropriate assumptions on the stabilizability and detectability of the plant, all suc...
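The certificate the abstract describes — a positive definite diagonally dominant matrix P whose quadratic form V(x) = xᵀPx decreases along trajectories, i.e. AᵀPA − P ≺ 0 for the linear part — is easy to check numerically once a candidate P is in hand. The sketch below is an illustration only, with a made-up 2×2 system matrix and hand-picked P; it verifies the conditions rather than performing the paper's LMI synthesis.

```python
import numpy as np

def is_pdd(P, tol=1e-9):
    """Positive definite diagonally dominant: symmetric, every eigenvalue
    positive, and each diagonal entry dominates the absolute off-diagonal
    row sum (the structural condition the abstract imposes on the Hessian)."""
    sym = np.allclose(P, P.T)
    diag = np.diag(P)
    dom = bool(np.all(diag >= np.sum(np.abs(P), axis=1) - np.abs(diag) - tol))
    pd = bool(np.all(np.linalg.eigvalsh(P) > tol))
    return sym and dom and pd

def lyapunov_decreases(A, P, tol=1e-9):
    """Discrete-time Lyapunov condition A^T P A - P < 0 (negative definite)."""
    M = A.T @ P @ A - P
    return bool(np.all(np.linalg.eigvalsh(M) < -tol))

A = np.array([[0.5, 0.3], [-0.2, 0.4]])   # hypothetical stable system matrix
P = np.array([[2.0, 0.5], [0.5, 2.0]])    # candidate PDD Lyapunov matrix
print(is_pdd(P), lyapunov_decreases(A, P))   # → True True
```

In the paper the search for such a P is itself posed as a Linear Matrix Inequality, so it can be delegated to a semidefinite solver; the check above is only the verification half of that loop.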
Distributed Estimation via Dual Decomposition
Cited by 3 (0 self)
Abstract — The focus of this paper is to develop a framework for distributed estimation via convex optimization. We deal with a network of complex sensor subsystems with local estimation and signal processing. More specifically, the sensor subsystems locally solve a maximum likelihood (or maximum a posteriori probability) estimation problem by maximizing a (strictly) concave log-likelihood function subject to convex constraints. These local implementations are not revealed outside the subsystem. The subsystems interact with one another via convex coupling constraints. We discuss a distributed estimation scheme to fuse the local subsystem estimates into a globally optimal estimate that satisfies the coupling constraints. The approach uses dual decomposition techniques in combination with the subgradient method to develop a simple distributed estimation algorithm. Many existing methods of data fusion are suboptimal, i.e., they do not maximize the log-likelihood exactly but rather ‘fuse’ partial results from many processors. For the linear Gaussian formulation, least mean square (LMS) consensus provides the optimal (maximum likelihood) solution. The main contribution of this work is to provide a new approach for data fusion which is based on distributed convex optimization. It applies to a class of problems, described by concave log-likelihood functions, which is much broader than the LMS consensus setup.
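The dual decomposition plus subgradient scheme can be sketched on a toy two-sensor Gaussian example (my own illustration, not the paper's formulation): each subsystem maximizes its concave local log-likelihood −(xᵢ − yᵢ)², the consensus constraint x₁ = x₂ couples them, and a subgradient update on the dual variable drives the coupling residual to zero without either subsystem revealing its objective.

```python
# Hypothetical local measurements of the same quantity at two sensors.
y1, y2 = 3.0, 7.0
lam, step = 0.0, 0.5   # dual variable for x1 = x2, and subgradient step size
for _ in range(200):
    # Each subsystem maximizes its own Lagrangian term in closed form:
    x1 = y1 + lam / 2.0   # argmax over x1 of -(x1 - y1)^2 + lam * x1
    x2 = y2 - lam / 2.0   # argmax over x2 of -(x2 - y2)^2 - lam * x2
    # Coordinator: subgradient step on the coupling residual x1 - x2.
    lam -= step * (x1 - x2)
print(round(x1, 4), round(x2, 4))   # → 5.0 5.0, the ML fusion (y1 + y2) / 2
```

The local solves stay private (here they happen to be closed-form; in general each would be a small convex program), and only the estimates and the dual variable are exchanged — the structural point the abstract makes about fusing local maximum-likelihood estimates into a globally optimal one.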