Results 1–10 of 20
Uncertain convex programs: Randomized solutions and confidence levels
Math. Program., Ser. A, 2004
Cited by 59 (6 self)
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm in which one seeks a solution that simultaneously satisfies all possible constraint instances. In chance-constrained optimization, a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative ‘randomized’ or ‘scenario’ approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem obtained by taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is an efficient and explicit bound on the measure (probability or volume) of the original constraints that may be violated by the randomized solution. This volume rapidly decreases to zero as N increases.
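The scenario approach described in this abstract can be sketched in a few lines: replace the uncertain constraint by N randomly sampled instances and solve the resulting finite convex program. The problem data below (a linear objective and a randomly perturbed linear constraint) are purely illustrative, not taken from the paper.

```python
# Minimal sketch of the scenario approach: the uncertain constraint
# a(delta)^T x <= 1 is replaced by N sampled instances, and the resulting
# finite linear program is solved. All problem data are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def scenario_lp(n_samples):
    c = np.array([-1.0, -1.0])                      # maximize x1 + x2
    deltas = rng.normal(0.0, 0.1, size=(n_samples, 2))
    A = 1.0 + deltas                                # a(delta) = (1,1) + delta
    b = np.ones(n_samples)                          # a(delta)^T x <= 1, sampled
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    return res.x

def empirical_violation(x, n_test=20000):
    """Fraction of fresh constraint instances violated by x."""
    test_deltas = rng.normal(0.0, 0.1, size=(n_test, 2))
    return float(np.mean((1.0 + test_deltas) @ x > 1.0 + 1e-9))

x_few = scenario_lp(10)
x_many = scenario_lp(1000)
viol_few = empirical_violation(x_few)
viol_many = empirical_violation(x_many)
```

As the abstract states, the measure of violated constraints shrinks rapidly as N grows: here the solution from 1000 samples violates only a tiny fraction of fresh instances, while the 10-sample solution violates far more.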
Selected topics in robust convex optimization
Math. Prog. B (this issue), 2007
Cited by 14 (2 self)
Robust Optimization is a rapidly developing methodology for handling optimization problems affected by non-stochastic “uncertain-but-bounded” data perturbations. In this paper, we overview several selected topics in this popular area, specifically: (1) recent extensions of the basic concept of the robust counterpart of an optimization problem with uncertain data, (2) tractability of robust counterparts, (3) links between RO and traditional chance-constrained settings of problems with stochastic data, and (4) a novel generic application of the RO methodology in robust linear control.
Keywords: optimization under uncertainty · robust optimization · convex programming · chance constraints · robust linear control
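The "robust counterpart" concept surveyed here admits a very small worked example: for a linear constraint whose coefficients lie in a box, the constraint must hold for the worst coefficient realization, which (for nonnegative variables) is again a plain LP. The data below are illustrative, not from the paper.

```python
# Toy robust counterpart: if a_i lies in [ahat_i - d_i, ahat_i + d_i] and
# x >= 0, then a^T x <= b for ALL realizations iff (ahat + d)^T x <= b.
import numpy as np
from scipy.optimize import linprog

ahat = np.array([1.0, 2.0])    # nominal coefficients (illustrative)
d = np.array([0.2, 0.5])       # interval half-widths
b = 10.0
c = np.array([-1.0, -1.0])     # maximize x1 + x2

# Nominal LP: ahat^T x <= b
nom = linprog(c, A_ub=[ahat], b_ub=[b], bounds=[(0, None)] * 2)
# Robust counterpart: worst case over the box is a = ahat + d (since x >= 0)
rob = linprog(c, A_ub=[ahat + d], b_ub=[b], bounds=[(0, None)] * 2)

worst_a = ahat + d
# The robust solution stays feasible for every a in the box, at the price
# of a worse nominal objective; the nominal solution does not.
```

The nominal optimum (10, 0) violates the constraint under the worst coefficients (1.2 · 10 = 12 > 10), while the robust optimum remains feasible for every realization.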
Ellipsoidal bounds for uncertain linear equations and dynamical systems
2003
Cited by 9 (0 self)
In this paper, we discuss semidefinite relaxation techniques for computing minimal-size ellipsoids that bound the solution set of a system of uncertain linear equations. The proposed technique is based on the combination of a quadratic embedding of the uncertainty and the S-procedure. This formulation leads to convex optimization problems that can be solved in essentially O(n^3) operations (n being the size of the unknown vector) by means of suitable interior-point barrier methods, as well as to closed-form results in some particular cases. We further show that the uncertain linear equations paradigm can be directly applied to various state-bounding problems for dynamical systems subject to set-valued noise and model uncertainty.
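A cheap sampling-based sanity check of the idea (not the paper's SDP relaxation) is to sample solutions of the uncertain linear system over the uncertainty range and enclose them in a minimum-volume ellipsoid, here via Khachiyan's algorithm. All matrices and uncertainty structures below are illustrative assumptions.

```python
# Monte Carlo illustration of ellipsoidal bounding of the solution set of
# (A0 + d1*B1 + d2*B2) x = b, d1, d2 in [-1, 1]. Khachiyan's algorithm
# computes the minimum-volume enclosing ellipsoid of the sampled solutions.
import numpy as np

def min_volume_ellipsoid(P, tol=1e-3):
    """Smallest ellipsoid {x : (x - c)^T A (x - c) <= 1} enclosing rows of P."""
    n, d = P.shape
    Q = np.vstack([P.T, np.ones(n)])          # lifted points, (d+1) x n
    u = np.full(n, 1.0 / n)                   # weights on the points
    err = tol + 1.0
    while err > tol:
        X = Q @ (u[:, None] * Q.T)
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u
    A = np.linalg.inv(P.T @ (u[:, None] * P) - np.outer(c, c)) / d
    return A, c

rng = np.random.default_rng(0)
A0 = np.array([[2.0, 0.0], [0.0, 3.0]])
B1 = np.array([[0.3, 0.1], [0.0, 0.0]])
B2 = np.array([[0.0, 0.0], [0.1, 0.4]])
b = np.array([1.0, 1.0])
deltas = rng.uniform(-1.0, 1.0, size=(200, 2))
sols = np.array([np.linalg.solve(A0 + d1 * B1 + d2 * B2, b)
                 for d1, d2 in deltas])
A, c = min_volume_ellipsoid(sols)
vals = np.einsum('ij,jk,ik->i', sols - c, A, sols - c)  # all approximately <= 1
```

Every sampled solution ends up inside (or numerically on) the ellipsoid, mirroring the bounding guarantee the paper obtains deterministically via the S-procedure.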
Sparsity Penalties in Dynamical System Estimation
Cited by 6 (1 self)
In this work we address the problem of state estimation in dynamical systems using recent developments in compressive sensing and sparse approximation. We formulate the traditional Kalman filter as a one-step update optimization procedure, which leads us to a more unified framework useful for incorporating sparsity constraints. We introduce three combinations of two sparsity conditions (sparsity in the state and sparsity in the innovations) and write recursive optimization programs to estimate the state for each model. This paper is meant as an overview of different methods for incorporating sparsity into the dynamic model, a presentation of algorithms that unify the support and coefficient estimation, and a demonstration that these suboptimal schemes can actually show some performance improvements (either in estimation error or convergence time) over standard optimal methods that use an impoverished model.
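The "one-step update as optimization" viewpoint can be sketched concretely: the Kalman measurement update is a weighted least-squares problem, to which an l1 penalty on the state is added and solved by ISTA (proximal gradient). This is a generic sketch of that formulation, not the paper's specific algorithms; all data are illustrative.

```python
# Sparse one-step Kalman-style update: minimize over x
#   0.5 (y - Hx)^T R_inv (y - Hx) + 0.5 (x - x_pred)^T P_inv (x - x_pred)
#   + lam * ||x||_1
# solved with ISTA (gradient step + soft-thresholding).
import numpy as np

def soft(z, t):
    """Soft-thresholding operator (prox of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_kf_update(x_pred, P_inv, y, H, R_inv, lam, n_iter=3000):
    L = np.linalg.eigvalsh(H.T @ R_inv @ H + P_inv).max()  # Lipschitz constant
    x = x_pred.copy()
    for _ in range(n_iter):
        grad = H.T @ R_inv @ (H @ x - y) + P_inv @ (x - x_pred)
        x = soft(x - grad / L, lam / L)
    return x

# Illustrative data: a sparse 20-dim state seen through 15 random projections.
rng = np.random.default_rng(1)
n, m = 20, 15
x_true = np.zeros(n)
x_true[[2, 7, 15]] = [3.0, -2.0, 1.5]
H = rng.normal(size=(m, n))
y = H @ x_true + 0.01 * rng.normal(size=m)
x_hat = sparse_kf_update(np.zeros(n), 0.1 * np.eye(n), y, H, np.eye(m), lam=0.1)
```

Despite having fewer measurements than state dimensions, the l1 penalty recovers the few active state components while keeping the rest near zero.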
Robust Fault Detection Using Linear Interval Observers
 Proceedings of the 5th IFAC Symposium on Fault Detection, Supervision and Safety of Technical Processes, SAFEPROCESS 2003
Cited by 4 (1 self)
The problem of robustness in fault detection using observers has been treated mainly via the active approach, based on decoupling the effects of the uncertainty from the effects of the faults on the residual. The passive approach, on the other hand, is based on propagating the effect of the uncertainty to the residuals and then using adaptive thresholds. In this paper, the passive approach based on adaptive thresholds, produced using a model with uncertain parameters bounded in intervals (also known as an "interval model"), is presented in the context of linear observer methodology, deriving its corresponding interval version. Finally, an example based on an industrial actuator used as an FDI benchmark in the European project DAMADICS is used for testing the proposed approach. Copyright © 2003 IFAC
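The passive approach can be illustrated with a deliberately minimal scalar interval model: the observer propagates lower and upper state bounds, and a fault is flagged when the measurement leaves the predicted envelope (the adaptive threshold). The system, gains, and fault scenario below are illustrative assumptions, not the DAMADICS benchmark.

```python
# Scalar interval-model fault detection: x_{k+1} = a x_k + b u_k with
# a in [a_lo, a_hi]; flag a fault when y leaves the [x_lo, x_hi] envelope.
def interval_fault_detect(u_seq, y_seq, a_lo=0.8, a_hi=0.9, b=1.0, x0=0.0):
    x_lo = x_hi = x0
    alarms = []
    for u, y in zip(u_seq, y_seq):
        # Propagate interval bounds; min/max over endpoint products stays
        # safe regardless of the signs of the state bounds.
        cands = [a_lo * x_lo, a_lo * x_hi, a_hi * x_lo, a_hi * x_hi]
        x_lo = min(cands) + b * u
        x_hi = max(cands) + b * u
        alarms.append(not (x_lo - 1e-9 <= y <= x_hi + 1e-9))
    return alarms

u = [1.0] * 30
# Healthy plant (a = 0.85, inside the interval) stays in the envelope.
x, ys = 0.0, []
for uk in u:
    x = 0.85 * x + uk
    ys.append(x)
alarms_ok = interval_fault_detect(u, ys)
# Actuator fault: the input gain drops to 0.4 at k = 15.
x, ys_f = 0.0, []
for k, uk in enumerate(u):
    x = 0.85 * x + (0.4 if k >= 15 else 1.0) * uk
    ys_f.append(x)
alarms_fault = interval_fault_detect(u, ys_f)
```

The healthy run never triggers an alarm, while the faulty run leaves the adaptive envelope a few steps after the fault occurs.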
Minimum Variance Estimation with Uncertain Statistical Model
2001
Cited by 1 (0 self)
In this paper, we consider the problem of parameter estimation in a linear stochastic model, where the observations are affected by noise with uncertain variance. In particular, we discuss a linear estimator which minimizes a worst-case measure of the a posteriori covariance of the parameters. The estimate is efficiently computed by means of convex programming, and may be updated with upcoming observations in a recursive setting.
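The worst-case (minimax) idea is easy to see in a scalar sketch, which is an illustrative reduction rather than the paper's general setting: estimate theta from y = theta + v, with prior variance p and noise variance known only to lie in an interval. For a linear estimate k·y, the MSE (k−1)²p + k²σ² is increasing in σ², so the worst case sits at the upper endpoint and the minimax gain is the Bayes gain for the largest noise variance.

```python
# Scalar minimax estimation with uncertain noise variance: grid-search the
# gain k minimizing the worst-case MSE over sigma2 in [s_lo, s_hi], and
# compare with the closed form k = p / (p + s_hi).
import numpy as np

def minimax_gain(p, s_lo, s_hi, grid=10001):
    ks = np.linspace(0.0, 1.0, grid)
    worst = np.maximum((ks - 1) ** 2 * p + ks ** 2 * s_lo,
                       (ks - 1) ** 2 * p + ks ** 2 * s_hi)
    return ks[np.argmin(worst)]

p, s_lo, s_hi = 2.0, 0.5, 1.5
k_star = minimax_gain(p, s_lo, s_hi)
# Closed form for this scalar case: the Bayes gain under the worst variance.
k_closed = p / (p + s_hi)
```
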
Persistent Homology for Learning Densities with Bounded Support
Cited by 1 (1 self)
We present a novel method for learning densities with bounded support which enables us to incorporate ‘hard’ topological constraints. In particular, we show how emerging techniques from computational algebraic topology and the notion of persistent homology can be combined with kernel-based methods from machine learning for the purpose of density estimation. The proposed formalism facilitates learning of models with bounded support in a principled way, and, by incorporating persistent homology techniques in our approach, we are able to encode algebraic-topological constraints which are not addressed in current state-of-the-art probabilistic models. We study the behaviour of our method on two synthetic examples for various sample sizes and exemplify the benefits of the proposed approach on a real-world dataset by learning a motion model for a race car. We show how to learn a model which respects the underlying topological structure of the racetrack, constraining the trajectories of the car.
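As a tiny self-contained taste of persistent homology (a toy illustration only, not the paper's density-learning method): 0-dimensional persistence tracks when connected components of a point cloud merge as the connection radius grows; long-lived components reveal cluster/topological structure.

```python
# 0-dimensional persistent homology via union-find over sorted pairwise
# distances: each merge of two components "kills" one at that scale.
import itertools, math

def zero_dim_persistence(points):
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    dists = sorted((math.dist(p, q), i, j)
                   for (i, p), (j, q) in itertools.combinations(enumerate(points), 2))
    deaths = []
    for d, i, j in dists:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)                # a component dies at scale d
    return deaths                           # n - 1 deaths for n points

# Two well-separated clusters: four short-lived deaths and one long-lived one,
# revealing the two-cluster structure.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
deaths = zero_dim_persistence(pts)
```
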
SetMembership Filtering for DiscreteTime Systems With Nonlinear Equality Constraints
Cited by 1 (0 self)
have close-to-zero frequency response. The model is truncated to 20 states by means of the described quasi-convex optimization technique (QCO method) and by Hankel model reduction. We implement the QCO method on a frequency grid with 84 samples, with tolerance 10^-6 in the bisection procedure. The optimization, together with calculating frequency samples, took 74 seconds, and the resulting approximation error is 2.9 × 10^-5. Hankel model reduction took around 20 minutes, yielding the error 7.98 × 10^-5. Results are shown in Fig. 1. For the given frequency interval, QCO provided a better model than Hankel reduction. However, in general we do not expect QCO approximations to be better than Hankel reduction approximations. This example shows that for large/medium-scale systems we gain substantially in time and do not really lose in approximation quality. VII. CONCLUSION In this technical note we have discussed a multi-input multi-output extension
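The bisection scheme underlying quasi-convex optimization, as mentioned in the excerpt, can be sketched generically: bisect on the achievable level gamma, each step checking feasibility of the sublevel set on a finite grid (a simple 1-D stand-in for the frequency grid above). The objective below is an illustrative assumption.

```python
# Generic bisection for quasi-convex minimization: find the smallest gamma
# such that {x : f(x) <= gamma} is nonempty on the grid, to tolerance tol.
# Assumes f >= 0 so that gamma* lies in [0, max f].
import math

def quasiconvex_min(f, grid, tol=1e-6):
    lo, hi = 0.0, max(f(x) for x in grid)
    while hi - lo > tol:
        gamma = 0.5 * (lo + hi)
        if any(f(x) <= gamma for x in grid):
            hi = gamma          # level achievable: tighten from above
        else:
            lo = gamma          # level not achievable: raise lower bound
    return hi

# Example: f(x) = sqrt(|x - 0.3|) is quasi-convex with minimum 0 at x = 0.3.
grid = [i / 1000 for i in range(1001)]
gmin = quasiconvex_min(lambda x: math.sqrt(abs(x - 0.3)), grid)
```

The bisection converges to the tolerance in logarithmically many feasibility checks, which is why grid size and tolerance (84 samples, 10^-6 above) dominate the runtime.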
Robust Error Square Constrained Filter Design for Systems With NonGaussian Noises
Cited by 1 (0 self)
In this letter, an error square constrained filtering problem is considered for systems with both non-Gaussian noises and polytopic uncertainty. A novel filter is developed to estimate the system states based on the current observation and known deterministic input signals. A free parameter is introduced in the filter to handle the uncertain input matrix in the known deterministic input term. In addition, unlike the existing variance-constrained filters, which are constructed from the previous observation, the filter is formed from the current observation. A time-varying linear matrix inequality (LMI) approach is used to derive an upper bound on the state estimation error square. The optimal bound is obtained by solving a convex optimization problem via a semidefinite programming (SDP) approach. Simulation results are provided to demonstrate the effectiveness of the proposed method.
Index Terms: current observation, error square constrained filtering, known deterministic input, non-Gaussian noise, polytopic uncertainty.
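The "upper bound over a polytopic family" idea can be shown in a deliberately minimal scalar stand-in for the LMI/SDP machinery: for an error recursion e_{k+1} = (a − g)e_k + w with unknown a in an interval (a 1-D polytope), the variance update (a − g)²p + q is convex in a, so its maximum over the polytope is attained at a vertex, and iterating the vertex-wise maximum gives a guaranteed bound. All numbers below are illustrative.

```python
# Scalar vertex-wise upper bound on the error variance over a in [a1, a2]
# versus the exact recursion for one particular a in the interval.
def robust_variance_bound(a1, a2, g, q, p0, steps):
    p = p0
    for _ in range(steps):
        p = max((a1 - g) ** 2, (a2 - g) ** 2) * p + q
    return p

def true_variance(a, g, q, p0, steps):
    p = p0
    for _ in range(steps):
        p = (a - g) ** 2 * p + q
    return p

bound = robust_variance_bound(0.7, 0.9, g=0.5, q=0.1, p0=1.0, steps=50)
exact = true_variance(0.8, g=0.5, q=0.1, p0=1.0, steps=50)  # any a in [0.7, 0.9]
```

After many steps the bound settles at q / (1 − max vertex contraction) = 0.1/0.84, which dominates the true steady-state variance for every admissible a.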
Multiple Instance Filtering
Cited by 1 (0 self)
We propose a robust filtering approach based on semi-supervised and multiple instance learning (MIL). We assume that the posterior density would be unimodal if not for the effect of outliers that we do not wish to explicitly model. Therefore, we seek a point estimate at the outset, rather than a generic approximation of the entire posterior. Our approach can be thought of as a combination of standard finite-dimensional filtering (Extended Kalman Filter, or Unscented Filter) with multiple instance learning, whereby the initial condition comes with a putative set of inlier measurements. We show how both the state (regression) and the inlier set (classification) can be estimated iteratively and causally by processing only the current measurement. We illustrate our approach on visual tracking problems in which the object of interest (target) moves and evolves as a result of occlusions and deformations, and partial knowledge of the target is given in the form of a bounding box (training set).
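The alternation between state estimation (regression) and inlier classification can be caricatured in a minimal static example, which is only a toy stand-in for the filtering scheme above: classify each measurement as an inlier if its residual is below a gate, then re-estimate the state by least squares over the current inliers, and repeat.

```python
# Alternating state / inlier-set estimation on a batch of scalar measurements
# contaminated by gross outliers. Gate and data are illustrative.
def robust_estimate(meas, gate=1.0, n_iter=10):
    x = sorted(meas)[len(meas) // 2]          # median as a robust initial state
    inliers = list(meas)
    for _ in range(n_iter):
        inliers = [m for m in meas if abs(m - x) <= gate]   # classification
        if not inliers:
            break
        x = sum(inliers) / len(inliers)       # least-squares (mean) regression
    return x, inliers

meas = [5.1, 4.9, 5.0, 5.2, 4.8, 20.0, -7.0]  # two gross outliers
x_hat, inliers = robust_estimate(meas)
```

The iteration rejects both outliers and converges to the mean of the clean measurements, illustrating how the point estimate and the inlier set support each other.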