Results 1–10 of 185
A generalized Gaussian image model for edge-preserving MAP estimation
 IEEE Trans. on Image Processing
, 1993
Abstract

Cited by 300 (36 self)
Abstract—We present a Markov random field model which allows realistic edge modeling while providing stable maximum a posteriori (MAP) solutions. The proposed model, which we refer to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distribution used in robust detection and estimation. The model satisfies several desirable analytical and computational properties for MAP estimation, including continuous dependence of the estimate on the data, invariance of the character of solutions to scaling of data, and a solution which lies at the unique global minimum of the a posteriori log-likelihood function. The GGMRF is demonstrated to be useful for image reconstruction in low-dosage transmission tomography.
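As an illustrative aside (not from the paper, whose application is tomographic reconstruction), the GGMRF MAP estimate can be sketched in one dimension: minimize a quadratic data term plus the generalized Gaussian prior on neighbor differences, which is convex for 1 ≤ p ≤ 2, so plain gradient descent suffices. The shape parameter, prior weight, step size, and test signal below are all illustrative choices.

```python
import numpy as np

def ggmrf_objective(x, y, p=1.2, lam=0.5):
    """MAP objective: quadratic data fit plus generalized Gaussian prior."""
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(np.diff(x)) ** p)

def ggmrf_denoise_1d(y, p=1.2, lam=0.5, step=0.01, iters=500):
    """Gradient descent on the GGMRF MAP objective (convex for 1 <= p <= 2)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        # d/dd of lam*|d|^p is lam*p*|d|^(p-1)*sign(d); zero at d = 0 for p > 1
        g = lam * p * np.abs(d) ** (p - 1) * np.sign(d)
        grad = x - y            # data-fidelity gradient
        grad[:-1] -= g          # each difference couples two neighboring samples
        grad[1:] += g
        x -= step * grad
    return x

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0], 20)            # a step edge, the case the prior targets
y = truth + 0.1 * rng.standard_normal(truth.size)
x = ggmrf_denoise_1d(y)
```

With p between 1 and 2 the prior smooths noise while penalizing the step edge less severely than a Gaussian (p = 2) prior would.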
Applications of Second-order Cone Programming
, 1998
Abstract

Cited by 218 (10 self)
In a second-order cone program (SOCP) a linear function is minimized over the intersection of an affine set and the product of second-order (quadratic) cones. SOCPs are nonlinear convex problems that include linear and (convex) quadratic programs as special cases, but are less general than semidefinite programs (SDPs). Several efficient primal-dual interior-point methods for SOCP have been developed in the last few years. After reviewing ...
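A minimal numeric sketch of the problem class: minimizing a linear function over the unit Euclidean ball, the simplest second-order cone constraint. Projected gradient descent stands in here for the primal-dual interior-point methods the abstract refers to; the step size and iteration count are arbitrary.

```python
import numpy as np

def solve_toy_socp(c, eta=0.1, steps=200):
    """Minimize c^T x subject to ||x||_2 <= 1 by projected gradient descent.
    A stand-in for a real interior-point SOCP solver, for illustration only."""
    x = np.zeros_like(c)
    for _ in range(steps):
        x = x - eta * c              # gradient step on the linear objective
        n = np.linalg.norm(x)
        if n > 1.0:
            x = x / n                # Euclidean projection back onto the ball
    return x

c = np.array([1.0, 1.0])
x_opt = solve_toy_socp(c)            # analytic optimum is x* = -c/||c||
```

The analytic solution is x* = -c/||c|| with objective value -||c||, which the iteration reaches exactly once an iterate leaves the ball and is projected back.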
Multiobjective output feedback control via LMI
 in Proc. Amer. Contr. Conf
, 1997
Abstract

Cited by 212 (8 self)
The problem of multiobjective H2/H∞ optimal controller design is reviewed. There is as yet no exact solution to this problem. We present a method based on that proposed by Scherer [14]. The problem is formulated as a convex semidefinite program (SDP) using the LMI formulation of the H2 and H∞ norms. Suboptimal solutions are computed using finite-dimensional Q-parametrization. The objective value of the suboptimal Q's converges to the true optimum as the dimension of Q is increased. State-space representations are presented which are the analog of those given by Khargonekar and Rotea [11] for the H2 case. A simple example computed using FIR (Finite Impulse Response) Q's is presented.
A characterization of convex problems in decentralized control
 IEEE Transactions on Automatic Control
Abstract

Cited by 126 (24 self)
Abstract—We consider the problem of constructing optimal decentralized controllers. We formulate this problem as one of minimizing the closed-loop norm of a feedback system subject to constraints on the controller structure. We define the notion of quadratic invariance of a constraint set with respect to a system, and show that if the constraint set has this property, then the constrained minimum-norm problem may be solved via convex programming. We also show that quadratic invariance is necessary and sufficient for the constraint set to be preserved under feedback. These results are developed in a very general framework, and are shown to hold in both continuous and discrete time, for both stable and unstable systems, and for any norm. This notion unifies many previous results identifying specific tractable decentralized control problems, and delineates the largest known class of convex problems in decentralized control. As an example, we show that optimal stabilizing controllers may be efficiently computed in the case where distributed controllers can communicate faster than their dynamics propagate. We also show that symmetric synthesis is included in this classification, and provide a test for sparsity constraints to be quadratically invariant, and thus amenable to convex synthesis. Index Terms—Convex optimization, decentralized control, delayed control, extended linear spaces, networked control.
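The sparsity test mentioned in the abstract can be sketched with boolean matrix products: a sparsity constraint with binary support pattern K is quadratically invariant with respect to a plant with support pattern G exactly when the support of KGK stays inside the support of K. The example patterns below are illustrative.

```python
import numpy as np

def is_quadratically_invariant(K_pat, G_pat):
    """Binary sparsity test: the constraint set of controllers with support
    inside K_pat is quadratically invariant w.r.t. a plant with support G_pat
    iff support(K_pat @ G_pat @ K_pat) is contained in support(K_pat)."""
    K = (np.asarray(K_pat) != 0).astype(int)
    G = (np.asarray(G_pat) != 0).astype(int)
    KGK = (K @ G @ K) > 0                    # boolean product of the patterns
    return bool(np.all(~KGK | (K > 0)))      # every nonzero of KGK allowed in K

# Lower-triangular controllers with a lower-triangular plant (information
# propagating one way along a chain) pass the test ...
L = np.tril(np.ones((3, 3)))
qi_tri = is_quadratically_invariant(L, L)
# ... while a fully decentralized (diagonal) controller with a dense plant fails.
qi_diag = is_quadratically_invariant(np.eye(3), np.ones((3, 3)))
```

When the test passes, the constrained minimum-norm synthesis problem is convex; when it fails, feedback mixes the pattern and no such guarantee holds.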
Method of centers for minimizing generalized eigenvalues
 Linear Algebra Appl
, 1993
Abstract

Cited by 78 (12 self)
We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may appear specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fractional programs. Many problems arising in control theory can be cast in this form. The problem is nondifferentiable but quasiconvex, so methods such as Kelley's cutting-plane algorithm or the ellipsoid algorithm of Shor, Nemirovsky, and Yudin are guaranteed to minimize it. In this paper we describe relevant background material and a simple interior-point method that solves such problems more efficiently. The algorithm is a variation on Huard's method of centers, using a self-concordant barrier for matrix inequalities developed by Nesterov and Nemirovsky. (Nesterov and Nemirovsky have also extended their potential reduction methods to handle the same problem [NN91b].) Since the problem is quasiconvex but not convex, devising a nonheuristic stopping criterion (i.e., one that guarantees a given accuracy) is more difficult than in the convex case. We describe several nonheuristic stopping criteria that are based on the dual of a related convex problem and a new ellipsoidal approximation that is slightly sharper, in some cases, than a more general result due to Nesterov and Nemirovsky. The algorithm is demonstrated on an example: determining the quadratic Lyapunov function that optimizes a decay rate estimate for a differential inclusion.
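The quasiconvexity claim can be checked numerically on a toy pencil. The sketch below evaluates the largest generalized eigenvalue of an affine pair (A(x), B(x)) with scipy, on a segment where B(x) stays positive definite; quasiconvexity means no interior point exceeds the larger endpoint value. The matrices are arbitrary illustrative choices, not from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def lam_max(x, A0, A1, B0, B1):
    """Largest generalized eigenvalue of the affine pencil (A0 + x*A1, B0 + x*B1).
    eigh solves the symmetric-definite problem; the second matrix must be
    positive definite, and eigenvalues are returned in ascending order."""
    return eigh(A0 + x * A1, B0 + x * B1, eigvals_only=True)[-1]

A0 = np.array([[1.0, 0.0], [0.0, 2.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
B0 = np.eye(2)
B1 = np.diag([0.1, 0.2])                 # B0 + x*B1 > 0 for all x in [0, 1]

xs = np.linspace(0.0, 1.0, 11)
vals = [lam_max(x, A0, A1, B0, B1) for x in xs]
```

Each Rayleigh quotient v᷀ᵀA(x)v / vᵀB(x)v is a ratio of affine functions of x and hence quasilinear; the largest generalized eigenvalue is their pointwise maximum, which is quasiconvex but in general not convex.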
FIR Filter Design via Spectral Factorization and Convex Optimization
, 1997
Abstract

Cited by 45 (6 self)
We consider the design of finite impulse response (FIR) filters subject to upper and lower bounds on the frequency response magnitude. The associated optimization problems, with the filter coefficients as the variables and the frequency response bounds as constraints, are in general nonconvex. Using a change of variables and spectral factorization, we can pose such problems as linear or nonlinear convex optimization problems. As a result we can solve them efficiently (and globally) by recently developed interior-point methods. We describe applications to filter and equalizer design, and the related problem of antenna array weight design.
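The change of variables behind this approach can be illustrated directly: the magnitude-squared response of an FIR filter is linear in its autocorrelation coefficients, so magnitude bounds become linear constraints in the new variables. The sketch below only verifies the identity |H(ω)|² = r₀ + 2∑ₖ rₖ cos(kω) for an arbitrary example filter; it does not perform the design or the spectral factorization back to filter coefficients.

```python
import numpy as np

h = np.array([1.0, 0.5, -0.25, 0.1])         # arbitrary example FIR filter
n = len(h)
# Autocorrelation coefficients r[k] = sum_n h[n] * h[n+k], k = 0..n-1.
r = np.correlate(h, h, mode="full")[n - 1:]

w = np.linspace(0.0, np.pi, 64)
# Direct magnitude-squared frequency response |H(w)|^2 ...
H = np.exp(-1j * np.outer(w, np.arange(n))) @ h
mag2_direct = np.abs(H) ** 2
# ... equals an expression LINEAR in r, which is what makes magnitude
# bounds convex after the change of variables h -> r:
mag2_linear = r[0] + 2.0 * np.cos(np.outer(w, np.arange(1, n))) @ r[1:]
```

A design would optimize over r subject to linear bounds on this expression (plus the constraint that r be a valid autocorrelation), then recover h by spectral factorization.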
Branch and Bound Algorithm for Computing the Minimum Stability Degree of Parameter-dependent Linear Systems
, 1991
Abstract

Cited by 42 (5 self)
We consider linear systems with unspecified parameters that lie between given upper and lower bounds. Except for a few special cases, the computation of many quantities of interest for such systems can be performed only through an exhaustive search in parameter space. We present a general branch and bound algorithm that implements this search in a systematic manner and apply it to computing the minimum stability degree.

1 Introduction

1.1 Notation

R (C) denotes the set of real (complex) numbers. For c ∈ C, Re c is the real part of c. The set of n × n matrices with real (complex) entries is denoted R^{n×n} (C^{n×n}). P^T stands for the transpose of P, and P^* the complex conjugate transpose. I denotes the identity matrix, with size determined from context. For a matrix P ∈ R^{n×n} (or C^{n×n}), λ_i(P), 1 ≤ i ≤ n, denotes the ith eigenvalue of P (with no particular ordering). σ_max(P) denotes the maximum singular value (or spectral norm) of P, define...
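A generic one-dimensional version of such a branch and bound search can be sketched as follows, using a Lipschitz constant to lower-bound the objective on each subinterval. This is a stand-in for the stability-degree bounds the paper develops; the test function and constant below are illustrative.

```python
import heapq
import math

def branch_and_bound_min(f, lo, hi, lipschitz, tol=1e-4):
    """Branch and bound for min f on [lo, hi] given a Lipschitz constant L:
    every x in an interval of width w is within w/2 of its midpoint, so
    f(mid) - L*w/2 lower-bounds f there; intervals whose bound cannot beat
    the incumbent by more than tol are pruned."""
    best = f((lo + hi) / 2)
    heap = [(best - lipschitz * (hi - lo) / 2, lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best - tol:              # prune: cannot improve the incumbent
            continue
        m = (a + b) / 2
        for aa, bb in ((a, m), (m, b)):  # branch: split the interval in half
            mid = (aa + bb) / 2
            val = f(mid)
            best = min(best, val)
            child_lb = val - lipschitz * (bb - aa) / 2
            if child_lb < best - tol:
                heapq.heappush(heap, (child_lb, aa, bb))
    return best                          # within tol of the global minimum

# A nonconvex test function; |f'| <= 3.5 on the interval, so L = 3.5 is valid.
best = branch_and_bound_min(lambda x: math.sin(3 * x) + 0.5 * x, -2.0, 2.0, 3.5)
```

The heap orders intervals by lower bound, so the most promising region is refined first; the returned incumbent is guaranteed within tol of the global minimum, mirroring the systematic parameter-space search of the paper.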
Real-Time Convex Optimization in Signal Processing
Abstract

Cited by 38 (4 self)
Convex optimization has been used in signal processing for a long time, to choose coefficients for use in fast (linear) algorithms, such as in filter or array design; more recently, it has been used to carry out (nonlinear) processing on the signal itself. Examples of the latter case include total variation denoising, compressed sensing, fault detection, and image classification. In both scenarios, the optimization is carried out on time scales of seconds or minutes, and without strict time constraints. Convex optimization has traditionally been considered computationally expensive, so its use has been limited to applications where plenty of time is available. Such restrictions are no longer justified. The combination of dramatically increased computational power, modern algorithms, and new coding approaches has delivered an enormous speed increase, which makes it possible to solve modest-sized convex optimization problems on microsecond or millisecond time scales, and with strict deadlines. This enables real-time convex optimization in signal processing.
Decentralized control information structures preserved under feedback
 In Proc. IEEE Conference on Decision and Control
, 2002
Abstract

Cited by 37 (16 self)
We consider the problem of constructing decentralized control systems. We formulate this problem as one of minimizing the closed-loop norm of a feedback system subject to constraints on the controller structure. We define the notion of quadratic invariance of a constraint set with respect to a system, and show that if the constraint set has this property, then the constrained minimum norm problem may be solved via convex programming. We also show that quadratic invariance is necessary and sufficient for the constraint set to be preserved under feedback. We develop necessary and sufficient conditions under which the constraint set is quadratically invariant, and show that many examples of decentralized synthesis which have been proven to be solvable in the literature are quadratically invariant. As an example, we show that a controller which minimizes the norm of the closed-loop map may be efficiently computed in the case where distributed controllers can communicate faster than the propagation delay of the plant dynamics.