Results 1–10 of 44
DETERMINANT MAXIMIZATION WITH LINEAR MATRIX INEQUALITY CONSTRAINTS
Abstract

Cited by 174 (18 self)
The problem of maximizing the determinant of a matrix subject to linear matrix inequalities arises in many fields, including computational geometry, statistics, system identification, experiment design, and information and communication theory. It can also be considered as a generalization of the semidefinite programming problem. We give an overview of the applications of the determinant maximization problem, pointing out simple cases where specialized algorithms or analytical solutions are known. We then describe an interior-point method, with a simplified analysis of the worst-case complexity and numerical results that indicate that the method is very efficient, both in theory and in practice. Compared to existing specialized algorithms (where they are available), the interior-point method will generally be slower; the advantage is that it handles a much wider variety of problems.
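As a concrete, deliberately trivial illustration of this problem class, the special case of diagonal matrices under a trace budget has a known analytical solution, the scaled identity, by the AM–GM inequality. The sketch below is illustrative only and is not taken from the paper:

```python
import numpy as np

def max_det_diagonal(trace_budget, n):
    """Determinant maximization in the simplest special case:
    maximize det(diag(d)) subject to d > 0, sum(d) <= trace_budget.
    By the AM-GM inequality the optimum is d_i = trace_budget / n."""
    d = np.full(n, trace_budget / n)
    return d, np.prod(d)

rng = np.random.default_rng(0)
n, t = 4, 8.0
d_opt, det_opt = max_det_diagonal(t, n)

# Any other feasible diagonal has a smaller determinant.
for _ in range(1000):
    d = rng.uniform(0.01, 1.0, n)
    d *= t / d.sum()          # rescale onto the trace budget
    assert np.prod(d) <= det_opt + 1e-12
```

The general problem with full linear matrix inequality constraints has no such closed form, which is what motivates the interior-point method of the paper.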
On-Line Estimation of Error Covariance Parameters for Atmospheric Data Assimilation
, 1994
Abstract

Cited by 75 (10 self)
We present a simple scheme for online estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be th...
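In the simplest scalar setting, the single-batch maximum-likelihood estimate has a closed form. The sketch below assumes a known forecast-error variance and a single unknown observation-error variance; the numbers and function names are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single batch of innovations (observation minus forecast).
# The innovation variance is the sum of the forecast-error and
# observation-error variances.
sigma_f2_true, sigma_o2_true = 1.0, 0.25
n_obs = 100_000                      # many observations per tunable parameter
innovations = rng.normal(0.0, np.sqrt(sigma_f2_true + sigma_o2_true), n_obs)

def ml_obs_error_variance(d, sigma_f2):
    """Single-batch maximum-likelihood estimate of the observation-error
    variance, assuming the forecast-error variance sigma_f2 is known."""
    return max(np.mean(d**2) - sigma_f2, 0.0)

sigma_o2_hat = ml_obs_error_variance(innovations, sigma_f2_true)
```

With `n_obs` far larger than the single tunable parameter, the estimate is accurate, which mirrors the paper's two-to-three-orders-of-magnitude guideline.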
Maximum-likelihood estimation of forecast and observation error covariance parameters. Part I: Methodology
, 1998
Abstract

Cited by 31 (5 self)
The maximum-likelihood method for estimating observation and forecast error covariance parameters is described. The method is presented in general terms but with particular emphasis on practical aspects of implementation. Issues such as bias estimation and correction, parameter identifiability, estimation accuracy, and robustness of the method are discussed in detail. The relationship between the maximum-likelihood method and Generalized Cross-Validation is briefly addressed. The method can be regarded as a generalization of the traditional procedure for estimating covariance parameters from station data. It does not involve any restrictions on the covariance models and can be used with data from moving observers, provided the parameters to be estimated are identifiable. Any available a priori information about the observation and forecast error distributions can be incorporated into the estimation procedure. Estimates of parameter accuracy due to sampling error are obtained as a byp...
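A minimal sketch of the general recipe, under assumptions not taken from the paper (a zero-mean Gaussian with an exponential correlation model and a single identifiable length-scale parameter), evaluating the Gaussian negative log-likelihood over a grid of candidates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: zero-mean samples on a 1-D grid with an exponential
# correlation model C_ij = exp(-|i - j| / L); the length scale L is the
# covariance parameter to identify.
grid = np.arange(20.0)
L_true = 3.0
C_true = np.exp(-np.abs(grid[:, None] - grid[None, :]) / L_true)
samples = rng.multivariate_normal(np.zeros(20), C_true, size=2000)

def neg_log_likelihood(L, X):
    """Gaussian negative log-likelihood of the length scale L (up to a constant)."""
    C = np.exp(-np.abs(grid[:, None] - grid[None, :]) / L)
    _, logdet = np.linalg.slogdet(C)
    quad = np.sum(X * np.linalg.solve(C, X.T).T)   # sum_i x_i^T C^{-1} x_i
    return 0.5 * (X.shape[0] * logdet + quad)

candidates = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
nll = [neg_log_likelihood(L, samples) for L in candidates]
L_hat = candidates[int(np.argmin(nll))]
```

In practice the grid search would be replaced by a proper optimizer, and identifiability of the chosen parameterization has to be checked, as the abstract emphasizes.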
Information-theoretic image formation
 IEEE Transactions on Information Theory
, 1998
Abstract

Cited by 28 (5 self)
The emergent role of information theory in image formation is surveyed. Unlike the subject of information-theoretic communication theory, information-theoretic imaging is far from a mature subject. The possible role of information theory in problems of image formation is to provide a rigorous framework for defining the imaging problem, for defining measures of optimality used to form estimates of images, for addressing issues associated with the development of algorithms based on these optimality criteria, and for quantifying the quality of the approximations. The definition of the imaging problem consists of an appropriate model for the data and an appropriate model for the reproduction space, which is the space within which image estimates take values. Each problem statement has an associated optimality criterion that measures the overall quality of an estimate. The optimality criteria include maximizing the likelihood function and minimizing mean squared error for stochastic problems, and minimizing squared error and discrimination for deterministic problems. The development of algorithms is closely tied to the definition of the imaging problem and the associated optimality criterion. Algorithms with a strong information-theoretic motivation are obtained by the method of expectation maximization. Related alternating minimization algorithms are discussed. In quantifying the quality of approximations, global and local measures are discussed. Global measures include the (mean) squared error and discrimination between an estimate and the truth, and probability of error for recognition or hypothesis testing problems. Local measures include Fisher information. Index Terms—Image analysis, image formation, image processing, image reconstruction, image restoration, imaging, inverse problems, maximum-likelihood estimation, pattern recognition.
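As one concrete instance of the expectation-maximization connection mentioned above, the Richardson–Lucy update is the EM iteration for a Poisson image-formation model. This 1-D sketch uses a hypothetical blur kernel and noiseless data, purely for illustration:

```python
import numpy as np

# Hypothetical 1-D blur kernel (sums to one) and test image.
kernel = np.array([0.25, 0.5, 0.25])
x_true = np.array([0.0, 0.0, 5.0, 1.0, 0.0, 3.0, 0.0, 0.0])

def blur(x):
    # Zero-padded "same" convolution plays the role of the imaging operator A.
    return np.convolve(x, kernel, mode="same")

y = blur(x_true)                      # noiseless data for simplicity

x = np.ones_like(x_true)              # flat initial estimate
for _ in range(200):
    ratio = y / np.maximum(blur(x), 1e-12)
    # Multiplicative EM (Richardson-Lucy) update: x <- x * A^T (y / A x);
    # correlating with the flipped kernel implements A^T for zero padding.
    x = x * np.convolve(ratio, kernel[::-1], mode="same")
```

The iterate stays nonnegative by construction, and the data misfit shrinks as the Poisson likelihood increases, which is the EM guarantee the survey refers to.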
Hyperspectral Imagery: Clutter Adaptation in Anomaly Detection
 IEEE Trans. Inform. Theory
, 2000
Abstract

Cited by 19 (1 self)
Hyperspectral sensors are passive sensors that simultaneously record images for hundreds of contiguous and narrowly spaced regions of the electromagnetic spectrum. Each image corresponds to the same ground scene, thus creating a cube of images that contain both spatial and spectral information about the objects and backgrounds in the scene. In this paper, we present an adaptive anomaly detector designed assuming that the background clutter in the hyperspectral imagery is a three-dimensional Gauss–Markov random field. This model leads to an efficient and effective algorithm for discriminating man-made objects (the anomalies) in real hyperspectral imagery. The major focus of the paper is on the adaptive stage of the detector, i.e., the estimation of the Gauss–Markov random field parameters. We develop three methods: maximum-likelihood, least squares, and approximate maximum-likelihood. We study these approaches along three directions: estimation error performance, computational cost, and detection performance. In terms of estimation error, we derive the Cramér–Rao bounds and carry out Monte Carlo simulation studies that show that the three estimation procedures have similar performance when the fields are highly correlated, as is often the case with real hyperspectral imagery. The approximate maximum-likelihood method has a clear advantage from the computational point of view. Finally, we test extensively with real hyperspectral imagery the adaptive anomaly detector incorporating either the least squares or the approximate maximum-likelihood estimators. Its performance compares very favorably with that of the RX algorithm, an alternative detector commonly used with multispectral data, while reducing by up to an order of magnitude the associated computational cost. Index Terms—Anomaly detection, Cramér–Rao bounds, Gauss–Markov random field, hyperspectral imagery, least squares, maximum
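For comparison, the RX algorithm referenced above reduces, in its basic form, to scoring each pixel spectrum by its Mahalanobis distance from the global background statistics. The sketch below uses synthetic data, not real hyperspectral imagery:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "cube" flattened to pixels x bands, with one planted anomaly.
n_pixels, n_bands = 5000, 10
background = rng.multivariate_normal(np.zeros(n_bands), np.eye(n_bands), n_pixels)
anomaly = np.full(n_bands, 4.0)       # a planted man-made "object" spectrum
pixels = np.vstack([background, anomaly])

# Background statistics estimated from the data themselves (adaptive stage).
mu = pixels.mean(axis=0)
cov = np.cov(pixels, rowvar=False)
cov_inv = np.linalg.inv(cov)

def rx_score(x):
    """RX statistic: Mahalanobis distance of a pixel spectrum from the mean."""
    d = x - mu
    return d @ cov_inv @ d

scores = np.apply_along_axis(rx_score, 1, pixels)
```

The planted anomaly receives by far the largest score; the paper's contribution is to replace this unstructured covariance estimate with Gauss–Markov random field parameters, which is what cuts the computational cost.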
Information Criteria for Residual Generation and Fault Detection and Isolation
, 1996
Abstract

Cited by 19 (7 self)
Using an information point of view, we discuss deterministic versus stochastic tools for residual generation and evaluation for fault detection and isolation (FDI) in linear time-invariant (LTI) state-space systems. In both types of approaches to off-line FDI, residual generation can be viewed as the design of a linear transformation of a Gaussian vector (the finite-window input-adjusted observations). Several statistical isolation methods are revisited, using both a linear transform formulation and the information content of the corresponding residuals. We formally state several multiple-fault cases, with or without causality assumptions, and discuss an optimality criterion for the most general one. New information criteria are proposed for investigating the residual optimization problem.
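The residual-evaluation viewpoint can be sketched numerically: with a hypothetical residual generator W (not from the paper), the normalized residual energy under the no-fault hypothesis follows a chi-squared law with as many degrees of freedom as residual components:

```python
import numpy as np

rng = np.random.default_rng(4)

# Residual generation as a linear transform r = W y of a Gaussian vector y.
# Under the no-fault hypothesis, r ~ N(0, W S W^T), so the normalized
# residual energy r^T (W S W^T)^{-1} r is chi-squared with k degrees of
# freedom. W and S here are hypothetical illustrations.
n, k = 6, 3
S = np.eye(n)                                # observation covariance under H0
W = rng.normal(size=(k, n))                  # residual generator
Sigma_r_inv = np.linalg.inv(W @ S @ W.T)

Y = rng.multivariate_normal(np.zeros(n), S, size=10_000)
R = Y @ W.T                                  # residuals, one row per window
stats = np.einsum('ij,jk,ik->i', R, Sigma_r_inv, R)
```

A fault adds a deterministic offset to y, turning the statistic noncentral; thresholding it is the usual evaluation step, and the paper's information criteria guide the choice of W.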
Computationally Efficient Maximum Likelihood Estimation of Structured Covariance Matrices
 IEEE Trans. Signal Processing
, 1999
Abstract

Cited by 11 (1 self)
By invoking the extended invariance principle (EXIP), we present herein a computationally efficient method that provides asymptotic (for large samples) maximum likelihood (AML) estimation for structured covariance matrices and will be referred to as the AML algorithm. A closed-form formula for estimating Hermitian Toeplitz covariance matrices that makes AML computationally simpler than most existing Hermitian Toeplitz matrix estimation algorithms is derived. Although the AML covariance matrix estimator can be used in a variety of applications, we focus on array processing in this paper. Our simulation study shows that AML enhances the performance of angle estimation algorithms, such as MUSIC, by making them very close to the corresponding Cramér–Rao bound (CRB) for uncorrelated signals. Numerical comparisons with several structured and unstructured covariance matrix estimators are also presented.
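A much-simplified stand-in for the structured fitting step: the unweighted least-squares Toeplitz approximation of a sample covariance is obtained by averaging along diagonals. The paper's AML formula is a weighted fit; this sketch only illustrates the structural constraint:

```python
import numpy as np

def nearest_toeplitz(R):
    """Unweighted least-squares fit of a (Hermitian) Toeplitz matrix to R:
    replace each diagonal by its average. A simplified stand-in for the
    structured fitting step, not the paper's weighted AML formula."""
    n = R.shape[0]
    T = np.zeros_like(R)
    for lag in range(n):
        avg = np.mean(np.diagonal(R, offset=lag))
        for i in range(n - lag):
            T[i, i + lag] = avg
            T[i + lag, i] = np.conj(avg)
    return T

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 5))                 # hypothetical snapshots
R_hat = (X.T @ X) / 200                       # unstructured sample covariance
T_hat = nearest_toeplitz(R_hat)
```

For stationary processes the true covariance is Toeplitz, so projecting the sample covariance onto this structure reduces estimation variance, which is what drives the MUSIC improvements reported in the abstract.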
A majorized penalty approach for calibrating rank constrained correlation matrix problems
, 2010
Abstract

Cited by 8 (4 self)
In this paper, we aim at finding a nearest correlation matrix to a given symmetric matrix, measured by the componentwise weighted Frobenius norm, with a prescribed rank and bound constraints on its correlations. This is in general a nonconvex and difficult problem due to the presence of the rank constraint. To deal with this difficulty, we first consider a penalized version of this problem and then apply the essential ideas of the majorization method to the penalized problem by solving iteratively a sequence of least squares correlation matrix problems without the rank constraint. The latter problems can be solved by a recently developed quadratically convergent smoothing Newton-BiCGStab method. Numerical examples demonstrate that our approach is very efficient for obtaining a nearest correlation matrix with both rank and bound constraints. Key words: correlation matrix, penalty method, majorization, least squares, Newton’s method
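A common simple heuristic for the same problem, not the paper's majorized penalty method, is to truncate the eigendecomposition to the target rank and rescale the diagonal back to one:

```python
import numpy as np

def low_rank_correlation(C, r):
    """Simple heuristic (not the paper's method): truncate the
    eigendecomposition of C to rank r, then rescale the factor rows so the
    resulting matrix has unit diagonal."""
    w, V = np.linalg.eigh(C)
    idx = np.argsort(w)[::-1][:r]                  # keep the r largest eigenvalues
    B = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    B = B / np.maximum(norms, 1e-12)               # unit-length rows -> unit diagonal
    return B @ B.T

rng = np.random.default_rng(5)
A = rng.normal(size=(8, 8))
C = A @ A.T
d = np.sqrt(np.diag(C))
C = C / np.outer(d, d)                             # a full-rank correlation matrix

C_low = low_rank_correlation(C, 3)
```

Unit-length factor rows also keep every entry in [-1, 1] by the Cauchy–Schwarz inequality; the heuristic cannot, however, honor componentwise weights or bound constraints, which is where the majorized penalty approach comes in.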
Penalized Maximum-Likelihood Estimation of Covariance Matrices with Linear Structure
 IEEE Trans. Signal Processing
, 1996
Abstract

Cited by 6 (0 self)
In this paper, a space-alternating generalized expectation-maximization (SAGE) algorithm is presented for the numerical computation of maximum-likelihood (ML) and penalized maximum-likelihood (PML) estimates of the parameters of covariance matrices with linear structure for complex Gaussian processes. By using a less informative hidden-data space and a sequential parameter-update scheme, a SAGE-based algorithm is derived for which convergence of the likelihood is demonstrated to be significantly faster than that of an EM-based algorithm that has been previously proposed. In addition, the SAGE procedure is shown to easily accommodate penalty functions, and a SAGE-based algorithm is derived and demonstrated for forming PML estimates with a quadratic smoothness penalty.
On the second-order statistics of the weighted sample covariance matrix
 IEEE Trans. Signal Processing
, 2003
Abstract

Cited by 4 (2 self)
The second-order statistics of the sample covariance are encountered in many covariance-based processing algorithms. This paper derives closed-form expressions for the covariance of the weighted sample covariance matrix with an arbitrary weight, for both real and complex systems. Given a system model, the results explicitly rely on the second-order and fourth-order statistics of the channel noise and inputs. They are shown to coincide with the existing results when the channel inputs and noise are Gaussian distributed. Our results can be directly applied to analyze the statistical properties of subspace-based channel estimation methods for single-input multiple-output (SIMO) systems and code-division multiple access (CDMA) systems. Numerical examples are provided to further verify the analyses. Index Terms—Asymptotic analysis, covariance estimation, subspace decomposition.
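To fix notation, the weighted sample covariance with an arbitrary nonnegative weight vector can be written down in a few lines (a generic textbook form, not the paper's specific model); with equal weights it reduces to the usual biased (1/N) sample covariance:

```python
import numpy as np

def weighted_sample_covariance(X, w):
    """Weighted sample covariance with an arbitrary nonnegative weight
    vector w: sum_i w_i (x_i - mean_w)(x_i - mean_w)^T / sum_i w_i."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    mean = w @ X                       # weighted sample mean
    Xc = X - mean
    return (Xc * w[:, None]).T @ Xc

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 4))

# With equal weights this reduces to the biased (1/N) sample covariance.
C_eq = weighted_sample_covariance(X, np.ones(500))
```

The paper's contribution is the second-order statistics of this estimator itself, i.e., the covariance of its entries, under general (non-Gaussian) input and noise distributions.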