Results 1 - 5 of 5
Analysis of incomplete climate data: Estimation of mean values and covariance matrices and imputation of missing values
, 2001
Abstract

Cited by 54 (3 self)
Estimating the mean and the covariance matrix of an incomplete dataset and filling in missing values with imputed values is generally a nonlinear problem, which must be solved iteratively. The expectation maximization (EM) algorithm for Gaussian data, an iterative method both for the estimation of mean values and covariance matrices from incomplete datasets and for the imputation of missing values, is taken as the point of departure for the development of a regularized EM algorithm. In contrast to the conventional EM algorithm, the regularized EM algorithm is applicable to sets of climate data, in which the number of variables typically exceeds the sample size. The regularized EM algorithm is based on iterated analyses of linear regressions of variables with missing values on variables with available values, with regression coefficients estimated by ridge regression, a regularized regression method in which a continuous regularization parameter controls the filtering of the noise in the data. The regularization parameter is determined by generalized cross-validation, so as to approximately minimize the expected mean squared error of the imputed values. The regularized EM algorithm can estimate, and exploit for the imputation of missing values, both synchronic and diachronic covariance matrices, which may contain information on spatial covariability, stationary temporal covariability, or cyclostationary temporal covariability. A test of the regularized EM algorithm with simulated surface temperature data demonstrates that the algorithm is applicable to typical sets of climate data and that it leads to more accurate estimates of the missing values than a conventional noniterative imputation technique.
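The imputation step described in this abstract can be sketched as a minimal, illustrative EM-style loop with a fixed ridge parameter (the paper chooses the regularization parameter per regression by generalized cross-validation, which is omitted here); the function name, toy data, and parameter values are assumptions for illustration only:

```python
import numpy as np

def ridge_em_impute(X, h=0.01, n_iter=60):
    """Illustrative EM-style imputation: iteratively regress variables
    with missing values on variables with available values, with the
    regression coefficients estimated by ridge regression.
    X: (samples x variables) array with np.nan marking missing entries.
    h: ridge regularization parameter, held fixed here for simplicity."""
    X = X.copy()
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)                 # start from available-value means
    X[miss] = np.take(mu, np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        Xc = X - mu
        S = Xc.T @ Xc / (len(X) - 1)           # current covariance estimate
        for i in np.unique(np.where(miss)[0]): # rows containing missing values
            m = miss[i]                        # missing variables in this row
            a = ~m                             # available variables in this row
            # ridge regression coefficients: B = (S_aa + h^2 I)^(-1) S_am
            B = np.linalg.solve(S[np.ix_(a, a)] + h ** 2 * np.eye(a.sum()),
                                S[np.ix_(a, m)])
            X[i, m] = mu[m] + Xc[i, a] @ B     # impute from the regression
    return X

# toy data: second variable is exactly twice the first, one value missing
X = np.array([[1.0, np.nan], [2.0, 4.0], [3.0, 6.0], [4.0, 8.0]])
X_imp = ridge_em_impute(X)
```

On this toy dataset the iteration drives the imputed entry toward the value implied by the linear relation in the complete rows, illustrating why the estimation and imputation must be solved jointly and iteratively.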
A New Hessian Preconditioning Method Applied to Variational Data Assimilation Experiments Using NASA General Circulation Models
, 1996
Abstract

Cited by 11 (6 self)
An analysis is provided to show that the method of Courtier et al. for estimating the Hessian preconditioning is not applicable to important categories of cases involving nonlinearity. An extension of the method to cases with higher nonlinearity is proposed in the present paper by designing an algorithm that reduces errors in the Hessian estimation induced by the limited validity of the tangent linear approximation. The new preconditioning method was numerically tested in the framework of variational data assimilation experiments using both the National Aeronautics and Space Administration (NASA) semi-Lagrangian semi-implicit global shallow-water equations model and the adiabatic version of the NASA/Data Assimilation Office (DAO) Goddard Earth Observing System Version 1 (GEOS-1) general circulation model. The authors' results show that the new preconditioning method speeds up the convergence of the minimization when applied to variational data assimilation cases characterized by strong nonlinearity.
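As background to the abstract above, the basic mechanism of Hessian preconditioning can be shown on a quadratic cost; this is only the textbook change-of-variables argument, not the authors' estimation algorithm, and the matrices below are assumptions for illustration:

```python
import numpy as np

# For a quadratic cost J(x) = 0.5 x'Ax - b'x the Hessian is A.  A change
# of variables x = L v with L ~ A^(-1/2) turns the Hessian seen by the
# minimization into L'AL ~ I, so descent methods converge in few steps.
# Estimating a usable L for the full variational cost is what the
# preconditioning methods discussed above are about.
A = np.diag([1.0, 100.0])   # badly conditioned Hessian (toy example)
L = np.diag([1.0, 0.1])     # approximate inverse square root of A
H_prec = L.T @ A @ L        # Hessian after the change of variables
```

Here the condition number drops from 100 to 1, which is why a good Hessian estimate translates directly into faster minimization.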
A conceptual framework for predictability studies
 J. Climate
, 1999
Abstract

Cited by 11 (0 self)
A conceptual framework is presented for a unified treatment of issues arising in a variety of predictability studies. The predictive power (PP), a predictability measure based on information-theoretic principles, lies at the center of this framework. The PP is invariant under linear coordinate transformations and applies to multivariate predictions irrespective of assumptions about the probability distribution of prediction errors. For univariate Gaussian predictions, the PP reduces to conventional predictability measures that are based upon the ratio of the rms error of a model prediction to the rms error of the climatological mean prediction. Since climatic variability on intraseasonal to interdecadal timescales follows an approximately Gaussian distribution, the emphasis of this paper is on multivariate Gaussian random variables. Predictable and unpredictable components of multivariate Gaussian systems can be distinguished by predictable component analysis, a procedure derived from discriminant analysis: seeking components with large PP leads to an eigenvalue problem, whose solution yields uncorrelated components that are ordered by PP from largest to smallest. In a discussion of the application of the PP and the predictable component analysis in different types of predictability studies, studies are considered that use either ensemble integrations of numerical models or autoregressive models fitted to observed or simulated data. An investigation of simulated multidecadal variability of the North Atlantic illustrates the proposed methodology. Reanalyzing an ensemble of integrations of the Geophysical Fluid Dynamics Laboratory coupled general circulation model confirms and refines earlier findings. With an autoregressive model fitted to a single integration of the same model, it is demonstrated that similar conclusions can be reached without resorting to computationally costly ensemble integrations.
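For multivariate Gaussian systems, the eigenvalue problem mentioned in the abstract can be sketched as a whitening step followed by an eigendecomposition; this is a minimal illustration assuming the climatological covariance and the prediction-error covariance are both available and positive definite, with illustrative names and toy matrices:

```python
import numpy as np

def predictable_components(C_clim, C_err):
    """Sketch of predictable component analysis: whiten with the
    climatological covariance, then eigen-decompose the whitened
    prediction-error covariance.  Small eigenvalues (error variance much
    smaller than climatological variance) correspond to components with
    large predictive power; numpy returns eigenvalues in ascending
    order, so the most predictable component comes first."""
    w, V = np.linalg.eigh(C_clim)
    C_half_inv = V @ np.diag(w ** -0.5) @ V.T   # C_clim^(-1/2)
    lam, U = np.linalg.eigh(C_half_inv @ C_err @ C_half_inv)
    patterns = C_half_inv @ U                   # uncorrelated components
    return lam, patterns

# toy covariances: the first variable is well predicted, the second barely
lam, P = predictable_components(np.diag([4.0, 1.0]), np.diag([1.0, 0.9]))
```

The eigenvalues are the error-to-climatology variance ratios of the components (here 0.25 and 0.9), which is the multivariate analogue of the univariate rms-error ratio the abstract mentions.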
© 1997 American Meteorological Society
Abstract
The adjoint Newton algorithm (ANA) is based on first- and second-order adjoint techniques, allowing one to obtain the "Newton line search direction" by integrating a "tangent linear model" backward in time (with negative time steps). Moreover, the ANA provides a new technique to find the Newton line search direction without using gradient information. The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in quasi-Newton-type algorithms is thus completely eliminated, while the storage problem related to storing the Hessian no longer exists, since the explicit Hessian is not required in this algorithm. The ANA is applied here, for the first time, in the framework of 4D variational data assimilation to the adiabatic version of the Advanced Regional Prediction System, a three-dimensional, compressible, nonhydrostatic storm-scale model. The purpose is to assess the feasibility and efficiency of the ANA as a large-scale minimization algorithm in the setting of 4D variational data assimilation.
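The storage argument in this abstract, obtaining a Newton direction without ever forming the explicit Hessian, can be illustrated generically with Hessian-vector products. Here the products are approximated by a finite difference of the gradient, whereas the ANA obtains the corresponding second-order information from adjoint model integrations; all names and the toy cost are illustrative assumptions:

```python
import numpy as np

def newton_direction_hessian_free(grad, x, tol=1e-8, max_iter=50):
    """Solve H d = -g for the Newton direction d by conjugate gradients,
    using only Hessian-vector products Hv (approximated here by a finite
    difference of the gradient), so the explicit Hessian is never formed
    or stored."""
    g = grad(x)
    eps = 1e-6

    def Hv(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        return (grad(x + eps * v / nv) - g) * (nv / eps)

    d = np.zeros_like(x)
    r = -g            # residual of H d = -g at d = 0
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Hp = Hv(p)
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

# quadratic test cost J(x) = 0.5 x'Ax - b'x, so grad(x) = Ax - b and the
# Newton step from any point lands on the minimizer A^(-1) b
A = np.diag([2.0, 5.0])
b = np.array([2.0, 5.0])
x0 = np.zeros(2)
d = newton_direction_hessian_free(lambda x: A @ x - b, x0)
```

For a large model state the matrix H would be far too big to store, while each Hessian-vector product costs only one extra gradient (or, in the ANA, one adjoint integration).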
Adaptive Tuning of Numerical Weather Prediction Models: Simultaneous Estimation of Weighting, Smoothing, and Physical Parameters
 Monthly Weather Review, Vol. 126, p. 210
, 1996
Abstract
In Wahba et al. it was shown how the randomized trace method could be used to adaptively tune numerical weather prediction models via generalized cross validation (GCV) and related methods. In this paper a ‘‘toy’’ four-dimensional data assimilation model is developed (actually one space and one time variable), consisting of an equivalent barotropic vorticity equation on a latitude circle, and used to demonstrate how this technique may be used to simultaneously tune weighting, smoothing, and physical parameters. Analyses both with the model as a strong constraint (corresponding to the usual 4D-Var approach) and as a weak constraint (corresponding theoretically to a fixed-interval Kalman smoother) are carried out. The conclusions are limited to the particular toy problem considered, but it can be seen how more elaborate experiments could be carried out, as well as how the method might be applied in practice. The authors have considered five adjustable parameters: two related to a distributed coefficient in the equivalent barotropic vorticity equation (‘‘physical’’ parameters), one governing the relative weight given to observations versus the forecast, one governing the relative weight given to observations versus goodness of fit to the model (in the weak constraint case), and one governing the damping of high-frequency oscillations in the analysis at the final time point (the ‘‘smoothing’’ parameter). The weighting parameters and the smoothing parameter can, if desired, be interpreted as ratios of parameters in prior covariances.
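The randomized trace idea mentioned in this abstract can be sketched on a plain ridge-regression fit: the GCV score needs the trace of the influence matrix, which a Hutchinson-style estimator with ±1 probe vectors approximates without forming the matrix. This is a minimal stand-in for the method of Wahba et al.; the function name, toy data, and probe count are assumptions:

```python
import numpy as np

def gcv_score_randomized(X, y, h, n_probe=4000, seed=0):
    """GCV score for a ridge fit y ~ X, with tr(A(h)) of the influence
    matrix A(h) = X (X'X + h^2 I)^(-1) X' estimated by averaging z'Az
    over random +/-1 probe vectors z instead of forming A explicitly."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    K = X.T @ X + h ** 2 * np.eye(p)
    yhat = X @ np.linalg.solve(K, X.T @ y)          # fitted values
    Z = rng.choice([-1.0, 1.0], size=(n, n_probe))  # probe vectors
    AZ = X @ np.linalg.solve(K, X.T @ Z)            # A applied to probes
    tr_A = np.mean(np.sum(Z * AZ, axis=0))          # randomized trace
    # GCV: n * residual sum of squares / (n - tr A)^2
    return n * np.sum((y - yhat) ** 2) / (n - tr_A) ** 2

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
y = rng.standard_normal(20)
score = gcv_score_randomized(X, y, h=0.5)
```

Minimizing such a score over the adjustable parameters is what makes the simultaneous tuning in the paper computationally feasible when the influence matrix is far too large to form.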