Results 1 - 10 of 1,133,306

Variational Assimilation for Xenon Dynamical Forecasts in Neutronic using Advanced Background Error Covariance Matrix
2014
"... Data assimilation methods consist in combining all available pieces of information about a system to obtain optimal estimates of its initial state. The different sources of information are weighted according to their accuracy by means of error covariance matrices. Our purpose here is to evaluate the efficiency of variational data assimilation for forecasts of xenon-induced oscillations in nuclear cores. In this paper we focus on the comparison between 3DVAR schemes with an optimised background error covariance matrix B and a 4DVAR scheme. Tests were made in twin experiments using a simulation code ..."

A Note on the Stein Rule Estimation in Linear Models with Nonscalar Error Covariance Matrix
"... SUMMARY. For the coefficient vector of a linear regression model with a nonscalar error covariance matrix, Feasible Generalised Least Squares (FGLS) and Stein rule estimators are considered and, taking the sample size to be large, a condition for the dominance of the Stein rule estimator over the FGLS estimator ..."

Model Identification and Error Covariance Matrix Estimation from Noisy Data Using PCA
Cited by 2 (1 self)
"... Principal Components Analysis (PCA) is increasingly being used for reducing the dimensionality of multivariate data, process monitoring, model identification, and fault diagnosis. However, in the mode in which PCA is currently used, it can be statistically justified only if measurement errors ... and error covariance matrix, but also provides answers to the two important issues of data scaling and model order determination. ..."

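The PCA-based identification this entry describes can be illustrated with a minimal sketch. The data, the noise level, and the rank-selection rule below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

# Illustrative sketch: identify a low-order linear model from noisy data
# with PCA. Rows of X are samples; the true data lie in a 2-D subspace.
rng = np.random.default_rng(0)
basis = rng.normal(size=(5, 2))            # 5 variables, 2 underlying factors
scores = rng.normal(size=(200, 2))
X = scores @ basis.T + 0.01 * rng.normal(size=(200, 5))  # noisy measurements

Xc = X - X.mean(axis=0)                    # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# A simple heuristic model-order rule (not the paper's): keep components
# that each explain more than 0.1% of the total variance.
order = int(np.sum(explained > 0.001))
print(order)                               # 2: the noise directions drop out
```

With small measurement noise, the singular values split cleanly into a signal group and a noise floor, which is what makes model order determination from the spectrum possible.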
Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics
J. Geophys. Res., 1994
Cited by 782 (22 self)
"... A new sequential data assimilation method is discussed. It is based on forecasting the error statistics using Monte Carlo methods, a better alternative than solving the traditional and computationally extremely demanding approximate error covariance equation used in the extended Kalman filter. ... covariance equation are avoided because storage and evolution of the error covariance matrix itself are not needed. The results are also better than what is provided by the extended Kalman filter, since there is no closure problem and the quality of the forecast error statistics therefore improves. ..."

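The Monte Carlo idea in this entry (the basis of what became the ensemble Kalman filter) can be sketched minimally: forecast error statistics are estimated from an ensemble of model states, so no covariance matrix is stored or evolved. The toy linear dynamics below stand in for the quasi-geostrophic model and are purely illustrative:

```python
import numpy as np

# Sketch: forecast error statistics from an ensemble (Monte Carlo) rather
# than evolving an error covariance equation as the extended KF does.
rng = np.random.default_rng(1)
n, N = 3, 500                              # state dimension, ensemble size

A = np.array([[0.9, 0.1, 0.0],             # toy stand-in for the real model
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.7]])

ensemble = rng.normal(size=(n, N))         # initial ensemble of state vectors
forecast = A @ ensemble                    # propagate each member forward

# Forecast error covariance from ensemble spread: only the N state vectors
# are kept in memory, never the covariance itself during propagation.
mean = forecast.mean(axis=1, keepdims=True)
anoms = forecast - mean
P_f = anoms @ anoms.T / (N - 1)
print(P_f.shape)                           # (3, 3)
```

The sampling error of this estimate shrinks as the ensemble grows, while the storage cost stays at N state vectors instead of an n-by-n matrix.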
A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity
1980
Cited by 3060 (5 self)
"... This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator ..."

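The estimator this entry describes is the well-known "sandwich" form; a minimal sketch of its simplest (HC0) variant follows, with illustrative simulated data:

```python
import numpy as np

# Sketch of a heteroskedasticity-consistent (HC0) covariance estimator:
#   Var(b) = (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
# No model of the structure of the heteroskedasticity is required.
rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 5, size=n)
X = np.column_stack([np.ones(n), x])       # intercept + one regressor
y = 1.0 + 2.0 * x + x * rng.normal(size=n) # error variance grows with x

b, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficients
e = y - X @ b                              # OLS residuals

XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (e[:, None] ** 2 * X)         # X' diag(e^2) X
V_hc0 = XtX_inv @ meat @ XtX_inv           # the sandwich estimator
print(np.sqrt(np.diag(V_hc0)))             # robust standard errors
```

Unlike the classical OLS covariance s^2 (X'X)^{-1}, this estimate remains consistent when the disturbance variance differs across observations.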
Learning the Kernel Matrix with Semi-Definite Programming
2002
Cited by 780 (22 self)
"... Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space - classical model selection ..."

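The kernel matrix the abstract describes can be illustrated directly: a Gram matrix of pairwise inner products, symmetric and positive semi-definite by construction. The RBF kernel and its parameter below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Sketch: build a kernel (Gram) matrix and check the two properties the
# abstract names - symmetry and positive (semi-)definiteness.
rng = np.random.default_rng(3)
X = rng.normal(size=(6, 2))                # 6 points in a 2-D input space

def rbf_kernel(X, gamma=0.5):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2): an inner product in an
    # implicit embedding space, never computed coordinate-wise.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

K = rbf_kernel(X)
print(np.allclose(K, K.T))                     # True: symmetric
print(np.min(np.linalg.eigvalsh(K)) > -1e-10)  # True: PSD up to roundoff
```

Because the algorithm only ever sees K, choosing its entries is exactly what fixes the geometry of the embedding, which is what makes learning the matrix itself a meaningful optimisation problem.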
Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification
Psychological Methods, 1998
Cited by 505 (0 self)
"... This study evaluated the sensitivity of maximum likelihood (ML), generalized least squares (GLS), and asymptotic distribution-free (ADF) based fit indices to model misspecification, under conditions that varied sample size and distribution. The effect of violating assumptions of asymptotic robustness theory also was examined. The standardized root-mean-square residual (SRMR) was the index most sensitive to models with misspecified factor covariance(s), and the Tucker-Lewis Index (1973; TLI), Bollen's fit index (1989; BL89), relative noncentrality index (RNI), comparative fit index (CFI ..."

Algorithms for Non-negative Matrix Factorization
In NIPS, 2001
Cited by 1230 (5 self)
"... Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minim ..."

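One of the two multiplicative update rules the abstract refers to, the Euclidean-distance version, can be sketched as follows; matrix sizes, rank, and the small epsilon guard are illustrative assumptions:

```python
import numpy as np

# Sketch of multiplicative NMF updates minimising ||V - W H||_F^2:
#   H <- H * (W^T V) / (W^T W H),    W <- W * (V H^T) / (W H H^T).
# Multiplicative updates preserve non-negativity automatically.
rng = np.random.default_rng(4)
V = rng.uniform(size=(8, 6))               # non-negative data matrix
r = 3                                      # factorisation rank
W = rng.uniform(size=(8, r))
H = rng.uniform(size=(r, 6))

eps = 1e-12                                # guard against division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H)
print(err)                                 # reconstruction error after fitting
```

Because each update multiplies by a non-negative factor, W and H stay non-negative without any projection step, which is the appeal of this family of algorithms.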
Good Error-Correcting Codes based on Very Sparse Matrices
1999
Cited by 741 (23 self)
"... We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The ..."
