Results 1–10 of 24
Multichannel Blind Deconvolution: FIR Matrix Algebra and Separation of Multipath Mixtures
, 1996
Abstract

Cited by 73 (0 self)
A general tool for multichannel and multipath problems is given in FIR matrix algebra. With Finite Impulse Response (FIR) filters (or polynomials) assuming the role played by complex scalars in traditional matrix algebra, we adapt standard eigenvalue routines, factorizations, decompositions, and matrix algorithms for use in multichannel/multipath problems. Using abstract algebra/group-theoretic concepts, information-theoretic principles, and the Bussgang property, methods of single-channel filtering and source separation of multipath mixtures are merged into a general FIR matrix framework. Techniques developed for equalization may be applied to source separation and vice versa. Potential applications of these results lie in neural networks with feedforward memory connections, wideband array processing, and in problems with a multi-input, multi-output network having channels between each source and sensor, such as source separation. Particular applications of FIR polynomial matrix alg...
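As a toy illustration of the core idea (not the paper's algorithm), a matrix product whose entries are FIR filters can be sketched by replacing scalar multiplication with convolution and scalar addition with coefficient-wise addition of zero-padded polynomials; `fir_matmul` is a hypothetical helper name:

```python
import numpy as np

def fir_matmul(A, B):
    """Multiply two matrices whose entries are FIR filters (1-D coefficient
    arrays): scalar products become convolutions, sums stay sums."""
    n, k, m = len(A), len(B), len(B[0])
    C = []
    for i in range(n):
        row = []
        for j in range(m):
            acc = np.zeros(1)
            for l in range(k):
                prod = np.convolve(A[i][l], B[l][j])  # FIR "scalar" product
                size = max(len(acc), len(prod))
                # pad both polynomials to a common degree before adding
                acc = np.pad(acc, (0, size - len(acc))) + \
                      np.pad(prod, (0, size - len(prod)))
            row.append(acc)
        C.append(row)
    return C

# entry-wise example: (1 + z^-1)(1 - z^-1) = 1 - z^-2
C = fir_matmul([[np.array([1.0, 1.0])]], [[np.array([1.0, -1.0])]])
print(C[0][0])
```

Ordinary matrix algebra is recovered when every entry is a length-1 array, which is what makes the adaptation of standard factorizations to the FIR case plausible.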
FINITE SAMPLE APPROXIMATION RESULTS FOR PRINCIPAL COMPONENT ANALYSIS: A MATRIX PERTURBATION APPROACH
Abstract

Cited by 25 (11 self)
Principal Component Analysis (PCA) is a standard tool for dimensional reduction of a set of n observations (samples), each with p variables. In this paper, using a matrix perturbation approach, we study the nonasymptotic relation between the eigenvalues and eigenvectors of PCA computed on a finite sample of size n and those of the limiting population PCA as n → ∞. As is common in machine learning, we present a finite sample theorem which holds with high probability for the closeness between the leading eigenvalue and eigenvector of sample PCA and population PCA under a spiked covariance model. In addition, we also consider the relation between finite sample PCA and the asymptotic results in the joint limit p, n → ∞, with p/n = c. We present a matrix perturbation view of the “phase transition phenomenon”, and a simple linear-algebra-based derivation of the eigenvalue and eigenvector overlap in this asymptotic limit. Moreover, our analysis also applies for finite p, n where we show that although there is no sharp phase transition as in the infinite case, either as a function of noise level or as a function of sample size n, the eigenvector of sample PCA may exhibit a sharp “loss of tracking”, suddenly losing its relation to the (true) eigenvector of the population PCA matrix. This occurs due to a crossover between the eigenvalue due to the signal and the largest eigenvalue due to noise, whose eigenvector points in a random direction.
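The eigenvector overlap in the joint limit can be checked with a minimal simulation of a spiked covariance model (an illustrative sketch, not the paper's code; the spike strength and dimensions are arbitrary choices, and the overlap formula used is the standard random-matrix prediction above the detection threshold):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 400            # aspect ratio c = p/n = 0.5
ell = 4.0                  # spike strength, well above the threshold sqrt(c)
v = np.zeros(p); v[0] = 1.0   # population leading eigenvector

# spiked covariance: Sigma = I + ell * v v^T
X = rng.standard_normal((n, p)) + np.sqrt(ell) * rng.standard_normal((n, 1)) * v
S = X.T @ X / n                       # sample covariance
w, V = np.linalg.eigh(S)
overlap = abs(V[:, -1] @ v)           # |<sample evec, population evec>|

# standard RMT prediction for the overlap when ell > sqrt(c):
c = p / n
pred = np.sqrt((1 - c / ell**2) / (1 + c / ell))
print(overlap, pred)
```

Lowering `ell` toward `sqrt(c)` drives the predicted overlap to zero, which is the phase transition the abstract refers to.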
On the eigenspectrum of the Gram matrix and its relationship to the operator eigenspectrum
 In: ALT 2002, LNAI 2533
, 2002
Abstract

Cited by 14 (4 self)
In this paper we analyze the relationships between the eigenvalues of the m × m Gram matrix K for a kernel k(·, ·), corresponding to a sample x1, ..., xm drawn from a density p(x), and the eigenvalues of the corresponding continuous eigenproblem. We bound the differences between the two spectra and provide a performance bound on kernel PCA.
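The convergence being bounded here can be observed empirically: the eigenvalues of the scaled Gram matrix K/m stabilize as the sample grows, approximating the spectrum of the integral operator induced by the kernel and the sampling density. A minimal sketch (illustrative choices of kernel and density, not from the paper):

```python
import numpy as np

def scaled_gram_spectrum(x, gamma=1.0):
    """Eigenvalues of K/m, descending, for the RBF kernel Gram matrix on
    a 1-D sample x; these approximate the operator eigenvalues."""
    d = x[:, None] - x[None, :]
    K = np.exp(-gamma * d**2)
    return np.linalg.eigvalsh(K / len(x))[::-1]

rng = np.random.default_rng(1)
# as m grows, the leading scaled Gram eigenvalues settle down
for m in (100, 400, 1600):
    lam = scaled_gram_spectrum(rng.standard_normal(m))  # sample from N(0,1)
    print(m, np.round(lam[:3], 3))
```

The paper's contribution is to quantify how fast this happens; the sketch only shows that the empirical spectra at different m agree closely.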
Sample-eigenvalue-based detection of high-dimensional signals in white noise using relatively few samples
, 2007
A conceptual framework for predictability studies
 J. Climate
, 1999
Abstract

Cited by 12 (0 self)
A conceptual framework is presented for a unified treatment of issues arising in a variety of predictability studies. The predictive power (PP), a predictability measure based on information-theoretic principles, lies at the center of this framework. The PP is invariant under linear coordinate transformations and applies to multivariate predictions irrespective of assumptions about the probability distribution of prediction errors. For univariate Gaussian predictions, the PP reduces to conventional predictability measures that are based on the ratio of the rms error of a model prediction to that of the climatological mean prediction. Since climatic variability on intraseasonal to interdecadal timescales follows an approximately Gaussian distribution, the emphasis of this paper is on multivariate Gaussian random variables. Predictable and unpredictable components of multivariate Gaussian systems can be distinguished by predictable component analysis, a procedure derived from discriminant analysis: seeking components with large PP leads to an eigenvalue problem, whose solution yields uncorrelated components that are ordered by PP from largest to smallest. In a discussion of the application of the PP and the predictable component analysis in different types of predictability studies, studies are considered that use either ensemble integrations of numerical models or autoregressive models fitted to observed or simulated data. An investigation of simulated multidecadal variability of the North Atlantic illustrates the proposed methodology. Reanalyzing an ensemble of integrations of the Geophysical Fluid Dynamics Laboratory coupled general circulation model confirms and refines earlier findings. With an autoregressive model fitted to a single integration of the same model, it is demonstrated that similar conclusions can be reached without resorting to computationally costly ensemble integrations.
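The eigenvalue problem behind predictable component analysis can be sketched as a whitened eigendecomposition: whiten by the climatological covariance, then diagonalize the prediction-error covariance in whitened coordinates. This is an illustrative reconstruction under the abstract's univariate-Gaussian form of PP (1 minus the rms-error ratio), not the paper's code, and `predictable_components` is a hypothetical helper name:

```python
import numpy as np

def predictable_components(sigma_err, sigma_clim):
    """Sketch of predictable component analysis: whiten by the climatological
    covariance, eigendecompose the whitened prediction-error covariance, and
    order components from most to least predictable."""
    L = np.linalg.cholesky(sigma_clim)
    Linv = np.linalg.inv(L)
    M = Linv @ sigma_err @ Linv.T     # error/climatology variance ratios
    lam, U = np.linalg.eigh(M)        # ascending: smallest ratio (best PP) first
    W = Linv.T @ U                    # projection weights back in data space
    pp = 1.0 - np.sqrt(lam)           # univariate-Gaussian form of PP
    return pp, W

# toy example (hypothetical numbers): one well-predicted, one barely
# predicted direction
sigma_clim = np.diag([4.0, 1.0])
sigma_err  = np.diag([1.0, 0.99])
pp, W = predictable_components(sigma_err, sigma_clim)
print(pp)   # components ordered by PP, largest first
```

Because the eigenvalues are ratios of error variance to climatological variance, the resulting components are automatically invariant under linear coordinate changes, matching the invariance property claimed in the abstract.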
Non-Parametric Detection of Signals by Information Theoretic Criteria: Performance Analysis and an Improved Estimator
, 2009
Abstract

Cited by 7 (2 self)
Determining the number of sources is a fundamental problem in many scientific fields. In this paper we consider the nonparametric setting, and focus on the detection performance of two popular estimators based on information theoretic criteria, the Akaike information criterion (AIC) and minimum description length (MDL). We present three contributions on this subject. First, we derive a new expression for the detection performance of the MDL estimator, which exhibits a much closer fit to simulations in comparison to previous formulas. Second, we present a random matrix theory viewpoint of the performance of the AIC estimator, including approximate analytical formulas for its overestimation probability. Finally, we show that a small increase in the penalty term of AIC leads to an estimator with a very good detection performance and a negligible overestimation probability.
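The two estimators being analyzed are the classical Wax–Kailath information-theoretic criteria, computed from the sample covariance eigenvalues. A minimal sketch of that baseline (the standard criteria, not the paper's improved estimator; the toy scenario is invented for illustration):

```python
import numpy as np

def aic_mdl(eigvals, n):
    """Wax–Kailath source enumeration from sample covariance eigenvalues
    (descending) and n snapshots. Returns (k_aic, k_mdl)."""
    p = len(eigvals)
    aic, mdl = [], []
    for k in range(p):
        tail = np.asarray(eigvals[k:])
        # log of (geometric mean / arithmetic mean) of the noise eigenvalues
        log_ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))
        ll = n * (p - k) * log_ratio
        dof = k * (2 * p - k)
        aic.append(-2 * ll + 2 * dof)
        mdl.append(-ll + 0.5 * dof * np.log(n))
    return int(np.argmin(aic)), int(np.argmin(mdl))

# toy example: two sources observed on five sensors
rng = np.random.default_rng(0)
n, p, k_true = 500, 5, 2
A = rng.standard_normal((p, k_true))
X = A @ rng.standard_normal((k_true, n)) + 0.1 * rng.standard_normal((p, n))
ev = np.linalg.eigvalsh(X @ X.T / n)[::-1]
result = aic_mdl(ev, n)
print(result)
```

The abstract's point about AIC concerns exactly this construction: its fixed penalty `2 * dof` is occasionally too small, so it overestimates with non-negligible probability, which a slightly larger penalty suppresses.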
Successive Interference Cancellation for DS-CDMA Systems
 Submitted to IEEE Transactions on Communications
, 2000
Abstract

Cited by 5 (2 self)
In this paper, we propose a blind successive interference cancellation receiver for asynchronous direct-sequence code-division multiple-access (DS-CDMA) systems using a maximum mean energy (MME) optimization criterion. The covariance matrix of the received vector is used in conjunction with the MME criterion to realize a blind successive interference canceler that is referred to as the BICMME receiver. The receiver executes interference cancellation in a successive manner, starting with the most dominant interference component and successively cancelling the weaker ones. The receiver is compared against various centralized and decentralized receivers, and it is shown to perform well in the presence of estimation errors of the covariance matrix, making it suitable for application in time-varying channels. We also analyze properties of the covariance matrix estimates which are relevant to the performance of the BICMME receiver. Further, the BICMME receiver is particularly efficient in the presence of a few strong interferers, as may be the case in the downlink of DS-CDMA systems where intracell user transmissions are orthogonal. An iterative implementation that results in reduced complexity is also studied. Index Terms—Blind interference cancellation, CDMA downlink, successive interference cancellation.
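The strongest-first cancellation order described above can be illustrated with a generic power-ordered matched-filter SIC on a toy synchronous CDMA system. This is only a sketch of the successive-cancellation idea; it does not reproduce the paper's blind MME criterion or covariance-based receiver, and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 31, 3                                          # spreading gain, users
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)  # signature waveforms
amps = np.array([4.0, 2.0, 1.0])                      # strongest user first
bits = rng.choice([-1.0, 1.0], size=K)
r = S @ (amps * bits) + 0.05 * rng.standard_normal(N)  # received chip vector

detected = {}
residual = r.copy()
for _ in range(K):
    z = S.T @ residual                     # matched-filter outputs
    z[list(detected)] = 0.0                # skip users already cancelled
    k = int(np.argmax(np.abs(z)))          # most dominant remaining user
    b_hat = np.sign(z[k])
    a_hat = np.abs(z[k])                   # crude amplitude estimate
    residual = residual - a_hat * b_hat * S[:, k]   # cancel, then continue
    detected[k] = b_hat

print(detected)
```

Cancelling the dominant component first is what limits error propagation: each subtraction removes the largest remaining interference term before the weaker users are detected, which is the ordering principle the abstract describes.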
Statistical Performance Analysis of MDL Source Enumeration
 in Array Processing, IEEE
Abstract

Cited by 5 (0 self)
In this correspondence, we focus on the performance analysis of the widely used minimum description length (MDL) source enumeration technique in array processing. Unfortunately, available theoretical analyses exhibit deviations from simulation results. We present an accurate and insightful performance analysis for the probability of missed detection. We also show that the statistical performance of the MDL is approximately the same under both deterministic and stochastic signal models. Simulation results show the superiority of the proposed analysis over available results. Index Terms—Minimum description length (MDL), source enumeration, performance analysis, deterministic signal.
Perils of parsimony: Properties of reduced rank estimates of genetic covariances. Genetics (in press, 2008)
Abstract

Cited by 2 (2 self)
Eigenvalues and eigenvectors of covariance matrices are important statistics for multivariate problems in many applications, including quantitative genetics. Estimates of these quantities are subject to different types of bias. This paper reviews and extends the existing theory on these biases, considering a balanced one-way classification and restricted maximum likelihood estimation. Biases arise from the spread of sample roots, and from ignoring selected principal components when imposing constraints on the parameter space to ensure positive semidefinite estimates or to estimate covariance matrices of chosen, reduced rank. In addition, it is shown that reduced rank estimators which consider only the leading eigenvalues and eigenvectors of the ‘between group’ covariance matrix may be biased due to selecting the wrong subset of principal components. In a genetic context, with groups representing families, this bias is inversely proportional to the degree of genetic relationship among family members, but is independent of sample size. Theoretical results are supplemented by a simulation study, demonstrating close agreement between predicted and observed bias for large samples. It is emphasized that the rank of the genetic covariance matrix should be chosen sufficiently large to accommodate all important genetic principal components, even though, paradoxically, this may require including a number of components with negligible eigenvalues. A strategy for rank selection in practical analyses is outlined.
Bayesian Extreme Components Analysis
Abstract

Cited by 2 (0 self)
Extreme Components Analysis (XCA) is a statistical method based on a single eigenvalue decomposition to recover the optimal combination of principal and minor components in the data. Unfortunately, minor components are notoriously sensitive to overfitting when the number of data items is small relative to the number of attributes. We present a Bayesian extension of XCA by introducing a conjugate prior for the parameters of the XCA model. This Bayesian XCA is shown to outperform plain vanilla XCA as well as Bayesian PCA and XCA based on a frequentist correction to the sample spectrum. Moreover, we show that minor components are only picked when they represent genuine constraints in the data, even for very small sample sizes. An extension to mixtures of Bayesian XCA models is also explored.