ROP: Matrix recovery via rank-one projections
 The Annals of Statistics
Cited by 5 (0 self)
Estimation of low-rank matrices is of significant interest in a range of contemporary applications. In this paper, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small low-rank perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The main results obtained in the paper also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections.
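Nuclear norm minimization of the kind described here is typically solved with proximal methods, whose core step is singular value soft-thresholding (the proximal operator of the nuclear norm). A minimal numpy sketch of that step; the dimensions, noise level, and threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def svt(Y, tau):
    """Singular value soft-thresholding: the proximal operator of
    tau * (nuclear norm), the workhorse step of nuclear norm minimization."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))  # rank-2 signal
Y = A + 0.1 * rng.standard_normal((50, 40))                      # noisy observation
X = svt(Y, tau=1.5)  # threshold chosen above the noise singular values
err_raw, err_svt = np.linalg.norm(Y - A), np.linalg.norm(X - A)
```

Because the threshold exceeds the largest noise singular value, the output is exactly rank 2, and shrinking toward the low-rank structure reduces the Frobenius error relative to the raw observation.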
On the Distribution of Roy’s Largest Root Test in MANOVA and in Signal Detection in Noise
, 2011
Cited by 3 (1 self)
Roy’s largest root is a common test in multivariate analysis of variance (MANOVA), with applications in several other problems, such as signal detection in noise. In this paper, assuming multivariate Gaussian observations, we derive a simple yet accurate approximation for the distribution of Roy’s largest root test in the extreme case of concentrated non-centrality, where the signal or difference between groups is concentrated in a single direction. Our main result is that in the MANOVA setting, up to centering and scaling, Roy’s largest root test approximately follows a non-central F distribution, whereas in the signal detection application it approximately follows a modified central F distribution (of the form (s + χ²_a)/χ²_b). Our results allow power calculations for Roy’s test, as well as estimates of the sample size required to detect a given (rank-one) group difference by this test, both of which are important quantities in hypothesis-driven research.
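As a concrete anchor, Roy's statistic itself is simply the largest eigenvalue of H E⁻¹, where H and E are the between-group and within-group sum-of-squares matrices. A small numpy sketch; the group sizes, dimension, and rank-one mean shift are illustrative assumptions, not values from the paper:

```python
import numpy as np

def roys_largest_root(groups):
    """Roy's statistic: largest eigenvalue of H E^{-1}, where H is the
    between-group and E the within-group sum-of-squares matrix."""
    grand = np.mean(np.vstack(groups), axis=0)
    p = grand.size
    H = np.zeros((p, p))
    E = np.zeros((p, p))
    for X in groups:
        m = X.mean(axis=0)
        d = (m - grand)[:, None]
        H += X.shape[0] * (d @ d.T)   # between-group scatter
        C = X - m
        E += C.T @ C                  # within-group scatter
    return float(np.max(np.linalg.eigvals(np.linalg.solve(E, H)).real))

rng = np.random.default_rng(1)
p = 3
# Two groups whose means differ in a single direction: the concentrated
# non-centrality regime analysed in the paper
g1 = rng.standard_normal((30, p))
g2 = rng.standard_normal((30, p)) + np.array([1.0, 0.0, 0.0])
theta = roys_largest_root([g1, g2])
```

With two groups, H has rank one, so the statistic is driven entirely by the single direction of group difference, matching the rank-one setting above.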
MODIFIED DOA ESTIMATION METHODS WITH UNKNOWN SOURCE NUMBER BASED ON PROJECTION PRE-TRANSFORMATION
Cited by 3 (0 self)
Abstract—In this paper, our purpose is to develop methods with high resolution and robustness in the presence of an unknown source number, array errors, snapshot deficiency, and low SNR. The DOA (Direction-Of-Arrival) estimation methods with unknown source number referred to as MUSIC-like and SS-MUSIC-like have shown high resolution in the snapshot-deficient and low-SNR scenario. However, they require several rounds of fine search over the full space, which brings about high computational complexity. Thus, modified methods are proposed to reduce computational complexity and further improve performance. In the modified methods, we first use conventional beamforming to obtain a rough distribution of the signals’ angles, which helps to reduce computational complexity and connects to the technique of projection pre-transformation. Then, through projection pre-transformation, the original methods are further simplified and improved. As demonstrated in computer simulations, the modified DOA estimation methods with unknown source number show not only higher resolution in the snapshot-deficient and low-SNR scenario, but also more robustness against array errors. Although the proposed methods cannot replace array calibration completely, they reduce the required calibration accuracy. Given these advantages, the new methods are better suited to engineering practice.
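For reference, the baseline that the MUSIC-like variants build on is the classic MUSIC pseudospectrum, which scans steering vectors against the noise subspace of the sample covariance. A self-contained numpy sketch for a uniform linear array; the array size, SNR, and the single 20° source are illustrative assumptions, and this is the standard method, not the paper's modified one:

```python
import numpy as np

def music_spectrum(R, n_sources, n_sensors, angles_deg):
    """Classic MUSIC pseudospectrum for a uniform linear array with
    half-wavelength spacing."""
    w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, : n_sensors - n_sources]       # noise-subspace eigenvectors
    P = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(th))
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(P)

rng = np.random.default_rng(2)
m, n, true_deg = 8, 200, 20.0
a = np.exp(1j * np.pi * np.arange(m) * np.sin(np.deg2rad(true_deg)))
S = rng.standard_normal(n) + 1j * rng.standard_normal(n)      # one source
N = 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
X = np.outer(a, S) + N
R = X @ X.conj().T / n                                        # sample covariance
grid = np.arange(-60.0, 60.5, 0.5)
est = grid[np.argmax(music_spectrum(R, 1, m, grid))]
```

Note the full-grid scan in the loop: this is exactly the exhaustive fine search whose cost the projection pre-transformation methods above aim to reduce.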
ESTIMATION OF THE NUMBER OF FACTORS, POSSIBLY EQUAL, IN THE HIGH-DIMENSIONAL CASE
, 2013
Cited by 2 (2 self)
Abstract. Estimation of the number of factors in a factor model is an important problem in many areas such as economics or signal processing. Most classical approaches assume a large sample size n while the dimension p of the observations is kept small. In this paper, we consider the case of high dimension, where p is large compared to n. The approach is based on recent results from random matrix theory. We extend our previous results to the more difficult situation in which some factors are equal, and compare our algorithm to an existing benchmark method.
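A baseline random-matrix detector of the kind this line of work refines counts sample-covariance eigenvalues above the Marchenko-Pastur bulk edge. A hedged numpy sketch; the dimensions, factor strength, and the small finite-sample cushion are illustrative assumptions, and the paper's estimator, which also handles equal factors, is more refined:

```python
import numpy as np

def estimate_num_factors(X, sigma2=1.0, cushion=0.1):
    """Count sample-covariance eigenvalues above the Marchenko-Pastur
    bulk edge sigma2 * (1 + sqrt(p/n))^2; the cushion absorbs the
    finite-sample fluctuation of the largest noise eigenvalue."""
    n, p = X.shape
    eigs = np.linalg.eigvalsh(X.T @ X / n)
    edge = sigma2 * (1.0 + np.sqrt(p / n)) ** 2 + cushion
    return int(np.sum(eigs > edge))

rng = np.random.default_rng(3)
n, p, k = 400, 100, 2
F = rng.standard_normal((n, k))                     # factor scores
L = 3.0 * rng.standard_normal((k, p)) / np.sqrt(p)  # loadings
X = F @ L + rng.standard_normal((n, p))             # k-factor model plus noise
k_hat = estimate_num_factors(X)
```

With factors this strong, both spiked eigenvalues separate cleanly from the noise bulk, so the count recovers k; the hard cases the paper addresses arise when spikes are weak or coincide.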
Parametric joint detection-estimation of the number of sources in array processing, unpublished
Cited by 2 (2 self)
Abstract—Detection of the number of signals and estimation of their directions of arrival (DOAs) are fundamental problems in array processing. We present three main contributions to these problems under the conditional model, where signal amplitudes are assumed deterministic and unknown. First, we show that there is an explicit relation between model selection and the breakdown phenomenon of the Maximum Likelihood estimator (MLE). Second, for the case of a single source, we provide a simple approximate formula for the location of the breakdown of the MLE, using tools from the theory of maxima of stochastic processes. This gives an explicit formula for the source strength required for reliable detection. Third, we apply these results and propose a new joint detection-estimation algorithm with state-of-the-art performance. We demonstrate via simulations the improved detection performance of our algorithm compared to other popular source enumeration methods.
A theoretical investigation of several model selection criteria for . . .
 PATTERN RECOGNITION LETTERS
, 2012
OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage
 ISSN 0018-9448. doi: 10.1109/TIT.2014.2311661. URL http://dx.doi.org/10.1109/TIT.2014.2311661
, 2014
Cited by 1 (0 self)
Abstract. The truncated singular value decomposition (SVD) of the measurement matrix is the optimal solution to the representation problem of how to best approximate a noisy measurement matrix using a low-rank matrix. Here, we consider the (unobservable) denoising problem of how to best approximate a low-rank signal matrix buried in noise by optimal (re)weighting of the singular vectors of the measurement matrix. We exploit recent results from random matrix theory to exactly characterize the large-matrix limit of the optimal weighting coefficients and show that they can be computed directly from data for a large class of noise models that includes the i.i.d. Gaussian noise case. Our analysis brings into sharp focus the shrinkage-and-thresholding form of the optimal weights and the non-convex nature of the associated shrinkage function (on the singular values), and explains why matrix regularization via singular value thresholding with convex penalty functions (such as the nuclear norm) will always be suboptimal. We validate our theoretical predictions with numerical simulations, develop an implementable algorithm (OptShrink) that realizes the predicted performance gains, and show how our methods can be used to improve estimation in the setting where the measured matrix has missing entries.
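To make the gap concrete: for fixed singular vectors, the Frobenius-optimal weights are w_i = u_iᵀ A v_i, which never coincide with the raw singular values kept by a truncated SVD. The numpy sketch below uses oracle knowledge of the signal to exhibit that gap; OptShrink's contribution is estimating these weights from the data alone, and the dimensions and noise model here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 80, 60, 2
# Planted rank-2 signal plus i.i.d. Gaussian noise
A = 5.0 * rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) / np.sqrt(n)
Y = A + rng.standard_normal((m, n)) / np.sqrt(n)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

# Truncated SVD: keep the top-r singular values unchanged
tsvd = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

# Oracle re-weighting: for the same singular vectors, the Frobenius-error-
# minimising weight for component i is u_i^T A v_i
w = np.array([U[:, i] @ A @ Vt[i] for i in range(r)])
shrunk = U[:, :r] @ np.diag(w) @ Vt[:r]

err_tsvd = np.linalg.norm(tsvd - A)
err_shrunk = np.linalg.norm(shrunk - A)
```

Since the rank-one terms u_i v_iᵀ are orthonormal in the Frobenius inner product, the oracle weights minimize the error exactly, so re-weighting is guaranteed to beat plain truncation whenever the raw singular values are inflated by noise.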
Bayesian information criterion for source enumeration in large-scale adaptive antenna array
 IEEE TRANS. VEH. TECHNOL
, 2015
Cited by 1 (0 self)
Subspace-based high-resolution algorithms for direction-of-arrival estimation have been developed for large-scale adaptive antenna arrays. However, their prerequisite step, namely source enumeration, has not yet been addressed. In this work, a new approach is devised in the framework of the Bayesian information criterion (BIC) to provide reliable detection of the signal source number for the general asymptotic regime where m, n → ∞ and m/n → c ∈ (0, ∞), with m and n being the numbers of antennas and snapshots, respectively. In particular, the a posteriori probability is determined by correctly calculating the log-likelihood and penalty functions for the general asymptotic case. By maximizing the a posteriori probability, we are able to effectively find the signal number. An accurate closed-form expression for the probability of missed detection is also derived for the proposed BIC variant. In addition, the probability of false alarm for the BIC detector is proved to converge to zero as m, n → ∞ and m/n → c. Simulation results are included to demonstrate the superiority of the proposed detection approach over state-of-the-art schemes and to corroborate our theoretical calculations.
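For orientation, the classical fixed-m criterion that this BIC variant generalizes scores each candidate source number by a likelihood term (the arithmetic/geometric mean ratio of the presumed noise eigenvalues) plus a complexity penalty. A numpy sketch of that classical form; the scenario parameters are illustrative assumptions, and the paper's criterion replaces both terms with counterparts valid when m grows with n:

```python
import numpy as np

def bic_enumerate(eigs, n):
    """Classical BIC/MDL-style source enumeration from sample-covariance
    eigenvalues (Wax-Kailath form), shown only to fix ideas; the paper
    re-derives both terms for the m, n -> infinity, m/n -> c regime."""
    m = len(eigs)
    eigs = np.sort(eigs)[::-1]
    scores = []
    for k in range(m):
        tail = eigs[k:]                        # presumed noise eigenvalues
        am = tail.mean()                       # arithmetic mean
        gm = np.exp(np.mean(np.log(tail)))     # geometric mean
        ll = n * (m - k) * np.log(am / gm)     # sphericity log-likelihood
        penalty = 0.5 * k * (2 * m - k) * np.log(n)
        scores.append(ll + penalty)
    return int(np.argmin(scores))

rng = np.random.default_rng(5)
m, n, k = 10, 1000, 3
A = rng.standard_normal((m, k))
S = rng.standard_normal((k, n))
X = 3.0 * A @ S / np.sqrt(m) + rng.standard_normal((m, n))  # k sources + noise
eigs = np.linalg.eigvalsh(X @ X.T / n)
k_hat = bic_enumerate(eigs, n)
```

In this small-m, large-n regime the criterion is consistent; the paper's point is that both the likelihood and the penalty must be recomputed when m and n are comparable.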
WHEN ARE THE MOST INFORMATIVE COMPONENTS FOR INFERENCE ALSO THE PRINCIPAL COMPONENTS?
Cited by 1 (1 self)
Abstract. Which components of the singular value decomposition of a signal-plus-noise data matrix are most informative for the inferential task of detecting or estimating an embedded low-rank signal matrix? Principal component analysis ascribes greater importance to the components that capture the greatest variation, i.e., the singular vectors associated with the largest singular values. This choice is often justified by invoking the Eckart-Young theorem, even though that work addresses the problem of how to best represent a signal-plus-noise matrix using a low-rank approximation, not how to best infer the underlying low-rank signal component. Here we take a first-principles approach in which we start with a signal-plus-noise data matrix and show how the spectrum of the noise-only component governs whether the principal or the middle components of the singular value decomposition of the data matrix will be the informative components for inference. Simply put, if the noise spectrum is supported on a connected interval, in a sense we make precise, then the use of the principal components is justified. When the noise spectrum is supported on multiple intervals, the middle components might be more informative than the principal components. The end result is a proper justification of the use of principal components in the oft-considered setting where the noise matrix is i.i.d. Gaussian. An additional consequence of our study is the identification of scenarios, generically involving heterogeneous noise models such as mixtures of Gaussians, where the middle components might be more informative than the principal components, so that they may be exploited to extract additional processing gain. In these settings, our results show how the blind use of principal components can lead to suboptimal or even faulty inference because of phase transitions that separate a regime where the principal components are informative from a regime where they are uninformative. We illustrate our findings using numerical simulations and a real-world example.
Statistical Analysis of the Performance of MDL Enumeration for Multiple-Missed Detection in Array Processing
 SENSORS
, 2015