Results 1–10 of 234
A randomized algorithm for principal component analysis
 SIAM Journal on Matrix Analysis and Applications
"... Principal component analysis (PCA) requires the computation of a lowrank approximation to a matrix containing the data being analyzed. In many applications of PCA, the best possible accuracy of any rankdeficient approximation is at most a few digits (measured in the spectral norm, relative to the ..."
Abstract

Cited by 56 (0 self)
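The snippet does not reproduce the paper's exact procedure, but the general randomized approach to low-rank approximation can be sketched as follows. This is a generic random range-finder sketch assuming NumPy, not necessarily this paper's algorithm; the function name and the oversampling parameter are illustrative choices.

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, rng=None):
    """Approximate a rank-k factorization of A via a random sketch.

    Returns U, s, Vt with U @ np.diag(s) @ Vt an approximate rank-k fit of A.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Sketch the column space of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sketched range
    # Project A onto the captured subspace and take a small SVD there.
    B = Q.T @ A                      # (k + oversample) x n, cheap to decompose
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k]
```

When A is exactly (or numerically) rank k, the sketched basis captures its range and the reconstruction error is at machine-precision level.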
Tensor Principal Component Analysis via Convex Optimization
, 2012
"... This paper is concerned with the computation of the principal components for a general tensor, known as the tensor principal component analysis (PCA) problem. We show that the general tensor PCA problem is reducible to its special case where the tensor in question is supersymmetric with an even degr ..."
Abstract

Cited by 4 (2 self)
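The paper's convex-optimization treatment does not compress into a few lines, but the object being computed, a leading rank-1 component of a symmetric tensor, can be illustrated with a plain power-iteration heuristic. This is a different, non-convex technique, shown only to make the problem concrete; NumPy assumed.

```python
import numpy as np

def tensor_power_iteration(T, iters=50, rng=None):
    """Rank-1 direction of a symmetric 3-way tensor via power iteration.

    Iterates u <- T(I, u, u) / ||T(I, u, u)|| and returns the
    associated value lambda = T(u, u, u) and the unit vector u.
    """
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        v = np.einsum('ijk,j,k->i', T, u, u)   # contract two modes with u
        u = v / np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)
    return lam, u
```

On a tensor that is exactly a rank-1 symmetric product, the iteration recovers the underlying direction.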
A probabilistic algorithm for k-SAT and constraint satisfaction problems
 In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, FOCS'99
, 1999
"... We present a simple probabilistic algorithm for solving kSAT, and more generally, for solving constraint satisfaction problems (CSP). The algorithm follows a simple localsearch paradigm (cf. [9]): randomly guess an initial assignment and then, guided by those clauses (constraints) that are not sati ..."
Abstract

Cited by 158 (4 self)
Guided by the clauses (constraints) that are not satisfied, the algorithm tries to find a satisfying assignment by successively choosing a random literal from such a clause and flipping the corresponding bit. If no satisfying assignment is found after O(n) steps, it starts over. Our analysis shows that for any satisfiable k-CNF formula with n variables this process
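The restart-and-flip loop described in this snippet can be sketched directly in plain Python. The clause encoding and the 3n step budget per restart are my assumptions (the snippet only says O(n) steps):

```python
import random

def schoening_ksat(clauses, n_vars, max_restarts=200, rng=None):
    """Schoening-style local search for k-SAT.

    `clauses` is a list of tuples of nonzero ints: literal v asks that
    variable |v| be True if v > 0 and False if v < 0.
    Returns a satisfying assignment (dict) or None after all restarts.
    """
    rng = rng or random.Random()

    def unsatisfied(assign):
        return [c for c in clauses
                if not any((lit > 0) == assign[abs(lit)] for lit in c)]

    for _ in range(max_restarts):
        # Guess a uniformly random initial assignment ...
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        # ... then make O(n) correction steps guided by unsatisfied clauses.
        for _ in range(3 * n_vars):
            bad = unsatisfied(assign)
            if not bad:
                return assign
            lit = rng.choice(rng.choice(bad))   # random literal of a random bad clause
            assign[abs(lit)] = not assign[abs(lit)]   # flip the corresponding bit
        if not unsatisfied(assign):
            return assign
    return None
```

Each restart is an independent biased random walk toward a satisfying assignment, which is what makes the success probability per restart analyzable.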
Robust PCA via outlier pursuit
, 2010
"... Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a wellknown, welldocumented sensitivity to outliers. Recent work has considered the setting w ..."
Abstract

Cited by 92 (9 self)
Whom You Know Matters: Venture Capital Networks and Investment Performance,
 Journal of Finance
, 2007
"... Abstract Many financial markets are characterized by strong relationships and networks, rather than arm'slength, spotmarket transactions. We examine the performance consequences of this organizational choice in the context of relationships established when VCs syndicate portfolio company inv ..."
Abstract

Cited by 138 (8 self)
centrality measure by dividing by the maximum possible degree in an n-actor network (i.e., n-1). While we normalize the centrality measures used in the empirical analysis, we note that all our results are robust to using non-normalized network centrality measures instead. B. Closeness: While degree counts
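The normalization described in this excerpt, dividing each actor's degree by the maximum possible degree n-1, is a one-liner; a sketch with a hypothetical adjacency matrix given as a list of lists:

```python
def normalized_degree_centrality(adjacency):
    """Degree centrality of each actor, normalized by the maximum
    possible degree in an n-actor network, i.e. n - 1.

    `adjacency` is a symmetric 0/1 matrix as a list of lists.
    """
    n = len(adjacency)
    return [sum(row) / (n - 1) for row in adjacency]
```

In a star network the center touches every other actor and gets centrality 1.0, while each leaf gets 1/(n-1).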
Rigorous Hitting Times for Binary Mutations
, 1999
"... In the binary evolutionary optimization framework, two mutation operators are theoretically investigated. For both the standard mutation, in which all bits are flipped independently with the same probability, and the 1bitflip mutation, which flips exactly one bit per bitstring, the statistical dis ..."
Abstract

Cited by 64 (2 self)
distribution of the first hitting times of the target are thoroughly computed (expectation and variance) up to terms of order l (the size of the bitstrings) in two distinct situations: without any selection, or with the deterministic (1+1)-ES selection on the OneMax problem. In both cases, the 1-bit-flip
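One of the two settings above, 1-bit-flip mutation under elitist (1+1)-ES selection on OneMax, is easy to simulate. This is a sketch with my own parameter choices, not the paper's analysis:

```python
import random

def hitting_time_onemax(n, max_steps=100_000, rng=None):
    """Steps until the (1+1)-ES with 1-bit-flip mutation hits the OneMax
    optimum (the all-ones bitstring), or None if the budget runs out.

    Each step flips exactly one uniformly chosen bit; the offspring
    replaces the parent only if its number of ones does not decrease.
    """
    rng = rng or random.Random()
    x = [rng.randrange(2) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        if steps >= max_steps:
            return None
        i = rng.randrange(n)
        y = x.copy()
        y[i] ^= 1
        if sum(y) >= sum(x):   # deterministic elitist (1+1) selection
            x = y
        steps += 1
    return steps
```

With k zeros remaining, an improving flip happens with probability k/n, so the expected hitting time is the coupon-collector sum over k of n/k, on the order of n log n.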
VLSI architecture of leading eigenvector generation for on-chip principal component analysis spike sorting system
 in Proc. Conf. IEEE EMBS
, 2008
"... Abstract — Onchip spike detection and principal component analysis (PCA) sorting hardware in an integrated multichannel neural recording system is highly desired to ease the bandwidth bottleneck from highdensity microelectrode array implanted in the cortex. In this paper, we propose the first lea ..."
Abstract

Cited by 6 (1 self)
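The core computation named in the title, leading eigenvector generation for PCA, can be illustrated in software with plain power iteration. This says nothing about the paper's VLSI architecture; NumPy assumed, function name mine:

```python
import numpy as np

def leading_eigenvector(C, iters=200, rng=None):
    """Leading eigenvector of a symmetric PSD matrix C (e.g. a
    spike-waveform covariance) via power iteration."""
    rng = np.random.default_rng(rng)
    v = rng.standard_normal(C.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = C @ v               # one matrix-vector product per step
        v = w / np.linalg.norm(w)
    return v
```

Power iteration needs only matrix-vector products and a normalization, which is one reason eigenvector generation is attractive to implement in fixed-function hardware.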
Regime Change: Sampling Rate vs. Bit-Depth in Compressive Sensing (Rice University)
, 2011
"... The compressive sensing (CS) framework aims to ease the burden on analogtodigital converters (ADCs) by exploiting inherent structure in natural and manmade signals. It has been demonstrated that structured signals can be acquired with just a small number of linear measurements, on the order of t ..."
Abstract
We develop a new theoretical framework to analyze this extreme case and develop new algorithms for signal reconstruction from such coarsely quantized measurements. The 1-bit CS framework leads us to scenarios where it may be more appropriate to reduce bit-depth instead of sampling rate. We find
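In the 1-bit regime each measurement keeps only its sign. A tiny demo of that measurement model (not the paper's reconstruction algorithms; the Gaussian measurement matrix and dimensions are my assumptions) shows that amplitude information is discarded, which is why 1-bit CS can recover signals only up to scale:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 10
Phi = rng.standard_normal((m, n))   # random measurement matrix (assumed Gaussian)
x = rng.standard_normal(n)          # unknown signal

y = np.sign(Phi @ x)                # 1-bit measurements: one sign bit each
y_scaled = np.sign(Phi @ (3.7 * x))

# Positive scaling of the signal leaves every sign bit unchanged.
assert np.array_equal(y, y_scaled)
```

Because every y entry is just ±1, reconstruction methods in this regime typically fix the signal's norm (e.g. to the unit sphere) and recover only its direction.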
Robust Matrix Decomposition with Sparse Corruptions
"... Abstract—Suppose a given observation matrix can be decomposed as the sum of a lowrank matrix and a sparse matrix, and the goal is to recover these individual components from the observed sum. Such additive decompositions have applications in a variety of numerical problems including system identifi ..."
Abstract

Cited by 47 (4 self)
identification, latent variable graphical modeling, and principal components analysis. We study conditions under which recovering such a decomposition is possible via a combination of ℓ1 norm and trace norm minimization. We are specifically interested in the question of how many sparse corruptions are allowed so
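The snippet's combined ℓ1-norm and trace-norm program needs a convex solver, but the underlying low-rank plus sparse split can be illustrated with a simpler alternating-thresholding heuristic. This is a GoDec-style sketch, not the paper's method; NumPy assumed, and the true rank and sparsity level are handed to it:

```python
import numpy as np

def lowrank_plus_sparse(M, rank, card, iters=50):
    """Split M into L (low-rank) + S (sparse) by alternating projections:
    L <- best rank-`rank` fit of M - S (truncated SVD),
    S <- the `card` largest-magnitude entries of M - L.
    """
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # truncated SVD fit
        R = M - L
        thresh = np.sort(np.abs(R), axis=None)[-card]  # card-th largest magnitude
        S = np.where(np.abs(R) >= thresh, R, 0.0)      # keep only the spikes
    return L, S
```

When the sparse corruptions are large relative to the low-rank entries and few in number, the alternation typically locks onto their support and the residual shrinks quickly; the paper's question is precisely how many such corruptions can be tolerated.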