Results 1–10 of 23
Praneeth Netrapalli (UT Austin). Alternating Minimization
"... Matrix completion: theory and practice. Theory: algorithms based on convex relaxation [CR09, CT09]. Practice: alternating guesses for user preferences / movie features [Kor09]. Disconnect between theoretical and practical algorithms. Our work: understand why the heuristics used in practice work so well. ..."
Praneeth Netrapalli. Provable Matrix Completion using Alternating Minimization
, 2013
Abstract
To minimize f(X) over rank-k matrices X = UV†, repeat the following: fix U and minimize f(UV†) over V; then fix V and minimize f(UV†) over U. This is a popular empirical approach to low-rank matrix problems, e.g. matrix completion and clustering. Challenge: few theoretical guarantees.
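The alternating updates described in the abstract above can be sketched in a few lines of NumPy for the matrix-completion loss (squared error over observed entries). Function and parameter names here are illustrative assumptions, not code from the paper:

```python
import numpy as np

def altmin_complete(M, mask, k, iters=100):
    """Rank-k matrix completion by alternating least squares.
    M: matrix with observed values; mask: boolean array of observed entries."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((n, k))
    for _ in range(iters):
        # fix U, minimize over V: one small least-squares problem per column of M
        for j in range(n):
            obs = mask[:, j]
            V[j], *_ = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)
        # fix V, minimize over U: one small least-squares problem per row of M
        for i in range(m):
            obs = mask[i]
            U[i], *_ = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)
    return U, V
```

Each half-step is convex (ordinary least squares) even though the joint problem over (U, V) is not; that is what makes the heuristic cheap in practice.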
Praneeth Netrapalli. One-bit Compressed Sensing: Provable Support and Vector Recovery
, 2013
Abstract
Goal: reconstruct a sparse signal from very few linear measurements. A tremendous amount of work over the last decade shows that O(k log n) measurements suffice to reconstruct k-sparse signals in R^n.
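For the generic sparse-recovery setting sketched above (not the one-bit variant this paper studies), a standard baseline achieving recovery from O(k log n) Gaussian measurements is iterative hard thresholding. The sketch below, including the step-size choice, is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

def iht(A, y, k, iters=500):
    """Iterative hard thresholding: estimate a k-sparse x from y = A x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # conservative gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)      # gradient step on ||y - Ax||^2 / 2
        small = np.argpartition(np.abs(x), -k)[:-k]
        x[small] = 0.0                        # keep only the k largest entries
    return x
```

The hard-threshold step is the nonconvex projection onto the set of k-sparse vectors, mirroring the alternating/projection theme of the other papers on this page.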
Low-rank matrix completion using alternating minimization. arXiv:1212.0467 e-print
, 2012
Greedy Learning of Markov Network Structure
 In Allerton Conf. on Communication, Control and Computing
, 2010
Abstract

Cited by 20 (2 self)
We propose a new yet natural algorithm for learning the graph structure of general discrete graphical models (a.k.a. Markov random fields) from samples. Our algorithm finds the neighborhood of a node by sequentially adding nodes that produce the largest reduction in empirical conditional entropy; it is greedy in the sense that the choice of addition is based only on the reduction achieved at that iteration. Its sequential nature gives it lower computational complexity than other existing comparison-based techniques, all of which involve exhaustive searches over every node set of a certain size. Our main result characterizes the sample complexity of this procedure as a function of node degrees, graph size, and girth in the factor-graph representation. We subsequently specialize this result to the case of Ising models, where we provide a simple, transparent characterization of sample complexity as a function of model and graph parameters. For tree graphs, our algorithm is the same as the classical Chow–Liu algorithm, and in that sense can be considered an extension of the same to graphs with cycles.
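The greedy step in the abstract above can be rendered directly with a plug-in entropy estimate. The function names, the stopping threshold eps, and the brute-force candidate scan are illustrative assumptions for small graphs, not the paper's implementation:

```python
import numpy as np
from collections import Counter

def cond_entropy(xi, XS):
    """Plug-in estimate of H(X_i | X_S) from samples (natural log)."""
    n = len(xi)
    joint = Counter(zip(map(tuple, XS), xi))
    marg = Counter(map(tuple, XS))
    return -sum((c / n) * np.log(c / marg[s]) for (s, _), c in joint.items())

def greedy_neighborhood(X, i, eps):
    """Grow the neighborhood of node i by the largest entropy reduction."""
    S = []
    H = cond_entropy(X[:, i], X[:, S])
    while True:
        cands = [j for j in range(X.shape[1]) if j != i and j not in S]
        best = min(cands, key=lambda j: cond_entropy(X[:, i], X[:, S + [j]]))
        H_best = cond_entropy(X[:, i], X[:, S + [best]])
        if H - H_best < eps:     # no significant reduction: stop
            return S
        S, H = S + [best], H_best
```

With eps tuned to the sample size, a node's true neighbors produce large entropy drops while independent nodes produce only O(1/sqrt(n)) fluctuations, which is what the sample-complexity result quantifies.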
Phase retrieval using alternating minimization
 In NIPS
, 2013
Abstract

Cited by 17 (1 self)
Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. Over the last two decades, a popular generic empirical approach to the many variants of this problem has been alternating minimization, i.e. alternating between estimating the missing phase information and the candidate solution. In this paper, we show that a simple alternating minimization algorithm geometrically converges to the solution of one such problem – finding a vector x from y and A, where y = |Aᵀx| and |z| denotes the vector of elementwise magnitudes of z – under the assumption that A is Gaussian. Empirically, our algorithm performs similarly to recently proposed convex techniques for this variant (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, our algorithm is much more efficient and can scale to large problems. Analytically, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close-to-optimal scaling for the case when the unknown vector is sparse. Our work represents the only known theoretical guarantee for alternating minimization for any variant of phase retrieval problems in the nonconvex setting.
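For the real-valued case (missing signs rather than complex phases), the alternation described above reduces to a few lines. This sketch uses rows of A as the measurement vectors and a spectral initialization in the general spirit of such methods; the names and constants are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def altmin_phase(A, y, iters=100):
    """Recover x (up to global sign) from y = |A x|, A an m-by-n Gaussian map."""
    # spectral initialization: top eigenvector of a y-weighted covariance
    M = A.T @ (y[:, None] ** 2 * A) / len(y)
    x = np.linalg.eigh(M)[1][:, -1]
    for _ in range(iters):
        c = np.sign(A @ x)                              # estimate the missing signs
        x, *_ = np.linalg.lstsq(A, c * y, rcond=None)   # least-squares refit
    return x
```

Once the estimated sign pattern matches the true one, the least-squares step returns the exact solution in the noiseless case, which is the intuition behind the geometric convergence.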
Learning Planar Ising Models
Abstract
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus our attention on the class of planar Ising models, for which inference is tractable using techniques of statistical physics [Kac and Ward; Kasteleyn]. Based on these techniques and recent methods for planarity testing and planar embedding [Chrobak and Payne], we propose a simple greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among the variables, we select a planar graph and an optimal planar Ising model defined on this graph to best approximate that set of correlations. We demonstrate our method in simulations and in an application to modeling Senate voting records.
Nonconvex robust PCA
 In Advances in Neural Information Processing Systems
, 2014
Abstract

Cited by 3 (0 self)
We propose a new method for robust PCA – the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices and onto the set of sparse matrices; each projection is nonconvex but easy to compute. In spite of this nonconvexity, we establish exact recovery of the low-rank matrix under the same conditions that are required by existing methods (which are based on convex optimization). For an m×n input matrix (m ≤ n), our method has a running time of O(r²mn) per iteration and needs O(log(1/ε)) iterations to reach an accuracy of ε. This is close to the running time of simple PCA via the power method, which requires O(rmn) per iteration and O(log(1/ε)) iterations. In contrast, the existing methods for robust PCA, which are based on convex optimization, have O(m²n) complexity per iteration and take O(1/ε) iterations, i.e., exponentially more iterations for the same accuracy. Experiments on both synthetic and real data establish the improved speed and accuracy of our method over existing convex implementations.
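A stripped-down sketch of the two alternating projections described above. The paper uses a carefully decaying threshold and a stagewise rank schedule; the fixed threshold and names here are simplifying assumptions for a well-separated noiseless instance:

```python
import numpy as np

def robust_pca(M, r, thresh, iters=100):
    """Split M into low-rank L plus sparse S by alternating projections."""
    S = np.zeros_like(M)
    for _ in range(iters):
        # project the residual onto rank-r matrices (truncated SVD)
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
        # project the other residual onto sparse matrices (entrywise hard threshold)
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S
```

Both projections are nonconvex sets (fixed rank, hard sparsity) yet each projection is computed exactly, which is the source of the per-iteration O(r²mn) cost quoted above (only a rank-r SVD is needed).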