Results 1-10 of 47
Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization
 J. Opt. Soc. Amer. A
, 2002
Cited by 41 (13 self)
Abstract
The phase retrieval problem is of paramount importance in various areas of applied physics and engineering. The state of the art for solving this problem in two dimensions relies heavily on the pioneering work of Gerchberg, Saxton, and Fienup. Despite the widespread use of the algorithms proposed by these three researchers, current mathematical theory cannot explain their remarkable success. Nevertheless, great insight can be gained into the behavior, the shortcomings, and the performance of these algorithms from their possible counterparts in convex optimization theory. An important step in this direction was made two decades ago when the error reduction algorithm was identified as a nonconvex alternating projection algorithm. The purpose of this paper is to formulate the phase retrieval problem with mathematical care and to establish new connections between well-established numerical phase retrieval schemes and classical convex optimization methods. Specifically, it is shown that Fienup's basic input-output algorithm corresponds to Dykstra's algorithm, and that Fienup's hybrid input-output algorithm can be viewed as an instance of the Douglas-Rachford algorithm. This work provides a theoretical framework to better understand and, potentially, improve existing phase recovery algorithms.
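The error reduction scheme the abstract identifies as nonconvex alternating projection can be sketched as alternating between the set of signals with the measured Fourier magnitudes and the set of signals with a known support. The signal size, support mask, and iteration count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of error reduction as alternating projection: project onto the
# Fourier-magnitude constraint, then onto a known-support constraint.
rng = np.random.default_rng(0)
n = 64
support = np.zeros(n, dtype=bool)
support[:16] = True                       # assumed known support
x_true = np.zeros(n)
x_true[:16] = rng.random(16)
mags = np.abs(np.fft.fft(x_true))         # measured Fourier magnitudes

x = rng.random(n)                         # random initial guess
err0 = np.linalg.norm(np.abs(np.fft.fft(x)) - mags) / np.linalg.norm(mags)
for _ in range(500):
    X = np.fft.fft(x)
    X = mags * np.exp(1j * np.angle(X))   # project onto the magnitude set
    x = np.fft.ifft(X).real
    x[~support] = 0.0                     # project onto the support set
err = np.linalg.norm(np.abs(np.fft.fft(x)) - mags) / np.linalg.norm(mags)
```

The magnitude-domain residual is non-increasing under this iteration, which is the monotonicity property the convex-analytic view explains; global convergence, as the abstract notes, is not guaranteed.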
Formulation and Solution of Structured Total Least Norm Problems for Parameter Estimation
 IEEE Transactions on Signal Processing
, 1996
Cited by 31 (9 self)
Abstract
The Total Least Squares (TLS) method is a generalization of the least squares (LS) method for solving overdetermined sets of linear equations Ax ≈ b. The TLS method minimizes ‖[E | −r]‖_F where r = b − (A+E)x, so that (b − r) ∈ Range(A+E), given A ∈ C^{m×n}, with m ≥ n, and b ∈ C^{m×1}. The most common TLS algorithm is based on the singular value decomposition (SVD) of [A | b]. However, the SVD-based methods may not be appropriate when the matrix A has a special structure, since they do not preserve the structure. Recently, a new problem formulation, called Structured Total Least Norm (STLN), and an algorithm for computing the STLN solution have been developed. The STLN method preserves the special structure of A or [A | b], and can minimize the error in the discrete L_p norm, where p = 1, 2, or ∞. In this paper, the STLN problem formulation is generalized for computing the solution of STLN problems with multiple right-hand sides AX ≈ B. It is shown that these ...
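The classical SVD-based TLS solution that the abstract contrasts with STLN can be sketched in a few lines: the estimate is read off the right singular vector of the augmented matrix [A | b] associated with its smallest singular value. Sizes, noise level, and the test problem below are illustrative assumptions.

```python
import numpy as np

# Sketch of SVD-based TLS for Ax ~ b: take the right singular vector of
# [A | b] for the smallest singular value, then rescale its last entry to -1.
rng = np.random.default_rng(1)
m, n = 20, 3
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy right-hand side

C = np.hstack([A, b[:, None]])                  # augmented matrix [A | b]
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                                      # vector for smallest sigma
x_tls = -v[:n] / v[n]                           # TLS solution (assumes v[n] != 0)
```

As the abstract points out, replacing A by A + E this way destroys any Toeplitz or Hankel structure in A, which is the motivation for the structure-preserving STLN formulation.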
Structured Total Least Squares and L_2 Approximation Problems
, 1993
Cited by 22 (1 self)
Abstract
It is shown how structured and weighted total least squares and L_2 approximation problems lead to a 'nonlinear' generalized singular value decomposition. An inverse iteration scheme to find a (local) minimum is proposed. The emphasis of the paper is not on the convergence analysis of the algorithm, but rather the purpose is to illustrate its use in numerous applications in systems and control, including total least squares with relative errors and/or fixed elements, inverse singular value problems, an errors-in-variables variant of the Kalman filter, impulse response realization from noisy data, H_2 model reduction, H_2 system identification and calculating the largest stability radius of uncertain linear systems. Several numerical examples are given. Revised versions will be available via anonymous ftp to gate.esat.kuleuven.ac.be in the file pub/SISTA/demoor/reports/h2rankdeffinal.ps.Z (compressed postscript format).
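Plain inverse iteration for the smallest singular pair, which is the linear special case of the 'nonlinear' scheme the abstract proposes (there the weighting depends on the current iterate), can be sketched as follows. The matrix size and data are illustrative assumptions.

```python
import numpy as np

# Sketch of inverse iteration on A^T A to find the right singular vector
# for the smallest singular value of A.
rng = np.random.default_rng(3)
A = rng.standard_normal((10, 4))

v = rng.standard_normal(4)
v /= np.linalg.norm(v)
M = A.T @ A
for _ in range(50):
    v = np.linalg.solve(M, v)   # amplify the smallest-eigenvalue direction
    v /= np.linalg.norm(v)

sigma_min_est = np.linalg.norm(A @ v)
sigma_min_true = np.linalg.svd(A, compute_uv=False)[-1]
```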
Adaptive Eigenvalue Computations Using Newton's Method on the Grassmann Manifold
 SIAM J. Matrix Anal. Appl
, 1999
Cited by 19 (2 self)
Abstract
We consider the problem of updating an invariant subspace of a Hermitian, large and structured matrix when the matrix is modified slightly. The problem can be formulated as that of computing stationary values of a certain function, with orthogonality constraints. The constraint is formulated as the requirement that the solution must be on the Grassmann manifold, and Newton's method on the manifold is used. In each Newton iteration a Sylvester equation is to be solved. We discuss the properties of the Sylvester equation and conclude that for large problems preconditioned iterative methods can be used. Preconditioning techniques are discussed. Numerical examples from signal subspace computations are given, where the matrix is Toeplitz and we compute a partial singular value decomposition corresponding to the largest singular values. Further we solve numerically the problem of computing the smallest eigenvalues and corresponding eigenvectors of a large sparse matrix that has been slightly...
Shape From Moments - An Estimation Theory Perspective
 IEEE Transactions on Signal Processing
, 2004
Cited by 16 (2 self)
Abstract
This paper discusses the problem of recovering a planar polygon from its measured complex moments. These moments correspond to an indicator function defined over the polygon's support. Previous work on this problem gave necessary and sufficient conditions for such a successful recovery process and focused mainly on the case of exact measurements being given. In this paper
Fourth-Order Cumulant Structure Forcing: Application to Blind Array Processing
, 1992
Cited by 11 (8 self)
Abstract
In blind array processing, the array manifold is unknown but, under the signal independence assumption, the signal parameters can be estimated by resorting to higher-order information. We consider the 4th-order cumulant tensor and show that sample cumulant enhancement based on rank and symmetry properties yields cumulant estimates with the exact theoretical structure. Any identification procedure based on enhanced cumulants is then equivalent to cumulant matching, bypassing the need for initialization and optimization. 1. INTRODUCTION This paper deals with a linear data model where an m-dimensional complex vector x(t) is assumed to be the superposition of n linear components, possibly corrupted by additive noise. An observation can then be written as: x(t) = Σ_{p=1}^{n} s_p(t) a_p + N(t) (1) where each s_p(t) is a complex stationary scalar process, each a_p is a deterministic m×1 vector, and the m×1 vector N(t) represents additive noise. This is the standard model in narrow-band array p...
Spectral Compressive Sensing
Cited by 11 (1 self)
Abstract
Compressive sensing (CS) is a new approach to simultaneous sensing and compression of sparse and compressible signals. A great many applications feature smooth or modulated signals that can be modeled as a linear combination of a small number of sinusoids; such signals are sparse in the frequency domain. In practical applications, the standard frequency domain signal representation is the discrete Fourier transform (DFT). Unfortunately, the DFT coefficients of a frequency-sparse signal are themselves sparse only in the contrived case where the sinusoid frequencies are integer multiples of the DFT's fundamental frequency. As a result, practical DFT-based CS acquisition and recovery of smooth signals does not perform nearly as well as one might expect. In this paper, we develop a new spectral compressive sensing (SCS) theory for general frequency-sparse signals. The key ingredients are an oversampled DFT frame, a signal model that inhibits closely spaced sinusoids, and classical sinusoid parameter estimation algorithms from the field of spectrum estimation. Using periodogram and eigenanalysis-based spectrum estimates (e.g., MUSIC), our new SCS algorithms significantly outperform the current state-of-the-art CS algorithms while providing provable bounds on the number of measurements required for stable recovery. I.
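The abstract's central observation, that an off-grid sinusoid is not sparse in the DFT basis, is easy to check numerically: a sinusoid on the DFT grid occupies two bins, while one halfway between grid points leaks across many bins. The signal length, frequencies, and threshold below are illustrative assumptions.

```python
import numpy as np

# Compare DFT sparsity of an on-grid and an off-grid sinusoid.
N = 256
t = np.arange(N)
on_grid = np.cos(2 * np.pi * 10.0 / N * t)    # frequency on the DFT grid
off_grid = np.cos(2 * np.pi * 10.5 / N * t)   # frequency between grid points

def big_coeffs(x, frac=0.01):
    """Count DFT coefficients above frac of the largest magnitude."""
    mags = np.abs(np.fft.fft(x))
    return int(np.sum(mags > frac * mags.max()))

n_on = big_coeffs(on_grid)      # 2 bins: the sinusoid and its mirror image
n_off = big_coeffs(off_grid)    # many bins, due to spectral leakage
```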
Hankel matrix rank minimization with applications in system identification and realization. Submitted for publication. Preprint available at http://faculty.washington.edu/mfazel
, 2011
Cited by 9 (1 self)
Abstract
In honor of Professor Paul Tseng, who went missing while on a kayak trip on the Jinsha river, China, on August 13, 2009, for his contributions to the theory and algorithms for large-scale optimization. Abstract. In this paper, we introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz and moment structures, and catalog applications from diverse fields under this framework. We discuss various first-order methods for solving the resulting optimization problem, including alternating direction methods, the proximal point algorithm, and gradient projection methods. We perform computational experiments to compare these methods on the system identification and system realization problems. For the system identification problem, the gradient projection method (accelerated by Nesterov's extrapolation techniques) usually outperforms other first-order methods in terms of CPU time on both real and simulated data; while for the system realization problem, the alternating direction method, as applied to a certain primal reformulation, usually outperforms other first-order methods in terms of CPU time.
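The structural fact that connects Hankel rank to realization, and the nuclear-norm proximal step used by the first-order methods the abstract compares, can both be sketched briefly. The impulse response, sizes, and threshold below are illustrative assumptions.

```python
import numpy as np

# The Hankel matrix of an impulse response that is a sum of r exponentials
# has rank r; minimizing rank (or its nuclear-norm surrogate) over Hankel
# matrices is the core of system realization.
N = 40
k = np.arange(N)
h = 0.9 ** k + 2.0 * (-0.5) ** k          # impulse response, two real poles

m = N // 2
H = np.array([[h[i + j] for j in range(m)] for i in range(m + 1)])  # Hankel

s = np.linalg.svd(H, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))   # expect 2

# Singular-value soft-thresholding: the proximal operator of the nuclear
# norm, the basic building block of the first-order methods discussed.
U, sv, Vt = np.linalg.svd(H, full_matrices=False)
tau = 0.1
H_svt = U @ np.diag(np.maximum(sv - tau, 0.0)) @ Vt
```

Note that H_svt is no longer Hankel; handling the interplay between the nuclear-norm prox and the linear structure constraint is exactly what the paper's framework addresses.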
Bregman Monotone Optimization Algorithms
, 2002
Cited by 8 (2 self)
Abstract
A broad class of optimization algorithms based on Bregman distances in Banach spaces is unified around the notion of Bregman monotonicity. A systematic investigation of this notion leads to a simplified analysis of numerous algorithms and to the development of a new class of parallel block-iterative surrogate Bregman projection schemes. Another key contribution is the introduction of a class of operators that is shown to be intrinsically tied to the notion of Bregman monotonicity and to include the operators commonly found in Bregman optimization methods. Special emphasis is placed on the viability of the algorithms and the importance of Legendre functions in this regard. Various applications are discussed.
Structured Low Rank Approximation
 LINEAR ALGEBRA APPL
, 2002
Cited by 6 (1 self)
Abstract
This paper concerns the construction of a structured low rank matrix that is nearest to a given matrix. The notion of structured low rank approximation arises in various applications, ranging from signal enhancement to protein folding to computer algebra, where the empirical data collected in a matrix do not maintain either the specified structure or the desirable rank as is expected in the original system. The task to retrieve useful information while maintaining the underlying physical feasibility often necessitates the search for a good structured lower rank approximation of the data matrix. This paper addresses some of the theoretical and numerical issues involved in the problem. Two procedures for constructing the nearest structured low rank matrix are proposed. The procedures are flexible enough that they can be applied to any lower rank, any linear structure, and any matrix norm in the measurement of nearness. The techniques can also be easily implemented by utilizing available optimization packages. The special case of symmetric Toeplitz structure using the Frobenius matrix norm is used to exemplify the ideas throughout the discussion. The concept, rather than the implementation details, is the main emphasis of the paper.
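For the symmetric Toeplitz special case the abstract uses as its running example, a common baseline is lift-and-project (Cadzow) iteration: alternate between the set of rank-r matrices (truncated SVD) and the symmetric Toeplitz subspace (diagonal averaging). This is a standard baseline, not the paper's own procedures; the sizes, rank, and data below are illustrative assumptions.

```python
import numpy as np

# Lift-and-project for symmetric Toeplitz structured low rank approximation.
rng = np.random.default_rng(2)
n, r = 12, 4
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
c_true = np.cos(0.3 * np.arange(n)) + 0.5 * np.cos(1.1 * np.arange(n))
T_true = c_true[d]                      # symmetric Toeplitz, rank 4
M = T_true + 0.01 * rng.standard_normal((n, n))   # noisy data matrix

def toeplitz_project(M):
    """Frobenius projection onto symmetric Toeplitz: average matching diagonals."""
    c = np.array([np.mean(np.concatenate([np.diagonal(M, k), np.diagonal(M, -k)]))
                  for k in range(n)])
    return c[d]

def rank_project(M, r):
    """Truncated SVD: nearest matrix of rank at most r."""
    U, s, Vt = np.linalg.svd(M)
    return (U[:, :r] * s[:r]) @ Vt[:r]

for _ in range(50):
    M = rank_project(toeplitz_project(M), r)

toe_dev = np.linalg.norm(M - toeplitz_project(M)) / np.linalg.norm(M)
```

Because the rank constraint is nonconvex, such iterations find a good feasible point rather than a certified nearest one, which is why the paper's flexible optimization-based procedures are of interest.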