Results 1–10 of 39
Designing Structured Tight Frames via an Alternating Projection Method
, 2003
Abstract

Cited by 84 (10 self)
Tight frames, also known as general Welch-Bound-Equality sequences, generalize orthonormal systems. Numerous applications, including communications, coding, and sparse approximation, require finite-dimensional tight frames that possess additional structural properties. This paper proposes an alternating projection method that is versatile enough to solve a huge class of inverse eigenvalue problems, which includes the frame design problem. To apply this method, one only needs to solve a matrix nearness problem that arises naturally from the design specifications. It is therefore fast and easy to develop versions of the algorithm that target new design problems. Alternating projection will often succeed even when algebraic constructions are unavailable. To demonstrate …
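The alternating projection scheme can be illustrated on the simplest structured design problem, a unit-norm tight frame: alternate between the matrix nearness step onto the set of α-tight frames (an SVD-based, polar-type projection) and the projection onto the set of matrices with unit-norm columns. The following is a minimal sketch, not the paper's full algorithm; the dimensions, iteration count, and function names are illustrative choices.

```python
import numpy as np

def nearest_tight_frame(F, alpha):
    # Matrix nearness step: the Frobenius-nearest alpha-tight frame to F
    # is alpha * U @ Vt, where F = U S Vt is the reduced SVD of F.
    U, _, Vt = np.linalg.svd(F, full_matrices=False)
    return alpha * U @ Vt

def normalize_columns(F):
    # Projection onto the structural constraint: unit-norm frame vectors.
    return F / np.linalg.norm(F, axis=0, keepdims=True)

def design_unit_norm_tight_frame(d, N, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    F = normalize_columns(rng.standard_normal((d, N)))
    alpha = np.sqrt(N / d)  # frame bound of a unit-norm tight frame
    for _ in range(iters):
        F = nearest_tight_frame(F, alpha)
        F = normalize_columns(F)
    return F

F = design_unit_norm_tight_frame(3, 7)
# For a unit-norm tight frame, the frame operator F F^T equals (N/d) I.
```

Ending each cycle with the column normalization guarantees that the structural constraint holds exactly at output, while the frame operator F Fᵀ approaches (N/d) I as the alternation proceeds.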
Applications of convex analysis to multidimensional scaling
 Recent Developments in Statistics
, 1977
Abstract

Cited by 75 (5 self)
Abstract. In this paper we discuss the convergence of an algorithm for metric and nonmetric multidimensional scaling that is very similar to the C-matrix algorithm of Guttman. The paper improves some earlier results in two respects. In the first place, the analysis is extended to cover general Minkowski metrics; in the second place, a more elementary proof of convergence, based on results of Robert, is presented. This paper was originally presented at the European Meeting of Statisticians …
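The Guttman transform at the heart of such algorithms is a majorization step: in the Euclidean, unit-weight case it has the closed form sketched below, and each application is guaranteed not to increase the raw stress. This is a toy illustration with invented data, not code from the paper.

```python
import numpy as np

def stress(X, delta):
    # Raw stress: squared mismatch between dissimilarities and distances.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    return np.sum((delta - D) ** 2) / 2

def guttman_update(X, delta):
    # One majorization step (the Guttman transform), unit weights.
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    ratio = np.divide(delta, D, out=np.zeros_like(D), where=D > 0)
    B = -ratio
    np.fill_diagonal(B, 0.0)
    np.fill_diagonal(B, -B.sum(axis=1))  # diag = row sums of delta/d
    return B @ X / n

rng = np.random.default_rng(1)
pts = rng.standard_normal((10, 2))
delta = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
X = rng.standard_normal((10, 2))           # random starting configuration
history = [stress(X, delta)]
for _ in range(100):
    X = guttman_update(X, delta)
    history.append(stress(X, delta))
```

The monotone decrease of `history` is exactly the convergence property the paper analyzes; majorization guarantees it regardless of the starting configuration.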
Trace ratio vs. ratio trace for dimensionality reduction
 Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR)
, 2007
Abstract

Cited by 53 (8 self)
A large family of algorithms for dimensionality reduction ends with solving a Trace Ratio problem of the form argmax_W Tr(W^T Sp W) / Tr(W^T Sl W), which is generally transformed into the corresponding Ratio Trace form argmax_W Tr[(W^T Sl W)^{-1} (W^T Sp W)] to obtain a closed-form but inexact solution. In this work, an efficient iterative procedure is presented to solve the Trace Ratio problem directly. In each step, a Trace Difference problem argmax_W Tr[W^T (Sp − λ Sl) W] is solved, with λ being the trace ratio value computed in the previous step. Convergence of the projection matrix W, as well as global optimality of the trace ratio value λ, is proven based on point-to-set map theory. In addition, the procedure is extended to solve trace ratio problems with the more general constraint W^T C W = I and to provide exact solutions for kernel-based subspace learning problems. Extensive experiments on face and UCI data demonstrate the fast convergence of the proposed solution, as well as its superior classification performance relative to the corresponding solutions of the Ratio Trace problem.
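The iterative procedure described above is short enough to sketch directly: each Trace Difference step reduces to a symmetric eigenproblem, and λ is refreshed from the resulting W. The dimensions, seed, and matrix construction below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def trace_ratio(Sp, Sl, k, iters=200, tol=1e-12, seed=0):
    # At each step solve the trace difference problem
    #   argmax_W Tr[W^T (Sp - lam*Sl) W],  W^T W = I,
    # whose solution is the top-k eigenvectors of Sp - lam*Sl.
    n = Sp.shape[0]
    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.standard_normal((n, k)))[0]
    lam = np.trace(W.T @ Sp @ W) / np.trace(W.T @ Sl @ W)
    for _ in range(iters):
        _, vecs = np.linalg.eigh(Sp - lam * Sl)
        W = vecs[:, -k:]                       # top-k eigenvectors
        new_lam = np.trace(W.T @ Sp @ W) / np.trace(W.T @ Sl @ W)
        if abs(new_lam - lam) < tol:
            lam = new_lam
            break
        lam = new_lam
    return W, lam

rng = np.random.default_rng(1)
B = rng.standard_normal((8, 8))
Sp = B @ B.T                      # positive semidefinite
C = rng.standard_normal((8, 8))
Sl = C @ C.T + np.eye(8)          # positive definite
W, lam = trace_ratio(Sp, Sl, k=2)
```

At a fixed point the trace difference value Tr[Wᵀ(Sp − λSl)W] vanishes, which provides a simple convergence check.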
On the convergence of concaveconvex procedure
 In NIPS Workshop on Optimization for Machine Learning
, 2009
Abstract

Cited by 50 (1 self)
The concave-convex procedure (CCCP) is a majorization-minimization algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is used extensively in many learning algorithms, such as sparse support vector machines (SVMs), transductive SVMs, and sparse principal component analysis. Though widely used in many applications, the convergence behavior of CCCP has received little dedicated attention. Yuille and Rangarajan analyzed its convergence in their original paper; however, we believe the analysis is not complete. Although the convergence of CCCP can be derived from the convergence of the d.c. algorithm (DCA), that proof is more specialized and technical than is actually required for the specific case of CCCP. In this paper, we follow a different line of reasoning and show how Zangwill's global convergence theory of iterative algorithms provides a natural framework to prove the convergence of CCCP, allowing a simpler and more elegant proof. This underlines Zangwill's theory as a powerful and general framework for the convergence issues of iterative algorithms, having also been used to prove the convergence of algorithms such as expectation-maximization and generalized alternating minimization. We provide a rigorous analysis of the convergence of CCCP by addressing two questions: (i) When does CCCP find a local minimum or a stationary point of the d.c. program under consideration? (ii) When does the sequence generated by CCCP converge? We also present an open problem on the local convergence of CCCP.
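The CCCP iteration is easy to see on a one-dimensional d.c. program: take f(x) = x⁴ − 2x² with u(x) = x⁴ and v(x) = 2x², both convex. Each step linearizes v at the current point and minimizes the resulting convex surrogate in closed form. This toy example is my own illustration, not one from the paper.

```python
def f(x):
    # d.c. objective: u(x) - v(x) with u(x) = x**4, v(x) = 2*x**2
    return x**4 - 2 * x**2

def cccp_step(x):
    # Linearize v at x: v(y) ~ v(x) + 4*x*(y - x); then minimize the
    # convex surrogate u(y) - 4*x*y, i.e. solve 4*y**3 = 4*x.
    return (abs(x) ** (1 / 3)) * (1 if x >= 0 else -1)

x, values = 0.2, []
for _ in range(60):
    values.append(f(x))
    x = cccp_step(x)
values.append(f(x))
```

From x₀ = 0.2 the iterates climb monotonically toward the stationary point x = 1, and f decreases at every step, exactly as the majorization-minimization view predicts.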
Blockrelaxation Algorithms in Statistics
, 1994
Abstract

Cited by 41 (2 self)
In this paper we discuss four such classes of algorithms. Or, more precisely, we discuss a single class of algorithms, and we show how some well-known classes of statistical algorithms fit in this common class. The subclasses are, in logical order: block-relaxation methods, augmentation methods, majorization methods, Expectation-Maximization, Alternating Least Squares, and Alternating Conditional Expectations.
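The block-relaxation flavor is captured by the simplest alternating least squares problem: fitting a rank-one matrix by cycling between the two blocks u and v, each of which has a closed-form least-squares update with the other block held fixed. A sketch with invented dimensions:

```python
import numpy as np

def rank1_als(M, iters=50, seed=0):
    # Block relaxation: minimize ||M - u v^T||_F^2 over u with v fixed,
    # then over v with u fixed; each block update is a least-squares solve.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(M.shape[0])
    v = rng.standard_normal(M.shape[1])
    for _ in range(iters):
        u = M @ v / (v @ v)      # optimal u given v
        v = M.T @ u / (u @ u)    # optimal v given u
    return u, v

rng = np.random.default_rng(2)
M = np.outer(rng.standard_normal(6), rng.standard_normal(4))  # exactly rank 1
u, v = rank1_als(M)
```

Because M is exactly rank one here, the alternation recovers it; on general matrices the same scheme converges to the dominant singular pair, which is the usual way block relaxation trades a hard joint problem for easy conditional ones.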
Convergent incremental optimization transfer algorithms: Application to tomography
 IEEE Trans. Med. Imag., Submitted
Abstract

Cited by 38 (15 self)
Abstract—No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography there are two known families of convergent OS algorithms: methods that use relaxation parameters (Ahn and Fessler, 2003), and methods based on the incremental expectation-maximization (EM) approach (Hsiao et al., 2002). This paper generalizes the incremental EM approach by introducing a general framework that we call “incremental optimization transfer.” Like incremental EM methods, the proposed algorithms accelerate convergence and ensure global convergence (to a stationary point) under mild regularity conditions, without requiring inconvenient relaxation parameters. The general optimization transfer framework enables the use of a very broad family of non-EM surrogate functions. In particular, this paper provides the first convergent OS-type algorithm for transmission tomography. The general approach is applicable to both monoenergetic and polyenergetic transmission scans, as well as to other image reconstruction problems. We propose a particular incremental optimization transfer method for (non-concave) penalized-likelihood (PL) transmission image reconstruction using separable paraboloidal surrogates (SPS). Results show that the new “transmission incremental optimization transfer (TRIOT)” algorithm is faster than non-incremental ordinary SPS, and even OS-SPS, yet is convergent.
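The incremental optimization transfer idea can be sketched on a toy one-dimensional objective, a sum of log-cosh terms, using fixed-curvature parabola surrogates (in the spirit of SPS, though this is not the paper's TRIOT algorithm; the objective and constants are invented for illustration). Each term keeps its own surrogate; one surrogate is refreshed per sub-iteration, and the sum of surrogates is re-minimized:

```python
import numpy as np

def incremental_mm(a, passes=2000):
    # Minimize f(x) = sum_m log cosh(x - a_m) by incremental optimization
    # transfer. Each term gets the quadratic surrogate
    #   q_m(y) = phi_m(xb) + phi_m'(xb)(y - xb) + (1/2)(y - xb)^2,
    # valid since (log cosh)'' = sech^2 <= 1; its minimizer is
    # z_m = xb - tanh(xb - a_m), and the sum of surrogates is minimized
    # at the mean of the z_m.
    M = len(a)
    x = 0.0
    z = np.array([x - np.tanh(x - am) for am in a])  # surrogate minimizers
    for it in range(passes * M):
        m = it % M                       # cycle through the terms
        z[m] = x - np.tanh(x - a[m])     # refresh only term m's surrogate
        x = z.mean()                     # re-minimize the summed surrogates
    return x

a = np.array([-1.0, 0.0, 2.0])
x = incremental_mm(a)
```

At convergence the gradient of the true objective, Σ tanh(x − aₘ), is driven to zero, which mirrors the stationary-point guarantee the framework provides.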
Ficaro, “Maximum Likelihood Transmission Image Reconstruction for Overlapping Transmission Beams”
 IEEE Trans. Med. Imag.
, 2000
Abstract

Cited by 19 (8 self)
In many transmission imaging geometries, the transmitted “beams” of photons overlap on the detector, such that a detector element may record photons that originated in different sources or source locations and thus traversed different paths through the object. Examples include systems based on scanning line sources or on multiple parallel rod sources. The overlap of these beams has been disregarded both by conventional analytical reconstruction methods and by previous statistical reconstruction methods. We propose a new algorithm for statistical image reconstruction of attenuation maps that explicitly accounts for overlapping beams in transmission scans. The algorithm is guaranteed to monotonically increase the objective function at each iteration. The availability of this algorithm makes it possible to deliberately increase the beam overlap so as to increase count rates. Simulated SPECT transmission scans based on a multiple line source array demonstrate that the proposed method yields improved resolution/noise trade-offs relative to “conventional” reconstruction algorithms, both statistical and non-statistical.
Constructing Packings in Grassmannian Manifolds via Alternating Projection
, 2008
A general approach to sparse basis selection: Majorization, concavity, and affine scaling
 IN PROCEEDINGS OF THE TWELFTH ANNUAL CONFERENCE ON COMPUTATIONAL LEARNING THEORY
, 1997
Abstract

Cited by 10 (5 self)
Measures for sparse best-basis selection are analyzed and shown to fit into a general framework based on majorization, Schur-concavity, and concavity. This framework facilitates the analysis of algorithm performance and clarifies the relationships between existing concentration measures useful for sparse basis selection. It also allows one to define new concentration measures, and several general classes of measures are proposed and analyzed in this paper. Admissible measures are given by the Schur-concave functions, the class of functions consistent with the so-called Lorentz ordering (a partial ordering on vectors also known as majorization). In particular, concave functions form an important subclass of the Schur-concave functions that attain their minima at sparse solutions to the best-basis selection problem. A general affine scaling optimization algorithm, obtained from a special factorization of the gradient function, is developed and proved to converge to a sparse solution for measures chosen from within this subclass.
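One concrete member of this algorithm family is a FOCUSS-style affine scaling iteration, in which rescaling by the current coefficient magnitudes reweights a minimum-norm solve and drives small coefficients toward zero. This sketch corresponds to one particular diversity measure; the problem sizes and data are invented for illustration.

```python
import numpy as np

def focuss(A, y, iters=50):
    # Affine-scaling iteration: x_{k+1} = W_k (A W_k)^+ y, W_k = diag(|x_k|).
    # Reweighting by the current magnitudes concentrates energy on a few
    # entries while keeping A x = y (approximately) satisfied.
    x = np.linalg.pinv(A) @ y                # minimum-norm starting point
    for _ in range(iters):
        w = np.abs(x)
        x = w * (np.linalg.pinv(A * w) @ y)  # A * w scales the columns of A
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 12))             # underdetermined system
x_true = np.zeros(12)
x_true[[2, 7]] = [1.5, -2.0]                 # a sparse generating solution
y = A @ x_true
x = focuss(A, y)
```

Starting from the dense minimum-norm solution, the iteration typically collapses onto a basic feasible solution with at most as many nonzeros as there are equations, which is the sparse-minimum behavior the concavity analysis above predicts.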