Results 1 - 10 of 26
Penalty decomposition methods for rank minimization, 2010
"... In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank ..."
Abstract
-
Cited by 8 (6 self)
In this paper we consider general rank minimization problems with the rank appearing in either the objective function or the constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed-form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by our method when applied to the rank-constrained minimization problem is a stationary point of a nonlinear reformulation of the problem. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform existing methods in terms of solution quality and/or speed. Key words: rank minimization, penalty decomposition methods, matrix completion, nearest low-rank correlation matrix
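The penalty decomposition idea above can be illustrated on rank-constrained matrix completion. The sketch below is an illustration under stated assumptions, not the authors' exact method: the function name pd_matrix_completion, the penalty schedule, and the iteration counts are hypothetical, while the two closed-form block updates (an elementwise quadratic step and a truncated SVD) mirror the structure the abstract describes.

```python
import numpy as np

def pd_matrix_completion(M, mask, r, rho=1.0, rho_growth=1.5,
                         inner_iters=20, outer_iters=30):
    """Penalty decomposition sketch for rank-constrained matrix completion:
        min_X 0.5*||P_Omega(X - M)||_F^2   s.t. rank(X) <= r.
    A copy Y carries the rank constraint; the coupling (rho/2)*||X - Y||_F^2
    is penalized and rho is gradually increased. Both block updates are in
    closed form, so the inner loop is a simple block coordinate descent."""
    X = mask * M
    Y = X.copy()
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # X-step: separable quadratic, solved elementwise.
            X = np.where(mask, (M + rho * Y) / (1.0 + rho), Y)
            # Y-step: best rank-r approximation of X via truncated SVD.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Y = (U[:, :r] * s[:r]) @ Vt[:r, :]
        rho *= rho_growth  # tighten the penalty coupling
    return Y

# Toy usage: recover a random rank-2 matrix from 60% of its entries.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((30, 30)) < 0.6
X_hat = pd_matrix_completion(mask * M_true, mask, r=2)
print("relative error:", np.linalg.norm(X_hat - M_true) / np.linalg.norm(M_true))
```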
A globally convergent algorithm for nonconvex optimization based on block coordinate update, arXiv preprint arXiv:1410.1386, 2014
"... Abstract. Nonconvex optimization problems arise in many areas of computational science and engineering and are (approx-imately) solved by a variety of algorithms. Existing algorithms usually only have local convergence or subsequence convergence of their iterates. We propose an algorithm for a gener ..."
Abstract
-
Cited by 4 (2 self)
Nonconvex optimization problems arise in many areas of computational science and engineering and are (approximately) solved by a variety of algorithms. Existing algorithms usually only have local convergence or subsequence convergence of their iterates. We propose an algorithm for a generic nonconvex optimization formulation, establish the convergence of its whole iterate sequence to a critical point along with a rate of convergence, and numerically demonstrate its efficiency. Specifically, we consider the problem of minimizing a nonconvex objective function. Its variables can be treated as one block or be partitioned into multiple disjoint blocks. It is assumed that each non-differentiable component of the objective function or each constraint applies to one block of variables. The differentiable components of the objective function, however, can apply to one or multiple blocks of variables together. Our algorithm updates one block of variables at a time by minimizing a certain prox-linear surrogate. The order of update can be either deterministic or randomly shuffled in each round. In fact, our convergence analysis only needs that each block be updated at least once every fixed number of iterations. We obtain the convergence of the whole iterate sequence to a critical point under fairly loose conditions including, in particular, the Kurdyka-Łojasiewicz (KL) condition, which is satisfied by a broad class of nonconvex/nonsmooth applications. Of course, these results apply to convex optimization as well. We apply our convergence result to the coordinate descent method for nonconvex regularized linear regression and also to a modified rank-one residue iteration method for nonnegative matrix factorization. We show that both methods have global convergence. Numerically, we test our algorithm on nonnegative matrix and tensor factorization problems, with random shuffling enabled to avoid local solutions.
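A minimal sketch of the block prox-linear update described in this abstract, specialized to an l1-regularized smooth objective. The names block_prox_linear and soft_threshold, the fixed blockwise step sizes, and the epoch count are illustrative assumptions rather than the paper's algorithm; only the one-block-at-a-time prox-gradient structure and the optional random shuffling follow the abstract.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, used here as the nonsmooth block term."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def block_prox_linear(grad_f, blocks, x0, lam, L, n_epochs=100, shuffle=True, seed=0):
    """Block prox-linear sketch for  min_x f(x) + lam*||x||_1,
    where f is smooth with blockwise Lipschitz constants L[i].
    Each block is updated by one prox-gradient step; the update order is
    randomly shuffled each round when `shuffle` is True."""
    x = x0.copy()
    rng = np.random.default_rng(seed)
    order = np.arange(len(blocks))
    for _ in range(n_epochs):
        if shuffle:
            rng.shuffle(order)
        for i in order:
            idx = blocks[i]
            g = grad_f(x)[idx]                    # partial gradient for block i
            x[idx] = soft_threshold(x[idx] - g / L[i], lam / L[i])
    return x

# Toy usage: l1-regularized least squares split into two coordinate blocks.
rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((40, 20)), rng.standard_normal(40), 0.5
grad_f = lambda x: A.T @ (A @ x - b)
blocks = [np.arange(0, 10), np.arange(10, 20)]
L = [np.linalg.norm(A[:, blk], 2) ** 2 for blk in blocks]
x = block_prox_linear(grad_f, blocks, np.zeros(20), lam, L)
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```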
Orthogonal rank-one matrix pursuit for low rank matrix completion, SIAM Journal on Scientific Computing
"... ar ..."
(Show Context)
Iterative Reweighted Singular Value Minimization Methods for lp Regularized Unconstrained Matrix Minimization, 2014
"... In this paper we study general lp regularized unconstrained matrix minimization problems. In particular, we first introduce a class of first-order stationary points for them. And we show that the first-order stationary points introduced in [11] for an lp regularized vector minimization problem are e ..."
Abstract
-
Cited by 3 (2 self)
In this paper we study general lp regularized unconstrained matrix minimization problems. In particular, we first introduce a class of first-order stationary points for them, and show that the first-order stationary points introduced in [11] for an lp regularized vector minimization problem are equivalent to those of an lp regularized matrix minimization reformulation. We also establish that any local minimizer of the lp regularized matrix minimization problems must be a first-order stationary point. Moreover, we derive lower bounds for nonzero singular values of the first-order stationary points, and hence also of the local minimizers, for the lp matrix minimization problems. Iterative reweighted singular value minimization (IRSVM) approaches are then proposed to solve these problems, in which each subproblem has a closed-form solution. We show that any accumulation point of the sequence generated by these methods is a first-order stationary point of the problems. In addition, we study a nonmonotone proximal gradient (NPG) method for solving the lp matrix minimization problems and establish its global convergence. Our computational results demonstrate that the IRSVM and NPG methods generally outperform some existing state-of-the-art methods in terms of solution quality and/or speed. Moreover, the IRSVM methods are slightly faster than the NPG method. Key words: lp regularized matrix minimization, iterative reweighted singular value minimization, iterative reweighted least squares, nonmonotone proximal gradient method
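As an illustration of the IRSVM idea, the sketch below applies iterative reweighting of singular values to an lp-regularized matrix completion objective. The smoothing constant eps, the unit step size, and the function names are assumptions; the weighted singular value thresholding step is one standard way to realize the closed-form subproblem the abstract refers to, not necessarily the paper's exact update.

```python
import numpy as np

def weighted_svt(Y, w):
    """Weighted singular value soft-thresholding: proximal map of
    sum_i w_i*sigma_i(X) at Y, valid when w is sorted in nondecreasing order."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - w, 0.0)) @ Vt

def irsvm_lp_completion(M, mask, lam=0.1, p=0.5, eps=1e-2, n_iters=200):
    """Iterative reweighted singular value minimization sketch for
        min_X 0.5*||P_Omega(X - M)||_F^2 + lam * sum_i sigma_i(X)^p,  0 < p < 1.
    Each iteration takes a gradient step on the smooth term (Lipschitz
    constant 1) and solves a weighted nuclear-norm subproblem in closed
    form, with weights p*(sigma_i + eps)^(p-1) from the previous iterate."""
    X = mask * M
    for _ in range(n_iters):
        Y = X - mask * (X - M)                      # forward step on the data term
        sigma = np.linalg.svd(X, compute_uv=False)
        w = lam * p * (sigma + eps) ** (p - 1.0)    # nondecreasing weights
        X = weighted_svt(Y, w)                      # closed-form weighted shrinkage
    return X
```

Here eps keeps the weights finite when a singular value is zero; a smoothing or relaxation parameter typically plays this role in reweighting schemes of this kind.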
Internet Accessible, Lanzhou University, Lanzhou, 1998
"... DNA molecular weight standard control, also called DNA marker (ladder), has been widely used in the experiments of molecular biology. In the paper, we report a method by which DNA marker was prepared based on multiple PCR technique. 100-1000 bp DNA fragments were amplified using the primers designe ..."
Abstract
-
Cited by 2 (2 self)
DNA molecular weight standard control, also called a DNA marker (ladder), is widely used in molecular biology experiments. In this paper, we report a method for preparing a DNA marker based on a multiple-PCR technique. DNA fragments of 100-1000 bp were amplified using primers designed according to positions 6631-7630 of lambda DNA. The target DNA fragments were amplified using touchdown PCR combined with hot-start PCR, respectively, then extracted with phenol/chloroform, precipitated with ethanol, and mixed thoroughly. The results showed that the 100-1000 bp DNA fragments were successfully obtained in one PCR reaction; the bands of the prepared DNA marker were clear, the sizes were correct, and the marker could be used as a control in molecular biology experiments. Compared with current means of preparing DNA ladders, this method is faster, simpler, and less expensive.
New Improved Algorithms for Compressive Sensing Based on ℓp Norm
"... Abstract—A new algorithm for the reconstruction of sparse signals, which is referred to as the p-regularized least squares (p-RLS) algorithm, is proposed. The new algorithm is based on the minimization of a smoothed p-norm regularized square error with p < 1. It uses a conjugate-gradient (CG) opt ..."
Abstract
-
Cited by 1 (1 self)
A new algorithm for the reconstruction of sparse signals, referred to as the ℓp-regularized least squares (ℓp-RLS) algorithm, is proposed. The new algorithm is based on the minimization of a smoothed ℓp-norm regularized square error with p < 1. It uses a conjugate-gradient (CG) optimization method in a sequential minimization strategy that involves a two-parameter continuation technique. An improved version of the new algorithm is also proposed, which entails a bisection technique that optimizes an inherent regularization parameter. Extensive simulation results show that the new algorithm offers improved signal reconstruction performance and requires reduced computational effort relative to several state-of-the-art competing algorithms. The improved version of the ℓp-RLS algorithm offers better performance than the basic version, although this is achieved at the cost of increased computational effort. Index Terms—Compressive sensing (CS), conjugate-gradient (CG) optimization, least squares optimization, sequential optimization, ℓp-norm.
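A rough sketch of the smoothed ℓp-norm regularized least-squares idea: the objective and its gradient follow the formulation in the abstract, but the plain fixed-step gradient descent inner loop stands in for the paper's conjugate-gradient solver, and the continuation schedule on eps alone (with lam held fixed) is a simplifying assumption; the function names are hypothetical.

```python
import numpy as np

def smoothed_lp_objective_grad(x, A, b, lam, p, eps):
    """Objective ||Ax-b||_2^2 + lam*sum_i (x_i^2 + eps^2)^(p/2) and its gradient."""
    r = A @ x - b
    sq = x ** 2 + eps ** 2
    f = r @ r + lam * np.sum(sq ** (p / 2.0))
    g = 2.0 * (A.T @ r) + lam * p * x * sq ** (p / 2.0 - 1.0)
    return f, g

def lp_rls_sketch(A, b, lam=0.05, p=0.5, eps0=1.0, eps_min=1e-4,
                  inner_iters=200, step=None):
    """Smoothed lp-norm regularized least-squares reconstruction with
    continuation on the smoothing parameter eps."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares initialization
    if step is None:
        # small fixed step (a stand-in for the line searches a CG solver would use)
        step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2 + lam)
    eps = eps0
    while eps >= eps_min:
        for _ in range(inner_iters):
            _, g = smoothed_lp_objective_grad(x, A, b, lam, p, eps)
            x = x - step * g
        eps *= 0.1                              # continuation: sharpen the lp surrogate
    return x

# Toy usage: recover a 5-sparse signal from 40 random measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
x_hat = lp_rls_sketch(A, A @ x_true)
print("largest-magnitude indices:", np.argsort(-np.abs(x_hat))[:5])
```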
A Smoothing SQP Framework for a Class of Composite Lq Minimization over Polyhedron, 2014
"... ar ..."
(Show Context)
Schatten-p Quasi-Norm Regularized Matrix Optimization via Iterative Reweighted Singular Value Minimization, 2015
"... In this paper we study general Schatten-p quasi-norm (SPQN) regularized matrix minimiza-tion problems. In particular, we first introduce a class of first-order stationary points for them, and show that the first-order stationary points introduced in [11] for an SPQN regularized vec-tor minimization ..."
Abstract
-
Cited by 1 (0 self)
In this paper we study general Schatten-p quasi-norm (SPQN) regularized matrix minimization problems. In particular, we first introduce a class of first-order stationary points for them, and show that the first-order stationary points introduced in [11] for an SPQN regularized vector minimization problem are equivalent to those of an SPQN regularized matrix minimization reformulation. We also show that any local minimizer of the SPQN regularized matrix minimization problems must be a first-order stationary point. Moreover, we derive lower bounds for nonzero singular values of the first-order stationary points and hence also of the local minimizers of the SPQN regularized matrix minimization problems. The iterative reweighted singular value minimization (IRSVM) methods are then proposed to solve these problems, whose subproblems are shown to have a closed-form solution. In contrast to the analogous methods for the SPQN regularized vector minimization problems, the convergence analysis of these methods is significantly more challenging. We develop a novel approach to establishing the convergence of these methods, which makes use of the expression of a specific solution of their subproblems and avoids the intricate issue of finding the explicit expression for the Clarke subdifferential of
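For reference, the Schatten-p quasi-norm that appears throughout this abstract is the lp quasi-norm of the singular values. The snippet below evaluates it and an illustrative SPQN-regularized objective; the linear measurement model in spqn_objective and both function names are assumptions for the example, not taken from the paper.

```python
import numpy as np

def schatten_p_quasinorm(X, p):
    """Schatten-p quasi-norm raised to the p-th power: sum_i sigma_i(X)^p, 0 < p < 1."""
    sigma = np.linalg.svd(X, compute_uv=False)
    return np.sum(sigma ** p)

def spqn_objective(X, A_list, b, lam, p):
    """Illustrative SPQN-regularized objective f(X) + lam*||X||_{S_p}^p with a
    least-squares data term over linear measurements <A_i, X> = b_i."""
    residual = np.array([np.sum(A * X) for A in A_list]) - b
    return 0.5 * residual @ residual + lam * schatten_p_quasinorm(X, p)

# Example: evaluate the quasi-norm of a random rank-3 matrix.
rng = np.random.default_rng(4)
X = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 10))
print("sum of singular values^0.5:", schatten_p_quasinorm(X, 0.5))
```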
Proximal Iteratively Reweighted Algorithm with Multiple Splitting for Nonconvex Sparsity Optimization
"... This paper proposes the Proximal Iteratively REweighted (PIRE) algorithm for solving a general problem, which involves a large body of nonconvex sparse and structured sparse related problems. Compar-ing with previous iterative solvers for nonconvex sparse problem, PIRE is much more general and effic ..."
Abstract
-
Cited by 1 (1 self)
This paper proposes the Proximal Iteratively REweighted (PIRE) algorithm for solving a general problem that covers a large body of nonconvex sparse and structured sparse related problems. Compared with previous iterative solvers for nonconvex sparse problems, PIRE is much more general and efficient. The computational cost of PIRE in each iteration is usually as low as that of state-of-the-art convex solvers. We further propose the PIRE algorithm with Parallel Splitting (PIRE-PS) and the PIRE algorithm with Alternative Updating (PIRE-AU) to handle multi-variable problems. In theory, we prove that our proposed methods converge and that any limit solution is a stationary point. Extensive experiments on both synthetic and real data sets demonstrate that our methods achieve comparable learning performance but are much more efficient than previous nonconvex solvers.
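A hypothetical sketch of a proximal iteratively reweighted iteration of this flavor, on an lp-regularized least-squares problem: each step reweights the penalty around the current iterate and solves a weighted l1 proximal subproblem by per-coordinate soft-thresholding. The smoothing constant eps, the step size, and the function name pire_sketch are assumptions rather than the paper's exact scheme.

```python
import numpy as np

def pire_sketch(grad_f, L, x0, lam, p=0.5, eps=1e-3, n_iters=300):
    """Proximal iteratively reweighted sketch for
        min_x f(x) + lam * sum_i |x_i|^p,  0 < p < 1.
    Each iteration linearizes f, reweights the nonconvex penalty around the
    current iterate, and solves the weighted l1 proximal problem in closed form."""
    x = x0.copy()
    for _ in range(n_iters):
        w = p * (np.abs(x) + eps) ** (p - 1.0)     # reweighting from the current iterate
        y = x - grad_f(x) / L                      # proximal-gradient forward step
        thresh = lam * w / L                       # per-coordinate thresholds
        x = np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)
    return x

# Toy usage: lp-regularized least squares.
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 120))
x_true = np.zeros(120); x_true[:4] = [3.0, -2.0, 1.5, -1.0]
b = A @ x_true
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2
x_hat = pire_sketch(grad_f, L, np.zeros(120), lam=0.1)
print("nonzeros found at:", np.flatnonzero(np.abs(x_hat) > 1e-6)[:10])
```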
Smoothed Low Rank and Sparse Matrix Recovery by Iteratively Reweighted Least Squares Minimization
"... This work presents a general framework for solving the low rank and/or sparse matrix minimization problems, which may involve multiple non-smooth terms. The Iteratively Reweighted Least Squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updat ..."
Abstract
-
Cited by 1 (0 self)
This work presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple non-smooth terms. The Iteratively Reweighted Least Squares (IRLS) method is a fast solver that smooths the objective function and minimizes it by alternately updating the variables and their weights. However, traditional IRLS can only solve sparse-only or low-rank-only minimization problems with a squared loss or an affine constraint. This work generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and ℓ2,q-norm regularized Low-Rank Representation (LRR) problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p, q ≥ 1). Our convergence proof of IRLS is more general than previous ones, which depend on the special properties of the Schatten-p norm and ℓ2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient. Index Terms—Low-rank and sparse minimization, Iteratively Reweighted Least Squares.
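A minimal sketch of an IRLS alternation of the kind described above, specialized to a smoothed Schatten-p regularized matrix completion problem (low rank only, to keep the example short). The surrogate, the per-row linear systems, the mu continuation schedule, and the function name are illustrative assumptions, not the paper's joint low-rank-plus-sparse LRR formulation.

```python
import numpy as np

def irls_schatten_p_completion(M, mask, lam=0.1, p=0.5, mu=1.0,
                               mu_min=1e-6, n_iters=50):
    """IRLS sketch for smoothed Schatten-p regularized matrix completion:
        min_X 0.5*||P_Omega(X - M)||_F^2 + lam * tr((X^T X + mu*I)^(p/2)).
    Each iteration fixes the weight matrix W = (X^T X + mu*I)^(p/2 - 1) and
    minimizes the resulting weighted quadratic surrogate, which decouples
    into one small linear system per row of X; mu is then decreased."""
    m, n = M.shape
    X = mask * M
    for _ in range(n_iters):
        # Weight update via eigendecomposition of the symmetric Gram matrix.
        evals, evecs = np.linalg.eigh(X.T @ X + mu * np.eye(n))
        W = (evecs * evals ** (p / 2.0 - 1.0)) @ evecs.T
        # Variable update: row i solves (diag(mask_i) + lam*p*W) x_i = mask_i * M_i.
        for i in range(m):
            Ai = np.diag(mask[i].astype(float)) + lam * p * W
            X[i] = np.linalg.solve(Ai, mask[i] * M[i])
        mu = max(mu * 0.7, mu_min)   # smoothing continuation
    return X
```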