Results 1–10 of 102
Sum power iterative water-filling for multi-antenna Gaussian broadcast channels
 IEEE Trans. Inform. Theory
, 2005
"... In this paper we consider the problem of maximizing sum rate of a multipleantenna Gaussian broadcast channel. It was recently found that dirty paper coding is capacity achieving for this channel. In order to achieve capacity, the optimal transmission policy (i.e. the optimal transmit covariance str ..."
Abstract

Cited by 88 (17 self)
In this paper we consider the problem of maximizing the sum rate of a multiple-antenna Gaussian broadcast channel. It was recently found that dirty paper coding is capacity achieving for this channel. In order to achieve capacity, the optimal transmission policy (i.e. the optimal transmit covariance structure) given the channel conditions and power constraint must be found. However, obtaining the optimal transmission policy when employing dirty paper coding is a computationally complex nonconvex problem. We use duality to transform this problem into a well-structured convex multiple-access channel problem. We exploit the structure of this problem and derive simple and fast iterative algorithms that provide the optimum transmission policies for the multiple-access channel, which can easily be mapped to the optimal broadcast channel policies.
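The water-filling subproblem that such iterative algorithms build on can be sketched as follows. This is an illustration only, not the paper's sum-power algorithm; the channel gains g and power budget P are invented. It solves the classic allocation max Σ log(1 + p_i g_i) subject to Σ p_i = P, p_i ≥ 0 by bisection on the water level.

```python
import numpy as np

def waterfill(g, P, tol=1e-12):
    """Allocate power P over parallel channels with gains g.

    Solves max sum(log(1 + p_i * g_i)) s.t. sum(p) = P, p >= 0.
    The KKT solution is p_i = max(0, mu - 1/g_i); the water level mu
    is found by bisection so that the powers sum to P.
    """
    g = np.asarray(g, dtype=float)
    lo, hi = 0.0, P + 1.0 / g.min()   # at hi the total power already exceeds P
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

p = waterfill([1.0, 2.0, 4.0], P=1.0)   # the weakest channel gets no power
```

Note that the allocation is sparse: channels whose inverse gain sits above the water level receive zero power.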
Blind Separation of Synchronous Co-Channel Digital Signals Using an Antenna Array. Part I. Algorithms
 IEEE Transactions on Signal Processing
, 1995
"... We propose a maximumlikelihood approach for separating and estimating multiple synchronous digital signals arriving at an antenna array. The spatial response of the array is assumed to be known imprecisely or unknown. We exploit the finite alphabet (FA) property of digital signals to simultaneou ..."
Abstract

Cited by 67 (6 self)
We propose a maximum-likelihood approach for separating and estimating multiple synchronous digital signals arriving at an antenna array. The spatial response of the array is assumed to be known imprecisely or unknown. We exploit the finite alphabet (FA) property of digital signals to simultaneously determine the array response and the symbol sequence for each signal. Uniqueness of the estimates is established for signals with linear modulation formats. We introduce a signal detection technique based on the FA property which differs from a standard linear combiner. Computationally efficient algorithms for both block and recursive estimation of the signals are presented. This new approach is applicable to an unknown array geometry and propagation environment, which is particularly useful in wireless communication systems. Simulation results demonstrate its promising performance.
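A minimal sketch of this kind of alternating, FA-projected least-squares estimation, in the spirit of the block algorithms described. All data are invented, the alphabet is BPSK (±1), and the array response is initialized near the truth purely so the illustration converges; none of this is the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = rng.normal(size=(4, 2))                        # unknown 4-antenna response to 2 signals
S_true = rng.choice([-1.0, 1.0], size=(2, 50))          # BPSK symbol sequences (finite alphabet)
Y = A_true @ S_true + 0.01 * rng.normal(size=(4, 50))   # received snapshots

# Alternate least-squares estimates of symbols and array response,
# projecting the symbol estimate onto the {-1, +1} alphabet each pass.
A = A_true + 0.01 * rng.normal(size=A_true.shape)       # start near the truth (illustration only)
for _ in range(5):
    S = np.sign(np.linalg.pinv(A) @ Y)   # FA projection of the LS symbol estimate
    A = Y @ np.linalg.pinv(S)            # LS update of the array response
```

The finite-alphabet projection is what distinguishes this from a plain linear combiner: symbol estimates are snapped to valid constellation points before the response is re-fit.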
Multiplicative Updates for Nonnegative Quadratic Programming in Support Vector Machines
 in Advances in Neural Information Processing Systems 15
, 2002
"... We derive multiplicative updates for solving the nonnegative quadratic programming problem in support vector machines (SVMs). The updates have a simple closed form, and we prove that they converge monotonically to the solution of the maximum margin hyperplane. The updates optimize the traditiona ..."
Abstract

Cited by 58 (6 self)
We derive multiplicative updates for solving the nonnegative quadratic programming problem in support vector machines (SVMs). The updates have a simple closed form, and we prove that they converge monotonically to the solution of the maximum margin hyperplane. The updates optimize the traditionally proposed objective function for SVMs. They do not involve any heuristics such as choosing a learning rate or deciding which variables to update at each iteration. They can be used to adjust all the quadratic programming variables in parallel with a guarantee of improvement at each iteration. We analyze the asymptotic convergence of the updates and show that the coefficients of non-support vectors decay geometrically to zero at a rate that depends on their margins. In practice, the updates converge very rapidly to good classifiers.
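A hedged sketch of multiplicative updates of this family for the generic nonnegative quadratic program min_v ½vᵀAv + bᵀv, v ≥ 0, with A split into its elementwise positive and negative parts. The toy A and b are invented and not tied to any particular SVM.

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # positive definite
b = np.array([-1.0, -1.0])
Ap = np.maximum(A, 0.0)                    # split A = Ap - Am, both nonnegative
Am = np.maximum(-A, 0.0)

v = np.array([0.5, 0.5])                   # strictly positive start
for _ in range(2000):
    # closed-form multiplicative factor; v stays nonnegative automatically
    num = -b + np.sqrt(b**2 + 4.0 * (Ap @ v) * (Am @ v))
    v = v * num / (2.0 * (Ap @ v))
```

For this instance the unconstrained minimizer (1, 1) is already nonnegative, so the updates converge to it; no learning rate or active-set bookkeeping is needed.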
The Iterative Convex Minorant Algorithm for Nonparametric Estimation
, 1995
"... The problem of minimizing a smooth convex function over a basic cone in is frequently encountered in nonparametric statistics. For that type of problem we suggest an algorithm and show that this algorithm converges to the solution of the minimization problem. ..."
Abstract

Cited by 32 (4 self)
The problem of minimizing a smooth convex function over a basic cone in ℝⁿ is frequently encountered in nonparametric statistics. For this type of problem we suggest an algorithm and show that it converges to the solution of the minimization problem.
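As a concrete instance of minimizing a smooth convex function over a basic cone, the least-squares projection onto the monotone cone x₁ ≤ … ≤ xₙ can be computed by pool-adjacent-violators. This classic routine is only a related building block (iterative convex minorant methods repeatedly solve problems of this isotonic-regression form), not the paper's algorithm itself.

```python
def pava(y):
    """Least-squares projection of y onto the monotone cone x1 <= ... <= xn
    (pool-adjacent-violators). Each block stores (sum, count)."""
    blocks = []
    for value in y:
        s, c = float(value), 1
        # merge backwards while the monotonicity constraint is violated
        while blocks and blocks[-1][0] / blocks[-1][1] >= s / c:
            ps, pc = blocks.pop()
            s, c = s + ps, c + pc
        blocks.append((s, c))
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)   # every point in a block gets the block mean
    return out
```

For example, pava([1, 3, 2]) pools the violating pair (3, 2) into their mean, giving [1, 2.5, 2.5].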
Block-relaxation Algorithms in Statistics
, 1994
"... this paper we discuss four such classes of algorithms. Or, more precisely, we discuss a single class of algorithms, and we show how some wellknown classes of statistical algorithms fit in this common class. The subclasses are, in logical order, blockrelaxation methods augmentation methods majoriza ..."
Abstract

Cited by 28 (1 self)
In this paper we discuss four such classes of algorithms. Or, more precisely, we discuss a single class of algorithms, and we show how some well-known classes of statistical algorithms fit in this common class. The subclasses are, in logical order: block-relaxation methods, augmentation methods, and majorization methods; examples include Expectation-Maximization, Alternating Least Squares, and Alternating Conditional Expectations.
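A minimal block-relaxation sketch on an invented strictly convex quadratic: alternate exact minimization over the two coordinate blocks, each of which has a closed form.

```python
# f(x, y) = (x - 1)**2 + (y - 2)**2 + x*y   (strictly convex; minimum at x=0, y=2)
x, y = 5.0, -3.0
for _ in range(100):
    x = 1.0 - y / 2.0   # exact minimizer of f in x with y fixed
    y = 2.0 - x / 2.0   # exact minimizer of f in y with x fixed
```

Each sweep contracts the error by a factor of 1/4 here, so the iterates converge linearly to the joint minimizer (0, 2).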
On the convergence of the concave-convex procedure
 In NIPS Workshop on Optimization for Machine Learning
, 2009
"... The concaveconvex procedure (CCCP) is a majorizationminimization algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is extensively used in many learning algorithms like sparse support vector machines (SVMs), transductive ..."
Abstract

Cited by 21 (0 self)
The concave-convex procedure (CCCP) is a majorization-minimization algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is extensively used in many learning algorithms, such as sparse support vector machines (SVMs), transductive SVMs, and sparse principal component analysis. Though widely used in many applications, the convergence behavior of CCCP has received little specific attention. Yuille and Rangarajan analyzed its convergence in their original paper; however, we believe the analysis is not complete. Although the convergence of CCCP can be derived from the convergence of the d.c. algorithm (DCA), that proof is more specialized and technical than actually required for the specific case of CCCP. In this paper, we follow a different reasoning and show how Zangwill's global convergence theory of iterative algorithms provides a natural framework to prove the convergence of CCCP, allowing a more elegant and simple proof. This underlines Zangwill's theory as a powerful and general framework for the convergence issues of iterative algorithms, having also been used to prove the convergence of algorithms such as expectation-maximization and generalized alternating minimization. We provide a rigorous analysis of the convergence of CCCP by addressing two questions: (i) When does CCCP find a local minimum or a stationary point of the d.c. program under consideration? (ii) When does the sequence generated by CCCP converge? We also present an open problem on the issue of local convergence of CCCP.
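A toy CCCP sketch on an invented example (not from the paper): write f(x) = x⁴ − 2x² as u(x) − v(x) with u(x) = x⁴ and v(x) = 2x² both convex. Each CCCP step linearizes the concave part and minimizes u(x) − x·v′(x_t), which here has the closed form x_{t+1} = x_t^{1/3} (valid for a positive start).

```python
# CCCP on f(x) = x**4 - 2*x**2 = u(x) - v(x),  u = x**4,  v = 2*x**2.
# Step: x_{t+1} = argmin_x u(x) - x * v'(x_t)  =>  4*x**3 = 4*x_t  =>  x = x_t**(1/3).
x = 0.5   # positive start, so the real cube root is just x**(1/3)
for _ in range(100):
    x = x ** (1.0 / 3.0)
# f'(x) = 4*x**3 - 4*x vanishes at the limit point x = 1, a stationary point of f
```

Each iterate solves a convex program exactly, and f decreases monotonically along the way, which is the majorization-minimization behavior the abstract describes.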
Convergent incremental optimization transfer algorithms: Application to tomography
 IEEE Trans. Med. Imag., Submitted
"... Abstract—No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography, there are two known families of convergent OS algorithms: methods that use relaxation parameters (Ahn and Fessler, 2003), and ..."
Abstract

Cited by 21 (9 self)
No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography there are two known families of convergent OS algorithms: methods that use relaxation parameters (Ahn and Fessler, 2003), and methods based on the incremental expectation-maximization (EM) approach (Hsiao et al., 2002). This paper generalizes the incremental EM approach by introducing a general framework that we call “incremental optimization transfer.” Like incremental EM methods, the proposed algorithms accelerate convergence and ensure global convergence (to a stationary point) under mild regularity conditions, without requiring inconvenient relaxation parameters. The general optimization transfer framework enables the use of a very broad family of non-EM surrogate functions. In particular, this paper provides the first convergent OS-type algorithm for transmission tomography. The general approach is applicable to both monoenergetic and polyenergetic transmission scans, as well as to other image reconstruction problems. We propose a particular incremental optimization transfer method for (nonconcave) penalized-likelihood (PL) transmission image reconstruction using separable paraboloidal surrogates (SPS). Results show that the new “transmission incremental optimization transfer (TRIOT)” algorithm is faster than non-incremental ordinary SPS, and even OS-SPS, yet is convergent.
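A hedged sketch of the incremental-optimization-transfer idea on an invented one-dimensional problem (not TRIOT itself): minimize Σ √(1 + (x − bᵢ)²) by keeping one quadratic majorizer per term, each anchored where it last touched its term, and refreshing only a single anchor per iteration, the incremental analogue of updating all surrogates at once.

```python
import numpy as np

# Minimize f(x) = sum_i sqrt(1 + (x - b_i)**2) by incremental optimization transfer:
# sqrt(1 + u**2) <= sqrt(1 + t**2) + (u**2 - t**2) / (2*sqrt(1 + t**2)),
# so each term has a quadratic surrogate with curvature 1/w_i, w_i = sqrt(1 + (t_i - b_i)**2).
b = np.array([0.0, 1.0, 5.0])
anchors = b.copy()              # expansion points t_i of the per-term surrogates
x = anchors.mean()
for t in range(300):
    i = t % b.size
    anchors[i] = x                              # refresh only surrogate i at the current iterate
    w = np.sqrt(1.0 + (anchors - b) ** 2)       # surrogate curvatures 1/w_i
    x = np.sum(b / w) / np.sum(1.0 / w)         # exact minimizer of the summed surrogates

grad = np.sum((x - b) / np.sqrt(1.0 + (x - b) ** 2))  # ≈ 0 at a stationary point of f
```

Only one term's surrogate is rebuilt per step, yet the minimizer of the running sum of surrogates still drives the true gradient to zero, the mechanism behind the convergence claims above.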
Error Stability Properties of Generalized Gradient-Type Algorithms
 Journal of Optimization Theory and Applications
, 1998
"... Abstract. We present a unified framework for convergence analysis of generalized subgradienttype algorithms in the presence of perturbations. A principal novel feature of our analysis is that perturbations need not tend to zero in the limit. It is established that the iterates of the algorithms are ..."
Abstract

Cited by 20 (0 self)
We present a unified framework for convergence analysis of generalized subgradient-type algorithms in the presence of perturbations. A principal novel feature of our analysis is that perturbations need not tend to zero in the limit. It is established that the iterates of the algorithms are attracted, in a certain sense, to an ε-stationary set of the problem, where ε depends on the magnitude of the perturbations. A characterization of the attraction sets is given in the general (nonsmooth and nonconvex) case. The results are further strengthened for convex, weakly sharp, and strongly convex problems. Our analysis extends and unifies previously known results on convergence and stability properties of gradient and subgradient methods, including their incremental, parallel, and heavy-ball modifications.
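The phenomenon can be sketched on f(x) = |x| with an invented bounded perturbation: because the perturbation magnitude (≤ 0.5) stays below the subgradient magnitude (= 1 away from zero), the perturbed iterates are still attracted to a neighborhood of the minimizer whose size scales with the step size, even though the perturbation never vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
step = 0.01
x = 2.0
for _ in range(1000):
    g = np.sign(x) if x != 0 else 0.0     # subgradient of f(x) = |x|
    e = rng.uniform(-0.5, 0.5)            # bounded perturbation, |e| <= 0.5 < 1, never tending to zero
    x -= step * (g + e)
# While |x| > step, each perturbed step still decreases |x| by at least
# step * (1 - 0.5), so the iterate descends and then hovers near 0.
```

The iterate does not converge to 0 exactly; it is attracted to a small interval around it, the one-dimensional picture of an ε-stationary set.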
A Survey of Algorithms for Convex Multicommodity Flow Problems
, 1997
"... There are many problems related to the design of networks. Among them, the message routing problem plays a determinant role in the optimization of network performance. Much of the motivation for this work comes from this problem which is shown to belong to the class of nonlinear convex multicommodit ..."
Abstract

Cited by 18 (2 self)
There are many problems related to the design of networks. Among them, the message routing problem plays a determinant role in the optimization of network performance. Much of the motivation for this work comes from this problem, which is shown to belong to the class of nonlinear convex multicommodity flow problems. This paper emphasizes the message routing problem in data networks, but it includes a broader literature overview of convex multicommodity flow problems. We present and discuss the main solution techniques proposed for solving this class of large-scale convex optimization problems, and we conduct numerical experiments on the message routing problem with several different techniques.

1 Introduction

The literature dealing with multicommodity flow problems has been rich since the publication of the works of Ford and Fulkerson [19] and T.C. Hu [30] in the beginning of the 1960s. These problems usually have a very large number of variables and constraints and arise in a great variety o...
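A minimal flow-deviation (Frank-Wolfe) sketch on an invented two-link instance with an M/M/1-style delay function, one of the classic solution techniques for convex routing problems. The linearized subproblem at each step simply routes all demand on the link with the smaller marginal delay.

```python
# Route one unit of demand over two parallel links (capacities 2 and 1),
# minimizing the M/M/1-style delay  t/(2 - t) + (1 - t)/t,  where t is the
# flow on link 1.  Frank-Wolfe ("flow deviation"): the linearized problem
# sends all flow down the link with the smaller marginal delay.
def delay(t):
    return t / (2.0 - t) + (1.0 - t) / t

def marginal(t):                                 # d(delay)/dt
    return 2.0 / (2.0 - t) ** 2 - 1.0 / t ** 2

t = 0.5
for k in range(20000):
    vertex = 1.0 if marginal(t) < 0.0 else 0.0   # extreme-point (all-or-nothing) routing
    t += (2.0 / (k + 2.0)) * (vertex - t)        # standard diminishing FW step
```

At the optimum the marginal delays of the two links are equalized; the all-or-nothing subproblem is what makes this family attractive for large networks, since it reduces to shortest-path computations.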