Results 1–10 of 118
Sum power iterative water-filling for multi-antenna Gaussian broadcast channels
 IEEE Trans. Inform. Theory
, 2005
Abstract

Cited by 117 (15 self)
In this paper we consider the problem of maximizing the sum rate of a multiple-antenna Gaussian broadcast channel. It was recently found that dirty paper coding is capacity achieving for this channel. In order to achieve capacity, the optimal transmission policy (i.e., the optimal transmit covariance structure) given the channel conditions and power constraint must be found. However, obtaining the optimal transmission policy when employing dirty paper coding is a computationally complex nonconvex problem. We use duality to transform this problem into a well-structured convex multiple-access channel problem. We exploit the structure of this problem and derive simple and fast iterative algorithms that provide the optimum transmission policies for the multiple-access channel, which can easily be mapped to the optimal broadcast channel policies.
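The per-user building block of such algorithms is the classical water-filling power allocation: given effective channel gains and a power budget, pour power up to a common water level. A minimal sketch (the gains, function name, and bisection tolerance below are illustrative assumptions, not from the paper):

```python
import numpy as np

def waterfill(gains, total_power, tol=1e-10):
    """Single-user water-filling: p_i = max(0, mu - 1/g_i), sum p_i = P.
    The water level mu is found by bisection."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > total_power:
            hi = mu  # water level too high
        else:
            lo = mu  # water level too low
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)
```

Stronger channels (larger gains) receive more power, e.g. `waterfill([1.0, 2.0, 4.0], 3.0)` returns an increasing allocation summing to the budget; the iterative algorithms in the paper repeatedly apply a step of this form over the dual multiple-access users.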
Blind Separation of Synchronous Co-Channel Digital Signals Using an Antenna Array. Part I. Algorithms
 IEEE Transactions on Signal Processing
, 1995
Abstract

Cited by 84 (6 self)
We propose a maximum-likelihood approach for separating and estimating multiple synchronous digital signals arriving at an antenna array. The spatial response of the array is assumed to be known imprecisely or unknown. We exploit the finite alphabet (FA) property of digital signals to simultaneously determine the array response and the symbol sequence for each signal. Uniqueness of the estimates is established for signals with linear modulation formats. We introduce a signal detection technique based on the FA property which is different from a standard linear combiner. Computationally efficient algorithms for both block and recursive estimation of the signals are presented. This new approach is applicable to an unknown array geometry and propagation environment, which is particularly useful in wireless communication systems. Simulation results demonstrate its promising performance.
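The alternation described above, a least-squares symbol estimate projected onto the finite alphabet followed by re-estimation of the array response, can be sketched for noiseless BPSK as follows (the array size, number of snapshots, and initialization are our toy assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy setup: 4-sensor array, 2 BPSK sources, 50 snapshots, noiseless
A_true = rng.standard_normal((4, 2))
S_true = rng.choice([-1.0, 1.0], size=(2, 50))
X = A_true @ S_true

# alternating least squares with finite-alphabet projection:
A = A_true + 0.01 * rng.standard_normal((4, 2))  # rough initial estimate
for _ in range(10):
    S = np.sign(np.linalg.pinv(A) @ X)  # project LS symbols onto {-1, +1}
    A = X @ np.linalg.pinv(S)           # LS array response given symbols
```

In this noiseless toy case the alternation recovers both the symbol matrix and the array response exactly after one pass; with noise, the paper's uniqueness and detection results govern what can be recovered.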
Multiplicative Updates for Nonnegative Quadratic Programming in Support Vector Machines
 in Advances in Neural Information Processing Systems 15
, 2002
Abstract

Cited by 76 (7 self)
We derive multiplicative updates for solving the nonnegative quadratic programming problem in support vector machines (SVMs). The updates have a simple closed form, and we prove that they converge monotonically to the solution of the maximum margin hyperplane. The updates optimize the traditionally proposed objective function for SVMs. They do not involve any heuristics such as choosing a learning rate or deciding which variables to update at each iteration. They can be used to adjust all the quadratic programming variables in parallel with a guarantee of improvement at each iteration. We analyze the asymptotic convergence of the updates and show that the coefficients of non-support vectors decay geometrically to zero at a rate that depends on their margins. In practice, the updates converge very rapidly to good classifiers.
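The closed-form update splits the quadratic's matrix into its positive and negative parts and rescales each variable multiplicatively. A sketch on a toy nonnegative quadratic program (the problem instance and the small epsilon guarding the denominator are our additions):

```python
import numpy as np

def mult_updates_nqp(A, b, n_iter=500, v0=None):
    """Multiplicative updates for min (1/2) v^T A v + b^T v, v >= 0,
    using the split A = A_plus - A_minus (Sha, Saul & Lee)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    Ap = np.maximum(A, 0.0)   # positive part of A
    Am = np.maximum(-A, 0.0)  # magnitude of negative part of A
    v = np.ones(len(b)) if v0 is None else np.asarray(v0, dtype=float)
    for _ in range(n_iter):
        a = Ap @ v
        c = Am @ v
        # multiplicative factor; epsilon avoids 0/0 once a variable hits zero
        v = v * (-b + np.sqrt(b * b + 4 * a * c)) / (2 * a + 1e-12)
    return v
```

On `A = [[2, 1], [1, 2]]`, `b = [-3, 0]` the constrained optimum is `v = [1.5, 0]`, and the update drives the inactive coordinate to zero without any learning rate or active-set bookkeeping.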
A multipitch analyzer based on harmonic temporal structured clustering
 IEEE Trans. Audio, Speech, Lang. Process
, 2007
Abstract

Cited by 50 (15 self)
Abstract—This paper proposes a multipitch analyzer called the harmonic temporal structured clustering (HTC) method, which jointly estimates the pitch, intensity, onset, duration, etc., of each underlying source in a multipitch audio signal. HTC decomposes the energy patterns diffused in time-frequency space, i.e., the power spectrum time series, into distinct clusters such that each originates from a single source. The problem is equivalent to approximating the observed power spectrum time series by superimposed HTC source models, whose parameters are associated with the acoustic features that we wish to extract. The update equations of HTC are explicitly derived by formulating the HTC source model with a Gaussian kernel representation. We verified the potential of the HTC method through experiments. Index Terms—Computational acoustic scene analysis, harmonic temporal structured clustering (HTC), multipitch analyzer.
On the convergence of the concave-convex procedure
 In NIPS Workshop on Optimization for Machine Learning
, 2009
Abstract

Cited by 40 (0 self)
The concave-convex procedure (CCCP) is a majorization-minimization algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is extensively used in many learning algorithms, such as sparse support vector machines (SVMs), transductive SVMs, and sparse principal component analysis. Though widely used in many applications, the convergence behavior of CCCP has received little specific attention. Yuille and Rangarajan analyzed its convergence in their original paper; however, we believe the analysis is not complete. Although the convergence of CCCP can be derived from the convergence of the d.c. algorithm (DCA), its proof is more specialized and technical than actually required for the specific case of CCCP. In this paper, we follow a different reasoning and show how Zangwill’s global convergence theory of iterative algorithms provides a natural framework to prove the convergence of CCCP, allowing a more elegant and simple proof. This underlines Zangwill’s theory as a powerful and general framework for the convergence issues of iterative algorithms, having also been used to prove the convergence of algorithms such as expectation-maximization and generalized alternating minimization. We provide a rigorous analysis of the convergence of CCCP by addressing two questions: (i) When does CCCP find a local minimum or a stationary point of the d.c. program under consideration? (ii) When does the sequence generated by CCCP converge? We also present an open problem on the issue of local convergence of CCCP.
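A CCCP iteration linearizes the concave part at the current iterate and minimizes the resulting convex surrogate. A one-dimensional sketch on the d.c. function f(x) = x^4 − 2x^2, where each surrogate minimization has a closed form (the example function is ours, not from the paper):

```python
import numpy as np

# Split f(x) = x**4 - 2*x**2 as u(x) = x**4 (convex) minus v(x) = 2*x**2 (convex).
# Each CCCP step minimizes the convex surrogate u(x) - x * v'(x_k) with
# v'(x) = 4*x; setting the derivative 4*x**3 - 4*x_k to zero gives the
# closed-form update x_{k+1} = cbrt(x_k).

def cccp_step(x):
    return np.cbrt(x)  # argmin_x of x**4 - 4*x_k*x

x = 0.5
for _ in range(60):
    x = cccp_step(x)
# the iterates increase monotonically toward the stationary point x = 1
```

Each step decreases f, matching the descent property of majorization-minimization; the questions raised in the abstract concern exactly when such iterate sequences converge and to what kind of point.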
Dictionary Learning for Sparse Approximations with the Majorization Method
Abstract

Cited by 37 (10 self)
Abstract—In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum. This holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation (e.g., Expectation-Maximization (EM)) problems. This paper shows that the majorization method can also be applied to the dictionary learning problem. The proposed method is compared with other methods on both synthetic and real data, and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods, not only in terms of average performance but also in terms of computation time.
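For the sparse-approximation subproblem, majorizing the quadratic data-fit term by a separable surrogate yields the familiar soft-thresholding iteration. A minimal sketch of that coefficient-update step only (the dictionary, penalty weight, and iteration count are illustrative; the dictionary update itself is not shown):

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding, the proximal map of t*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, y, lam, n_iter=200):
    """Minimize (1/2)||y - D x||^2 + lam*||x||_1 by majorization:
    the quadratic term is majorized at x_k by a separable surrogate with
    curvature L >= ||D||_2^2, giving a soft-thresholding update."""
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft(x + D.T @ (y - D @ x) / L, lam / L)
    return x
```

Alternating this step with a constrained dictionary update is the overall shape of the learning scheme the abstract describes.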
The Iterative Convex Minorant Algorithm for Nonparametric Estimation
, 1995
Abstract

Cited by 37 (4 self)
The problem of minimizing a smooth convex function over a basic cone in Euclidean space is frequently encountered in nonparametric statistics. For that type of problem we suggest an algorithm and show that it converges to the solution of the minimization problem.
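A canonical instance of minimizing a convex function over a cone is least-squares isotonic regression, where the cone is the set of nondecreasing vectors; the projection can be computed by pooling adjacent violators (our illustration of the problem class, not the paper's algorithm):

```python
def pava(y):
    """Pool Adjacent Violators: least-squares projection of y onto the
    increasing cone {x : x_1 <= x_2 <= ... <= x_n}."""
    vals = []  # current block means
    wts = []   # current block sizes
    for v in y:
        vals.append(float(v))
        wts.append(1)
        # merge blocks while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            wts[-2] = w
            vals.pop()
            wts.pop()
    out = []
    for v, w in zip(vals, wts):
        out.extend([v] * w)
    return out
```

For example, `pava([3, 1, 2])` pools the violating pair into a common mean, giving `[2.0, 2.0, 2.0]`, while an already monotone input is returned unchanged.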
Block-relaxation Algorithms in Statistics
, 1994
Abstract

Cited by 35 (2 self)
In this paper we discuss four such classes of algorithms. Or, more precisely, we discuss a single class of algorithms, and we show how some well-known classes of statistical algorithms fit in this common class. The subclasses are, in logical order: block-relaxation methods, augmentation methods, majorization methods, Expectation-Maximization, Alternating Least Squares, and Alternating Conditional Expectations.
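The common pattern behind these subclasses is to update one block of variables at a time by exact partial minimization. A two-variable sketch on a toy quadratic (the objective is our example, not from the paper):

```python
# Block relaxation on f(x, y) = x**2 + y**2 + (x + y - 2)**2:
# minimize over x with y fixed, then over y with x fixed, and repeat.
# Setting each partial derivative to zero gives the closed-form
# partial minimizers x = (2 - y)/2 and y = (2 - x)/2.

x, y = 0.0, 0.0
for _ in range(100):
    x = (2.0 - y) / 2.0  # argmin over x, y held fixed
    y = (2.0 - x) / 2.0  # argmin over y, x held fixed
# the iterates converge linearly to the joint minimizer x = y = 2/3
```

Each half-step can only decrease f, which is the basic descent argument shared by all the subclasses listed above.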
Convergent incremental optimization transfer algorithms: Application to tomography
 IEEE Trans. Med. Imag., Submitted
Abstract

Cited by 31 (12 self)
Abstract—No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography, there are two known families of convergent OS algorithms: methods that use relaxation parameters (Ahn and Fessler, 2003), and methods based on the incremental expectation-maximization (EM) approach (Hsiao et al., 2002). This paper generalizes the incremental EM approach by introducing a general framework that we call “incremental optimization transfer.” Like incremental EM methods, the proposed algorithms accelerate convergence speeds and ensure global convergence (to a stationary point) under mild regularity conditions without requiring inconvenient relaxation parameters. The general optimization transfer framework enables the use of a very broad family of non-EM surrogate functions. In particular, this paper provides the first convergent OS-type algorithm for transmission tomography. The general approach is applicable to both monoenergetic and polyenergetic transmission scans as well as to other image reconstruction problems. We propose a particular incremental optimization transfer method for (nonconcave) penalized-likelihood (PL) transmission image reconstruction by using separable paraboloidal surrogates (SPS). Results show that the new “transmission incremental optimization transfer (TRIOT)” algorithm is faster than non-incremental ordinary SPS and even OS-SPS, yet is convergent.
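The incremental idea can be sketched on a deliberately simple cost: keep one surrogate per data term, refresh a single surrogate per iteration, and minimize the aggregate. In this toy quadratic case each surrogate is the term itself, so the aggregate minimizer has a closed form (the cost and sizes are our assumptions, far simpler than the paper's tomography surrogates):

```python
import numpy as np

# Incremental optimization transfer on the toy cost sum_i (x - a_i)**2.
# Each term's surrogate is stored via its minimizer (here exact, since the
# terms are quadratic); one surrogate is refreshed per iteration, and the
# aggregated surrogate sum_i (x - anchors[i])**2 is minimized by the mean.

a = np.array([1.0, 2.0, 6.0])
anchors = np.zeros_like(a)  # current surrogate minimizers, one per term
x = 0.0
for it in range(20):
    i = it % len(a)
    anchors[i] = a[i]        # refresh the surrogate for the i-th term
    x = anchors.mean()       # minimize the aggregated surrogate
# x converges to the minimizer of the full cost, mean(a) = 3
```

The paper's contribution is showing that this cyclic surrogate-refresh scheme remains globally convergent for the far less trivial non-EM surrogates (e.g. SPS) used in transmission tomography.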