Results 1–10 of 11
Joint Tx-Rx beamforming design for multicarrier MIMO channels: a unified framework for convex optimization
IEEE Trans. Signal Processing, 2003
Cited by 127 (12 self)
Abstract
This paper addresses the joint design of transmit and receive beamforming or linear processing (commonly termed linear precoding at the transmitter and equalization at the receiver) for multicarrier multiple-input multiple-output (MIMO) channels under a variety of design criteria. Instead of considering each design criterion separately, we generalize the existing results by developing a unified framework based on two families of objective functions that embrace most reasonable criteria for designing a communication system: Schur-concave and Schur-convex functions. Once the optimal structure of the transmit-receive processing is known, the design problem simplifies and can be formulated within the powerful framework of convex optimization theory, in which a great number of interesting design criteria can be easily accommodated and efficiently solved, even though closed-form expressions may not exist. From this perspective, we analyze a variety of design criteria, and in particular, we derive optimal beamvectors in the sense of having minimum average bit error rate (BER). Additional constraints on the peak-to-average ratio (PAR) or on the signal dynamic range are easily included in the design. We propose two practical multilevel water-filling solutions that perform very close to the optimum in average BER with low implementation complexity. If cooperation among the processing operating at different carriers is allowed, the performance improves significantly. Interestingly, with carrier cooperation, it turns out that the exact optimal solution in terms of average BER can be obtained in closed form.
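The multilevel water-filling solutions mentioned above build on the classic single-level water-filling power allocation. As a hedged sketch (the function name, bisection tolerance, and example channel gains are our own illustration, not from the paper), the single-level allocation can be computed as:

```python
import numpy as np

def waterfill(gains, total_power, tol=1e-9):
    """Classic water-filling: p_i = max(0, mu - 1/g_i), with the water
    level mu found by bisection so that sum(p_i) equals total_power."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + (1.0 / gains).max()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        if p.sum() > total_power:
            hi = mu          # water level too high
        else:
            lo = mu          # water level too low
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

# stronger subchannels receive more power; very weak ones get none
p = waterfill([4.0, 1.0, 0.25], total_power=1.0)
```

Note that the weakest subchannel (gain 0.25) ends up below the water level and is allocated zero power.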
Convex Optimization Problems Involving Finite Autocorrelation Sequences
2001
Cited by 24 (0 self)
Abstract
We discuss convex optimization problems where some of the variables are constrained to be finite autocorrelation sequences. Problems of this form arise in signal processing and communications, and we describe applications in filter design and system identification. Autocorrelation constraints in optimization problems are often approximated by sampling the corresponding power spectral density, which results in a set of linear inequalities. They can also be cast as linear matrix inequalities via the Kalman-Yakubovich-Popov lemma. The linear matrix inequality formulation is exact and results in convex optimization problems that can be solved using interior-point methods for semidefinite programming. However, it has an important drawback: to represent an autocorrelation sequence of length n, it requires the introduction of a large number (n(n + 1)/2) of auxiliary variables. This results in a high computational cost when general-purpose semidefinite programming solvers are used. We present a more efficient implementation based on duality and on interior-point methods for convex problems with generalized linear inequalities.
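The sampling approximation described above replaces nonnegativity of the power spectral density S(ω) = r₀ + 2 Σₖ rₖ cos(kω) by finitely many linear inequalities in the autocorrelation vector r. A minimal sketch of how the resulting constraint matrix can be built (the function name and grid size are our own choices):

```python
import numpy as np

def psd_sampling_matrix(n, m):
    """Each row evaluates S(w_j) = r_0 + 2*sum_k r_k cos(k w_j) as a
    linear functional of r = (r_0, ..., r_{n-1}); the sampled
    nonnegativity constraint is then simply A @ r >= 0."""
    w = np.linspace(0.0, np.pi, m)      # frequency grid on [0, pi]
    k = np.arange(n)
    A = np.cos(np.outer(w, k))
    A[:, 1:] *= 2.0                     # double all terms except r_0
    return A

# r = (2, 1) is the autocorrelation of h = [1, 1]:
# S(w) = 2 + 2 cos(w), which is nonnegative everywhere
A = psd_sampling_matrix(2, 64)
vals = A @ np.array([2.0, 1.0])
```

The sampled values here range from 0 (at ω = π) to 4 (at ω = 0), confirming that this particular r satisfies the discretized constraint.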
Linear Matrix Inequality Formulation of Spectral Mask Constraints
2000
Cited by 21 (5 self)
Abstract
The design of a finite impulse response filter often involves a spectral 'mask' that the magnitude spectrum must satisfy. This constraint can be awkward because it is semi-infinite, since it yields two inequality constraints for each frequency point. In current practice, spectral masks are often approximated by discretization, but in this paper we will show that piecewise-constant masks can be precisely enforced in a finite and convex manner via linear matrix inequalities.
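The discretized mask check that the paper contrasts with its exact LMI formulation can be sketched as follows (the function name, grid size, and example mask are illustrative assumptions, not the paper's):

```python
import numpy as np

def mask_violations(h, bands, grid=512):
    """Count sampled-frequency violations of piecewise-constant upper
    bounds on |H(e^{jw})|. bands: list of (w_lo, w_hi, bound), w in [0, pi].
    This is the discretization the exact LMI approach avoids: between
    grid points the mask is not actually enforced."""
    w = np.linspace(0.0, np.pi, grid)
    H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h   # frequency response
    mag = np.abs(H)
    bad = 0
    for w_lo, w_hi, bound in bands:
        sel = (w >= w_lo) & (w <= w_hi)
        bad += int((mag[sel] > bound).sum())
    return bad

# 5-tap moving average: unit gain near DC, small stopband sidelobes
h = np.ones(5) / 5.0
v = mask_violations(h, [(0.0, 0.2, 1.05), (2.0, np.pi, 0.3)])
```

With the loose mask above the filter passes (`v == 0`); tightening the stopband bound below the sidelobe level produces violations.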
Connections Between Semi-Infinite and Semidefinite Programming
Cited by 5 (2 self)
Abstract
We consider convex optimization problems with linear matrix inequality (LMI) constraints, i.e., constraints of the form F(x) = F_0 + x_1 F_1 + ··· + x_m F_m ⪰ 0, (1.1) where the matrices F_i = F_i^T …
Handling nonnegative constraints in spectral estimation
In Proceedings of the 34th Asilomar Conference on Signals, Systems, and Computers, 2000
Cited by 4 (0 self)
Abstract
We consider convex optimization problems with the constraint that the variables form a finite autocorrelation sequence, or equivalently, that the corresponding power spectral density is nonnegative. This constraint is often approximated by sampling the power spectral density, which results in a set of linear inequalities. It can also be cast as a linear matrix inequality via the positive-real lemma. The linear matrix inequality formulation is exact and results in convex optimization problems that can be solved using interior-point methods for semidefinite programming. However, these methods require O(n^6) floating point operations per iteration if a general-purpose implementation is used. We introduce a much more efficient method with a complexity of O(n^3) flops per iteration.
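The exact LMI route mentioned in this and the earlier autocorrelation abstract rests on the trace parametrization: r is a finite autocorrelation sequence exactly when r_k = Σ_i P[i, i+k] for some positive semidefinite matrix P, which is where the n(n+1)/2 auxiliary variables come from. A small sketch (names ours) verifying the parametrization on a rank-one certificate P = h hᵀ:

```python
import numpy as np

def autocorr_from_gram(P):
    """Trace parametrization: r_k = sum_i P[i, i+k] (the k-th
    superdiagonal sum). Any PSD P yields a valid finite
    autocorrelation sequence, and conversely."""
    n = P.shape[0]
    return np.array([np.trace(P, offset=k) for k in range(n)])

h = np.array([1.0, 2.0, 1.0])
P = np.outer(h, h)            # rank-one PSD certificate
r = autocorr_from_gram(P)     # recovers r_k = sum_i h_i h_{i+k} = (6, 4, 1)
```

Since P = h hᵀ, the recovered r is exactly the autocorrelation of the filter h, and its spectrum S(ω) = 6 + 8 cos ω + 2 cos 2ω = |H(e^{jω})|² is nonnegative.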
Extreme Optics and the Search for Earth-like Planets
2006
Cited by 1 (0 self)
Abstract
In this paper I describe a new and exciting application of optimization technology. The problem is to design a space telescope capable of imaging Earth-like planets around nearby stars. Because of limitations inherent in the wave nature of light, the design problem is one of diffraction control, so as to provide the extremely high contrast needed to image a faint planet positioned very close to its much brighter star. I will describe the mathematics behind the diffraction control problem and explain how modern optimization tools were able to provide unexpected solutions that actually changed NASA's approach to this problem.
Filter Design with Low Complexity Coefficients
Cited by 1 (0 self)
Abstract
We introduce a heuristic for designing filters that have low-complexity coefficients, as measured by the total number of nonzero digits in the binary or canonic signed digit (CSD) representations of the filter coefficients, while still meeting a set of design specifications, such as limits on frequency response magnitude, phase, and group delay. Numerical examples show that the method is able to attain very low-complexity designs with only modest relaxation of the specifications.
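The CSD complexity measure itself is easy to compute. A hedged sketch (the conversion routine and cost function below are our own illustration, not the paper's heuristic): CSD writes an integer with digits in {-1, 0, 1} such that no two adjacent digits are nonzero, which typically needs fewer nonzero digits than plain binary.

```python
def csd(x):
    """Canonic signed digit representation of an integer, LSB first,
    digits in {-1, 0, 1} with no two adjacent nonzeros."""
    digits = []
    while x != 0:
        if x % 2:
            d = 2 - (x % 4)   # +1 if x = 1 (mod 4), -1 if x = 3 (mod 4)
            x -= d
        else:
            d = 0
        digits.append(d)
        x //= 2
    return digits

def csd_cost(coeffs, scale=2**10):
    """Total nonzero CSD digits of the coefficients after fixed-point
    quantization -- the complexity measure described in the abstract."""
    return sum(sum(d != 0 for d in csd(round(c * scale))) for c in coeffs)
```

For example, 7 = 8 - 1 needs two nonzero CSD digits versus three in binary (111).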
On the average power of multiple subcarrier intensity modulated optical signals: Nehari’s problem and coding bounds
In Proc. IEEE International Conference on Communications, May 11–14, 2003
Cited by 1 (1 self)
Abstract
Multiple subcarrier modulation (MSM) is an attractive technique for high-speed optical wireless communication. Its main disadvantage is its low average power efficiency, a problem analogous to the high peak-to-mean envelope power ratio (PMEPR) of multicarrier signals. In this paper, we consider the achievable average power reduction of MSM signals using optimized reserved carriers and coding methods. Based on Nehari's result, we present a lower bound on the maximum average power of the signal after adding the reserved carriers. It is shown that the mean value of the average required power behaves very close to √(2n log log n) for a BPSK constellation, where n is the number of subcarriers. We then consider finding the optimum values for the carriers and the effect of having finite bandwidth for the reserved carriers. Then, based mainly on recent coding results for the PMEPR of multicarrier signals, we show the existence of very high-rate codes with average power O(√n log n) for large values of n, and furthermore the existence of codes with rate not vanishing to zero and average power O(√n) asymptotically.
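The average power in question is governed by the signal's largest negative excursion: an intensity-modulated signal must be shifted up by a DC bias to stay nonnegative. A small Monte Carlo sketch of that required bias for random BPSK subcarrier data (the function name, oversampling factor, and trial count are our own choices, and this only illustrates the quantity, not the paper's bounds):

```python
import numpy as np

rng = np.random.default_rng(0)

def required_bias(bits, oversample=16):
    """DC offset needed to make the BPSK multiple-subcarrier signal
    s(t) = sum_k b_k cos(2 pi k t) nonnegative over one period:
    bias = -min_t s(t)."""
    n = len(bits)
    t = np.linspace(0.0, 1.0, oversample * n, endpoint=False)
    s = np.cos(2 * np.pi * np.outer(t, np.arange(1, n + 1))) @ bits
    return max(0.0, -s.min())

n = 64
biases = [required_bias(rng.choice([-1.0, 1.0], size=n)) for _ in range(20)]
```

The observed biases are far below the trivial worst case of n, consistent with the sub-linear growth the abstract describes.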
Real-Time Convex Optimization … Recent advances that make it easier to design and implement algorithms
2010
Abstract
Convex optimization has been used in signal processing for a long time to choose coefficients for use in fast (linear) algorithms, such as in filter or array design; more recently, it has been used to carry out (nonlinear) processing on the signal itself. Examples of the latter case include total variation denoising, compressed sensing, fault detection, and image classification. In both scenarios, the optimization is carried out on time scales of seconds or minutes and without strict time constraints. Convex optimization has traditionally been considered computationally expensive, so its use has been limited to applications where plenty of time is available. Such restrictions are no longer justified. The combination of dramatically increased computing power, modern algorithms, and new coding approaches has delivered an enormous speed increase, which makes it possible to solve modest-sized convex optimization problems on microsecond or millisecond time scales and with strict deadlines. This enables real-time convex optimization in signal processing.
Fast Fourier Optimization: Sparsity Matters
Abstract
Many interesting and fundamentally practical optimization problems, ranging from optics, to signal processing, to radar and acoustics, involve constraints on the Fourier transform of a function. It is well-known that the fast Fourier transform (fft) is a recursive algorithm that can dramatically improve the efficiency of computing the discrete Fourier transform. However, because it is recursive, it is difficult to embed into a linear optimization problem. In this paper, we explain the main idea behind the fast Fourier transform and show how to adapt it in such a manner as to make it encodable as constraints in an optimization problem. We demonstrate a real-world problem from the field of high-contrast imaging. On this problem, the dramatic improvements translate into an ability to solve problems with a much finer grid of discretized points. As we shall show, in general, the "fast Fourier" version of the optimization constraints produces a larger but sparser constraint matrix, and therefore one can think of the fast Fourier transform as a method of sparsifying the constraints in an optimization problem, which is usually a good thing.
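The sparsification idea can be illustrated directly: the dense n×n DFT matrix factors into log₂ n butterfly stages, each with only about 2n nonzeros, times a permutation. A hedged sketch (the function names and this particular radix-2 construction are our own, not the paper's encoding):

```python
import numpy as np
from functools import reduce

def dft_matrix(n):
    """Dense DFT matrix: n^2 nonzero entries."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

def fft_factors(n):
    """Radix-2 factorization F_n = S_1 S_2 ... P: each stage S_i is
    sparse (at most 2n nonzeros) even though F_n itself is dense."""
    if n == 1:
        return [np.eye(1, dtype=complex)]
    m = n // 2
    D = np.diag(np.exp(-2j * np.pi * np.arange(m) / n))  # twiddle factors
    I = np.eye(m)
    butterfly = np.block([[I, D], [I, -D]])
    # embed each sub-factor block-diagonally: I_2 kron S
    return [butterfly] + [np.kron(np.eye(2), S) for S in fft_factors(m)]

def even_odd_perm(n):
    """Recursive even/odd reordering (the FFT's bit-reversal permutation)."""
    if n == 1:
        return np.eye(1)
    m = n // 2
    P = np.zeros((n, n))
    P[np.arange(m), np.arange(0, n, 2)] = 1.0      # even samples first
    P[np.arange(m, n), np.arange(1, n, 2)] = 1.0   # then odd samples
    return np.kron(np.eye(2), even_odd_perm(m)) @ P

n = 8
stages = fft_factors(n)
F = reduce(np.matmul, stages) @ even_odd_perm(n)   # equals dft_matrix(n)
```

For n = 8 the dense matrix has 64 nonzeros, while no single stage has more than 16: larger, but much sparser, factors.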