Results 1-10 of 136
Learning the Kernel Matrix with Semidefinite Programming
, 2002
Cited by 549 (25 self)
Abstract: Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: these are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied ...
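The kernel-learning idea above can be sketched numerically. The toy below is not the paper's SDP formulation: it optimizes a convex combination of three fixed candidate kernels by exponentiated-gradient ascent on kernel-target alignment, a much simpler surrogate. All data, kernel choices, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: two Gaussian blobs (illustrative, not from the paper).
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
Y = np.outer(y, y)                      # "ideal" target kernel y y^T

def rbf(X, gamma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Three fixed candidate kernels; we learn convex combination weights mu.
kernels = [X @ X.T, rbf(X, 0.1), rbf(X, 10.0)]

def alignment(mu):
    K = sum(m * Ki for m, Ki in zip(mu, kernels))
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

# Exponentiated-gradient ascent on the simplex, keeping the best iterate
# (alignment is not concave in mu, so we track the best weights seen).
mu = np.ones(3) / 3
best_mu, best_a = mu.copy(), alignment(mu)
b = np.array([(Ki * Y).sum() for Ki in kernels])
M = np.array([[(Ki * Kj).sum() for Kj in kernels] for Ki in kernels])
for _ in range(200):
    c, n = mu @ b, np.sqrt(mu @ M @ mu)
    grad = b / n - c * (M @ mu) / n ** 3      # d(alignment)/d(mu), up to ||Y||
    mu = mu * np.exp(0.5 * grad / (np.abs(grad).max() + 1e-12))
    mu /= mu.sum()
    a = alignment(mu)
    if a > best_a:
        best_mu, best_a = mu.copy(), a

print("learned weights:", np.round(best_mu, 3), " alignment:", round(best_a, 3))
```

Because the initial uniform weights are among the tracked iterates, the returned alignment can never fall below the uniform-mixture baseline.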
Duality, achievable rates, and sum-rate capacity of Gaussian MIMO broadcast channels
 IEEE Trans. Inform. Theory
, 2003
Cited by 210 (19 self)
Abstract: We consider a multiuser multiple-input multiple-output (MIMO) Gaussian broadcast channel (BC), where the transmitter and receivers have multiple antennas. Since the MIMO BC is in general a non-degraded BC, its capacity region remains an unsolved problem. In this paper, we establish a duality between what is termed the "dirty paper" achievable region (the Caire–Shamai achievable region) for the MIMO BC and the capacity region of the MIMO multiple-access channel (MAC), which is easy to compute. Using this duality, we greatly reduce the computational complexity required for obtaining the dirty paper achievable region for the MIMO BC. We also show that the dirty paper achievable region achieves the sum-rate capacity of the MIMO BC by establishing that the maximum sum rate of this region equals an upper bound on the sum rate of the MIMO BC.
Iterative Water-Filling for Gaussian Vector Multiple-Access Channels
 IEEE Transactions on Information Theory
, 2001
Cited by 191 (11 self)
Abstract: This paper characterizes the capacity region of a Gaussian multiple-access channel with vector inputs and a vector output, with or without intersymbol interference. The problem of finding the optimal input distribution is shown to be a convex programming problem, and an efficient numerical algorithm is developed to evaluate the optimal transmit spectrum under the maximum sum data rate criterion. The numerical algorithm has an iterative water-filling interpretation. It converges from any starting point, and after just one iteration it comes within a small constant (per output dimension per transmission) of the K-user multiple-access sum capacity. These results are also applicable to vector multiple-access fading channels.
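The iterative water-filling idea can be sketched for the simplest vector case: parallel scalar subchannels with unit noise. This is a hedged illustration, not the paper's algorithm for general vector channels; the channel gains, power budgets, and iteration counts below are arbitrary.

```python
import numpy as np

def waterfill(base, budget, iters=60):
    """Return p >= 0 with sum((mu - base)_+) = budget, via bisection on mu."""
    lo, hi = 0.0, base.max() + budget
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - base, 0).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - base, 0.0)

rng = np.random.default_rng(1)
K, N = 3, 8                        # users, parallel subchannels
h = rng.uniform(0.2, 2.0, (K, N))  # channel power gains, unit noise
budgets = np.array([1.0, 2.0, 0.5])

p = np.zeros((K, N))
for _ in range(30):                # cycle through users; each one water-fills
    for k in range(K):
        # Noise plus every other user's received power on each subchannel.
        interference = 1.0 + ((h * p).sum(0) - h[k] * p[k])
        p[k] = waterfill(interference / h[k], budgets[k])

sum_rate = np.log(1.0 + (h * p).sum(0)).sum()
print("sum rate (nats):", round(sum_rate, 4))
```

Each pass lets one user water-fill while treating the other users' signals as noise; for this problem the sum rate is non-decreasing across passes.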
Model selection through sparse maximum likelihood estimation
 Journal of Machine Learning Research
, 2008
Cited by 157 (1 self)
Abstract: We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added ℓ1-norm penalty term. The problem as formulated is convex, but the memory requirements and complexity of existing interior-point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive ℓ1-norm penalized regression. Our second algorithm, based on Nesterov's first-order method, yields a complexity estimate with a better dependence on problem size than existing interior-point methods. Using a log-determinant relaxation of the log partition function (Wainwright and Jordan, 2006), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and senate voting records data.
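A minimal sketch of the ℓ1-penalized maximum likelihood problem in the Gaussian case, using plain proximal gradient (ISTA) rather than either of the paper's two algorithms. The dimensions, penalty weight, and step size are illustrative assumptions, and an eigenvalue clip keeps the iterate positive definite.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample covariance of data drawn from a small sparse Gaussian model.
A = np.eye(4) + 0.4 * np.diag(np.ones(3), 1)   # a banded "true" factor
Prec = A @ A.T                                 # true precision matrix
data = rng.multivariate_normal(np.zeros(4), np.linalg.inv(Prec), 500)
S = np.cov(data, rowvar=False)

lam = 0.1    # l1 penalty weight (illustrative)
step = 0.05  # fixed proximal-gradient step size (illustrative)

def objective(Theta):
    # -log det(Theta) + tr(S Theta) + lam * ||Theta||_1
    return -np.linalg.slogdet(Theta)[1] + np.trace(S @ Theta) + lam * np.abs(Theta).sum()

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

Theta = np.eye(4)
obj0 = objective(Theta)
for _ in range(500):
    grad = S - np.linalg.inv(Theta)        # gradient of the smooth part
    Theta = soft(Theta - step * grad, step * lam)
    Theta = 0.5 * (Theta + Theta.T)        # keep symmetric
    w, V = np.linalg.eigh(Theta)           # clip eigenvalues to stay PD
    Theta = (V * np.maximum(w, 1e-3)) @ V.T

print("objective:", round(obj0, 3), "->", round(objective(Theta), 3))
```

The ℓ1 proximal step is entrywise soft-thresholding; the paper's block coordinate descent and Nesterov-based methods scale far better than this sketch.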
Model selection and estimation in the Gaussian graphical model
 Biometrika (2007), pp. 1–17
, 2007
A game theoretic approach to controller design for hybrid systems
 Proceedings of the IEEE
, 2000
Cited by 89 (29 self)
Abstract: We present a method to design controllers for safety specifications in hybrid systems. The hybrid system combines discrete event dynamics with nonlinear continuous dynamics: the discrete event dynamics model linguistic and qualitative information and naturally accommodate mode switching logic, and the continuous dynamics model the physical processes themselves, such as the continuous response of an aircraft to the forces of aileron and throttle. Input variables model both continuous and discrete control and disturbance parameters. We translate safety specifications into restrictions on the system's reachable sets of states. Then, using analysis based on optimal control and game theory for automata and continuous dynamical systems, we derive Hamilton–Jacobi equations whose solutions describe the boundaries of reachable sets. These equations are the heart of our general controller synthesis technique for hybrid systems, in which we calculate feedback control laws for ...
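The reachable-set computation can be illustrated in one dimension. The sketch below is a crude grid value iteration for the single-input system xdot = u with |u| <= 1, not the paper's Hamilton–Jacobi machinery for hybrid dynamics with disturbances; the target set and horizon are arbitrary.

```python
import numpy as np

# Grid over the state space, and signed distance to the target set [-0.5, 0.5]:
# V(x) <= 0 marks membership.
x = np.linspace(-3.0, 3.0, 601)
V = np.abs(x) - 0.5

dt, T = 0.05, 1.0
controls = np.array([-1.0, 0.0, 1.0])   # admissible inputs for xdot = u

for _ in range(int(T / dt)):
    # A state joins the backward reach set if some control can move it
    # one step toward a state already in the set.
    candidates = [np.interp(x + dt * u, x, V) for u in controls]
    V = np.minimum(V, np.minimum.reduce(candidates))

# After horizon T = 1, the reach set is approximately [-1.5, 1.5]
# (the target inflated by the maximum speed times the horizon).
def in_reach_set(x0):
    return np.interp(x0, x, V) <= 0.0

print(in_reach_set(1.2), in_reach_set(1.8))
```

The zero sublevel set of V plays the role of the reachable-set boundary described by the Hamilton–Jacobi solutions in the paper.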
Asymptotically Optimal Water-Filling in Vector Multiple-Access Channels
 IEEE Trans. Inform. Theory
, 2001
Cited by 62 (4 self)
Abstract: Dynamic resource allocation is an important means to increase the sum capacity of fading multiple-access channels (MACs). In this paper, we consider vector multiple-access channels (channels where each user has multiple degrees of freedom) and study the effect of power allocation as a function of the channel state on the sum capacity (or spectral efficiency), defined as the maximum sum of rates of users per unit degree of freedom at which the users can jointly transmit reliably, in an information-theoretic sense, assuming random directions of received signal. Direct-sequence code-division multiple-access (DS-CDMA) channels and MACs with multiple antennas at the receiver are two systems that fall under the purview of our model. Our main result is the identification of a simple dynamic power-allocation scheme that is optimal in a large system, i.e., with a large number of users and a correspondingly large number of degrees of freedom. A key feature of this policy is that, for any user, it depends on the instantaneous amplitude of the channel state of that user alone, and the structure of the policy is "water-filling." In the context of DS-CDMA, and in the special case of no fading, the asymptotically optimal power policy of water-filling simplifies to constant power allocation over all realizations of signature sequences; this result verifies the conjecture made in [28]. We study the behavior of the asymptotically optimal water-filling policy in various regimes of the number of users per unit degree of freedom and signal-to-noise ratio (SNR). We also generalize this result to multiple classes, i.e., the situation where users in different classes have different average power constraints.
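The water-filling power policy itself can be sketched for a single user over a finite set of fading states, a simplification of the paper's large-system vector setting; the gains, probabilities, and power budget are illustrative.

```python
import numpy as np

# Discrete fading states (channel power gains) with probabilities, unit noise.
h = np.array([0.1, 1.0, 4.0])
q = np.array([1 / 3, 1 / 3, 1 / 3])
P_avg = 1.0

# Water-filling policy p(h) = (level - 1/h)_+ ; choose the water level by
# bisection so the average power E[p(h)] meets the budget.
def avg_power(level):
    return (q * np.maximum(level - 1.0 / h, 0.0)).sum()

lo, hi = 0.0, (1.0 / h).max() + P_avg / q.min()
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if avg_power(mid) > P_avg else (mid, hi)
level = 0.5 * (lo + hi)
p = np.maximum(level - 1.0 / h, 0.0)

C_wf = (q * np.log(1.0 + h * p)).sum()         # ergodic rate, water-filling
C_const = (q * np.log(1.0 + h * P_avg)).sum()  # ergodic rate, constant power
print(round(C_wf, 4), ">=", round(C_const, 4))
```

Note the policy depends only on the user's own channel state, mirroring the key feature highlighted in the abstract; under the same average power, water-filling cannot do worse than constant allocation.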
Robust minimum variance beamforming
 IEEE Transactions on Signal Processing
, 2005
Cited by 62 (10 self)
Abstract: This paper introduces an extension of minimum variance beamforming that explicitly takes into account variation or uncertainty in the array response. Sources of this uncertainty include imprecise knowledge of the angle of arrival and uncertainty in the array manifold. In our method, uncertainty in the array manifold is explicitly modeled via an ellipsoid that gives the possible values of the array for a particular look direction. We choose weights that minimize the total weighted power output of the array, subject to the constraint that the gain should exceed unity for all array responses in this ellipsoid. The robust weight selection process can be cast as a second-order cone program that can be solved efficiently using Lagrange multiplier techniques. If the ellipsoid reduces to a single point, the method coincides with Capon's method. We describe in detail several methods that can be used to derive an appropriate uncertainty ellipsoid for the array response. We form separate uncertainty ellipsoids for each component in the signal path (e.g., antenna, electronics) and then determine an aggregate uncertainty ellipsoid from these. We give new results for modeling the element-wise products of ellipsoids. We demonstrate the robust beamforming and the ellipsoidal modeling methods with several numerical examples.
Index Terms: Ellipsoidal calculus, Hadamard product, robust beamforming, second-order cone programming.
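The point-ellipsoid limiting case, Capon's method, is easy to sketch; the robust second-order cone program itself is not attempted here. The array size, directions, and interference power below are arbitrary assumptions.

```python
import numpy as np

N = 8  # half-wavelength uniform linear array (illustrative)

def steer(theta_deg):
    th = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(N) * np.sin(th))

a_look = steer(0.0)    # desired look direction
a_int = steer(20.0)    # strong interferer off the look direction
# Covariance: unit noise plus a 20 dB interferer.
R = np.eye(N) + 100.0 * np.outer(a_int, a_int.conj())

# Capon / MVDR weights: minimize w^H R w subject to w^H a_look = 1.
Rinv_a = np.linalg.solve(R, a_look)
w = Rinv_a / (a_look.conj() @ Rinv_a)

print("look-direction gain:", abs(w.conj() @ a_look))
print("interferer power response:", abs(w.conj() @ a_int) ** 2)
```

The distortionless constraint holds exactly while the interferer is deeply attenuated; the paper's robust version enforces gain at least unity over a whole ellipsoid of responses rather than at this single point.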
Convex Optimization & Euclidean Distance Geometry
, 2005
Cited by 46 (0 self)
Abstract: ISBN 9781847280640 (International). Version 07.07.2006, available in print as conceived in color. cybersearch: I. convex optimization II. convex cones III. convex geometry IV. distance geometry V. distance matrix. Programs and graphics by Matlab; typesetting by, with donations from SIAM and AMS. This searchable electronic color pdfBook is click-navigable within the text by page, section, subsection, chapter, theorem, example, definition, cross reference, citation, equation, figure, table, and hyperlink. A pdfBook has no electronic copy protection and can be read and printed by most computers. The publisher hereby grants the right to reproduce this work in any format, but limited to personal use. © 2005 Jon Dattorro
Statistical Timing for Parametric Yield Prediction of Digital Integrated Circuits
, 2003
Cited by 45 (7 self)
Abstract: Uncertainty in circuit performance due to manufacturing and environmental variations is increasing with each new generation of technology. It is therefore important to predict the performance of a chip as a probabilistic quantity. This paper proposes three novel algorithms for statistical timing analysis and parametric yield prediction of digital integrated circuits. The methods have been implemented in the context of an industrial static timing analyzer. Numerical results are presented to study the strengths and weaknesses of these complementary approaches. Across-the-chip variability continues to be accommodated by the timing analyzer's "Linear Combination of Delay (LCD)" mode. Timing analysis results in the face of statistical temperature and Vdd variations are presented on an industrial ASIC part on which a bounded timing methodology leads to surprisingly wrong results.
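The idea of treating chip delay as a probabilistic quantity can be sketched with a small Monte Carlo timing model. This is not any of the paper's three algorithms; the gate counts, sensitivities, and path lists are invented for illustration. Each gate delay is a nominal value plus a linear response to one shared global variation source plus independent local noise, loosely echoing a linear-combination-of-delays style model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical gate library: nominal delay, sensitivity to one shared
# global source (e.g. temperature or Vdd), and a local sigma per gate.
n_gates = 12
nominal = rng.uniform(0.8, 1.2, n_gates)
global_sens = rng.uniform(0.02, 0.08, n_gates)
local_sigma = rng.uniform(0.01, 0.05, n_gates)

# Hypothetical timing paths, as lists of gate indices.
paths = [[0, 1, 2, 3, 4], [2, 3, 5, 6, 7, 8], [0, 9, 10, 11]]

def mc_yield(t_clk, n_samples=20000):
    """Estimate P(chip delay <= t_clk) by Monte Carlo."""
    g = rng.standard_normal(n_samples)               # shared global variation
    local = rng.standard_normal((n_samples, n_gates))
    delay = nominal + np.outer(g, global_sens) + local * local_sigma
    chip = np.max([delay[:, p].sum(axis=1) for p in paths], axis=0)
    return (chip <= t_clk).mean()

for t in (5.0, 6.0, 8.0):
    print(f"P(delay <= {t}) = {mc_yield(t):.3f}")
```

Sweeping the clock target traces out the parametric yield curve; the shared global draw correlates all gate delays within each sample, which is what makes the path maximum statistically meaningful.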