Results 1–10 of 14
Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise
2006
Cited by 302 (1 self)
This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis.
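The convex relaxation the abstract refers to replaces the combinatorial sparsity constraint with an ℓ1 penalty. As an illustrative sketch (not the paper's exact formulation), the resulting lasso program can be solved by iterative soft-thresholding; the matrix size, sparsity level, and parameter values below are arbitrary choices for the demo:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding, the proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_lasso(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

# Illustrative demo: recover a 3-sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[5, 20, 70]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista_lasso(A, y, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # estimated support
```

The convex program runs in polynomial time, as the abstract notes; the greedy and exhaustive alternatives it is compared against do not come with that guarantee.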
One-step sparse estimates in nonconcave penalized likelihood models
Annals of Statistics, 2008
Cited by 56 (0 self)
Fan and Li propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but maximizing the penalized likelihood function is computationally challenging because the objective function is nondifferentiable and nonconcave. In this article, we propose a new unified algorithm based on the local linear approximation (LLA) for maximizing the penalized likelihood for a broad class of concave penalty functions. Convergence and other theoretical properties of the LLA algorithm are established. A distinguishing feature of the LLA algorithm is that at each LLA step, the LLA estimator can naturally adopt a sparse representation. Thus, we suggest using the one-step LLA estimator from the LLA algorithm as the final estimate. Statistically, we show that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties with good initial estimators. Computationally, the one-step LLA estimation methods dramatically reduce the computational cost of maximizing the nonconcave penalized likelihood. We conduct Monte Carlo simulations to assess the finite-sample performance of the one-step sparse estimation methods. The results are very encouraging.
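To make the LLA idea concrete, here is a small sketch (not the authors' code): one LLA step replaces the concave penalty by its tangent line at an initial estimate, which turns the problem into a weighted ℓ1 regression and so yields a sparse estimate directly. The SCAD penalty, the iterative soft-thresholding inner solver, and all parameter values are illustrative assumptions:

```python
import numpy as np

def scad_deriv(beta, lam, a=3.7):
    """Derivative p'_lam(t) of the SCAD penalty at t = |beta| (Fan and Li)."""
    b = np.abs(beta)
    return lam * (b <= lam) + np.maximum(a * lam - b, 0.0) / (a - 1) * (b > lam)

def one_step_lla(A, y, beta0, lam, n_iter=300):
    """One LLA step: solve the weighted l1 problem
    min_b 0.5 * ||A b - y||^2 + sum_j w_j |b_j|,  w_j = p'_lam(|beta0_j|),
    here by iterative soft-thresholding."""
    w = scad_deriv(beta0, lam)
    L = np.linalg.norm(A, 2) ** 2
    b = beta0.copy()
    for _ in range(n_iter):
        z = b - A.T @ (A @ b - y) / L
        b = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
    return b

# Illustrative demo: ordinary least squares as the "good initial estimator".
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
beta_true = np.zeros(10)
beta_true[:3] = [3.0, -2.0, 1.5]
y = A @ beta_true + 0.1 * rng.standard_normal(200)
beta0 = np.linalg.lstsq(A, y, rcond=None)[0]
beta1 = one_step_lla(A, y, beta0, lam=0.5)
```

Note how large initial coefficients get zero weight (no shrinkage, hence the oracle behavior) while small ones get the full weight lam and are shrunk toward zero.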
Variable Selection Using MM Algorithm
Annals of Statistics, 2005
Cited by 36 (3 self)
Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize–maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton–Raphson-like aspect of these algorithms …
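A minimal sketch of the perturb-then-MM idea, using the ℓ1 penalty for concreteness (the paper covers a broad class of penalties): the ε-perturbed quadratic majorizer turns each MM step into a ridge-like linear solve. With ε > 0 small coefficients are driven toward zero rather than set exactly to zero. All names and parameter values below are illustrative:

```python
import numpy as np

def mm_penalized_ls(A, y, lam, eps=1e-6, n_iter=50):
    """MM for l1-penalized least squares via a perturbed quadratic majorizer:
    each step minimizes the surrogate by solving the ridge-like system
    (A'A + diag(lam / (eps + |beta_k|))) beta = A'y."""
    AtA, Aty = A.T @ A, A.T @ y
    beta = np.linalg.lstsq(A, y, rcond=None)[0]   # start from least squares
    for _ in range(n_iter):
        d = lam / (eps + np.abs(beta))            # curvature of the majorizer
        beta = np.linalg.solve(AtA + np.diag(d), Aty)
    return beta

# Illustrative demo with one large true coefficient.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[0] = 3.0
y = A @ beta_true + 0.05 * rng.standard_normal(100)
beta = mm_penalized_ls(A, y, lam=0.5)
```

Because each surrogate is smooth and quadratic, every step is a single linear solve, which is what makes the perturbation trick attractive for nondifferentiable penalties.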
Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing
2009
Cited by 26 (3 self)
The replica method is a nonrigorous but widely accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector "decouples" as n scalar MAP estimators. The result is a counterpart to Guo and Verdú's replica analysis of minimum mean-squared error estimation. The replica MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for exactly computing various performance metrics, including mean-squared error and sparsity pattern recovery probability.
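The scalar estimators mentioned in the abstract are the familiar soft- and hard-threshold maps; a minimal sketch of the two operators (the effective noise level that the replica analysis prescribes is not computed here):

```python
import numpy as np

def soft_threshold(z, t):
    """Scalar lasso denoiser: argmin_x 0.5 * (x - z)**2 + t * |x|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def hard_threshold(z, t):
    """Scalar zero-norm denoiser: argmin_x 0.5 * (x - z)**2 + 0.5 * t**2 * (x != 0).
    Keeping z beats setting x = 0 exactly when 0.5 * z**2 > 0.5 * t**2, i.e. |z| > t."""
    return np.where(np.abs(z) > t, z, 0.0)
```

The decoupling result says the vector MAP estimate behaves, coordinate by coordinate, like one of these scalar maps applied to a noisy observation of the true coefficient.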
From Bernoulli–Gaussian Deconvolution to Sparse Signal Restoration
Cited by 9 (4 self)
© 2011 IEEE. Formulated as a least-square problem under an ℓ0 constraint, sparse signal restoration is a discrete optimization problem, known to be NP-complete. Classical algorithms include, in order of increasing cost and efficiency, matching pursuit (MP), orthogonal matching pursuit (OMP), orthogonal least squares (OLS), stepwise regression algorithms, and exhaustive search. We revisit the single most likely replacement (SMLR) algorithm, developed in the mid-1980s for Bernoulli–Gaussian signal restoration. We show that the formulation of sparse signal restoration as a limit case of Bernoulli–Gaussian signal restoration leads to an ℓ0-penalized least-square minimization problem, to which SMLR can be straightforwardly adapted. The resulting algorithm, called single best replacement (SBR), can be interpreted as a forward–backward extension of OLS sharing similarities with stepwise regression algorithms. Some structural properties of SBR are put forward. A fast and stable implementation is proposed. The approach is illustrated on two inverse problems involving highly correlated dictionaries. We show that SBR is very competitive with popular sparse algorithms in terms of the trade-off between accuracy and computation time. Index Terms: Bernoulli–Gaussian (BG) signal restoration, inverse problems, mixed ℓ2-ℓ0 criterion minimization, orthogonal least squares, SMLR algorithm, sparse signal estimation, stepwise regression algorithms.
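A naive sketch of the SBR idea (not the paper's fast implementation, which updates matrix factorizations rather than re-solving from scratch): at each iteration, try flipping every atom in or out of the current support and keep the single flip that best decreases the ℓ0-penalized least-square cost. The problem sizes and parameter values are illustrative:

```python
import numpy as np

def sbr(A, y, lam, max_iter=100):
    """Single Best Replacement sketch: flip one atom in/out of the support per
    iteration to minimize 0.5 * ||y - A_S x_S||^2 + lam * |S| (naive re-solve
    of the least-squares fit for every trial support)."""
    p = A.shape[1]
    support = np.zeros(p, dtype=bool)

    def cost(S):
        if not S.any():
            return 0.5 * float(y @ y)
        xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ xs
        return 0.5 * float(r @ r) + lam * int(S.sum())

    best = cost(support)
    for _ in range(max_iter):
        trial_costs = []
        for j in range(p):
            trial = support.copy()
            trial[j] = ~trial[j]                 # insertion or removal
            trial_costs.append(cost(trial))
        j_best = int(np.argmin(trial_costs))
        if trial_costs[j_best] < best - 1e-12:
            best = trial_costs[j_best]
            support[j_best] = ~support[j_best]
        else:
            break                                # no single flip improves
    return support, best

# Illustrative demo on a small random problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true + 0.01 * rng.standard_normal(30)
support, final_cost = sbr(A, y, lam=0.01)
print(np.flatnonzero(support))
```

The removal moves are what make SBR a forward–backward method: unlike OMP or OLS, an atom selected early can later be dropped if that lowers the penalized cost.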
The Knowledge Gradient Algorithm For Online Subset Selection
Cited by 4 (3 self)
We derive a one-period look-ahead policy for online subset selection problems, where learning about one subset also gives us information about other subsets. We show that the resulting decision rule is easily computable, and present experimental evidence that the policy is competitive against other online learning policies.
Balancing comfort: Occupants' control of window blinds in private offices
2005
Cited by 3 (0 self)
Doctoral dissertation by Vorapat Inkarojrit, Doctor of Philosophy in Architecture, University of California, Berkeley (Professor Charles C. Benton, Chair). The goal of this study was to develop predictive models of window blind control that could be used as a function in energy simulation programs and provide the basis for the development of future automated shading systems. Toward this goal, a two-part study, consisting of a window blind usage survey and a field study, was conducted in Berkeley, California, USA, during a period spanning from the vernal equinox to the winter solstice. A total of one hundred and thirteen office building occupants participated in the survey. Twenty-five occupants participated in the field study, in which measurements of physical environmental conditions were cross-linked to the participants' assessments of visual and thermal comfort sensations. Results from the survey showed that the primary reason for closing window blinds was to reduce glare from sunlight and bright windows. For the field study, a total of thirteen predictive window blind control logistic models were derived using the Generalized Estimating Equations (GEE) technique.
Sparse Approximations for High-Fidelity Compression of Network Traffic Data
In Proceedings of the ACM/USENIX Internet Measurement Conference (IMC), 2005
Cited by 2 (1 self)
An important component of traffic analysis and network monitoring is the ability to correlate events across multiple data streams, from different sources and from different time periods. Storing such a large amount of data for visualizing traffic trends and for building prediction models of "normal" network traffic represents a great challenge because the data sets are enormous. In this paper we present the application and analysis of signal processing techniques for effective practical compression of network traffic data. We propose to use a sparse approximation of the network traffic data over a rich collection of natural building blocks, with several natural dictionaries drawn from the networking community's experience with traffic data. We observe that with such natural dictionaries, high-fidelity compression of the original traffic data can be achieved such that even with a compression ratio of around 1:6, the compression error, in terms of the energy of the original signal lost, is less than 1%. We also observe that the sparse representations are stable over time, and that the stable components correspond to well-defined periodicities in network traffic.
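For intuition, sparse approximation over a dictionary can be sketched with orthogonal matching pursuit, together with the "energy lost" fidelity metric the abstract uses. The random dictionary below is an illustrative stand-in for the paper's traffic-specific dictionaries:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of dictionary D,
    re-fitting the coefficients by least squares after each selection."""
    residual, idx, x = y.copy(), [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        idx.append(j)
        x, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ x
    return idx, x

def energy_lost(D, y, idx, x):
    """Fraction of the original signal's energy lost by the approximation."""
    r = y - D[:, idx] @ x
    return float(r @ r / (y @ y))

# Illustrative demo: a signal that is exactly 3-sparse in a random dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
y = D[:, [3, 40, 100]] @ np.array([2.0, -1.0, 0.5])
idx, x = omp(D, y, 3)
print(energy_lost(D, y, idx, x))
```

Keeping k coefficient/index pairs instead of the full signal gives the compression ratio; energy_lost is the error metric the paper reports (below 1% at roughly 1:6 compression).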
Bayesian Modelling of Music: Algorithmic Advances and . . .
2005
In order to perform many signal processing tasks such as classification, pattern recognition and coding, it is helpful to specify a signal model in terms of meaningful signal structures. In general, designing such a model is complicated and for many signals it is not feasible to specify the appropriate structure. Adaptive models overcome this problem by learning structures from a set of signals. Such adaptive models need to be general enough, so that they can represent relevant structures. However, more general models often require additional constraints to guide the learning procedure. In this thesis …
Date
Simulation is the research tool of choice for a majority of the mobile ad hoc network (MANET) community. However, while the use of simulation has increased, the credibility of the simulation results has decreased. To determine the state of MANET simulation studies, we surveyed the 2000–2005 proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). We present the results of our survey and summarize common simulation study pitfalls. We develop standards and algorithms that help enable MANET researchers to move toward the goal of simulation-based research with credible scenarios. We also document a large variable analysis of the Location Aided Routing (LAR) protocol. This study discovers several variables that have a significant impact on LAR performance but are not always considered in a MANET simulation study. Finally, we discuss tools we created that aid the development of credible simulation studies. We …