Results 1–10 of 180
Regularization paths for generalized linear models via coordinate descent
, 2009
Abstract

Cited by 228 (8 self)
We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.
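The per-coordinate update underlying this approach has a closed form (soft-thresholding), which a minimal lasso-only sketch makes concrete. The function and parameter names below are our own illustrative choices; the elastic-net mixing and GLM losses the paper covers are omitted:

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: closed-form solution of the one-variable
    # lasso subproblem that each coordinate-descent step solves.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    # Cyclical coordinate descent for
    #   minimize_b (1/2n) ||y - X b||^2 + lam * ||b||_1,
    # keeping the residual updated incrementally.
    n, p = X.shape
    b = np.zeros(p)
    r = y.astype(float).copy()          # residual for b = 0
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                  # remove coordinate j
            rho = (X[:, j] @ r) / n              # partial correlation
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]                  # restore residual
    return b
```

Sweeping coordinates along a decreasing grid of `lam` values, warm-starting each solve from the previous one, gives the regularization path the abstract refers to.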
NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
, 2009
Abstract

Cited by 82 (2 self)
Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov’s smoothing technique, this paper introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov’s work, one of the key ideas of this algorithm is a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization, and ...
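The two ingredients named in the abstract, a smoothed surrogate for the ℓ1 norm and Nesterov-style iterate averaging, can be sketched in toy form. This is an illustrative accelerated scheme for a generic smooth gradient, not the NESTA algorithm itself; all names here are our own:

```python
import numpy as np

def smoothed_l1_grad(x, mu):
    # Gradient of a Huber-type smoothing of ||x||_1: quadratic for
    # |x_i| <= mu, linear elsewhere; mu > 0 is the smoothing parameter.
    return np.where(np.abs(x) <= mu, x / mu, np.sign(x))

def nesterov_min(grad, x0, L, n_iter=300):
    # Nesterov's scheme for a smooth gradient with Lipschitz constant L:
    # y_k is a plain gradient step, z_k aggregates all past gradients
    # from the starting point, and the next iterate is a weighted
    # average of the two sequences (the "subtle averaging" above).
    x = x0.astype(float).copy()
    g_acc = np.zeros_like(x)
    for k in range(n_iter):
        g = grad(x)
        y = x - g / L
        g_acc += (k + 1) / 2.0 * g
        z = x0 - g_acc / L
        x = 2.0 / (k + 3) * z + (k + 1.0) / (k + 3) * y
    return x
```

Applying `nesterov_min` to a data-fidelity term plus the `mu`-smoothed ℓ1 penalty, while shrinking `mu` via continuation, is the general pattern such smoothing-based solvers follow.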
Trust region Newton method for large-scale logistic regression
 In Proceedings of the 24th International Conference on Machine Learning (ICML)
, 2007
Abstract

Cited by 69 (12 self)
Large-scale logistic regression arises in many applications such as document classification and natural language processing. In this paper, we apply a trust region Newton method to maximize the log-likelihood of the logistic regression model. The proposed method uses only approximate Newton steps in the beginning, but achieves fast convergence in the end. Experiments show that it is faster than the commonly used quasi-Newton approach for logistic regression. We also compare it with existing linear SVM implementations.
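The key computational trick in this family of methods is solving the Newton system with conjugate gradient using only Hessian-vector products, so the Hessian is never formed. A minimal sketch follows; the trust-region safeguard on the step is omitted for brevity, and all names are ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logreg_grad(w, X, y, C=1.0):
    # Gradient of (1/2)||w||^2 + C * sum_i log(1 + exp(-y_i x_i^T w)),
    # with labels y_i in {-1, +1}.
    z = y * (X @ w)
    return w - C * (X.T @ (y * sigmoid(-z)))

def hessian_vec(w, X, v, y, C=1.0):
    # Hessian-vector product without forming the Hessian:
    # H v = v + C * X^T D X v, where D = diag(sigma(z) * (1 - sigma(z))).
    s = sigmoid(y * (X @ w))
    return v + C * (X.T @ ((s * (1.0 - s)) * (X @ v)))

def cg(hv, g, n_iter=50, tol=1e-10):
    # Conjugate gradient for H p = -g using only Hessian-vector products.
    p = np.zeros_like(g)
    r = -g
    d = r.copy()
    for _ in range(n_iter):
        Hd = hv(d)
        alpha = (r @ r) / (d @ Hd)
        p = p + alpha * d
        r_new = r - alpha * Hd
        if np.linalg.norm(r_new) < tol:
            break
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p

def newton_logreg(X, y, C=1.0, n_iter=20):
    # Truncated-Newton iteration: early CG terminations give the
    # "approximate Newton steps in the beginning" the abstract mentions.
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = logreg_grad(w, X, y, C)
        if np.linalg.norm(g) < 1e-8:
            break
        w = w + cg(lambda v: hessian_vec(w, X, v, y, C), g)
    return w
```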
An Empirical Study of Context in Object Detection
Abstract

Cited by 56 (4 self)
This paper presents an empirical evaluation of the role of context in a contemporary, challenging object detection task – the PASCAL VOC 2008. Previous experiments with context have mostly been done on homegrown datasets, often with non-standard baselines, making it difficult to isolate the contribution of contextual information. In this work, we present our analysis on a standard dataset, using top-performing local appearance detectors as the baseline. We evaluate several different sources of context and ways to utilize it. While we employ many contextual cues that have been used before, we also propose a few novel ones including the use of geographic context and a new approach for using object spatial support.
Regularization and feature selection in least-squares temporal difference learning
, 2009
Abstract

Cited by 50 (1 self)
We consider the task of reinforcement learning with linear value function approximation. Temporal difference algorithms, and in particular the Least-Squares Temporal Difference (LSTD) algorithm, provide a method for learning the parameters of the value function, but when the number of features is large this algorithm can overfit to the data and is computationally expensive. In this paper, we propose a regularization framework for the LSTD algorithm that overcomes these difficulties. In particular, we focus on the case of ℓ1 regularization, which is robust to irrelevant features and also serves as a method for feature selection. Although the ℓ1-regularized LSTD solution cannot be expressed as a convex optimization problem, we present an algorithm similar to the Least Angle Regression (LARS) algorithm that can efficiently compute the optimal solution. Finally, we demonstrate the performance of the algorithm experimentally.
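For reference, plain LSTD without the paper's ℓ1 path reduces to a single linear solve; the sketch below shows that baseline (the `reg` ridge term is our illustrative addition, and the paper's ℓ1 case requires a LARS-style homotopy solver instead):

```python
import numpy as np

def lstd(phi, phi_next, rewards, gamma, reg=0.0):
    # Least-Squares Temporal Difference with linear features:
    # solve A w = b, where A = Phi^T (Phi - gamma * Phi'), b = Phi^T r.
    # `reg` adds an l2 ridge term purely for illustration.
    A = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ rewards
    return np.linalg.solve(A + reg * np.eye(A.shape[1]), b)
```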
Sparse Channel Estimation for Multicarrier Underwater Acoustic Communication: From Subspace Methods to Compressed Sensing
Abstract

Cited by 43 (27 self)
In this paper, we present various channel estimators that exploit the channel sparsity in a multicarrier underwater acoustic system, including subspace algorithms from the array processing literature, namely root-MUSIC and ESPRIT, and recent compressed sensing algorithms in the form of Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP). Numerical simulation and experimental data of an OFDM block-by-block receiver are used to evaluate the proposed algorithms in comparison to the conventional least-squares (LS) channel estimator. We observe that subspace methods can tolerate small to moderate Doppler effects, and outperform the LS approach when the channel is indeed sparse. On the other hand, compressed sensing algorithms uniformly outperform the LS and subspace methods. Coupled with a channel equalizer mitigating intercarrier interference, the compressed sensing algorithms can handle channels with significant Doppler spread.
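Of the algorithms compared, OMP is the simplest to state: greedily pick the dictionary column most correlated with the residual, then re-fit on the selected support by least squares. A generic sketch (not specialized to the channel-estimation dictionary used in the paper):

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: at each step, add the column of A most
    # correlated with the current residual, then solve a least-squares
    # problem restricted to the selected support.
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

In the channel-estimation setting, the columns of `A` would be delayed (and Doppler-shifted) versions of the pilot waveform, and `x` the sparse tap vector.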
Highly undersampled magnetic resonance image reconstruction via homotopic ℓ0-minimization
 IEEE Trans. Med. Imaging
, 2009
Abstract

Cited by 34 (1 self)
Any reduction in scan time offers a number of potential benefits ranging from high-temporal-rate observation of physiological processes to improvements in patient comfort. Following recent developments in Compressive Sensing (CS) theory, several authors have demonstrated that certain classes of MR images which possess sparse representations in some transform domain can be accurately reconstructed from very highly undersampled k-space data by solving a convex ℓ1-minimization problem. Although ℓ1-based techniques are extremely powerful, they inherently require a degree of oversampling above the theoretical minimum sampling rate to guarantee that exact reconstruction can be achieved. In this paper, we propose a generalization of the Compressive Sensing paradigm based on homotopic approximation of the ℓ0 quasi-norm and show how MR image reconstruction can be pushed even further below the Nyquist limit and significantly closer to the theoretical bound. Following a brief review of standard Compressive Sensing methods and the developed theoretical extensions, several example MRI reconstructions from highly undersampled k-space data are presented.
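The homotopic idea is to replace the discontinuous ℓ0 quasi-norm with a smooth surrogate that deforms into it as a parameter shrinks. One commonly used surrogate of this kind (shown here as an illustration; the paper considers a family of such functions) is:

```python
import numpy as np

def homotopic_l0(x, sigma):
    # Smooth surrogate for the l0 quasi-norm: each term |x_i|/(|x_i| + sigma)
    # tends to 1 for x_i != 0 and stays 0 for x_i = 0 as sigma -> 0,
    # so the sum approaches the number of nonzeros.
    ax = np.abs(x)
    return float(np.sum(ax / (ax + sigma)))
```

A continuation scheme minimizes the data-consistency objective for a decreasing sequence of `sigma` values, warm-starting each solve from the last, so the problem gradually tightens toward ℓ0.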
Recovering time-varying networks of dependencies in social and biological studies
 Proc. Nat. Acad. Sci
, 2009
Abstract

Cited by 33 (8 self)
A plausible representation of the relational information among entities in dynamic systems such as a living cell or a social community is a stochastic network that is topologically rewiring and semantically evolving over time. While there is a rich literature in modeling static or temporally invariant networks, little has been done toward recovering the network structure when the networks are not observable in a dynamic context. In this paper, we present a new machine learning method called TESLA, which builds on a temporally smoothed ℓ1-regularized logistic regression formalism that can be cast as a standard convex optimization problem and solved efficiently using generic solvers scalable to large networks. We report promising results on recovering simulated time-varying networks, and on reverse engineering the latent sequence of temporally rewiring political and academic social networks from longitudinal data, and the evolving gene networks over more than 4000 genes during the life cycle of Drosophila melanogaster from a microarray time course at a resolution limited only by sample frequency.
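The "temporally smoothed ℓ1-regularized" objective has the general shape of a per-time-step loss plus two ℓ1 terms, one for sparsity within each snapshot and one penalizing change between consecutive snapshots. A sketch of that objective (an illustration of the structure, not the TESLA solver; names and the exact weighting are ours):

```python
import numpy as np

def temporally_smoothed_objective(thetas, losses, lam1, lam2):
    # thetas: one parameter vector per time step.
    # losses: one smooth loss callable per time step (e.g. a logistic
    #         loss over that snapshot's observations).
    data = sum(loss(th) for loss, th in zip(losses, thetas))
    sparsity = lam1 * sum(np.abs(th).sum() for th in thetas)
    smoothness = lam2 * sum(np.abs(thetas[t] - thetas[t - 1]).sum()
                            for t in range(1, len(thetas)))
    return data + sparsity + smoothness
```

Because both penalties are convex and the per-step losses are convex, the whole objective is convex, which is why generic solvers apply.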
Fingerprinting the datacenter: Automated classification of performance crises
 In Proceedings of EuroSys’10
, 2010
Abstract

Cited by 30 (2 self)
Contemporary datacenters comprise hundreds or thousands of machines running applications requiring high availability and responsiveness. Although a performance crisis is easily detected by monitoring key end-to-end performance indicators (KPIs) such as response latency or request throughput, the variety of conditions that can lead to KPI degradation makes it difficult to select appropriate recovery actions. We propose and evaluate a methodology for automatic classification and identification of crises, and in particular for detecting whether a given crisis has been seen before, so that a known solution may be immediately applied. Our approach is based on a new and efficient representation of the datacenter’s state called a fingerprint, constructed by statistical selection and summarization of the hundreds of performance metrics typically collected on such systems. Our evaluation uses 4 months of trouble-ticket data from a production datacenter with hundreds of machines running a 24x7 enterprise-class user-facing application. In experiments in a realistic and rigorous operational setting, our approach provides operators the information necessary to initiate recovery actions with 80% correctness in an average of 10 minutes, which is 50 minutes earlier than the deadline provided to us by the operators. To the best of our knowledge this is ...
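One plausible form of the "summarization" step is to quantize each selected metric against its own historical range, so each epoch collapses to a small discrete vector that can be compared across crises. This sketch is our assumption for illustration, not the paper's exact recipe:

```python
import numpy as np

def fingerprint(metrics, hist_low, hist_high):
    # Flag each metric -1 / 0 / +1 according to whether its current value
    # falls below, inside, or above its historical quantile band.
    # (Hypothetical summarization; the paper's construction may differ.)
    return np.where(metrics < hist_low, -1,
                    np.where(metrics > hist_high, 1, 0))
```

Fingerprints of a new crisis can then be matched against a catalog of labeled past fingerprints, e.g. by nearest-neighbor distance, to decide whether the crisis has been seen before.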