Results 1-10 of 311
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM
"... We describe and analyze a simple and effective stochastic subgradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a singl ..."
Abstract

Cited by 279 (15 self)
 Add to MetaCart
We describe and analyze a simple and effective stochastic subgradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ɛ²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total runtime of our method is Õ(d/(λɛ)), where d is a bound on the number of nonzero features in each example. Since the runtime does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach also extends to nonlinear kernels while working solely on the primal objective function, though in this case the runtime does depend linearly on the training set size. Our algorithm is particularly well suited for large text classification problems, where we demonstrate an order-of-magnitude speedup over previous SVM learning methods.
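The Pegasos step described above is short enough to sketch directly: sample one example, use step size η_t = 1/(λt), and take a sub-gradient step on the regularized hinge loss. This is a minimal reading of that update on hypothetical toy data, not the authors' reference implementation:

```python
import numpy as np

def pegasos(X, y, lam=0.1, epochs=20, seed=0):
    """Minimal Pegasos sketch: stochastic sub-gradient descent on the
    SVM primal lam/2 * ||w||^2 + hinge loss. X is (n, d), y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # the step-size schedule from the paper
            if y[i] * X[i].dot(w) < 1:
                # Margin violated: shrink w and step along y_i * x_i.
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Hypothetical toy data, separable by the sign of the first coordinate.
X = np.array([[2.0, 1.0], [1.5, -0.5], [-2.0, 0.3], [-1.0, -1.2]])
y = np.array([1, 1, -1, -1])
w = pegasos(X, y)
print(w)  # the first coordinate should dominate and be positive
```

Each iteration touches a single example, which is exactly why the runtime bound above does not depend on the training-set size.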
Opportunistic transmission scheduling with resource-sharing constraints in wireless networks
IEEE Journal on Selected Areas in Communications, 2001
"... We present an “opportunistic ” transmission scheduling policy that exploits timevarying channel conditions and maximizes the system performance stochastically under a certain resource allocation constraint. We establish the optimality of the scheduling scheme, and also that every user experiences ..."
Abstract

Cited by 157 (8 self)
 Add to MetaCart
We present an “opportunistic” transmission scheduling policy that exploits time-varying channel conditions and maximizes the system performance stochastically under a certain resource allocation constraint. We establish the optimality of the scheduling scheme, and also that every user experiences a performance improvement over any non-opportunistic scheduling policy when users have independent performance values. We demonstrate via simulation results that the scheme is robust to estimation errors, and also works well for nonstationary scenarios, resulting in performance improvements of 20–150% compared with a scheduling scheme that does not take into account channel conditions. Last, we discuss an extension of our opportunistic scheduling scheme to improve “short-term” performance.
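A toy simulation illustrates why exploiting time-varying channels pays off. This sketch assumes i.i.d. exponential fading and serves the instantaneously best user; it ignores the paper's resource-allocation constraints and is only meant to show the size of the multiuser-diversity gain:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_slots = 4, 10_000

# Assumed i.i.d. exponential fading rates, independent across users and slots.
rates = rng.exponential(scale=1.0, size=(n_slots, n_users))

# Channel-blind baseline: round-robin serves users in a fixed order.
rr = rates[np.arange(n_slots), np.arange(n_slots) % n_users].sum()

# Opportunistic policy: always serve the user with the best current rate.
# With i.i.d. identical users this still gives each user ~1/4 of the slots,
# echoing the abstract's claim that every user individually gains.
opp = rates.max(axis=1).sum()

gain = opp / rr
print(gain)  # near E[max of 4 exponentials] = 1 + 1/2 + 1/3 + 1/4 ≈ 2.08
```

The roughly 2x gain here is in the same spirit as the 20–150% improvements the abstract reports under realistic constraints.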
A framework for opportunistic scheduling in wireless networks
Computer Networks, 2003
"... We present a method, called opportunistic scheduling, for exploiting the timevarying nature of the radio environment to increase the overall performance of the system under certain quality of service/fairness requirements of users. We first introduce a general framework for opportunistic scheduling ..."
Abstract

Cited by 126 (6 self)
 Add to MetaCart
We present a method, called opportunistic scheduling, for exploiting the time-varying nature of the radio environment to increase the overall performance of the system under certain quality of service/fairness requirements of users. We first introduce a general framework for opportunistic scheduling, and then identify three general categories of scheduling problems under this framework. We provide optimal solutions for each of these scheduling problems. All the proposed scheduling policies are implementable online; we provide parameter estimation algorithms and implementation procedures for them. We also show how previous work by us and others directly fits into or is related to this framework. We demonstrate via simulation that opportunistic scheduling schemes result in significant performance improvement compared with non-opportunistic alternatives.
Markov Chain Monte Carlo Estimation of Exponential Random Graph Models
Journal of Social Structure, 2002
"... This paper is about estimating the parameters of the exponential random graph model, also known as the p # model, using frequentist Markov chain Monte Carlo (MCMC) methods. The exponential random graph model is simulated using Gibbs or MetropolisHastings sampling. The estimation procedures consider ..."
Abstract

Cited by 104 (15 self)
 Add to MetaCart
This paper is about estimating the parameters of the exponential random graph model, also known as the p* model, using frequentist Markov chain Monte Carlo (MCMC) methods. The exponential random graph model is simulated using Gibbs or Metropolis-Hastings sampling. The estimation procedures considered are based on the Robbins-Monro algorithm for approximating a solution to the likelihood equation.
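The Robbins-Monro idea can be shown on the simplest possible exponential "graph" model: independent edges with a single density parameter θ, edge probability sigmoid(θ). The recursion nudges θ toward the root of the likelihood equation E_θ[density] = observed density using only simulated graphs; the paper applies the same recursion to dependent ERGM statistics via Gibbs or Metropolis-Hastings sampling:

```python
import math
import random

random.seed(42)

def robbins_monro_edge_model(obs_density, n_edges=400, steps=3000):
    """Robbins-Monro sketch for a one-parameter independent-edge model.
    Each step simulates a graph from the current model, compares its
    density to the observed one, and nudges theta accordingly."""
    theta = 0.0
    for k in range(1, steps + 1):
        p = 1.0 / (1.0 + math.exp(-theta))  # current edge probability
        sim_density = sum(random.random() < p for _ in range(n_edges)) / n_edges
        # RM step sizes a_k = 5/k: sum a_k = inf, sum a_k^2 < inf.
        theta += (5.0 / k) * (obs_density - sim_density)
    return theta

theta_hat = robbins_monro_edge_model(obs_density=0.25)
print(theta_hat)  # should approach logit(0.25) = log(1/3) ≈ -1.10
```

In this independent-edge case the likelihood equation has the closed-form solution logit(observed density), which makes the convergence easy to check; for real ERGMs no closed form exists, which is why the simulated recursion is needed.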
Convergence of a stochastic approximation version of the EM algorithm
1997
"... The Expectation Maximization (EM) algorithm is a powerful computational technique for locating maxima of functions... ..."
Abstract

Cited by 84 (8 self)
 Add to MetaCart
The Expectation Maximization (EM) algorithm is a powerful computational technique for locating maxima of functions...
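The stochastic-approximation flavor of EM can be sketched on a two-component Gaussian mixture. In this toy version (unit variances and equal weights are assumptions of the sketch, not the paper), the E-step expectation is replaced by a single simulated draw of the latent labels, and the sufficient statistics are smoothed with decreasing weights:

```python
import math
import random

random.seed(0)

# Simulated data from a two-component mixture with true means -2 and 2.
data = [random.gauss(-2.0 if random.random() < 0.5 else 2.0, 1.0)
        for _ in range(500)]

def saem(data, steps=200):
    """Stochastic-approximation EM sketch for a 2-component mixture
    with known unit variances and equal weights."""
    mu = [-0.5, 0.5]              # crude initial means
    s = [[0.0, 0.0], [0.0, 0.0]]  # smoothed (count, sum) per component
    for k in range(1, steps + 1):
        counts, sums = [0.0, 0.0], [0.0, 0.0]
        for x in data:
            # S-step: simulate each latent label from its current posterior.
            w1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            w1 /= w1 + math.exp(-0.5 * (x - mu[0]) ** 2)
            z = 1 if random.random() < w1 else 0
            counts[z] += 1
            sums[z] += x
        gamma = 1.0 / k  # decreasing stochastic-approximation weights
        for j in range(2):
            # SA-step: smooth the sufficient statistics, then maximize.
            s[j][0] += gamma * (counts[j] - s[j][0])
            s[j][1] += gamma * (sums[j] - s[j][1])
            mu[j] = s[j][1] / s[j][0]
    return sorted(mu)

m0, m1 = saem(data)
print(m0, m1)  # close to the true means -2 and 2
```

Replacing the exact E-step by a single simulated draw is what makes the method usable when the conditional expectation is intractable; the decreasing weights γ_k average out the simulation noise.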
Hidden conditional random fields for phone classification
Interspeech, 2005
"... In this paper, we show the novel application of hidden conditional random fields (HCRFs) – conditional random fields with hidden state sequences – for modeling speech. Hidden state sequences are critical for modeling the nonstationarity of speech signals. We show that HCRFs can easily be trained u ..."
Abstract

Cited by 83 (6 self)
 Add to MetaCart
In this paper, we show the novel application of hidden conditional random fields (HCRFs) – conditional random fields with hidden state sequences – for modeling speech. Hidden state sequences are critical for modeling the nonstationarity of speech signals. We show that HCRFs can easily be trained using the simple direct optimization technique of stochastic gradient descent. We present results on the TIMIT phone classification task and show that HCRFs outperform comparable ML and CML/MMI trained HMMs. In fact, HCRF results on this task are the best single-classifier results known to us. We note that the HCRF framework is easily extensible to recognition since it is a state and label sequence modeling technique. We also note that HCRFs have the ability to handle complex features without any change in training procedure.
Convergence of proportionalfair sharing algorithms under general conditions
IEEE Trans. Wireless Commun., 2004
"... We are concerned with the allocation of the base station transmitter time in time varying mobile communications with many users who are transmitting data. Time is divided into small scheduling intervals, and the channel rates for the various users are available at the start of the intervals. Since t ..."
Abstract

Cited by 67 (2 self)
 Add to MetaCart
We are concerned with the allocation of the base station transmitter time in time-varying mobile communications with many users who are transmitting data. Time is divided into small scheduling intervals, and the channel rates for the various users are available at the start of the intervals. Since the rates vary randomly, in selecting the current user there is a conflict between full use (by selecting the user with the highest current rate) and fairness (which entails consideration for users with poor throughput to date). The Proportional Fair Scheduler (PFS) of the Qualcomm High Data Rate (HDR) system and related algorithms are designed to deal with such conflicts. The aim here is to put such algorithms on a sure mathematical footing and analyze their behavior. The available analysis [6], while obtaining interesting information, does not address the actual convergence for arbitrarily many users under general conditions. Such algorithms are of the stochastic approximation type and results of stochastic approximation are used to analyze the long term properties. It is shown that the limiting behavior of the sample paths of the throughputs converges to the solution of an intuitively reasonable ordinary differential equation, which is akin to a mean ...
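The PF recursion analyzed in the abstract is easy to simulate. This sketch (with assumed exponential fading, where user 2 has a 4x better average channel) serves the user maximizing r_i/T_i and updates the throughput averages T_i with a small fixed gain, which is precisely a stochastic-approximation iterate:

```python
import numpy as np

rng = np.random.default_rng(7)
n_users, n_slots, eps = 3, 20_000, 1e-3

# Assumed exponential fading; user 2 has a much better average channel.
means = np.array([1.0, 1.0, 4.0])
throughput = np.full(n_users, 1e-3)  # running throughput estimates T_i

for _ in range(n_slots):
    rates = rng.exponential(means)           # current per-user rates r_i
    i = int(np.argmax(rates / throughput))   # PF rule: serve argmax r_i / T_i
    served = np.zeros(n_users)
    served[i] = rates[i]
    # Stochastic-approximation update of the throughput averages -- the
    # kind of recursion whose ODE limit the paper analyzes.
    throughput += eps * (served - throughput)

print(throughput)  # user 2 ends well ahead, yet users 0 and 1 still get served
```

The ratio r_i/T_i trades off current rate against accumulated throughput: the strong user wins more raw rate, but weak users are still scheduled when their channel is at a relative peak, which is the fairness behavior the analysis formalizes.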
Adaptive Stochastic Approximation by the Simultaneous Perturbation Method
2000
"... Stochastic approximation (SA) has long been applied for problems of minimizing loss functions or root finding with noisy input information. As with all stochastic search algorithms, there are adjustable algorithm coefficients that must be specified, and that can have a profound effect on algorithm p ..."
Abstract

Cited by 65 (4 self)
 Add to MetaCart
Stochastic approximation (SA) has long been applied for problems of minimizing loss functions or root finding with noisy input information. As with all stochastic search algorithms, there are adjustable algorithm coefficients that must be specified, and that can have a profound effect on algorithm performance. It is known that choosing these coefficients according to an SA analog of the deterministic Newton-Raphson algorithm provides an optimal or near-optimal form of the algorithm. However, directly determining the required Hessian matrix (or Jacobian matrix for root finding) to achieve this algorithm form has often been difficult or impossible in practice. This paper presents a general adaptive SA algorithm that is based on a simple method for estimating the Hessian matrix, while concurrently estimating the primary parameters of interest. The approach applies in both the gradient-free optimization (Kiefer-Wolfowitz) and root-finding/stochastic gradient-based (Robbins-Monro) settings, and is based on the "simultaneous perturbation (SP)" idea introduced previously. The algorithm requires only a small number of loss function or gradient measurements per iteration, independent of the problem dimension, to adaptively estimate the Hessian and parameters of primary interest. Aside from introducing the adaptive SP approach, this paper presents practical implementation guidance, asymptotic theory, and a nontrivial numerical evaluation. Also included is a discussion and numerical analysis comparing the adaptive SP approach with the iterate-averaging approach to accelerated SA.
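For intuition on the underlying SP idea, here is basic (non-adaptive) SPSA on a hypothetical noisy quadratic: two loss measurements per iteration estimate the entire gradient, regardless of dimension. The paper's adaptive variant additionally estimates the Hessian, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(3)
TARGET = np.array([1.0, -2.0, 3.0])

def loss(theta):
    """Hypothetical noisy quadratic loss with minimum at TARGET."""
    return float(np.sum((theta - TARGET) ** 2) + rng.normal(scale=0.01))

theta = np.zeros(3)
for k in range(1, 2001):
    a_k = 0.1 / k ** 0.602   # commonly used SPSA gain-decay exponents
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=3)  # simultaneous Rademacher perturbation
    # Two measurements yield an estimate of every gradient component at once.
    g_hat = (loss(theta + c_k * delta) - loss(theta - c_k * delta)) / (2.0 * c_k * delta)
    theta -= a_k * g_hat

print(theta)  # approaches (1, -2, 3)
```

A finite-difference scheme would need 2d measurements per iteration for a d-dimensional problem; SPSA's two-measurement estimate is what makes the concurrent Hessian estimation in the paper affordable.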
The o.d.e. method for convergence of stochastic approximation and reinforcement learning
SIAM J. Control Optim., 2000
"... It is shown here that stability of the stochastic approximation algorithm is implied by the asymptotic stability of the origin for an associated ODE. This in turn implies convergence of the algorithm. Several specific classes of algorithms are considered as applications. It is found that the result ..."
Abstract

Cited by 60 (12 self)
 Add to MetaCart
It is shown here that stability of the stochastic approximation algorithm is implied by the asymptotic stability of the origin for an associated ODE. This in turn implies convergence of the algorithm. Several specific classes of algorithms are considered as applications. It is found that the results provide (i) a simpler derivation of known results for reinforcement learning algorithms; (ii) a proof for the first time that a class of asynchronous stochastic approximation algorithms converges without using any a priori assumption of stability; (iii) a proof for the first time that asynchronous adaptive critic and Q-learning algorithms are convergent for the average cost optimal control problem.
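The ODE viewpoint can be seen in one scalar recursion. This sketch assumes the mean field h(x) = -x, whose associated ODE dx/dt = -x has a globally asymptotically stable origin, so the theory predicts the noisy iterates converge to 0:

```python
import random

random.seed(0)

# SA recursion x_{k+1} = x_k + a_k * (h(x_k) + noise) with h(x) = -x
# and step sizes a_k = 1/k. The decreasing gains average out the
# zero-mean noise, so the iterates track the ODE trajectory.
x = 5.0
for k in range(1, 50_001):
    noise = random.uniform(-1.0, 1.0)
    x += (1.0 / k) * (-x + noise)

print(x)  # small: the iterates settle at the ODE's stable origin
```

The point of the paper is the converse direction at scale: stability of the ODE's origin implies stability, and hence convergence, of the stochastic iterates, including asynchronous cases such as Q-learning.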