Results 1–10 of 54
Optimal Content Placement for a Large-scale VoD System
In ACM CoNEXT, 2010
Abstract
Cited by 18 (2 self)
IPTV service providers offering Video-on-Demand currently use servers at each metropolitan office to store all the videos in their library. With the rapid increase in library sizes, it will soon become infeasible to replicate the entire library at each office. We present an approach for intelligent content placement that scales to large library sizes (e.g., 100Ks of videos). We formulate the problem as a mixed integer program (MIP) that takes into account constraints such as disk space, link bandwidth, and content popularity. To overcome the challenges of scale, we employ a Lagrangian-relaxation-based decomposition technique combined with integer rounding. Our technique finds a near-optimal solution (e.g., within 1–2%) with orders-of-magnitude speedup relative to solving even the LP relaxation via standard software. We also present simple strategies to address practical issues such as popularity estimation, content updates, short-term popularity fluctuation, and frequency of placement updates. Using traces from an operational system, we show that our approach significantly outperforms simpler placement strategies. For instance, our MIP-based solution can serve all requests using only half the link bandwidth used by LRU or LFU cache replacement policies. We also investigate the tradeoff between disk space and network bandwidth.
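The decomposition idea (dualize the resource constraint, let each video decide independently under the resulting price, then round to an integer placement) can be sketched on a toy single-office instance. All numbers, the subgradient schedule, and the density-based rounding rule below are illustrative assumptions, not the paper's actual formulation:

```python
# Toy sketch of the Lagrangian-relaxation + rounding idea for a single
# office: data, subgradient schedule, and rounding rule are all made up.

videos = [  # (size_GB, requests) -- illustrative data
    (4, 120), (2, 90), (6, 60), (1, 40), (3, 10),
]
DISK = 8  # disk capacity in GB

def relaxed_choice(lam):
    # With the disk constraint dualized at price lam, each video decides
    # independently: store it iff its request volume beats lam * size.
    return [req > lam * size for size, req in videos]

def subgradient_search(steps=200):
    # Adjust the disk price until the relaxed solution roughly fits.
    lam = 0.0
    for t in range(1, steps + 1):
        keep = relaxed_choice(lam)
        used = sum(s for (s, _), k in zip(videos, keep) if k)
        lam = max(0.0, lam + (used - DISK) / t)  # subgradient step
    return lam

def rounded_placement():
    # Integer rounding pass: greedily keep videos by requests per GB
    # until the disk fills (a simple stand-in for the paper's rounding).
    order = sorted(range(len(videos)), key=lambda i: -videos[i][1] / videos[i][0])
    placed, used = [], 0
    for i in order:
        if used + videos[i][0] <= DISK:
            placed.append(i)
            used += videos[i][0]
    return sorted(placed)

print(rounded_placement())  # -> [0, 1, 3]
```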
Approximate level method
Abstract
Cited by 12 (0 self)
In this paper we propose and analyze a variant of the level method [4], which is an algorithm for minimizing nonsmooth convex functions. The main work per iteration is spent on 1) minimizing a piecewise-linear model of the objective function and on 2) projecting onto the intersection of the feasible region and a polyhedron arising as a level set of the model. We show that by replacing exact computations in both cases by approximate computations, in relative scale, the theoretical iteration complexity increases only by a factor of four. This means that while spending less work on the subproblems, we are able to retain the good theoretical properties of the level method.
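A minimal one-dimensional sketch of the level-set projection idea, with the two subproblems (model minimization and projection) deliberately approximated by a grid search in the spirit of the paper's inexact computations; the objective f(x) = |x|, the grid, and the level parameter alpha = 0.5 are illustrative assumptions:

```python
# 1-D sketch of the level method on f(x) = |x| over [-1, 1]. Cutting planes
# f(x_i) + g_i*(x - x_i) build a piecewise-linear model; each iteration
# projects onto the level set {x : model(x) <= level}. Grid search stands in
# for the exact LP/QP subproblems (an approximate computation, as in [19]).

def f(x):
    return abs(x)

def subgrad(x):
    return 1.0 if x >= 0 else -1.0

GRID = [i / 1000 - 1 for i in range(2001)]  # feasible region [-1, 1]

def model_min(cuts):
    # minimize the max of the linear cuts over the grid (approximate LP)
    val, arg = min((max(fx + g * (x - xi) for xi, fx, g in cuts), x)
                   for x in GRID)
    return arg, val

def level_method(x0=0.9, alpha=0.5, iters=30):
    x, cuts, best = x0, [], f(x0)
    for _ in range(iters):
        cuts.append((x, f(x), subgrad(x)))
        _, lo = model_min(cuts)           # lower bound from the model
        best = min(best, f(x))            # best objective value seen
        level = lo + alpha * (best - lo)  # target level between bounds
        # approximate projection: nearest grid point inside the level set
        feas = [z for z in GRID
                if max(fx + g * (z - xi) for xi, fx, g in cuts) <= level + 1e-9]
        x = min(feas, key=lambda z: abs(z - x))
    return best

print(level_method())
```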
Approximation algorithms for semidefinite packing problems with applications to MAXCUT and graph coloring, 2004
Abstract
Cited by 12 (2 self)
We describe the semidefinite analog of the vector packing problem, and show that the semidefinite programming relaxations for Maxcut [10] and graph coloring [16] are in this class of problems. We extend a method of Bienstock and Iyengar [4], which was based on ideas from Nesterov [24], to design an algorithm for computing ε-approximate solutions for this class of semidefinite programs. Our algorithm is in the spirit of Klein and Lu [17], and decreases the dependence of the runtime on ε from 1/ε² to 1/ε. For sparse graphs, our method is faster than the best specialized interior point methods. A significant feature of our method is that it treats both the Maxcut and the graph coloring problem in a unified manner.
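As background for the Maxcut application: once an (approximate) SDP solution is in hand, the classic random-hyperplane rounding of Goemans and Williamson [10] assigns each vertex to a side of a random hyperplane. The unit vectors below are illustrative placeholders, not output of the algorithm in the paper:

```python
import random

# Sketch of Goemans-Williamson hyperplane rounding applied to (made-up)
# unit vectors that an SDP solver would produce for a Max-Cut instance.

def hyperplane_round(vectors, seed=0):
    rng = random.Random(seed)
    r = [rng.gauss(0, 1) for _ in range(len(vectors[0]))]  # random normal
    # a vertex goes to side 1 iff its vector has nonnegative dot product with r
    return [1 if sum(vi * ri for vi, ri in zip(v, r)) >= 0 else 0
            for v in vectors]

# two nearly antipodal vectors (an edge the SDP "wants" cut) plus a neutral one
vecs = [(1.0, 0.0), (-1.0, 0.05), (0.0, 1.0)]
cut = sum(hyperplane_round(vecs, seed=s)[0] != hyperplane_round(vecs, seed=s)[1]
          for s in range(200))
print(cut / 200)  # nearly antipodal vectors are separated with high probability
```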
Beating simplex for fractional packing and covering linear programs. In FOCS, 2007
Abstract
Cited by 11 (2 self)
We give an approximation algorithm for packing and covering linear programs (linear programs with nonnegative coefficients). Given a constraint matrix with n nonzeros, r rows, and c columns, the algorithm (with high probability) computes feasible primal and dual solutions whose costs are within a factor of 1 + ε of OPT (the optimal cost) in time O(n + (r + c)log(n)/ε²). For dense problems (with r, c = O(√n)) the time is O(n + √n log(n)/ε²), linear even as ε → 0. In comparison, previous Lagrangian-relaxation algorithms generally take at least Ω(n log(n)/ε²) time, while (for small ε) the Simplex algorithm typically takes at least Ω(n min(r, c)) time.
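The flavor of the Lagrangian-relaxation methods this paper improves upon can be sketched on a tiny packing LP via multiplicative weights; the instance, step rule, and final scaling are illustrative assumptions, and this is not the paper's algorithm:

```python
import math

# Illustrative multiplicative-weights sketch for a fractional packing LP
# max c.x  s.t.  A x <= b, x >= 0 -- a classic Lagrangian-relaxation-style
# method, NOT the algorithm of this paper. Instance and step rule are made up.

def packing_mwu(A, b, c, eps=0.05, iters=4000):
    m, n = len(A), len(c)
    x = [0.0] * n
    w = [1.0] * m  # one weight per packing constraint

    for _ in range(iters):
        # weighted "cost" of one unit of column j
        def cost(j):
            return sum(w[i] * A[i][j] / b[i] for i in range(m))
        # pick the most profitable column per unit of weighted resource use
        j = max(range(n), key=lambda j: c[j] / cost(j))
        # small step sized by the tightest constraint for this column
        step = eps * min(b[i] / A[i][j] for i in range(m) if A[i][j] > 0)
        x[j] += step
        for i in range(m):
            w[i] *= math.exp(eps * step * A[i][j] / b[i])

    # scale down so every packing constraint holds
    overload = max(sum(A[i][j] * x[j] for j in range(n)) / b[i]
                   for i in range(m))
    return [v / overload for v in x]

sol = packing_mwu([[1, 2], [3, 1]], [4, 6], [1, 1])
print(round(sum(sol), 2))  # should land near the LP optimum 2.8
```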
Consistent ranking of multivariate volatility models
Abstract
Cited by 9 (0 self)
A large number of parameterizations have been proposed to model conditional variance dynamics in a multivariate framework. This paper examines the ranking of multivariate volatility models in terms of their ability to forecast out-of-sample conditional variance matrices. We investigate how sensitive the ranking is to alternative statistical loss functions which evaluate the distance between the true covariance matrix and its forecast. The evaluation of multivariate volatility models requires the use of a proxy for the unobservable volatility matrix, which may shift the ranking of the models. Therefore, conditions on the choice of the loss function under which this ranking is preserved have to be established. To do this, we extend the conditions defined in Hansen and Lunde (2006) to the multivariate framework. By invoking norm equivalence we are able to extend the class of loss functions that preserve the true ranking. In a simulation study, we sample data from a continuous-time multivariate diffusion process to illustrate the sensitivity of the ranking to different choices of the loss functions and to the quality of the proxy. An application to three foreign exchange rates, where we compare the forecasting performance of 16 multivariate GARCH
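A minimal sketch of the ranking-preservation question: compare two hypothetical covariance forecasts under the Frobenius loss against both the (unobservable) true matrix and a noisy proxy. All matrices here are made-up illustrations, not data from the paper:

```python
# Made-up 2x2 covariance matrices illustrating ranking preservation under
# the Frobenius loss when the true matrix is replaced by a noisy proxy.

def frobenius_loss(target, forecast):
    # squared Frobenius distance between two matrices (lists of rows)
    return sum((t - f) ** 2 for tr, fr in zip(target, forecast)
                            for t, f in zip(tr, fr))

true_cov    = [[1.0, 0.30], [0.30, 1.0]]   # unobservable truth
noisy_proxy = [[1.1, 0.35], [0.35, 0.9]]   # e.g., a realized-covariance proxy
model_a     = [[1.0, 0.25], [0.25, 1.0]]   # forecast close to the truth
model_b     = [[1.5, 0.00], [0.00, 1.5]]   # badly misspecified forecast

rank_true  = frobenius_loss(true_cov, model_a)    < frobenius_loss(true_cov, model_b)
rank_proxy = frobenius_loss(noisy_proxy, model_a) < frobenius_loss(noisy_proxy, model_b)
print(rank_true, rank_proxy)  # -> True True (the proxy preserves the ranking here)
```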
Probabilistic Analysis of Linear Programming Decoding
Abstract
Cited by 9 (3 self)
Abstract—We initiate the probabilistic analysis of linear programming (LP) decoding of low-density parity-check (LDPC) codes. Specifically, we show that for a random LDPC code ensemble, the linear programming decoder of Feldman et al. succeeds in correcting a constant fraction of errors with high probability. The fraction of correctable errors guaranteed by our analysis surpasses previous non-asymptotic results for LDPC codes, and in particular, exceeds the best previous finite-length result on LP decoding by a factor greater than ten. This improvement stems in part from our analysis of probabilistic bit-flipping channels, as opposed to adversarial channels. At the core of our analysis is a novel combinatorial characterization of LP decoding success, based on the notion of a flow on the Tanner graph of the code. An interesting byproduct of our analysis is to establish the existence of “probabilistic expansion” in random bipartite graphs, in which one requires only that almost every (as opposed to every) set of a certain size expands, for sets much larger than in the classical worst-case setting. Index Terms—Binary-symmetric channel (BSC), channel coding, error-control coding, expanders, factor graphs, linear programming decoding, low-density parity-check (LDPC) codes, randomized algorithms, sum–product algorithm.
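To make the Tanner-graph/BSC setup concrete, here is a toy decoder on the (7,4) Hamming code over a binary-symmetric channel; note this sketch uses simple bit-flipping, not the LP decoder of Feldman et al. analyzed in the paper:

```python
# Toy bit-flipping decoder on the (7,4) Hamming code over a binary-symmetric
# channel. This is NOT the LP decoder of Feldman et al. -- just a concrete
# parity-check/BSC setup matching the abstract's setting.

H = [  # parity-check matrix: rows = checks, columns = code bits
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(word):
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_decode(word, max_rounds=10):
    word = list(word)
    for _ in range(max_rounds):
        s = syndrome(word)
        if not any(s):
            return word  # all checks satisfied
        # flip the bit that participates in the most unsatisfied checks
        votes = [sum(H[i][j] for i in range(len(H)) if s[i])
                 for j in range(len(word))]
        word[max(range(len(word)), key=lambda j: votes[j])] ^= 1
    return word

received = [0, 0, 1, 0, 0, 0, 0]  # all-zero codeword with one BSC bit flip
print(bit_flip_decode(received))  # -> [0, 0, 0, 0, 0, 0, 0]
```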
Approximation algorithms for mixed fractional packing and covering problems
SIAM J. on Optimization, 2004
Abstract
Cited by 8 (3 self)
We propose an approximation algorithm based on the Lagrangian or price-directive decomposition method to compute an ε-approximate solution of the mixed fractional packing and covering problem: find x ∈ B such that f(x) ≤ (1 + ε)a and g(x) ≥ (1 − ε)b, where f(x) and g(x) are vectors of M nonnegative convex and concave functions, respectively, a and b are M-dimensional nonnegative vectors, and B is a convex set that can be queried by an optimization or feasibility oracle. We propose an algorithm that needs only O(M ε⁻² ln(M ε⁻¹)) iterations or calls to the oracle. The main contribution is that the algorithm solves the general mixed fractional packing and covering problem (in contrast to pure fractional packing and covering problems and to the special mixed packing and covering problem with B = ℝ^N_+) and runs in time independent of the so-called width of the problem.
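The ε-relaxed feasibility test in the problem statement (assuming the reconstruction f(x) ≤ (1 + ε)a and g(x) ≥ (1 − ε)b) can be sketched directly; the function values and tolerances below are illustrative:

```python
# Sketch of the eps-relaxed feasibility test from the (reconstructed)
# problem statement: accept x when f(x) <= (1+eps)*a and g(x) >= (1-eps)*b.
# All function values and tolerances are illustrative.

def eps_feasible(f_vals, g_vals, a, b, eps):
    packing_ok  = all(fv <= (1 + eps) * ai for fv, ai in zip(f_vals, a))
    covering_ok = all(gv >= (1 - eps) * bi for gv, bi in zip(g_vals, b))
    return packing_ok and covering_ok

# f(x) = (1.02, 0.9) vs a = (1, 1): a 2% packing violation is acceptable
# at eps = 0.05 but not at eps = 0.01; g(x) = (0.97,) vs b = (1,) likewise.
print(eps_feasible([1.02, 0.9], [0.97], [1, 1], [1], 0.05))  # -> True
print(eps_feasible([1.02, 0.9], [0.97], [1, 1], [1], 0.01))  # -> False
```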
Environmental negotiations as dynamic games: Why so selfish?
Abstract
Cited by 8 (0 self)
We study a trade-off between economic and environmental indicators using a two-stage optimal control setting where the player can switch to a cleaner technology that is environmentally “efficient” but economically less productive. We provide an analytical characterization of the solution paths for the case where the considered utility functions are increasing and strictly concave with respect to consumption and decrease linearly with respect to the pollution stock. In this context, an isolated player will either immediately start using the environmentally efficient technology, or forever continue applying the old and “dirty” technology. In a two-player (say, two neighboring countries) dynamic game where the pollution results from the sum of the two consumptions, we prove existence of a Nash (open-loop) equilibrium in which each player chooses the technology selfishly, i.e., without considering the choice made by the other player. A Stackelberg game solution displays the same properties. Under cooperation, the country reluctant to adopt the technology as an equilibrium solution chooses to switch to the cleaner technology provided it benefits from some
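The dichotomy described (switch immediately or never) can be illustrated with a toy discounted simulation in which per-period utility is strictly concave in consumption and linear in the pollution stock; all parameter values are invented for illustration, not taken from the paper's model:

```python
import math

# Toy simulation of the two-technology trade-off: per-period utility is
# sqrt(consumption) - phi * pollution_stock (strictly concave and increasing
# in consumption, linear in pollution). All parameter values are invented.

def total_utility(switch_to_clean, phi, T=50, beta=0.95):
    dirty, clean = (1.0, 0.1), (0.8, 0.02)  # (consumption, emission) pairs
    pollution, total = 0.0, 0.0
    for t in range(T):
        c, e = clean if switch_to_clean else dirty
        total += beta ** t * (math.sqrt(c) - phi * pollution)
        pollution += e  # pollution stock accumulates, no decay
    return total

# Heavy pollution weight: switching immediately is better; light weight:
# keeping the "dirty" technology forever wins -- the abstract's dichotomy.
print(total_utility(True, 0.2) > total_utility(False, 0.2))    # -> True
print(total_utility(True, 0.01) > total_utility(False, 0.01))  # -> False
```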