Results 1–10 of 62
The Structure and Complexity of Nash Equilibria for a Selfish Routing Game
, 2002
Cited by 120 (28 self)
Abstract:
In this work, we study the combinatorial structure and the computational complexity of Nash equilibria for a certain game that models selfish routing over a network consisting of m parallel links. We assume a collection of n users, each employing a mixed strategy, which is a probability distribution over links, to control the routing of its own assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic on those links that minimize its expected latency cost, given the network congestion caused by the other users. The social cost of a Nash equilibrium is the expectation, over all random choices of the users, of the maximum, over all links, of the latency through a link.
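The social cost defined above (expected maximum link load under the users' mixed strategies) can be estimated by direct simulation. The sketch below assumes identical-speed links so that a link's latency is just its total traffic; link speeds and the exact latency model are assumptions not fixed by the abstract.

```python
import random

def social_cost(probs, weights, trials=20000, seed=0):
    """Monte Carlo estimate of the social cost of a mixed-strategy profile.

    probs[i][j] - probability that user i routes its traffic on link j
    weights[i]  - traffic of user i
    Assumes identical links, so social cost = E[max_j total traffic on link j].
    """
    rng = random.Random(seed)
    m = len(probs[0])
    total = 0.0
    for _ in range(trials):
        load = [0.0] * m
        for p, w in zip(probs, weights):
            j = rng.choices(range(m), weights=p)[0]  # sample user i's link
            load[j] += w
        total += max(load)
    return total / trials
```

For two unit-weight users fully mixing over two links, the maximum load is 2 with probability 1/2 (collision) and 1 otherwise, so the social cost is 1.5.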
Hedging uncertainty: Approximation algorithms for stochastic optimization problems
 In Proceedings of the 10th International Conference on Integer Programming and Combinatorial Optimization
, 2004
Cited by 82 (12 self)
Abstract:
We initiate the design of approximation algorithms for stochastic combinatorial optimization problems; we formulate the problems in the framework of two-stage stochastic optimization, and provide nearly tight approximation algorithms. Our problems range from the simple (shortest path, vertex cover, bin packing) to complex (facility location, set cover), and contain representatives with different approximation ratios. The approximation ratio of the stochastic variant of a typical problem is of the same order of magnitude as its deterministic counterpart. Furthermore, common techniques for designing approximation algorithms, such as LP rounding, the primal-dual method, and the greedy algorithm, can be carefully adapted to obtain these results.
Stochastic Load Balancing and Related Problems
 In FOCS
, 1999
Cited by 51 (4 self)
Abstract:
We study the problems of makespan minimization (load balancing), knapsack, and bin packing when the jobs have stochastic processing requirements or sizes. If the jobs are all Poisson, we present a 2-approximation for the first problem using Graham's rule, and observe that polynomial-time approximation schemes can be obtained for the last two problems. If the jobs are all exponential, we present polynomial-time approximation schemes for all three problems. We also obtain quasi-polynomial-time approximation schemes for the last two problems if the jobs are Bernoulli variables.

In traditional scheduling problems, each job has a known deterministic size and duration. There are cases, however, where the exact size of a job is not known at the time when a scheduling decision needs to be made; all that is known is a probability distribution on the size of the job. Given a schedule, the value of the objective function itself becomes a random variable. The goal then is to find...
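Graham's rule, mentioned in the abstract, is ordinary list scheduling: each job is placed on the currently least-loaded machine. A minimal sketch applying it to expected job sizes (the paper's analysis of the Poisson case is what yields the 2-approximation; this only illustrates the rule itself):

```python
import heapq

def graham_expected(job_means, m):
    """List scheduling (Graham's rule) on expected job sizes:
    assign each job, in the given order, to the machine whose
    current expected load is smallest. Returns the final loads."""
    heap = [(0.0, i) for i in range(m)]  # (expected load, machine index)
    heapq.heapify(heap)
    loads = [0.0] * m
    for mu in job_means:
        load, i = heapq.heappop(heap)   # least-loaded machine
        loads[i] = load + mu
        heapq.heappush(heap, (loads[i], i))
    return loads
```

For example, jobs with means [3, 2, 2, 1] on 2 machines end up balanced at loads [4, 4].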
Approximation algorithms for budgeted learning problems
 In Proc. ACM Symp. on Theory of Computing
, 2007
Cited by 31 (8 self)
Abstract:
We present the first approximation algorithms for a large class of budgeted learning problems. One classic example of the above is the budgeted multi-armed bandit problem. In this problem each arm of the bandit has an unknown reward distribution on which a prior is specified as input. The knowledge about the underlying distribution can be refined in the exploration phase by playing the arm and observing the rewards. However, there is a budget on the total number of plays allowed during exploration. After this exploration phase, the arm with the highest (posterior) expected reward is chosen for exploitation. The goal is to design the adaptive exploration phase subject to a budget constraint on the number of plays, in order to maximize the expected reward of the arm chosen for exploitation. While this problem is reasonably well understood in the infinite-horizon (regret-bound) setting, the budgeted version of the problem is NP-hard. For this problem, and several generalizations, we provide approximate policies that achieve a reward within a constant factor of that of the optimal policy. Our algorithms use a novel linear-program rounding technique based on stochastic packing.
Stochastic shortest paths via quasiconvex maximization
 In Proceedings of the European Symposium on Algorithms
, 2006
Cited by 30 (8 self)
Abstract:
We consider the problem of finding shortest paths in a graph with independent randomly distributed edge lengths. Our goal is to maximize the probability that the path length does not exceed a given threshold value (deadline). We give a surprising exact n^Θ(log n)-time algorithm for the case of normally distributed edge lengths, which is based on quasiconvex maximization. We then prove average and smoothed polynomial bounds for this algorithm, which also translate to average and smoothed bounds for the parametric shortest path problem, and extend to a more general nonconvex optimization setting. We also consider a number of other edge-length distributions, giving a range of exact and approximation schemes.
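For normal edges the objective has a closed form: a path with total mean μ and total variance σ² is on time with probability Φ((t − μ)/σ). Because this is not additive over edges, standard shortest-path algorithms do not apply directly, which is what motivates the paper's quasiconvex-maximization approach. A brute-force sketch for small graphs (the enumeration, not the paper's algorithm):

```python
import math

def on_time_prob(path_edges, deadline):
    """P(sum of independent normal edge lengths <= deadline).
    path_edges: list of (mean, variance) pairs."""
    mu = sum(m for m, v in path_edges)
    var = sum(v for m, v in path_edges)
    z = (deadline - mu) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF

def best_path(graph, s, t, deadline):
    """Enumerate simple s-t paths and keep the one maximizing the
    on-time probability. graph: node -> list of (next, mean, variance)."""
    best = (-1.0, None)
    def dfs(u, seen, edges):
        nonlocal best
        if u == t:
            p = on_time_prob(edges, deadline)
            if p > best[0]:
                best = (p, list(seen))
            return
        for v, mv, vv in graph.get(u, []):
            if v not in seen:
                seen.append(v)
                dfs(v, seen, edges + [(mv, vv)])
                seen.pop()
    dfs(s, [s], [])
    return best
```

Note how the tradeoff plays out: a path with a slightly larger mean but much smaller variance can beat the minimum-mean path when the deadline is tight.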
Consolidating virtual machines with dynamic bandwidth demand in data centers
, 2010
Cited by 26 (0 self)
Abstract:
Recent advances in virtualization technology have made it common practice to consolidate virtual machines (VMs) onto a smaller number of servers. An efficient consolidation scheme requires that VMs are packed tightly, yet receive resources commensurate with their demands. However, measurements from production data centers show that the network bandwidth demands of VMs are dynamic, making it difficult to characterize the demands by a fixed value and to apply traditional consolidation schemes. In this work, we formulate VM consolidation as a stochastic bin packing problem and propose an online packing algorithm by which the number of servers required is within (1 + ε)(√2 + 1) of the optimum for any ε > 0. The result can be improved to within (√2 + 1) of the optimum in a special case. In addition, we use numerical experiments to evaluate the proposed consolidation algorithm and observe a 30% server reduction compared to several benchmark algorithms.
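A common way to make dynamic demands packable, in the spirit of the stochastic bin packing formulation above, is to give each server an "effective load" of mean demand plus a slack term for variance, then pack greedily. This is a generic heuristic sketch, not the paper's algorithm or its (1 + ε)(√2 + 1) guarantee; the `beta` knob is a hypothetical parameter trading server count against overflow risk.

```python
import math

def first_fit_effective(vms, capacity, beta=1.0):
    """First-fit packing of VMs with (mean, variance) bandwidth demand.
    A VM fits on a server if the server's effective load
        sum(means) + beta * sqrt(sum(variances))
    stays within capacity. Returns the number of servers opened."""
    servers = []  # each server: [sum_of_means, sum_of_variances]
    for mean, var in vms:
        for s in servers:
            if s[0] + mean + beta * math.sqrt(s[1] + var) <= capacity:
                s[0] += mean
                s[1] += var
                break
        else:
            servers.append([mean, var])  # open a new server
    return len(servers)
```

Larger `beta` reserves more headroom per server (fewer overflows, more servers); `beta = 0` degenerates to deterministic first-fit on the means.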
Schedulability Analysis of Applications with Stochastic Task Execution Times
 Trans. on Embedded Computing Sys
, 2004
Cited by 23 (3 self)
Abstract:
In the past decade, the limitations of models considering fixed (worst-case) task execution times have been acknowledged for large application classes within soft real-time systems. A more realistic model considers tasks having varying execution times with given probability distributions. Considering such a model with specified task execution time probability distribution functions, an important performance indicator of the system is the expected deadline miss ratio of the tasks and of the task graphs. This article presents an approach for obtaining this indicator in an analytic way. Our goal is to keep the analysis cost low, in terms of required analysis time and memory, while considering as general classes of target application models as possible. The following main assumptions have been made on the applications, which are modelled as sets of task graphs: the tasks are periodic, the task execution times have given generalised probability distribution functions, the task execution deadlines are given and arbitrary, the scheduling policy can belong to practically any class of non-preemptive scheduling policies, and a designer-supplied maximum number of concurrent instantiations of the same task graph is tolerated in the system. Experiments show the efficiency of the proposed technique for monoprocessor systems.
Stochastic models for budget optimization in search-based advertising
 In Proc. Workshop on Internet and Network Economics (WINE)
Cited by 23 (5 self)
Abstract:
Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to solve a complex optimization problem of how to place bids on the keywords of their interest so that they can maximize their return (the number of user clicks on their ads) for a given budget. This is the budget optimization problem. In this paper, we model budget optimization as it arises in Internet search companies and formulate stochastic versions of the problem. The premise is that Internet search companies can predict probability distributions associated with queries in the future. We identify three natural stochastic models. In the spirit of other stochastic optimization problems, two questions arise.

• (Evaluation Problem) Given a bid solution, can we evaluate the expected value of the objective function under different stochastic models?
• (Optimization Problem) Can we determine a bid solution that maximizes the objective function in expectation under different stochastic models?

Our main results are algorithmic and complexity results for both these problems for our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard.
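The prefix strategies mentioned at the end of the abstract can be sketched very simply: order keywords by cost and bid on the cheapest ones up to the point where the budget runs out. This is a simplified deterministic illustration of the idea, with hypothetical inputs; the paper's stochastic models and guarantees are more involved.

```python
def prefix_strategy(keywords, budget):
    """Choose the largest prefix of keywords, sorted by increasing
    cost-per-click, whose total expected spend fits the budget.
    keywords: list of (cost_per_click, expected_clicks).
    Returns (chosen prefix, total expected clicks)."""
    chosen, spend, clicks = [], 0.0, 0.0
    for cpc, q in sorted(keywords):
        if spend + cpc * q > budget:
            break  # a prefix stops at the first keyword that does not fit
        chosen.append((cpc, q))
        spend += cpc * q
        clicks += q
    return chosen, clicks
```

For instance, with keywords priced 1, 2, and 5 per click and expected click volumes 10, 5, and 4, a budget of 22 covers exactly the two cheapest keywords for 15 expected clicks.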
Approximation algorithms for 2-stage stochastic optimization problems
 SIGACT News
, 2006
Cited by 22 (1 self)
Abstract:
Stochastic optimization is a leading approach to model optimization problems in which there is uncertainty in the input data, whether from measurement noise or an inability to know the future. In this survey, we outline some recent progress in the design of polynomial-time algorithms with performance guarantees on the quality of the solutions found for an important class of stochastic programming problems: 2-stage problems with recourse. In particular, we show that for a number of concrete problems, algorithmic approaches that have been applied for their deterministic analogues are also effective in this more challenging domain. More specifically, this work highlights the role of tools from linear programming, rounding techniques, primal-dual algorithms, and the role of randomization more generally.