Results 1–10 of 40
The Structure and Complexity of Nash Equilibria for a Selfish Routing Game
, 2002
"... In this work, we study the combinatorial structure and the computational complexity of Nash equilibria for a certain game that models sel sh routing over a network consisting of m parallel links. We assume a collection of n users, each employing a mixed strategy, which is a probability distribu ..."
Abstract

Cited by 101 (22 self)
In this work, we study the combinatorial structure and the computational complexity of Nash equilibria for a certain game that models selfish routing over a network consisting of m parallel links. We assume a collection of n users, each employing a mixed strategy, which is a probability distribution over links, to control the routing of its own assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic on those links that minimize its expected latency cost, given the network congestion caused by the other users. The social cost of a Nash equilibrium is the expectation, over all random choices of the users, of the maximum, over all links, of the latency through a link.
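For the model described (m parallel links, n users with traffic amounts w_1, …, w_n; the symbols here are assumed for illustration, not taken from the listing), the social cost of a mixed-strategy profile P can be written as:

```latex
% Social cost: expectation over the users' random link choices
% L = (l_1, ..., l_n) drawn from P of the maximum link latency,
% here taken as total traffic on a link (identical links assumed).
\mathrm{SC}(P) \;=\; \mathbb{E}_{L \sim P}\Big[\, \max_{1 \le j \le m} \sum_{i \,:\, l_i = j} w_i \,\Big]
```

For links with capacities c_j, the inner sum would be scaled by 1/c_j.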
Hedging uncertainty: Approximation algorithms for stochastic optimization problems
 In Proceedings of the 10th International Conference on Integer Programming and Combinatorial Optimization
, 2004
"... We initiate the design of approximation algorithms for stochastic combinatorial optimization problems; we formulate the problems in the framework of twostage stochastic optimization, and provide nearly tight approximation algorithms. Our problems range from the simple (shortest path, vertex cover, ..."
Abstract

Cited by 66 (10 self)
We initiate the design of approximation algorithms for stochastic combinatorial optimization problems; we formulate the problems in the framework of two-stage stochastic optimization, and provide nearly tight approximation algorithms. Our problems range from the simple (shortest path, vertex cover, bin packing) to complex (facility location, set cover), and contain representatives with different approximation ratios. The approximation ratio of the stochastic variant of a typical problem is of the same order of magnitude as that of its deterministic counterpart. Furthermore, common techniques for designing approximation algorithms, such as LP rounding, the primal-dual method, and the greedy algorithm, can be carefully adapted to obtain these results.
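The two-stage framework the abstract refers to can be stated schematically as (notation assumed here for illustration):

```latex
% Commit to first-stage decisions x at cost c(x); once the scenario S
% is revealed, pay the cost of the cheapest feasible recourse y:
\min_{x}\; c(x) \;+\; \mathbb{E}_{S}\Big[\, \min_{y \,:\, (x,y)\ \text{feasible for}\ S} q_S(y) \,\Big]
```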
Stochastic Load Balancing and Related Problems
 In FOCS
, 1999
"... We study the problems of makespan minimization (load balancing), knapsack, and bin packing when the jobs have stochastic processing requirements or sizes. If the jobs are all Poisson, we present a two approximation for the first problem using Graham's rule, and observe that polynomial time approxima ..."
Abstract

Cited by 32 (4 self)
We study the problems of makespan minimization (load balancing), knapsack, and bin packing when the jobs have stochastic processing requirements or sizes. If the jobs are all Poisson, we present a 2-approximation for the first problem using Graham's rule, and observe that polynomial-time approximation schemes can be obtained for the last two problems. If the jobs are all exponential, we present polynomial-time approximation schemes for all three problems. We also obtain quasi-polynomial-time approximation schemes for the last two problems if the jobs are Bernoulli variables.

1 Introduction

In traditional scheduling problems, each job has a known deterministic size and duration. There are cases, however, where the exact size of a job is not known at the time when a scheduling decision needs to be made; all that is known is a probability distribution on the size of the job. Given a schedule, the value of the objective function itself becomes a random variable. The goal then is to find...
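Graham's rule, which the abstract applies to stochastic jobs, greedily places each job on the least-loaded machine. A minimal sketch applying it to expected job sizes (the job sizes and machine count below are made-up illustration values; this shows the rule itself, not the paper's Poisson analysis):

```python
def graham_assign(expected_sizes, m):
    """Graham's list-scheduling rule on expected job sizes:
    place each job on the machine with the smallest current expected load."""
    loads = [0.0] * m          # expected load per machine
    assignment = []            # assignment[i] = machine index for job i
    for s in expected_sizes:
        j = min(range(m), key=lambda k: loads[k])
        loads[j] += s
        assignment.append(j)
    return assignment, loads

# Example: six jobs with hypothetical expected sizes on two machines.
assign, loads = graham_assign([3.0, 2.0, 2.0, 1.0, 1.0, 1.0], m=2)
```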
Schedulability Analysis of Applications with Stochastic Task Execution Times
 Trans. on Embedded Computing Sys
, 2004
"... In the past decade, the limitations of models considering fixed (worst case) task execution times have been acknowledged for large application classes within soft realtime systems. A more realistic model considers the tasks having varying execution times with given probability distributions. Consid ..."
Abstract

Cited by 23 (3 self)
In the past decade, the limitations of models considering fixed (worst-case) task execution times have been acknowledged for large application classes within soft real-time systems. A more realistic model considers tasks as having varying execution times with given probability distributions. Considering such a model with specified task execution time probability distribution functions, an important performance indicator of the system is the expected deadline miss ratio of the tasks and of the task graphs. This article presents an approach for obtaining this indicator analytically. Our goal is to keep the analysis cost low, in terms of required analysis time and memory, while considering as general a class of target application models as possible. The following main assumptions have been made on the applications, which are modelled as sets of task graphs: the tasks are periodic, the task execution times have given generalised probability distribution functions, the task execution deadlines are given and arbitrary, the scheduling policy can belong to practically any class of non-preemptive scheduling policies, and a designer-supplied maximum number of concurrent instantiations of the same task graph is tolerated in the system. Experiments show the efficiency of the proposed technique for monoprocessor systems.
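The expected deadline miss ratio the abstract targets can be illustrated, for the simplest case of two tasks executed in sequence with discrete execution-time distributions, by convolving the distributions and summing the mass beyond the deadline. All task names and numbers here are hypothetical, and the paper's analysis handles far more general task graphs and scheduling policies:

```python
from itertools import product

def convolve_pmf(pmf_a, pmf_b):
    """Distribution of the sum of two independent discrete execution times.
    Each pmf maps an execution time to its probability."""
    out = {}
    for (ta, pa), (tb, pb) in product(pmf_a.items(), pmf_b.items()):
        out[ta + tb] = out.get(ta + tb, 0.0) + pa * pb
    return out

def miss_ratio(pmf, deadline):
    """Probability that the total execution time exceeds the deadline."""
    return sum(p for t, p in pmf.items() if t > deadline)

# Two tasks in sequence, each with a hypothetical execution-time PMF.
task1 = {2: 0.7, 4: 0.3}
task2 = {3: 0.5, 6: 0.5}
total = convolve_pmf(task1, task2)      # PMF of task1 + task2
ratio = miss_ratio(total, deadline=8)
```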
Approximation algorithms for budgeted learning problems
 In Proc. ACM Symp. on Theory of Computing
, 2007
"... We present the first approximation algorithms for a large class of budgeted learning problems. One classic example of the above is the budgeted multiarmed bandit problem. In this problem each arm of the bandit has an unknown reward distribution on which a prior is specified as input. The knowledge ..."
Abstract

Cited by 19 (6 self)
We present the first approximation algorithms for a large class of budgeted learning problems. One classic example is the budgeted multi-armed bandit problem. In this problem, each arm of the bandit has an unknown reward distribution on which a prior is specified as input. The knowledge about the underlying distribution can be refined in the exploration phase by playing the arm and observing the rewards. However, there is a budget on the total number of plays allowed during exploration. After this exploration phase, the arm with the highest (posterior) expected reward is chosen for exploitation. The goal is to design the adaptive exploration phase subject to a budget constraint on the number of plays, in order to maximize the expected reward of the arm chosen for exploitation. While this problem is reasonably well understood in the infinite-horizon setting or in terms of regret bounds, the budgeted version of the problem is NP-hard. For this problem, and several generalizations, we provide approximate policies that achieve a reward within a constant factor of that of the optimal policy. Our algorithms use a novel linear program rounding technique based on stochastic packing.
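The problem setup (though not the paper's LP-rounding policy) can be sketched for Bernoulli arms with Beta priors: spend the play budget updating posteriors, then exploit the arm with the highest posterior mean. The round-robin exploration and all numbers below are illustrative assumptions:

```python
import random

def budgeted_exploration(pull, priors, budget, rng):
    """Spend `budget` plays (naive round-robin here) updating Beta
    posteriors on Bernoulli arms, then return the index of the arm
    with the highest posterior mean reward."""
    posteriors = [list(p) for p in priors]      # [alpha, beta] per arm
    for t in range(budget):
        arm = t % len(posteriors)               # round-robin exploration
        reward = pull(arm, rng)                 # observe a 0/1 reward
        posteriors[arm][0] += reward            # success -> alpha
        posteriors[arm][1] += 1 - reward        # failure -> beta
    means = [a / (a + b) for a, b in posteriors]
    return max(range(len(means)), key=lambda i: means[i])

# Two Bernoulli arms; success probabilities 0.0 and 1.0 keep this
# toy example deterministic.
truth = [0.0, 1.0]
pull = lambda arm, rng: 1 if rng.random() < truth[arm] else 0
best = budgeted_exploration(pull, priors=[(1, 1), (1, 1)], budget=50,
                            rng=random.Random(0))
```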
Stochastic models for budget optimization in search-based advertising
 In Proc. Workshop on Internet and Network Economics (WINE)
"... Internet search companies sell advertisement slots based on users ’ search queries via an auction. Advertisers have to solve a complex optimization problem of how to place bids on the keywords of their interest so that they can maximize their return (the number of user clicks on their ads) for a giv ..."
Abstract

Cited by 16 (5 self)
Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to solve a complex optimization problem of how to place bids on the keywords of their interest so that they can maximize their return (the number of user clicks on their ads) for a given budget. This is the budget optimization problem. In this paper, we model budget optimization as it arises in Internet search companies and formulate stochastic versions of the problem. The premise is that Internet search companies can predict probability distributions associated with queries in the future. We identify three natural stochastic models. In the spirit of other stochastic optimization problems, two questions arise.
• (Evaluation Problem) Given a bid solution, can we evaluate the expected value of the objective function under different stochastic models?
• (Optimization Problem) Can we determine a bid solution that maximizes the objective function in expectation under different stochastic models?
Our main results are algorithmic and complexity results for both these problems for our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations for many cases; we show other cases to be NP-hard.
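The prefix strategies the abstract highlights can be sketched: sort keywords by cost-per-click and bid on the longest cheap prefix whose expected spend fits the budget. The keyword data and budget below are hypothetical:

```python
def prefix_strategy(keywords, budget):
    """Pick the longest prefix of keywords, sorted by cost-per-click,
    whose total expected spend stays within the budget.
    keywords: list of (cost_per_click, expected_clicks) pairs."""
    chosen, spend, clicks = [], 0.0, 0.0
    for cost, volume in sorted(keywords):
        if spend + cost * volume > budget:
            break                      # next-cheapest keyword no longer fits
        chosen.append((cost, volume))
        spend += cost * volume
        clicks += volume
    return chosen, clicks

# Hypothetical keywords: (cost per click, expected clicks per day).
kws = [(0.10, 100), (0.50, 40), (0.25, 60), (1.00, 20)]
chosen, clicks = prefix_strategy(kws, budget=30.0)
```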
Stochastic shortest paths via quasi-convex maximization
 In Proceedings of the European Symposium on Algorithms
, 2006
"... We consider the problem of finding shortest paths in a graph with independent randomly distributed edge lengths. Our goal is to maximize the probability that the path length does not exceed a given threshold value (deadline). We give a surprising exact n Θ(log n) algorithm for the case of normally ..."
Abstract

Cited by 15 (7 self)
We consider the problem of finding shortest paths in a graph with independent randomly distributed edge lengths. Our goal is to maximize the probability that the path length does not exceed a given threshold value (deadline). We give a surprising exact n^Θ(log n) algorithm for the case of normally distributed edge lengths, which is based on quasi-convex maximization. We then prove average and smoothed polynomial bounds for this algorithm, which also translate to average and smoothed bounds for the parametric shortest path problem, and extend to a more general nonconvex optimization setting. We also consider a number of other edge length distributions, giving a range of exact and approximation schemes.
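For independent normal edge lengths, the objective the abstract describes has a closed form depending only on the path's total mean and variance (notation assumed here), which is what makes the quasi-convex maximization view possible:

```latex
% Probability that path P with edge lengths X_e ~ N(mu_e, sigma_e^2)
% meets the deadline t; Phi is the standard normal CDF.
\Pr\Big[\sum_{e \in P} X_e \le t\Big]
  \;=\; \Phi\!\left(\frac{t - \sum_{e \in P} \mu_e}{\sqrt{\sum_{e \in P} \sigma_e^2}}\right)
```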
Approximation algorithms for 2-stage stochastic optimization problems
 SIGACT News
, 2006
"... Abstract. Stochastic optimization is a leading approach to model optimization problems in which there is uncertainty in the input data, whether from measurement noise or an inability to know the future. In this survey, we outline some recent progress in the design of polynomialtime algorithms with p ..."
Abstract

Cited by 14 (1 self)
Stochastic optimization is a leading approach to modeling optimization problems in which there is uncertainty in the input data, whether from measurement noise or an inability to know the future. In this survey, we outline some recent progress in the design of polynomial-time algorithms with performance guarantees on the quality of the solutions found for an important class of stochastic programming problems: 2-stage problems with recourse. In particular, we show that for a number of concrete problems, algorithmic approaches that have been applied for their deterministic analogues are also effective in this more challenging domain. More specifically, this work highlights the role of tools from linear programming, rounding techniques, primal-dual algorithms, and randomization more generally.
Asking the right questions: Model-driven optimization using probes
 In Proc. of the 2006 ACM Symp. on Principles of Database Systems
, 2006
"... In several database applications, parameters like selectivities and load are known only with some associated uncertainty, which is specified, or modeled, as a distribution over values. The performance of query optimizers and monitoring schemes can be improved by spending resources like time or bandw ..."
Abstract

Cited by 13 (9 self)
In several database applications, parameters like selectivities and load are known only with some associated uncertainty, which is specified, or modeled, as a distribution over values. The performance of query optimizers and monitoring schemes can be improved by spending resources like time or bandwidth in observing or resolving these parameters, so that better query plans can be generated. In a resource-constrained situation, deciding which parameters to observe in order to best optimize the expected quality of the plan generated (or, in general, to optimize the expected value of a certain objective function) itself becomes an interesting optimization problem. We present a framework for studying such problems, and present several scenarios arising in anomaly detection in complex systems, monitoring extreme values in sensor networks, load shedding in data stream systems, and estimating rates in wireless channels and minimum latency routes in networks, which can be modeled in this framework with the appropriate objective functions. Even for several simple objective functions, we show the problems are NP-hard. We present greedy algorithms with good performance bounds. The proofs of the performance bounds are via novel submodularity arguments.
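The abstract mentions greedy algorithms whose bounds rest on submodularity; a generic sketch of the standard greedy rule for a monotone submodular value function under a probe budget (the classic setting for such guarantees; this is not the paper's specific algorithm, and the probe names and coverage values are hypothetical):

```python
def greedy_probes(candidates, value, k):
    """Standard greedy selection: repeatedly add the probe with the
    largest marginal gain in `value` until k probes are chosen.
    `value` maps a set of probes to its (monotone submodular) value."""
    chosen = frozenset()
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: value(chosen | {c}) - value(chosen),
                   default=None)
        if best is None:
            break
        chosen = chosen | {best}
    return chosen

# Toy coverage objective: each probe reveals a set of parameters;
# the value of a probe set is the number of parameters covered.
reveals = {"p1": {1, 2}, "p2": {2, 3}, "p3": {4}}
value = lambda S: len(set().union(*(reveals[c] for c in S)))
picked = greedy_probes(list(reveals), value, k=2)
```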
Simultaneous optimization via approximate majorization for concave profits or convex costs
 Algorithmica
, 2002
"... For multicriteria problems and problems with poorly characterized objective, it is often desirable to simultaneously approximate the optimum solution for a large class of objective functions. We consider two such classes: 1. Maximizing all symmetric concave functions, and 2. Minimizing all symmetri ..."
Abstract

Cited by 12 (2 self)
For multicriteria problems and problems with poorly characterized objectives, it is often desirable to simultaneously approximate the optimum solution for a large class of objective functions. We consider two such classes:
1. Maximizing all symmetric concave functions, and
2. Minimizing all symmetric convex functions.
The first class corresponds to maximizing profit for a resource allocation problem (such as allocation of bandwidths in a computer network). The concavity requirement corresponds to the law of diminishing returns in economics. The second class corresponds to minimizing cost or congestion in a load balancing problem, where the congestion/cost is some convex function of the loads. Informally, a simultaneous α-approximation for either class is a feasible solution that is within a factor α of the optimum for all functions in that class. Clearly, the structure of the feasible set has a significant impact on the best possible α and the computational complexity of finding a solution that achieves (or nearly achieves) this α. We develop a framework and a set of techniques to perform simultaneous optimization for a wide variety of problems.
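As the title indicates, approximate majorization is the bridge between a single solution and a whole class of symmetric objectives. The comparison condition can be stated via prefix sums of the sorted solution vector (notation assumed here; this sketch states only the condition, not the equivalences developed in the paper):

```latex
% x approximately (alpha-)submajorizes y: every prefix sum of the
% decreasingly sorted coordinates of x is within a factor alpha of
% the corresponding prefix sum for y.
\sum_{i=1}^{k} x_{[i]} \;\le\; \alpha \sum_{i=1}^{k} y_{[i]}
\qquad \text{for all } k = 1, \dots, n
```

where $x_{[i]}$ denotes the $i$-th largest coordinate of $x$.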