Results 1–10 of 21
An Approximation Scheme for Stochastic Linear Programming and its Application to Stochastic Integer Programs
, 2004
"... Stochastic optimization problems attempt to model uncertainty in the data by assuming that the input is specified by a probability distribution. We consider the wellstudied paradigm of 2stage models with recourse: first, given only distributional information about (some of) the data one commits on ..."
Abstract

Cited by 26 (5 self)
Stochastic optimization problems attempt to model uncertainty in the data by assuming that the input is specified by a probability distribution. We consider the well-studied paradigm of 2-stage models with recourse: first, given only distributional information about (some of) the data, one commits on initial actions, and then, once the actual data is realized (according to the distribution), further (recourse) actions can be taken. We show that for a broad class of 2-stage linear models with recourse, one can, for any ɛ > 0, in time polynomial in 1/ɛ and the size of the input, compute a solution of value within a factor (1 + ɛ) of the optimum, in spite of the fact that exponentially many second-stage scenarios may occur. In conjunction with a suitable rounding scheme, this yields the first approximation algorithms for 2-stage stochastic integer optimization problems where the underlying random data is given by a “black box” and no restrictions are placed on the costs in the two stages. Our rounding approach for stochastic integer programs shows that an approximation algorithm for a deterministic analogue yields, with a small constant-factor loss, provably near-optimal solutions for the stochastic generalization. Among the range of applications we consider are stochastic versions of the multicommodity flow, set cover, vertex cover, and facility location problems.
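The 2-stage recourse structure described above can be illustrated with a toy sketch (all names, costs, and the demand distribution here are hypothetical, not from the paper): commit to x units up front at a low cost, then, once a black-box demand sample is revealed, buy any shortfall at a higher recourse cost, estimating the expected cost by sampling scenarios.

```python
import random

def two_stage_cost(x, scenarios, c1=1.0, c2=3.0):
    # First stage: commit to x units at unit cost c1, knowing only the
    # demand distribution. Second stage (recourse): for each sampled
    # demand d, cover the shortfall max(d - x, 0) at the higher unit
    # cost c2. Return the sampled estimate of the expected total cost.
    first_stage = c1 * x
    recourse = sum(c2 * max(d - x, 0) for d in scenarios) / len(scenarios)
    return first_stage + recourse

random.seed(0)
samples = [random.randint(0, 10) for _ in range(10_000)]  # black-box demand
# Pick the cheapest first-stage commitment over a small grid.
best_x = min(range(11), key=lambda x: two_stage_cost(x, samples))
```

The point of the sketch is only the shape of the model: the second-stage scenario space may be huge, but sampling gives access to the expected recourse cost.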
Uncovering Performance Differences among Backbone ISPs with Netdiff
"... Abstract – We design and implement Netdiff, a system that enables detailed performance comparisons among ISP networks. It helps customers and applications determine, for instance, which ISP offers the best performance for their specific workload. Netdiff is easy to deploy because it requires only a ..."
Abstract

Cited by 15 (0 self)
We design and implement Netdiff, a system that enables detailed performance comparisons among ISP networks. It helps customers and applications determine, for instance, which ISP offers the best performance for their specific workload. Netdiff is easy to deploy because it requires only a modest number of nodes and does not require active cooperation from ISPs. Realizing such a system, however, is challenging, as we must aggressively reduce probing cost and ensure that the results are robust to measurement noise. We describe the techniques that Netdiff uses to address these challenges. Netdiff has been measuring eighteen backbone ISPs since February 2007. Its techniques allow it to capture an accurate view of an ISP’s performance in terms of latency within fifteen minutes. Using Netdiff, we find that the relative performance of ISPs depends on many factors, including the geographic properties of traffic and the popularity of destinations. Thus, the detailed comparison that Netdiff provides is important for identifying ISPs that perform well for a given workload.
Ascertaining the Reality of Network Neutrality Violation in Backbone ISPs
 In Proc. 7th ACM Workshop on Hot Topics in Networks (Hotnets-VII)
, 2008
"... On the Internet today, a growing number of QoS sensitive network applications exist, such as VoIP, imposing more stringent requirements on ISPs besides the basic reachability assurance. Thus, the demand on ISPs for Service Level Agreements (SLAs) with better guarantees is increasing. However, despit ..."
Abstract

Cited by 11 (2 self)
On the Internet today, a growing number of QoS-sensitive network applications exist, such as VoIP, imposing more stringent requirements on ISPs beyond the basic reachability assurance. Thus, the demand on ISPs for Service Level Agreements (SLAs) with better guarantees is increasing. However, despite overprovisioning in core ISP networks, resource contention still exists, leading to congestion and associated performance degradations. For example, residential broadband networks rate-limit or even block bandwidth-intensive applications such as peer-to-peer file sharing, thereby violating network neutrality. In addition, traffic associated with specific applications, such as Skype, could also be discriminated against for competitive business reasons. So far, little work has been done regarding the existence of traffic discrimination inside the core of the Internet. Due to the technical challenges and widespread impact, it seems somewhat inconceivable that ISPs are performing such fine-grained discrimination based on the application content. Our study is the first to demonstrate evidence of network neutrality violations within backbone ISPs. We used a scalable and accurate monitoring system – NVLens – to detect traffic discrimination based on various factors such as application types, previous-hop, and next-hop ASes. We discuss the implications of such discrimination and how users can counter such unfair practices.
Distributed and Parallel Algorithms for Weighted Vertex Cover . . .
, 2009
"... The paper presents distributed and parallel δapproximation algorithms for covering problems, where δ is the maximum number of variables on which any constraint depends (for example, δ = 2 for vertex cover). Specific results include the following. • For weighted vertex cover, the first distributed 2 ..."
Abstract

Cited by 9 (3 self)
The paper presents distributed and parallel δ-approximation algorithms for covering problems, where δ is the maximum number of variables on which any constraint depends (for example, δ = 2 for vertex cover). Specific results include the following.
• For weighted vertex cover, the first distributed 2-approximation algorithm taking O(log n) rounds and the first parallel 2-approximation algorithm in RNC. The algorithms generalize to covering mixed integer linear programs (CMIP) with two variables per constraint (δ = 2).
• For any covering problem with monotone constraints and submodular cost, a distributed δ-approximation algorithm taking O(log² C) rounds, where C is the number of constraints. (Special cases include CMIP, facility location, and probabilistic (two-stage) variants of these problems.)
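For intuition about the δ = 2 case, the centralized analogue of a 2-approximation for weighted vertex cover can be sketched with the classic local-ratio argument (this is the standard sequential warm-up, not the paper's distributed algorithm):

```python
def weighted_vertex_cover(weights, edges):
    # Local-ratio 2-approximation: for each edge whose endpoints are
    # both still uncovered, pay down both endpoints by the smaller
    # residual weight; a vertex joins the cover when its residual
    # weight reaches zero. Every edge payment charges at most twice
    # what any optimal cover must pay, so the cover's total weight
    # is at most 2 * OPT.
    residual = dict(weights)
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue
        pay = min(residual[u], residual[v])
        residual[u] -= pay
        residual[v] -= pay
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover
```

On a star whose center is cheaper than its leaves, the first edge exhausts the center's residual weight, so the center alone ends up covering everything.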
Approximability of sparse integer programs
 In Proc. 17th ESA
, 2009
"... The main focus of this paper is a pair of new approximation algorithms for sparse integer programs. First, for covering integer programs {min cx: Ax ≥ b,0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a kapproximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ..."
Abstract

Cited by 8 (0 self)
The main focus of this paper is a pair of new approximation algorithms for sparse integer programs. First, for covering integer programs {min cx : Ax ≥ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ε > 0, unless P = NP this ratio cannot be improved to k − 1 − ε, and under the unique games conjecture this ratio cannot be improved to k − ε. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max cx : Ax ≤ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per column, we give a 2^k·k²-approximation algorithm. This is the first polynomial-time approximation algorithm for this problem with approximation ratio depending only on k, for any k > 1. Our approach starts from iterated LP relaxation, and then uses probabilistic and greedy methods to recover a feasible solution. Note added after publication: this version includes subsequent developments: an O(k²)-approximation for the latter problem using the iterated rounding framework, and several literature reference updates, including an O(k)-approximation for the same problem by Bansal et al.
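The simplest special case of the covering result is worth seeing concretely. When every constraint has the form Σ_{j∈S} x_j ≥ 1 with |S| ≤ k and x is 0/1, any fractional feasible solution puts value at least 1/k on some variable of each constraint, so rounding up every variable of value ≥ 1/k stays feasible and loses at most a factor k. This warm-up rounding is only a sketch of the basic idea; the paper's actual algorithm handles general b and d via constraint replacement and knapsack-cover inequalities.

```python
def round_sparse_cover(x_frac, constraints, k):
    # Warm-up k-approximation rounding for 0/1 covering programs whose
    # constraints each involve at most k variables: the at most k
    # fractional values in a constraint sum to >= 1, so one of them is
    # >= 1/k. Rounding all such variables up to 1 therefore keeps every
    # constraint satisfied, and multiplies the LP cost by at most k.
    x_int = {j: (1 if v >= 1.0 / k else 0) for j, v in x_frac.items()}
    assert all(sum(x_int[j] for j in S) >= 1 for S in constraints)
    return x_int

# Fractional vertex cover of a triangle (k = 2): every value is 1/2,
# so every vertex is rounded up to 1.
rounded = round_sparse_cover({1: 0.5, 2: 0.5, 3: 0.5},
                             [{1, 2}, {2, 3}, {1, 3}], k=2)
```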
Detecting Traffic Differentiation in Backbone ISPs with NetPolice
"... Traffic differentiations are known to be found at the edge of the Internet in broadband ISPs and wireless carriers [13, 2]. The ability to detect traffic differentiations is essential for customers to develop effective strategies for improving their application performance. We build a system, called ..."
Abstract

Cited by 7 (0 self)
Traffic differentiation is known to occur at the edge of the Internet in broadband ISPs and wireless carriers [13, 2]. The ability to detect traffic differentiation is essential for customers to develop effective strategies for improving their application performance. We build a system, called NetPolice, that enables detection of content- and routing-based differentiation in backbone ISPs. NetPolice is easy to deploy since it relies only on loss measurement launched from end hosts. The key challenges in building NetPolice include selecting an appropriate set of probing destinations and ensuring the robustness of detection results to measurement noise. We use NetPolice to study 18 large ISPs spanning 3 major continents over 10 weeks in 2008. Our work provides concrete evidence of traffic differentiation based on application types and neighbor ASes. We identify 4 ISPs that exhibit a large degree of differentiation on 4 applications and 10 ISPs that perform previous-hop AS based differentiation, resulting in up to 5% actual loss rate differences. The significance of the differences increases with network load. Some ISPs simply differentiate traffic based on port numbers irrespective of packet payload, and the differentiation policies may only be partially deployed within their networks. We also find a strong correlation between performance differences and Type-of-Service value differences in the traffic.
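One generic way to guard a loss-rate comparison against measurement noise is a two-sample permutation test, offered here purely as an illustration and not as NetPolice's actual statistical machinery: ask how often a random relabeling of the pooled measurements reproduces a gap as large as the observed one.

```python
import random

def permutation_test(a, b, trials=2000, seed=0):
    # Two-sample permutation test on mean loss rates: shuffle the pooled
    # measurements, re-split them into groups of the original sizes, and
    # count how often the shuffled mean gap is at least the observed gap.
    # A small returned p-value suggests the observed difference between
    # the two traffic classes is unlikely to be noise alone.
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials
```

With identical samples the test returns 1.0 (every relabeling matches the zero observed gap); with clearly separated loss rates it returns a p-value near 0.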
Weighted Capacitated, Priority, and Geometric Set Cover via Improved Quasi-Uniform Sampling
, 2011
"... The minimumweight set cover problem is widely known to be O(log n)approximable, with no improvement possible in the general case. We take the approach of exploiting problem structure to achieve better results, by providing a geometryinspired algorithm whose approximation guarantee depends solely ..."
Abstract

Cited by 5 (1 self)
The minimum-weight set cover problem is widely known to be O(log n)-approximable, with no improvement possible in the general case. We take the approach of exploiting problem structure to achieve better results, by providing a geometry-inspired algorithm whose approximation guarantee depends solely on an instance-specific combinatorial property known as shallow cell complexity (SCC). Roughly speaking, a set cover instance has low SCC if any column-induced submatrix of the corresponding element-set incidence matrix has few distinct rows. By adapting and improving Varadarajan’s recent quasi-uniform random sampling method for weighted geometric covering problems, we obtain strong approximation algorithms for a structurally rich class of weighted covering problems with low SCC. We also show how to derandomize our algorithm. Our main result has several immediate consequences. Among them, we settle an open question of Chakrabarty et al. [8] by showing that weighted instances of the capacitated covering problem with underlying network structure have O(1)-approximations. Additionally, our improvements to Varadarajan’s sampling framework yield several new results for weighted geometric set cover, hitting set, and dominating set problems. In particular, for weighted covering problems exhibiting linear (or near-linear) union complexity, we obtain approximability results agreeing with those known for the unweighted case. For example, we obtain a constant approximation for the weighted disk cover problem, improving upon the 2^{O(log* n)}-approximation known prior to our work and matching the O(1)-approximation known for the unweighted variant.
Distributed Algorithms for Covering, Packing and Maximum Weighted Matching
"... This paper gives polylogarithmicround, distributed δapproximation algorithms for covering problems with submodular cost and monotone covering constraints (Submodularcost Covering). The approximation ratio δ is the maximum number of variables in any constraint. Special cases include Covering Mix ..."
Abstract

Cited by 4 (1 self)
This paper gives polylogarithmic-round, distributed δ-approximation algorithms for covering problems with submodular cost and monotone covering constraints (Submodular-cost Covering). The approximation ratio δ is the maximum number of variables in any constraint. Special cases include Covering Mixed Integer Linear Programs (CMIP) and Weighted Vertex Cover (with δ = 2). Via duality, the paper also gives polylogarithmic-round, distributed δ-approximation algorithms for Fractional Packing linear programs (where δ is the maximum number of constraints in which any variable occurs), and for Max Weighted c-Matching in hypergraphs (where δ is the maximum size of any of the hyperedges; for graphs δ = 2). The paper also gives parallel (RNC) 2-approximation algorithms for CMIP with two variables per constraint and Weighted Vertex Cover. The algorithms are randomized. All of the approximation ratios exactly match those of comparable centralized algorithms.
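The duality the abstract invokes is the standard pairing between covering and packing linear programs (generic textbook material, not specific to this paper). Under it, the per-constraint sparsity of the covering program becomes the per-variable sparsity of the packing program:

```latex
% Covering LP and its packing dual, with A, b, c >= 0:
\min \{\, c^{\top}x \;:\; Ax \ge b,\ x \ge 0 \,\}
  \;=\;
\max \{\, b^{\top}y \;:\; A^{\top}y \le c,\ y \ge 0 \,\}
```

The maximum number of nonzeroes in a row of A (variables per covering constraint) equals the maximum number of nonzeroes in a column of A^T (constraints per packing variable), which is why a single parameter δ governs both sides.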
Derandomizing the Ahlswede–Winter Matrix-Valued Chernoff Bound Using Pessimistic Estimators, and Applications
 Electronic Colloquium on Computational Complexity (ECCC)
, 2006
"... Ahlswede and Winter [AW02] introduced a Chernoff bound for matrixvalued random variables, which is a nontrivial generalization of the usual Chernoff bound for realvalued random variables. We present an efficient derandomization of their bound using the method of pessimistic estimators (see Raghav ..."
Abstract

Cited by 4 (1 self)
Ahlswede and Winter [AW02] introduced a Chernoff bound for matrix-valued random variables, which is a nontrivial generalization of the usual Chernoff bound for real-valued random variables. We present an efficient derandomization of their bound using the method of pessimistic estimators (see Raghavan [Rag88]). As a consequence, we derandomize a construction of Alon and Roichman [AR94] (see also [LR04, LS04]) to efficiently construct an expanding Cayley graph of logarithmic degree on any (possibly nonabelian) group. This also gives an optimal solution to the homomorphism testing problem of Shpilka and Wigderson [SW04]. We also apply these pessimistic estimators to the problem of solving semidefinite covering problems, thus giving a deterministic algorithm for the quantum hypergraph cover problem of [AW02]. The results above appear as theorems in the paper [WX05a], as consequences of the main theorem of that paper: a randomness-efficient sampler for matrix-valued functions via expander walks. However, we discovered an error in the proof of that main theorem (which we briefly describe in the appendix). One purpose of the current paper is to show that the applications in that paper hold true despite this error.
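For context, one common statement of a matrix Chernoff bound in the Ahlswede–Winter style reads as follows (given in the later, widely used formulation; the exact constants and conditions in [AW02] differ):

```latex
% X_1, ..., X_n independent random d x d Hermitian matrices with
% 0 \preceq X_i \preceq R \cdot I, and \mu = \lambda_{\max}(\sum_i \mathbb{E}[X_i]).
% Then for every \delta \ge 0:
\Pr\!\left[\lambda_{\max}\!\Big(\sum_i X_i\Big) \ge (1+\delta)\mu\right]
  \;\le\; d \cdot \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu/R}
```

The dimension factor d in front is the essential difference from the scalar Chernoff bound, and it is exactly this quantity that the pessimistic-estimator machinery tracks during derandomization.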