Results 1–10 of 23
An Approximation Scheme for Stochastic Linear Programming and its Application to Stochastic Integer Programs
Cited by 34 (6 self)
Stochastic optimization problems attempt to model uncertainty in the data by assuming that the input is specified by a probability distribution. We consider the well-studied paradigm of 2-stage models with recourse: first, given only distributional information about (some of) the data, one commits on initial actions, and then, once the actual data is realized (according to the distribution), further (recourse) actions can be taken. We show that for a broad class of 2-stage linear models with recourse, one can, for any ε > 0, in time polynomial in 1/ε and the size of the input, compute a solution of value within a factor (1 + ε) of the optimum, in spite of the fact that exponentially many second-stage scenarios may occur. In conjunction with a suitable rounding scheme, this yields the first approximation algorithms for 2-stage stochastic integer optimization problems where the underlying random data is given by a “black box” and no restrictions are placed on the costs in the two stages. Our rounding approach for stochastic integer programs shows that an approximation algorithm for a deterministic analogue yields, with a small constant-factor loss, provably near-optimal solutions for the stochastic generalization. Among the range of applications we consider are stochastic versions of the multicommodity flow, set cover, vertex cover, and facility location problems.
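As an illustration of the 2-stage recourse paradigm (not the paper's polynomial-time scheme), the sketch below solves a tiny stochastic set cover instance exactly by enumerating first-stage decisions; the function names and the uniform-inflation cost model are assumptions made for the example.

```python
from itertools import chain, combinations

def powerset(items):
    """All sub-collections of items (as tuples)."""
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def min_cover_cost(demand, sets, cost):
    """Cheapest sub-collection of `sets` covering `demand`, by brute force."""
    best = float("inf")
    for sel in powerset(sets):
        if demand <= set().union(*(sets[s] for s in sel)):
            best = min(best, sum(cost[s] for s in sel))
    return best

def two_stage_set_cover(sets, cost1, infl, scenarios):
    """Exact 2-stage stochastic set cover by enumerating first-stage choices.

    sets      : dict name -> set of elements
    cost1     : dict name -> first-stage cost
    infl      : second-stage costs are infl * cost1 (a common toy model)
    scenarios : list of (probability, set of elements that must be covered)
    Returns (optimal expected cost, best first-stage selection).
    """
    best_cost, best_sel = float("inf"), ()
    cost2 = {s: infl * cost1[s] for s in sets}
    for stage1 in powerset(sets):
        covered = set().union(*(sets[s] for s in stage1))
        total = sum(cost1[s] for s in stage1)
        # expected recourse cost over all scenarios
        for prob, demand in scenarios:
            total += prob * min_cover_cost(demand - covered, sets, cost2)
        if total < best_cost:
            best_cost, best_sel = total, stage1
    return best_cost, set(best_sel)
```

The enumeration is exponential; the point of the paper is precisely that the exponentially many scenarios can be handled in polynomial time for a broad class of such models.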
Uncovering Performance Differences among Backbone ISPs with Netdiff
Cited by 20 (0 self)
We design and implement Netdiff, a system that enables detailed performance comparisons among ISP networks. It helps customers and applications determine, for instance, which ISP offers the best performance for their specific workload. Netdiff is easy to deploy because it requires only a modest number of nodes and does not require active cooperation from ISPs. Realizing such a system, however, is challenging as we must aggressively reduce probing cost and ensure that the results are robust to measurement noise. We describe the techniques that Netdiff uses to address these challenges. Netdiff has been measuring eighteen backbone ISPs since February 2007. Its techniques allow it to capture an accurate view of an ISP’s performance in terms of latency within fifteen minutes. Using Netdiff, we find that the relative performance of ISPs depends on many factors, including the geographic properties of traffic and the popularity of destinations. Thus, the detailed comparison that Netdiff provides is important for identifying ISPs that perform well for a given workload.
Ascertaining the Reality of Network Neutrality Violation in Backbone ISPs
 In Proc. 7th ACM Workshop on Hot Topics in Networks (Hotnets-VII), 2008
Cited by 12 (2 self)
On the Internet today, a growing number of QoS-sensitive network applications exist, such as VoIP, imposing more stringent requirements on ISPs beyond the basic reachability assurance. Thus, the demand on ISPs for Service Level Agreements (SLAs) with better guarantees is increasing. However, despite overprovisioning in core ISP networks, resource contention still exists, leading to congestion and associated performance degradations. For example, residential broadband networks rate-limit or even block bandwidth-intensive applications such as peer-to-peer file sharing, thereby violating network neutrality. In addition, traffic associated with specific applications, such as Skype, could also be discriminated against for competitive business reasons. So far, little work has been done regarding the existence of traffic discrimination inside the core of the Internet. Due to the technical challenges and widespread impact, it seems somewhat inconceivable that ISPs are performing such fine-grained discrimination based on the application content. Our study is the first to demonstrate evidence of network neutrality violations within backbone ISPs. We used a scalable and accurate monitoring system, NVLens, to detect traffic discrimination based on various factors such as application types and previous-hop and next-hop ASes. We discuss the implications of such discrimination and how users can counter such unfair practices.
Distributed and Parallel Algorithms for Weighted Vertex Cover . . .
, 2009
Cited by 12 (4 self)
The paper presents distributed and parallel δ-approximation algorithms for covering problems, where δ is the maximum number of variables on which any constraint depends (for example, δ = 2 for vertex cover). Specific results include the following.
• For weighted vertex cover, the first distributed 2-approximation algorithm taking O(log n) rounds and the first parallel 2-approximation algorithm in RNC. The algorithms generalize to covering mixed integer linear programs (CMIP) with two variables per constraint (δ = 2).
• For any covering problem with monotone constraints and submodular cost, a distributed δ-approximation algorithm taking O(log² C) rounds, where C is the number of constraints. (Special cases include CMIP, facility location, and probabilistic (two-stage) variants of these problems.)
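The matching sequential baseline for this guarantee is the classic Bar-Yehuda–Even local-ratio method, which achieves the same factor 2 for weighted vertex cover; a minimal sketch (this is the textbook sequential algorithm, not the paper's distributed one):

```python
def vertex_cover_local_ratio(edges, weight):
    """Bar-Yehuda/Even local-ratio 2-approximation for weighted vertex cover.

    edges  : list of (u, v) pairs
    weight : dict vertex -> nonnegative weight (the input is not mutated)
    """
    w = dict(weight)  # residual weights
    for u, v in edges:
        # pay the smaller residual weight on both endpoints of the edge
        delta = min(w[u], w[v])
        w[u] -= delta
        w[v] -= delta
    # every edge drove some endpoint's residual to zero, so the
    # zero-residual vertices form a cover of weight at most 2 * OPT
    return {v for v in w if w[v] == 0}
```

Each edge's payment δ is charged to both endpoints, while any cover must pay δ at least once per edge, which is where the factor 2 comes from.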
On Column-restricted and Priority Covering Integer Programs
Cited by 12 (1 self)
In a column-restricted covering integer program (CCIP), all the nonzero entries of any column of the constraint matrix are equal. Such programs capture capacitated versions of covering problems. In this paper, we study the approximability of CCIPs, in particular, their relation to the integrality gaps of the underlying 0,1-CIP. If the underlying 0,1-CIP has an integrality gap O(γ), and assuming that the integrality gap of the priority version of the 0,1-CIP is O(ω), we give a factor O(γ + ω) approximation algorithm for the CCIP. Priority versions of 0,1-CIPs (PCIPs) naturally capture quality-of-service type constraints in a covering problem. We investigate priority versions of the line cover (PLC) and the (rooted) tree cover (PTC) problems. Apart from being natural objects to study, these problems fall in a class of fundamental geometric covering problems. We bound the integrality gap of certain classes of this PCIP by a constant. Algorithmically, we give a polynomial-time exact algorithm for PLC, show that the PTC problem is APX-hard, and give a factor 2-approximation algorithm for it.
Weighted Capacitated, Priority, and Geometric Set Cover via Improved Quasi-Uniform Sampling
, 2011
Cited by 12 (1 self)
The minimum-weight set cover problem is widely known to be O(log n)-approximable, with no improvement possible in the general case. We take the approach of exploiting problem structure to achieve better results, by providing a geometry-inspired algorithm whose approximation guarantee depends solely on an instance-specific combinatorial property known as shallow cell complexity (SCC). Roughly speaking, a set cover instance has low SCC if any column-induced submatrix of the corresponding element-set incidence matrix has few distinct rows. By adapting and improving Varadarajan’s recent quasi-uniform random sampling method for weighted geometric covering problems, we obtain strong approximation algorithms for a structurally rich class of weighted covering problems with low SCC. We also show how to derandomize our algorithm. Our main result has several immediate consequences. Among them, we settle an open question of Chakrabarty et al. [8] by showing that weighted instances of the capacitated covering problem with underlying network structure have O(1)-approximations. Additionally, our improvements to Varadarajan’s sampling framework yield several new results for weighted geometric set cover, hitting set, and dominating set problems. In particular, for weighted covering problems exhibiting linear (or near-linear) union complexity, we obtain approximability results agreeing with those known for the unweighted case. For example, we obtain a constant approximation for the weighted disk cover problem, improving upon the 2^{O(log* n)}-approximation known prior to our work and matching the O(1)-approximation known for the unweighted variant.
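For contrast with the structure-dependent bounds above, the general-case O(log n) guarantee comes from the classic greedy algorithm, sketched here for the weighted setting (function and variable names are illustrative, and the sets are assumed to jointly cover the universe):

```python
def greedy_weighted_set_cover(universe, sets, cost):
    """Classic greedy set cover: repeatedly pick the set with the best
    cost per newly covered element; gives an H_n = O(log n) approximation.

    universe : iterable of elements to cover
    sets     : dict name -> set of elements
    cost     : dict name -> positive weight
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # minimize cost / (number of still-uncovered elements gained)
        name = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: cost[s] / len(sets[s] & uncovered))
        chosen.append(name)
        uncovered -= sets[name]
    return chosen
```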
Detecting Traffic Differentiation in Backbone ISPs with NetPolice
Cited by 11 (0 self)
Traffic differentiation is known to occur at the edge of the Internet in broadband ISPs and wireless carriers [13, 2]. The ability to detect traffic differentiation is essential for customers to develop effective strategies for improving their application performance. We build a system, called NetPolice, that enables detection of content- and routing-based differentiation in backbone ISPs. NetPolice is easy to deploy since it only relies on loss measurements launched from end hosts. The key challenges in building NetPolice include selecting an appropriate set of probing destinations and ensuring the robustness of detection results to measurement noise. We use NetPolice to study 18 large ISPs spanning 3 major continents over 10 weeks in 2008. Our work provides concrete evidence of traffic differentiation based on application types and neighbor ASes. We identify 4 ISPs that exhibit a large degree of differentiation on 4 applications and 10 ISPs that perform previous-AS-hop-based differentiation, resulting in up to 5% actual loss rate differences. The significance of the differences increases with network load. Some ISPs simply differentiate traffic based on port numbers irrespective of packet payload, and the differentiation policies may only be partially deployed within their networks. We also find strong correlation between performance differences and Type-of-Service value differences in the traffic.
Unsplittable flow in paths and trees and column-restricted packing integer programs
 In Proceedings, International Workshop on Approximation Algorithms for Combinatorial Optimization Problems
, 2009
Cited by 11 (0 self)
We consider the unsplittable flow problem (UFP) and the closely related column-restricted packing integer programs (CPIPs). In UFP we are given an edge-capacitated graph G = (V, E) and k request pairs R_1, ..., R_k, where each R_i consists of a source-destination pair (s_i, t_i), a demand d_i and a weight w_i. The goal is to find a maximum-weight subset of requests that can be routed unsplittably in G. Most previous work on UFP has focused on the no-bottleneck case, in which the maximum demand of the requests is at most the smallest edge capacity. Inspired by the recent work of Bansal et al. [3] on UFP on a path without the above assumption, we consider UFP on paths as well as trees. We give a simple O(log n)-approximation for UFP on trees when all weights are identical; this yields an O(log² n)-approximation for the weighted case. These are the first nontrivial approximations for UFP on trees. We develop an LP relaxation for UFP on paths that has an integrality gap of O(log² n); previously there was no relaxation with o(n) gap. We also consider UFP in general graphs and CPIPs without the no-bottleneck assumption and obtain new and useful results.
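To make the problem definition concrete, here is a brute-force sketch of UFP on a path (exponential-time subset enumeration, useful only for tiny instances; it is not one of the paper's algorithms):

```python
from itertools import chain, combinations

def ufp_path_bruteforce(capacities, requests):
    """Exact unsplittable flow on a path, by subset enumeration.

    capacities : capacities[e] is the capacity of edge e = (e, e+1)
    requests   : list of (s, t, demand, weight) with s < t; request i
                 occupies every edge e with s <= e < t
    Returns (best total weight, indices of the chosen requests).
    """
    n = len(requests)
    best_w, best_sel = 0, ()
    all_subsets = chain.from_iterable(
        combinations(range(n), r) for r in range(n + 1))
    for subset in all_subsets:
        load = [0] * len(capacities)
        for i in subset:
            s, t, d, _ = requests[i]
            for e in range(s, t):
                load[e] += d
        # keep the subset only if no edge capacity is exceeded
        if all(load[e] <= capacities[e] for e in range(len(capacities))):
            w = sum(requests[i][3] for i in subset)
            if w > best_w:
                best_w, best_sel = w, subset
    return best_w, best_sel
```

Note how a single high-demand request can saturate every edge it crosses, which is exactly the interaction the no-bottleneck assumption rules out.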
Approximability of sparse integer programs
 In Proc. 17th ESA
, 2009
Cited by 10 (1 self)
The main focus of this paper is a pair of new approximation algorithms for sparse integer programs. First, for covering integer programs {min cx : Ax ≥ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ε > 0, unless P = NP this ratio cannot be improved to k − 1 − ε, and under the unique games conjecture this ratio cannot be improved to k − ε. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max cx : Ax ≤ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per column, we give a 2^k·k²-approximation algorithm. This is the first polynomial-time approximation algorithm for this problem with approximation ratio depending only on k, for any k > 1. Our approach starts from iterated LP relaxation, and then uses probabilistic and greedy methods to recover a feasible solution. Note added after publication: this version includes subsequent developments: an O(k²)-approximation for the latter problem using the iterated rounding framework, and several literature reference updates, including an O(k)-approximation for the same problem by Bansal et al.
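For intuition on why row sparsity helps, the 0/1 special case admits a simple threshold rounding of a fractional LP solution. This sketch only illustrates that special case (the paper's k-approximation for general CIPs additionally needs knapsack-cover inequalities) and assumes a fractional feasible solution is supplied externally:

```python
def round_sparse_cover(rows, x_frac, k):
    """Threshold rounding for a 0/1 covering IP with at most k variables
    per constraint: given fractional x with sum_{j in row} x_j >= 1 for
    every row, set x_j = 1 whenever x_j >= 1/k. Since each row has at
    most k variables summing to at least 1, some variable in the row is
    >= 1/k, so the rounded solution stays feasible; each x_j grows by at
    most a factor k, so the cost does too.
    """
    x_int = [1 if xj >= 1.0 / k else 0 for xj in x_frac]
    assert all(sum(x_int[j] for j in row) >= 1 for row in rows)
    return x_int
```

With k = 2 this is exactly the textbook LP-rounding 2-approximation for vertex cover, where each edge is a row with two variables.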