Results 1–9 of 9
Additive Spanners and (α, β)-Spanners
Abstract
Cited by 14 (3 self)
An (α, β)-spanner of an unweighted graph G is a subgraph H that distorts distances in G up to a multiplicative factor of α and an additive term β. It is well known that any graph contains a (multiplicative) (2k − 1, 0)-spanner of size O(n^{1+1/k}) and an (additive) (1, 2)-spanner of size O(n^{3/2}). However, no other additive spanners are known to exist. In this paper we develop a couple of new techniques for constructing (α, β)-spanners. Our first result is an additive (1, 6)-spanner of size O(n^{4/3}). The construction algorithm can be understood as an economical agent that assigns costs and values to paths in the graph, purchasing affordable paths and ignoring expensive ones, which are intuitively well-approximated by paths already purchased. We show that this path-buying algorithm can be parameterized in different ways to yield other sparseness-distortion tradeoffs. Our second result addresses the problem of which (α, β)-spanners can be computed efficiently, ideally in linear time. We show that for any k, a (k, k − 1)-spanner of size O(kn^{1+1/k}) can be found in linear time, and further, that in a distributed network the algorithm terminates in a constant number of rounds. Previous spanner constructions with similar performance had roughly twice the multiplicative distortion.
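The multiplicative (2k − 1, 0)-spanner mentioned in this abstract has a well-known greedy construction (not the paper's path-buying algorithm): scan the edges and keep an edge only if the spanner built so far does not already connect its endpoints within distance 2k − 1. A minimal Python sketch under that classical construction; function names are illustrative:

```python
from collections import deque

def bfs_dist(adj, src, dst, cutoff):
    # BFS in the partial spanner up to depth `cutoff`;
    # returns the distance if it is <= cutoff, else infinity.
    if src == dst:
        return 0
    seen = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        d = seen[u]
        if d == cutoff:
            continue
        for v in adj[u]:
            if v not in seen:
                if v == dst:
                    return d + 1
                seen[v] = d + 1
                q.append(v)
    return float("inf")

def greedy_spanner(n, edges, k):
    # Classical greedy construction: add edge (u, v) only if the
    # current spanner distance between u and v exceeds 2k - 1.
    # The result is a (2k - 1, 0)-spanner with O(n^{1+1/k}) edges.
    adj = {u: [] for u in range(n)}
    H = []
    for u, v in edges:
        if bfs_dist(adj, u, v, 2 * k - 1) > 2 * k - 1:
            H.append((u, v))
            adj[u].append(v)
            adj[v].append(u)
    return H
```

On the complete graph K4 with k = 2 (stretch 3), the greedy scan keeps only a 3-edge star: every skipped edge has its endpoints at distance 2 through the hub.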
Combinatorial algorithms for nearest neighbors, near-duplicates and small-world design
 In Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA’09
, 2009
Abstract
Cited by 13 (1 self)
We study the so-called combinatorial framework for algorithmic problems in similarity spaces. Namely, the input dataset is represented by a comparison oracle that, given three points x, y, y′, answers whether y or y′ is closer to x. We assume that the similarity order of the dataset satisfies the four variations of the following disorder inequality: if x is the a-th most similar object to y and y is the b-th most similar object to z, then x is among the D(a + b) most similar objects to z, where D is a relatively small disorder constant. Though the oracle gives much less information than the standard general metric space model, where distance values are given, one can still design very efficient algorithms for various fundamental computational tasks. For nearest neighbor search we present a deterministic and exact algorithm with almost linear time and space complexity of preprocessing, and near-logarithmic time complexity of search. Then, for near-duplicate detection we present the first known deterministic algorithm that requires just near-linear time plus time proportional to the size of the output. Finally, we show that for any dataset satisfying the disorder inequality a visibility graph can be constructed: all out-degrees are near-logarithmic and greedy routing deterministically converges to the nearest neighbor of a target in a logarithmic number of steps. The latter result is the first known workaround for Navarro’s impossibility of generalizing Delaunay graphs. The technical contribution of the paper consists of handling “false positives” in data structures and an algorithmic technique, the upside-down filter.
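The comparison-oracle model is easy to simulate. A minimal Python sketch (names are illustrative, not from the paper): the search algorithm sees only oracle answers, never distance values. The naive tournament below makes O(n) oracle calls; the paper's data structures achieve near-logarithmic search after near-linear preprocessing.

```python
def make_oracle(points, dist):
    # Comparison oracle for the combinatorial framework: given indices
    # x, y, y', report which of y, y' is closer to x. Raw distance
    # values are hidden from the search algorithm.
    def oracle(x, y, yp):
        return y if dist(points[x], points[y]) <= dist(points[x], points[yp]) else yp
    return oracle

def nearest_neighbor(oracle, candidates, query):
    # Naive tournament scan using only oracle comparisons:
    # keep the current best and challenge it with each candidate.
    best = candidates[0]
    for c in candidates[1:]:
        best = oracle(query, best, c)
    return best
```

For example, with points on a line and absolute-difference distance, querying from the point 9.0 against candidates {0.0, 5.0, 2.0} returns the index of 5.0.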
Local-global tradeoffs in metric embeddings
 In Proceedings of the Forty-Eighth Annual IEEE Symposium on Foundations of Computer Science
, 2007
Abstract
Cited by 9 (1 self)
Suppose that every k points in an n-point metric space X are D-distortion embeddable into ℓ1. We give upper and lower bounds on the distortion required to embed the entire space X into ℓ1. This is a natural mathematical question and is also motivated by the study of relaxations obtained by lift-and-project methods for graph partitioning problems. In this setting, we show that X can be embedded into ℓ1 with distortion O(D × log(n/k)). Moreover, we give a lower bound showing that this result is tight if D is bounded away from 1. For D = 1 + δ we give a lower bound of Ω(log(n/k) / log(1/δ)); and for D = 1, we give a lower bound of Ω(log n / (log k + log log n)). Our bounds significantly improve on the results of Arora, Lovász, Newman, Rabani, Rabinovich and Vempala, who initiated a study of these questions.
Compact Routing with Slack in Low Doubling Dimension
Abstract
Cited by 9 (1 self)
We consider the problem of compact routing with slack in networks of low doubling dimension. Namely, we seek name-independent routing schemes with (1 + ɛ) stretch and polylogarithmic storage at each node: since an existing lower bound precludes such a scheme, we relax our guarantees to allow for (i) a small fraction of nodes to have large storage, say O(n log n) bits, or (ii) a small fraction of source-destination pairs to have larger, but still constant, stretch. In this paper, given any constant ɛ ∈ (0, 1), any δ ∈ Θ(1/polylog n) and any connected edge-weighted undirected graph G with doubling dimension α ∈ O(log log n) and arbitrary node names, we present
Efficient computation of distance sketches in distributed networks
 In Proc. 24th ACM Symp. on Parallelism in Algorithms and Architectures
, 2012
On Randomized Representations of Graphs Using Short Labels
 In "Proc. 21st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)", 2009, p. 131–137
Abstract
Cited by 1 (0 self)
Informative labeling schemes consist in labeling the nodes of graphs so that queries regarding any two nodes (e.g., are the two nodes adjacent?) can be answered by inspecting merely the labels of the corresponding nodes. Typically, the main goal of such schemes is to minimize the label size, that is, the maximum number of bits stored in a label. This concept was introduced by Kannan et al. [STOC’88] and was illustrated by giving very simple and elegant labeling schemes for supporting adjacency and ancestry queries in n-node trees; both these schemes have label size 2 log n. Motivated by relations between such schemes and other important notions such as universal graphs, extensive research has been done by the community to further reduce the label sizes of such schemes as much as possible. The current state-of-the-art adjacency labeling scheme for trees has label size log n + O(log* n) by Alstrup and Rauhe [FOCS’02], and the best known ancestry scheme for (rooted) trees has label size log n + O(√log n) by Abiteboul et al. [SICOMP 2006]. This paper aims at investigating the above notions from a probabilistic point of view. Informally, the goal is to investigate whether the label sizes can be improved if one allows for some probability of mistake when answering a query, and, if so, by how much. For that, we first present a model for probabilistic labeling schemes, and then construct various probabilistic one-sided error schemes for the adjacency and ancestry problems on trees. Some of our schemes significantly improve the bound on the label size of the corresponding deterministic schemes, while the others are matched with appropriate lower bounds showing that, for the resulting guarantees of success, one cannot expect to do much better in terms of label size.
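The 2 log n adjacency scheme for trees in the style of Kannan et al. fits in a few lines: label each node with the pair (own id, parent id); two nodes are adjacent exactly when one is the other's parent. A minimal Python sketch (function names are illustrative):

```python
def adjacency_labels(parent):
    # Each node stores (own id, parent id) -- about 2 log n bits.
    # `parent[v]` is v's parent in the rooted tree; the root points to itself.
    return {v: (v, p) for v, p in parent.items()}

def adjacent(label_u, label_v):
    # Tree nodes are adjacent iff one is the parent of the other.
    # The u != v guard keeps the root's self-parent from matching itself.
    u, pu = label_u
    v, pv = label_v
    return (pu == v or pv == u) and u != v
```

The query inspects only the two labels, never the tree itself, which is exactly the point of a labeling scheme.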
Algorithms and Models for Problems in Networking
, 2010
Abstract
Many interesting theoretical problems arise from computer networks. In this thesis we will consider three of them: algorithms and data structures for problems involving distances in networks (in particular compact routing schemes, distance labels, and distance oracles), algorithms for wireless capacity and scheduling problems, and algorithms for optimizing iBGP overlays in autonomous systems on the Internet. While at first glance these problems may seem extremely different, they are similar in that they all attempt to look at a previously studied networking problem in new, more realistic frameworks. In other words, they are all as much about new models for old problems as they are about new algorithms. In this thesis we will define these models, design algorithms for them, and prove hardness and impossibility results for these three types of problems.
Prioritized Metric Structures and Embedding
Abstract
Metric data structures (distance oracles, distance labeling schemes, routing schemes) and low-distortion embeddings provide a powerful algorithmic methodology, which has been successfully applied for approximation algorithms [LLR95], online algorithms [BBMN11], distributed algorithms [KKM+12] and for computing sparsifiers [ST04]. However, this methodology appears to have a limitation: the worst-case performance inherently depends on the cardinality of the metric, and one could not specify in advance which vertices/points should enjoy a better service (i.e., stretch/distortion, label size/dimension) than that given by the worst-case guarantee. In this paper we alleviate this limitation by devising a suite of prioritized metric data structures and embeddings. We show that given a priority ranking (x1, x2, ..., xn) of the graph vertices (respectively, metric points) one can devise a metric data structure (respectively, embedding) in which the stretch (resp., distortion) incurred by any pair containing a vertex xj will depend on the rank j of the vertex. We also show that other important parameters, such as the label size and (in some sense) the dimension, may depend only on j. In some of our metric data structures (resp., embeddings) we achieve both prioritized stretch (resp., distortion) and label size (resp., dimension) simultaneously. The worst-case performance of our metric data structures and embeddings is typically asymptotically no worse than that of their non-prioritized counterparts.
Volume in General Metric Spaces
, 2014
Abstract
A central question in the geometry of finite metric spaces is how well an arbitrary metric space can be “faithfully preserved” by a mapping into Euclidean space. In this paper we present an algorithmic embedding which obtains a new strong measure of faithful preservation: not only does it (approximately) preserve distances between pairs of points, but also the volume of any set of k points. Such embeddings are known as volume-preserving embeddings. We provide the first volume-preserving embedding that obtains constant average volume distortion for sets of any fixed size. Moreover, our embedding provides constant bounds on all bounded moments of the volume distortion while maintaining the best possible worst-case volume distortion. Feige, in his seminal work on volume-preserving embeddings, defined the volume of a set S = {v1, ..., vk} of points in a general metric space: the product of the distances from vi to {v1, ..., vi−1}, normalized by 1/(k−1)!, where the ordering of the points is that given by Prim’s minimum spanning tree algorithm. Feige also related this notion to the maximal Euclidean volume that a Lipschitz embedding of S into Euclidean space can achieve. Syntactically this definition is similar to the computation of volume in Euclidean spaces, which however is invariant to the order in which the points are taken. We show that a similar robustness property holds for Feige’s definition: the use of any other order in the product affects volume^{1/(k−1)} by only a constant factor. Our robustness result is of independent interest as it presents a new competitive analysis for the greedy algorithm on a variant of the online Steiner tree problem where the cost of buying an edge is logarithmic in its length. This robustness property allows us to obtain our results on volume-preserving embeddings.
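Feige's definition is directly computable: run Prim's algorithm, multiply each newly added point's distance to the prefix chosen so far, and normalize by 1/(k−1)!. A minimal Python sketch (the function name is illustrative, and Prim is started at the first point for simplicity):

```python
import math

def feige_volume(points, dist):
    # Feige's volume of S = {v1, ..., vk} in a general metric space:
    # order the points by Prim's minimum spanning tree algorithm and take
    # the product of each point's distance to the previously chosen prefix,
    # normalized by 1/(k-1)!.
    k = len(points)
    d = {j: dist(points[0], points[j]) for j in range(1, k)}  # start Prim at v1
    prod = 1.0
    while d:
        j = min(d, key=d.get)    # nearest remaining point to the prefix
        prod *= d[j]             # distance from v_i to {v1, ..., v_{i-1}}
        del d[j]
        for m in d:              # relax distances to the grown prefix
            d[m] = min(d[m], dist(points[j], points[m]))
    return prod / math.factorial(k - 1)
```

For three collinear points {0, 1, 3} with absolute-difference distance, Prim adds 1 (at distance 1) and then 3 (at distance 2 from the point 1), giving a product of 2 and a normalized volume of 2/2! = 1. By the paper's robustness result, using any other order in the product changes volume^{1/(k−1)} by at most a constant factor.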