Results 1–10 of 45
iPlane: An information plane for distributed services
 In OSDI 2006
Cited by 196 (22 self)
Abstract — In this paper, we present the design, implementation, and evaluation of the iPlane, a scalable service providing accurate predictions of Internet path performance for emerging overlay services. Unlike the more common black-box latency prediction techniques in use today, the iPlane builds an explanatory model of the Internet. We predict end-to-end performance by composing measured performance of segments of known Internet paths. This method allows us to accurately and efficiently predict latency, bandwidth, capacity, and loss rates between arbitrary Internet hosts. We demonstrate the feasibility and utility of the iPlane service by applying it to several representative overlay services in use today: content distribution, swarming peer-to-peer file sharing, and voice-over-IP. In each case, we observe that using iPlane’s predictions leads to a significant improvement in end-user performance.
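The composition method this abstract describes can be sketched in a few lines: end-to-end latency is the sum of per-segment latencies, while bandwidth is bounded by the narrowest segment. This is a minimal illustration with hypothetical segment values, not the iPlane implementation (which also models capacity and loss rates).

```python
# Sketch of path composition: end-to-end latency is the sum of
# per-segment latencies, and bandwidth is limited by the narrowest
# segment. Segment values below are hypothetical.
def compose_path(segments):
    """segments: list of (latency_ms, bandwidth_mbps) for each measured segment."""
    latency = sum(lat for lat, _ in segments)
    bandwidth = min(bw for _, bw in segments)
    return latency, bandwidth

# Three measured segments of a known Internet path (hypothetical values)
lat, bw = compose_path([(12.0, 100.0), (35.0, 40.0), (8.0, 1000.0)])
# lat == 55.0 (ms), bw == 40.0 (Mbps)
```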
On the accuracy of embeddings for Internet coordinate systems
 In Proceedings of the Internet Measurement Conference, ACM, 2005
Cited by 89 (6 self)
Internet coordinate systems embed Round-Trip-Times (RTTs) between Internet nodes into some geometric space so that unmeasured RTTs can be estimated using distance computation in that space. If accurate, such techniques would allow us to predict Internet RTTs without extensive measurements. The published techniques appear to work very well when accuracy is measured using metrics such as absolute relative error. Our main observation is that absolute relative error tells us very little about the quality of an embedding as experienced by a user. We define several new accuracy metrics that attempt to quantify various aspects of user-oriented quality. Evaluation of current Internet coordinate systems using our new metrics indicates that their quality is not as high as that suggested by the use of absolute relative error.
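For reference, the absolute relative error metric this abstract critiques is |predicted − measured| / measured, averaged over node pairs. A minimal sketch with hypothetical RTT values:

```python
# Absolute relative error over node pairs -- the accuracy metric whose
# user-level meaning the paper questions. RTT values are hypothetical.
def abs_rel_error(measured, predicted):
    return sum(abs(p - m) / m for m, p in zip(measured, predicted)) / len(measured)

measured_rtts = [50.0, 120.0, 30.0]   # ms, measured directly
predicted_rtts = [55.0, 100.0, 33.0]  # ms, from embedding distances
err = abs_rel_error(measured_rtts, predicted_rtts)  # ~0.122
```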
Network coordinates in the wild
 In Proceedings of USENIX NSDI ’07, 2007
Cited by 61 (2 self)
Network coordinates provide a mechanism for selecting and placing servers efficiently in a large distributed system. This approach works well as long as the coordinates continue to accurately reflect network topology. We conducted a long-term study of a subset of a million-plus node coordinate system and found that it exhibited some of the problems for which network coordinates are frequently criticized, for example, inaccuracy and fragility in the presence of violations of the triangle inequality. Fortunately, we show that several simple techniques remedy many of these problems. Using the Azureus BitTorrent network as our testbed, we show that live, large-scale network coordinate systems behave differently than their tame PlanetLab and simulation-based counterparts. We find higher relative errors, more triangle inequality violations, and higher churn. We present and evaluate a number of techniques that, when applied to Azureus, efficiently produce accurate and stable network coordinates.
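For illustration, a simplified 2-D Vivaldi-style spring update (constant timestep, no height vector or the adaptive weighting such systems use in practice), showing how a coordinate adapts toward a measured RTT; this is a sketch, not the Azureus implementation.

```python
import math

# Simplified Vivaldi-style update: pull node i's coordinate toward
# consistency with the measured RTT to node j. delta is a fixed step
# size; real deployments adapt it per-node.
def vivaldi_update(xi, xj, rtt, delta=0.25):
    dx, dy = xi[0] - xj[0], xi[1] - xj[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        dx, dy, dist = 1.0, 0.0, 1.0  # arbitrary direction for co-located nodes
    err = rtt - dist                   # positive: nodes sit too close in the space
    ux, uy = dx / dist, dy / dist      # unit vector from j toward i
    return (xi[0] + delta * err * ux, xi[1] + delta * err * uy)

# Node at (1, 0) is 1 ms from the origin in the space, but the measured
# RTT is 2 ms, so the update pushes it outward to (1.25, 0).
vivaldi_update((1.0, 0.0), (0.0, 0.0), rtt=2.0)
```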
Constraint-based geolocation of Internet hosts
 IEEE/ACM Transactions on Networking
Cited by 57 (7 self)
Geolocation of Internet hosts enables a diverse and interesting new class of location-aware applications. Previous measurement-based approaches use reference hosts, called landmarks, with a well-known geographic location to provide the location estimation of a target host. This leads to a discrete space of answers, limiting the number of possible location estimates to the number of adopted landmarks. In contrast, we propose Constraint-Based Geolocation (CBG), which infers the geographic location of Internet hosts using multilateration with distance constraints, thus establishing a continuous space of answers instead of a discrete one. CBG accurately transforms delay measurements to geographic distance constraints, and then uses multilateration to infer the geolocation of the target host. Our experimental results show that CBG outperforms the previous measurement-based geolocation techniques. Moreover, in contrast to previous approaches, our method is able to assign a confidence region to each given location estimate. This allows a location-aware application to assess whether the location estimate is sufficiently accurate for its needs.
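The multilateration step can be sketched as follows: each landmark's delay becomes a geographic distance upper bound, and a candidate location is feasible only if it lies within every landmark's constraint disk. The planar coordinates and the flat 100 km/ms one-way conversion are simplifying assumptions; CBG calibrates per-landmark "bestlines" instead.

```python
import math

# Assumed one-way distance per millisecond of delay; CBG derives a
# tighter per-landmark bound from calibration data.
KM_PER_MS_ONE_WAY = 100.0

def feasible(candidate, landmarks):
    """landmarks: list of ((x_km, y_km), rtt_ms) reference hosts.
    True if candidate lies inside every landmark's distance-constraint disk."""
    for (lx, ly), rtt in landmarks:
        bound = (rtt / 2.0) * KM_PER_MS_ONE_WAY  # one-way distance bound
        if math.hypot(candidate[0] - lx, candidate[1] - ly) > bound:
            return False
    return True

# Two hypothetical landmarks with measured RTTs of 10 ms and 8 ms
lms = [((0.0, 0.0), 10.0), ((800.0, 0.0), 8.0)]
feasible((450.0, 50.0), lms)  # inside both disks
feasible((0.0, 600.0), lms)   # outside the first disk
```

The intersection of all feasible points forms the continuous confidence region the abstract mentions.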
Towards Network Triangle Inequality Violation Aware Distributed Systems
 2007
Cited by 41 (2 self)
Many distributed systems rely on neighbor selection mechanisms to create overlay structures that have good network performance. These neighbor selection mechanisms often assume the triangle inequality holds for Internet delays. However, the reality is that the triangle inequality is violated by Internet delays. This phenomenon creates a strange environment that confuses neighbor selection mechanisms. This paper investigates the properties of triangle inequality violation (TIV) in Internet delays, the impacts of TIV on representative neighbor selection mechanisms, specifically Vivaldi and Meridian, and avenues to reduce these impacts. We propose a TIV alert mechanism that can inform neighbor selection mechanisms to avoid the pitfalls caused by TIVs and improve their effectiveness.
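The violations this paper studies can be detected directly from a delay matrix: an ordered triple (a, b, c) violates the triangle inequality when the direct delay from a to c exceeds the detour through b. A sketch with hypothetical delays:

```python
from itertools import permutations

def tiv_triples(delay):
    """All ordered triples (a, b, c) where the detour a->b->c beats the direct a->c."""
    n = len(delay)
    return [(a, b, c) for a, b, c in permutations(range(n), 3)
            if delay[a][c] > delay[a][b] + delay[b][c]]

# Hypothetical symmetric delays (ms): the direct path 0->2 takes 100 ms,
# slower than the 30 + 40 ms detour through host 1.
d = [[0, 30, 100],
     [30, 0, 40],
     [100, 40, 0]]
violations = tiv_triples(d)  # [(0, 1, 2), (2, 1, 0)]
```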
A hierarchical approach to internet distance prediction
 In Proc. of IEEE ICDCS, 2006
Cited by 31 (0 self)
Internet distance prediction gives pairwise latency information with limited measurements. Recent studies have revealed that the quality of existing prediction mechanisms from the application perspective is short of satisfactory. In this paper, we explore the root causes and remedies for this problem. Our experience with different landmark selection schemes shows that although selecting nearby landmarks can increase the prediction accuracy for short distances, it can cause the prediction accuracy for longer distances to degrade. Such uneven prediction quality significantly impacts application performance. Instead of trying to select the landmark nodes in some “intelligent” fashion, we propose a hierarchical prediction approach with straightforward landmark selection. Hierarchical prediction utilizes multiple coordinate sets at multiple distance scales, with the “right” scale being chosen for prediction each time. Experiments with Internet measurement datasets show that this hierarchical approach is extremely promising for increasing the accuracy of network distance prediction.
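The scale-selection step can be sketched as follows; the thresholds are hypothetical placeholders, and a real system would also maintain a separate coordinate set per scale.

```python
# Hypothetical distance scales (ms): local, regional, global.
SCALES_MS = [20.0, 100.0, float("inf")]

def pick_scale(rough_rtt_ms):
    """Index of the finest scale whose radius covers a rough distance estimate."""
    for i, radius in enumerate(SCALES_MS):
        if rough_rtt_ms <= radius:
            return i

pick_scale(5.0)    # 0: predict with the local coordinate set
pick_scale(50.0)   # 1: regional
pick_scale(300.0)  # 2: global
```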
On suitability of Euclidean embedding of Internet hosts
 In Proc. SIGMETRICS, 2006
Cited by 30 (3 self)
In this paper, we investigate the suitability of embedding Internet hosts into a Euclidean space given their pairwise distances (as measured by round-trip time). Using the classical scaling and matrix perturbation theories, we first establish the (sum of the) magnitude of negative eigenvalues of the (doubly-centered, squared) distance matrix as a measure of suitability of Euclidean embedding. We then show that the distance matrix among Internet hosts contains negative eigenvalues of large magnitude, implying that embedding the Internet hosts in a Euclidean space would incur relatively large errors. Motivated by earlier studies, we demonstrate that the inaccuracy of Euclidean embedding is caused by a large degree of triangle inequality violation (TIV) in the Internet distances, which leads to negative eigenvalues of large magnitude. Moreover, we show that the TIVs are likely to occur locally; hence, the distances among these close-by hosts cannot be estimated accurately using a global Euclidean embedding. In addition, increasing the dimension of embedding does not reduce the embedding errors. Based on these insights, we propose a new hybrid model for embedding the network nodes using only a 2-dimensional Euclidean coordinate system and small error adjustment terms. We show that the accuracy of the proposed embedding technique is as good as, if not better than, that of a 7-dimensional Euclidean embedding.
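The suitability test this abstract describes can be sketched without linear-algebra dependencies: double-center the squared distance matrix to obtain B, then exhibit a mean-zero vector x with xᵀBx < 0, which witnesses a negative eigenvalue and hence non-embeddability. The delay matrix below is a hypothetical TIV example, not Internet data.

```python
def double_center(D):
    """B = -1/2 * J * D^2 * J for the centering matrix J = I - (1/n) * 11^T."""
    n = len(D)
    sq = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in sq]
    grand = sum(row) / n
    return [[-0.5 * (sq[i][j] - row[i] - row[j] + grand) for j in range(n)]
            for i in range(n)]

def quad_form(B, x):
    """x^T B x; a negative value for a mean-zero x witnesses a negative eigenvalue."""
    n = len(B)
    return sum(x[i] * B[i][j] * x[j] for i in range(n) for j in range(n))

# Hypothetical delays with a TIV: d(0,2) = 100 > d(0,1) + d(1,2) = 70
d_tiv = [[0.0, 30.0, 100.0],
         [30.0, 0.0, 40.0],
         [100.0, 40.0, 0.0]]
B = double_center(d_tiv)
quad_form(B, (1.0, -2.0, 1.0))  # negative: no Euclidean embedding fits these delays
```

A distance matrix that does embed (e.g. three collinear points) yields a positive semi-definite B, so every such quadratic form is non-negative.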
Matchmaking for online games and other latency-sensitive P2P systems
 In SIGCOMM, 2009
Cited by 27 (2 self)
The latency between machines on the Internet can dramatically affect users’ experience for many distributed applications. Particularly, in multiplayer online games, players seek to cluster themselves so that those in the same session have low latency to each other. A system that predicts latencies between machine pairs allows such matchmaking to consider many more machine pairs than can be probed in a scalable fashion while users are waiting. Using a far-reaching trace of latencies between players on over 3.5 million game consoles, we designed Htrae, a latency prediction system for game matchmaking scenarios. One novel feature of Htrae is its synthesis of geolocation with a network coordinate system. It uses geolocation to select reasonable initial network coordinates for new machines joining the system, allowing it to converge more quickly than standard network coordinate systems and produce substantially lower prediction error than state-of-the-art latency prediction systems. For instance, it produces 90th percentile errors less than half those of iPlane and Pyxida. Our design is general enough to make it a good fit for other latency-sensitive peer-to-peer applications besides game matchmaking.
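The geolocation-seeding idea can be sketched as follows; the equirectangular projection and the 100 km/ms scale factor are illustrative assumptions, not Htrae's actual mapping.

```python
import math

# Map a geographic position to an initial 2-D network coordinate (in ms)
# for a newly joining node, so coordinate refinement starts near the
# truth instead of from the origin. Scale factors are assumptions.
KM_PER_DEG = 111.0   # approx km per degree of latitude
KM_PER_MS = 100.0    # assumed one-way propagation distance per ms

def initial_coordinate(lat_deg, lon_deg):
    x_km = lon_deg * KM_PER_DEG * math.cos(math.radians(lat_deg))
    y_km = lat_deg * KM_PER_DEG
    return (x_km / KM_PER_MS, y_km / KM_PER_MS)

initial_coordinate(45.0, 0.0)  # a node at 45°N starts ~50 ms "north" of the origin
```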
A structural approach to latency prediction
 In Proc. of ACM SIGCOMM Internet Measurement Conference, 2006
Cited by 26 (4 self)
Several models have been recently proposed for predicting the latency of end-to-end Internet paths. These models treat the Internet as a black box, ignoring its internal structure. While these models are simple, they can often fail systematically; for example, the most widely used models use metric embeddings that predict no benefit to detour routes even though half of all Internet routes can benefit from detours. In this paper, we adopt a structural approach that predicts path latency based on measurements of the Internet’s routing topology, PoP connectivity, and routing policy. We find that our approach outperforms Vivaldi, the most widely used black-box model. Furthermore, unlike metric embeddings, our approach successfully predicts 65% of detour routes in the Internet. The number of measurements used in our approach is comparable with that required by black-box techniques, but using traceroutes instead of pings.
Network Topologies: Inference, Modelling and Generation
 IEEE COMMUNICATIONS SURVEYS & TUTORIALS
Cited by 24 (9 self)
Accurate measurement, inference and modelling techniques are fundamental to Internet topology research. Spatial analysis of the Internet is needed to develop network planning, optimal routing algorithms and failure detection measures. A first step towards achieving such goals is the availability of network topologies at different levels of granularity, facilitating realistic simulations of new Internet systems. The main objective of this survey is to familiarize the reader with research on network topology over the past decade. We study techniques for inference, modelling and generation of the Internet topology at both router and administrative level. We also compare the mathematical models assigned to various topologies and the generation tools based on them. We conclude with a look at emerging areas of research and potential future research directions.