Results 1–10 of 40
Matchmaking for online games and other latency-sensitive P2P systems
In SIGCOMM, 2009
Cited by 26 (2 self)
ABSTRACT – The latency between machines on the Internet can dramatically affect users' experience for many distributed applications. Particularly, in multiplayer online games, players seek to cluster themselves so that those in the same session have low latency to each other. A system that predicts latencies between machine pairs allows such matchmaking to consider many more machine pairs than can be probed in a scalable fashion while users are waiting. Using a far-reaching trace of latencies between players on over 3.5 million game consoles, we designed Htrae, a latency prediction system for game matchmaking scenarios. One novel feature of Htrae is its synthesis of geolocation with a network coordinate system. It uses geolocation to select reasonable initial network coordinates for new machines joining the system, allowing it to converge more quickly than standard network coordinate systems and produce substantially lower prediction error than state-of-the-art latency prediction systems. For instance, it produces 90th percentile errors less than half those of iPlane and Pyxida. Our design is general enough to make it a good fit for other latency-sensitive peer-to-peer applications besides game matchmaking.
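Htrae's core idea, seeding a network coordinate system from geolocation, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of a Vivaldi-style spring update, and the speed-of-light scaling constant are all assumptions.

```python
import math

EARTH_RADIUS_KM = 6371.0
KM_PER_MS = 100.0  # assumed propagation speed (~2/3 c), km per ms of RTT

def geo_to_initial_coord(lat_deg, lon_deg):
    """Place a new node on a sphere scaled so that distances between
    coordinates roughly approximate round-trip latency in ms."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = EARTH_RADIUS_KM / KM_PER_MS  # sphere radius expressed in "ms"
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def vivaldi_update(coord, peer_coord, measured_rtt, delta=0.25):
    """One spring-relaxation step: move toward/away from the peer in
    proportion to the prediction error (standard Vivaldi-style)."""
    diff = [a - b for a, b in zip(coord, peer_coord)]
    dist = math.sqrt(sum(d * d for d in diff)) or 1e-9
    error = measured_rtt - dist
    unit = [d / dist for d in diff]
    return tuple(c + delta * error * u for c, u in zip(coord, unit))
```

Starting from a geolocation-derived point rather than the origin is what lets a new node skip most of the convergence phase: its first prediction is already a speed-of-light estimate.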
Scalable Link-Based Relay Selection for Anonymous Routing
Cited by 21 (8 self)
Abstract. The performance of an anonymous path can be described using many network metrics – e.g., bandwidth, latency, jitter, loss, etc. However, existing relay selection algorithms have focused exclusively on producing paths with high bandwidth. In contrast to traditional node-based path techniques in which relay selection is biased by relays' node characteristics (i.e., bandwidth), this paper presents the case for link-based path generation in which relay selection is weighted in favor of the highest performing links. Link-based relay selection supports more flexible routing, enabling anonymous paths with low latency, jitter, and loss, in addition to high bandwidth. Link-based approaches are also more secure than node-based techniques, eliminating “hotspots” in the network that attract a disproportionate amount of traffic. For example, misbehaving relays cannot advertise themselves as “low-latency” nodes to attract traffic, since latency has meaning only when measured between two endpoints. We argue that link-based path selection is practical for certain anonymity networks, and describe mechanisms for efficiently storing and disseminating link information.
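The distinction between node-based and link-based weighting can be made concrete with a small sketch. This is an illustration of the general idea only, under the assumption that each hop is drawn with probability inversely proportional to the latency of the link from the previous hop; the function names and weighting scheme are not from the paper.

```python
import random

def pick_next_hop(current, candidates, link_latency_ms, rng=random):
    """Choose the next relay with probability inversely proportional to
    the measured latency of the link from `current` to each candidate
    (link-based weighting, not per-node weighting)."""
    weights = [1.0 / link_latency_ms[(current, c)] for c in candidates]
    total = sum(weights)
    return rng.choices(candidates, weights=[w / total for w in weights])[0]

def build_path(entry, relays, link_latency_ms, hops=3):
    """Grow a path hop by hop; each step only consults links out of the
    current node, so no relay has a global 'advertised' score."""
    path, current = [entry], entry
    pool = set(relays) - {entry}
    for _ in range(hops):
        nxt = pick_next_hop(current, sorted(pool), link_latency_ms)
        path.append(nxt)
        pool.discard(nxt)
        current = nxt
    return path
```

Because a relay's attractiveness depends on the measuring endpoint, a misbehaving relay cannot unilaterally inflate its own selection probability, which is the security argument the abstract makes.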
Distributed Algorithms for Stable and Secure Network Coordinates
In IMC'08, 2008
Cited by 16 (0 self)
Since its inception, the concept of network coordinates has been proposed to solve a wide variety of problems such as overlay optimization, network routing, network localization, and network modeling. However, two practical problems significantly limit the applications of network coordinates today. First, how can network coordinates be stabilized without losing accuracy so that they can be cached by applications? Second, how can network coordinates be secured such that legitimate nodes' coordinates are not impacted by misbehaving nodes? Although these problems have been discussed extensively, solving them in decentralized network coordinate systems remains an open problem. This paper presents new distributed algorithms to solve the coordinate stability and security problems. For the stability problem, we propose an error elimination model that can achieve stability without hurting accuracy. A novel algorithm based on this model is presented. For the security problem, we show that recently proposed statistical detection mechanisms cannot achieve an acceptable level of security against even simple attacks. We propose to address the security problem in two parts. First, we show how the computation of coordinates can be protected by a customized Byzantine fault detection algorithm. Second, we adopt a triangle inequality violation detection algorithm to protect delay measurements. These algorithms can be integrated together to provide stable and secure network coordinates.
Network Distance Prediction Based on Decentralized Matrix Factorization
In Proc. of IFIP Networking, 2010
Cited by 10 (5 self)
Abstract. Network Coordinate Systems (NCS) are promising techniques to predict unknown network distances from a limited number of measurements. Most NCS algorithms are based on metric space embedding and suffer from the inability to represent distance asymmetries and Triangle Inequality Violations (TIVs). To overcome these drawbacks, we formulate the problem of network distance prediction as guessing the missing elements of a distance matrix and solve it by matrix factorization. A distinct feature of our approach, called Decentralized Matrix Factorization (DMF), is that it is fully decentralized. The factorization of the incomplete distance matrix is collaboratively and iteratively done at all nodes with each node retrieving only a small number of distance measurements. There are no special nodes such as landmarks nor a central node where the distance measurements are collected and stored. We compare DMF with two popular NCS algorithms: Vivaldi and IDES. The former is based on metric space embedding, while the latter is also based on matrix factorization but uses landmarks. Experimental results show that DMF achieves competitive accuracy with the double advantage of having no landmarks and of being able to represent distance asymmetries and TIVs.
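The factorization idea can be sketched briefly: each node i holds an "outgoing" vector x_i and an "incoming" vector y_i, and the distance from i to j is predicted as the dot product x_i · y_j. Since x_i · y_j need not equal x_j · y_i, asymmetry and TIVs are representable, which a metric embedding cannot do. The update rule below is a generic regularized gradient step, not DMF's exact algorithm; the rank, learning rate, and regularization constant are illustrative.

```python
DIM = 3  # rank of the factorization (illustrative)

def predict(x_i, y_j):
    """Predicted distance i -> j is the dot product of i's outgoing
    vector with j's incoming vector."""
    return sum(a * b for a, b in zip(x_i, y_j))

def local_update(x_i, y_i, neighbors, lr=0.01, reg=0.1):
    """One iteration at node i: move i's two vectors toward the
    measured distances to a few neighbors. Each `neighbors` entry is
    (x_j, y_j, d_ij, d_ji) obtained from node j plus two probes."""
    for (x_j, y_j, d_ij, d_ji) in neighbors:
        e_out = predict(x_i, y_j) - d_ij  # error on i -> j
        e_in = predict(x_j, y_i) - d_ji   # error on j -> i
        x_i = [a - lr * (e_out * b + reg * a) for a, b in zip(x_i, y_j)]
        y_i = [a - lr * (e_in * b + reg * a) for a, b in zip(y_i, x_j)]
    return x_i, y_i
```

Each node only needs the vectors and a handful of measurements from its neighbors, which is what makes the scheme landmark-free and fully decentralized.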
A Survey on Network Coordinates Systems, Design, and Security
Cited by 9 (2 self)
During the last decade, a new class of large-scale, globally-distributed network services and applications has emerged. These systems are flexible in the sense that they can select their communication path among a set of available ones. However, ceaselessly gathering network information such as latency to select a path is infeasible due to the large amount of measurement traffic it would generate. To overcome this issue, Network Coordinate Systems (NCS) have been proposed. An NCS allows hosts to predict latencies without performing direct measurements and, consequently, to reduce network resource consumption. In recent years, NCS opened new research fields in which the networking community has produced an impressive amount of work. We believe it is now time to stop and take stock of what has been achieved so far. In this paper, we survey the various NCS proposed as well as their intrinsic limits. In particular, we focus on security issues and solutions proposed to fix them. We also discuss potential future NCS developments, in particular how to use NCS for predicting bandwidth.
On the Internet delay space dimensionality
, 2008
Cited by 8 (0 self)
We investigate the dimensionality properties of the Internet delay space, i.e., the matrix of measured round-trip latencies between Internet hosts. Previous work on network coordinates has indicated that this matrix can be embedded, with reasonably low distortion, into a 4- to 9-dimensional Euclidean space. The application of Principal Component Analysis (PCA) reveals the same dimensionality values. Our work addresses the question: to what extent is the dimensionality an intrinsic property of the delay space, defined without reference to a host metric such as Euclidean space? Is the intrinsic dimensionality of the Internet delay space approximately equal to the dimension determined using embedding techniques or PCA? If not, what explains the discrepancy? What properties of the network contribute to its overall dimensionality? Using datasets obtained via the King [14] method, we study different measures of dimensionality to establish the following conclusions. First, based on its power-law behavior, the structure of the delay space can be better characterized by fractal measures. Second, the intrinsic dimension is significantly smaller than the value predicted by the previous studies; in fact by our measures it is less than 2. Third, we demonstrate a particular way in which the AS topology is reflected in the delay space; subnetworks composed of hosts which share an upstream Tier-1 autonomous system in common possess lower dimensionality than the combined delay space. Finally, we observe that fractal measures, due to their sensitivity to nonlinear structures, display higher precision for measuring the influence of subtle features of the delay space geometry.
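One standard fractal measure of the kind the abstract invokes is the correlation dimension: compute C(r), the fraction of host pairs whose delay is below r, and read the dimension off the slope of log C(r) against log r. The sketch below is a generic Grassberger–Procaccia-style estimator, offered only to illustrate the notion of intrinsic dimension; it is not the paper's methodology, and the two-scale slope is a crude stand-in for a proper log-log fit.

```python
import math
from itertools import combinations

def correlation_sum(delays, r):
    """C(r): fraction of host pairs whose pairwise delay is below r.
    `delays` is a symmetric matrix of measured delays."""
    n = len(delays)
    pairs = list(combinations(range(n), 2))
    close = sum(1 for i, j in pairs if delays[i][j] < r)
    return close / len(pairs)

def correlation_dimension(delays, r1, r2):
    """Estimate intrinsic dimension as the slope of log C(r)
    between two scales r1 < r2."""
    c1, c2 = correlation_sum(delays, r1), correlation_sum(delays, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))
```

For hosts spread evenly along a line the estimate comes out near 1, for a plane near 2; the paper's point is that real delay matrices score lower than the 4–9 dimensions suggested by Euclidean embedding.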
Measurement manipulation and space selection in network coordinates
In ICDCS, 2008
Cited by 7 (1 self)
Internet coordinate systems have emerged as an efficient method to estimate the latency between pairs of nodes without any communication between them. However, most coordinate systems have been evaluated solely on data sets built by their authors from measurements gathered over large periods of time. Although they show good prediction results, it is unclear whether the accuracy is the result of the system design properties or is more connected to the characteristics of the data sets. In this paper, we revisit a simple question: how do the features of the embedding space and the inherent attributes of the data sets interact in producing good embeddings? We adapt the Vivaldi algorithm to use hyperbolic space for embedding and evaluate both Euclidean and hyperbolic Vivaldi on seven sets of real-world latencies. Our results show that node filtering and latency distributions can significantly influence the accuracy of the predictions. For example, although Euclidean Vivaldi performs well on data sets that were chosen, constructed and filtered by the designers of the algorithm, its performance and robustness decrease considerably when run on third-party data sets that were not filtered a priori. Our results offer important insight into designing and building coordinate systems that are both robust and accurate in Internet-like environments.
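Swapping the embedding space changes only the distance function the coordinate system optimizes against. As a small illustration (not the paper's code), here are the two distance functions side by side, with hyperbolic distance written in the Poincaré upper half-space model; the choice of model is an assumption, since hyperbolic space admits several equivalent representations.

```python
import math

def euclidean_dist(p, q):
    """Ordinary Euclidean distance between two coordinate tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def hyperbolic_dist(p, q):
    """Distance in the Poincare upper half-space model. The last
    coordinate must be positive; points near the boundary are far
    from everything, which suits tree-like delay structure."""
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.acosh(1 + d2 / (2 * p[-1] * q[-1]))
```

A hyperbolic Vivaldi keeps the same spring-relaxation loop but computes errors and gradients against `hyperbolic_dist` instead of `euclidean_dist`.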
Detecting triangle inequality violations in Internet coordinate systems by supervised learning
In Proc. IFIP Networking Conference, 2009
Cited by 6 (3 self)
Abstract—Internet Coordinate Systems (ICS) have been proposed as a method for estimating delays between hosts without direct measurement. However, they can only be accurate when the triangle inequality holds for Internet delays. In fact, Triangle Inequality Violations (TIVs) are frequent and are likely to remain a property of the Internet due to routing policies or path inflation. In this paper we propose methods to detect TIVs with high confidence by observing various metrics such as the relative estimation error on the coordinates. Indeed, the detection of TIVs can be used for mitigating their impact on the ICS itself, by excluding some disturbing nodes from clusters running their own ICS, or more generally by improving their neighbor selection mechanism. Index Terms—Internet delay measurements, Internet Coordinate Systems, Performance, Triangle inequality violations.
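The underlying ground-truth condition is simple to state: a triple (a, b, c) violates the triangle inequality when the direct delay a→c exceeds the detour delay a→b→c. The paper's contribution is detecting this from coordinate-system metrics without full measurement; the brute-force check below only illustrates the definition itself, with an illustrative `slack` tolerance for measurement noise.

```python
from itertools import permutations

def find_tivs(rtt, slack=0.0):
    """List triples (a, b, c) where the direct path a -> c is slower
    than the detour a -> b -> c by more than `slack` ms, i.e. the
    triangle inequality is violated. `rtt` is a symmetric matrix."""
    n = len(rtt)
    tivs = []
    for a, b, c in permutations(range(n), 3):
        if a < c and rtt[a][c] > rtt[a][b] + rtt[b][c] + slack:
            tivs.append((a, b, c))
    return tivs
```

The O(n³) cost of this exhaustive check on measured delays is precisely why a learned detector operating on per-node coordinate errors is attractive at scale.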
Triangle Inequality and Routing Policy Violations in the Internet
Cited by 6 (1 self)
Abstract. Triangle inequality violations (TIVs) are the effect of packets between two nodes being routed on the longer direct path between them when a shorter detour path through an intermediary is available. TIVs are a natural, widespread and persistent consequence of Internet routing policies. By exposing opportunities to improve the delay between two nodes, TIVs can help myriad applications that seek to minimize end-to-end latency. However, sending traffic along the detour paths revealed by TIVs may influence Internet routing negatively. In this paper we study the interaction between triangle inequality violations and policy routing in the Internet. We use measured and predicted AS paths between Internet nodes to show that 25% of the detour paths exposed by TIVs are in fact available to BGP but are simply deemed “less efficient”. We also compare the AS paths of detours and direct paths and find that detours use AS edges that are rarely followed by default Internet paths, while avoiding others that BGP seems to prefer. Our study is important both for understanding the various interactions that occur at the routing layer as well as their effects on applications that seek to use TIVs to minimize latency.
Triangle Inequality Variations in the Internet
Cited by 6 (0 self)
Triangle inequality violations (TIVs) are important for latency-sensitive distributed applications. On one hand, they can expose opportunities to improve network routing by finding shorter paths between nodes. On the other hand, TIVs can frustrate network embedding or positioning systems that treat the Internet as a metric space where the triangle inequality holds. Even though triangle inequality violations are both significant and curious, their study has been limited to aggregate data sets that combine measurements taken over long periods of time. The limitations of these data sets open crucial questions in the design of systems that exploit (or avoid) TIVs: are TIVs stable or transient? Or are they illusions caused by aggregating measurements taken at different times? We collect latency matrices at varying sizes and time granularities and study dynamic properties of triangle inequality violations in the Internet. We show that TIVs are not the result of measurement error and that their number varies with time. We examine how latency aggregates of data measured over longer periods of time preserve TIVs. Using medians to compute violations eliminates most of the TIVs that appear sporadically during the measurement, but misses many of those that are present for more than five hours.
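The effect of aggregation on TIV counts is easy to demonstrate in miniature. The toy example below (names and numbers are illustrative, not the paper's data) shows a TIV that appears in one snapshot because of a single latency spike, yet vanishes once repeated samples for each pair are collapsed to their median.

```python
import statistics

def aggregate(samples):
    """Collapse repeated latency samples for each pair into a single
    median value. `samples` maps (i, j) -> list of RTT measurements
    taken at different times."""
    return {pair: statistics.median(vals) for pair, vals in samples.items()}

def is_tiv(d, a, b, c):
    """Does the detour a -> b -> c beat the direct path a -> c?"""
    return d[(a, c)] > d[(a, b)] + d[(b, c)]
```

Because the median discards the spike, aggregation under-reports transient TIVs; conversely, a TIV that persists across most samples survives the median, which is the distinction the abstract draws between sporadic and long-lived violations.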