Results 11 – 20 of 124
Jellyfish: A conceptual model for the AS internet topology
, 2004
"... Several novel concepts and tools have revolutionized our understanding of the Internet topology. Most of the existing efforts attempt to develop accurate analytical models. In this paper, our goal is to develop an effective conceptual model: a model that can be easily drawn by hand, while at the sam ..."
Abstract

Cited by 69 (6 self)
Several novel concepts and tools have revolutionized our understanding of the Internet topology. Most of the existing efforts attempt to develop accurate analytical models. In this paper, our goal is to develop an effective conceptual model: a model that can be easily drawn by hand, while at the same time, it captures significant macroscopic properties. We build the foundation for our model with two thrusts: a) we identify new topological properties, and b) we provide metrics to quantify the topological importance of a node. We propose the jellyfish as a model for the inter-domain Internet topology. We show that our model captures and represents the most significant topological properties. Furthermore, we observe that the jellyfish has lasting value: it describes the topology for more than six years.
Systematic topology analysis and generation using degree correlations
 In SIGCOMM
"... Researchers have proposed a variety of metrics to measure important graph properties, for instance, in social, biological, and computer networks. Values for a particular graph metric may capture a graph’s resilience to failure or its routing efficiency. Knowledge of appropriate metric values may inf ..."
Abstract

Cited by 66 (7 self)
Researchers have proposed a variety of metrics to measure important graph properties, for instance, in social, biological, and computer networks. Values for a particular graph metric may capture a graph’s resilience to failure or its routing efficiency. Knowledge of appropriate metric values may influence the engineering of future topologies, repair strategies in the face of failure, and understanding of fundamental properties of existing networks. Unfortunately, there are typically no algorithms to generate graphs matching one or more proposed metrics, and there is little understanding of the relationships among individual metrics or their applicability to different settings. We present a new, systematic approach for analyzing network topologies. We first introduce the dK-series of probability distributions specifying all degree correlations within d-sized subgraphs of a given graph G. Increasing values of d capture progressively more properties of G at the cost of a more complex representation of the probability distribution. Using this series, we can quantitatively measure the distance between two graphs and construct random graphs that accurately reproduce virtually all metrics proposed in the literature. The nature of the dK-series implies that it will also capture any future metrics that may be proposed. Using our approach, we construct graphs for d = 0, 1, 2, 3 and demonstrate that these graphs reproduce, with increasing accuracy, important properties of measured and modeled Internet topologies. We find that the d = 2 case is sufficient for most practical purposes, while d = 3 essentially reconstructs the Internet AS- and router-level topologies exactly. We hope that a systematic method to analyze and synthesize topologies offers a significant improvement to the set of tools available to network topology and protocol researchers.
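The lowest orders of such a series are straightforward to compute directly from a graph. A minimal, illustrative sketch (plain Python, names chosen here, not taken from the paper) of the 1K statistic (degree distribution) and the 2K statistic (joint degree distribution over edges):

```python
from collections import Counter

def dk1_dk2(edges):
    """Compute 1K (degree) and 2K (joint-degree) distributions of a graph.

    edges: iterable of undirected (u, v) pairs. Returns (degree_dist, jdd),
    where degree_dist[k] counts nodes of degree k and jdd[(k1, k2)] counts
    edges whose endpoint degrees are k1 <= k2.
    """
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    degree_dist = Counter(deg.values())
    jdd = Counter()
    for u, v in edges:
        k1, k2 = sorted((deg[u], deg[v]))
        jdd[(k1, k2)] += 1
    return degree_dist, jdd

# A 4-node star: the center has degree 3, the three leaves degree 1,
# so every edge joins a degree-1 node to a degree-3 node.
star = [(0, 1), (0, 2), (0, 3)]
d1, d2 = dk1_dk2(star)
```

Higher orders (d = 3 and up) require counting degree-labeled subgraphs and are correspondingly more expensive, which is the representation-complexity trade-off the abstract mentions.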
Resisting Structural Re-identification in Anonymized Social Networks
, 2008
"... We identify privacy risks associated with releasing network data sets and provide an algorithm that mitigates those risks. A network consists of entities connected by links representing relations such as friendship, communication, or shared activity. Maintaining privacy when publishing networked dat ..."
Abstract

Cited by 60 (7 self)
We identify privacy risks associated with releasing network data sets and provide an algorithm that mitigates those risks. A network consists of entities connected by links representing relations such as friendship, communication, or shared activity. Maintaining privacy when publishing networked data is uniquely challenging because an individual’s network context can be used to identify them even if other identifying information is removed. In this paper, we quantify the privacy risks associated with three classes of attacks on the privacy of individuals in networks, based on the knowledge used by the adversary. We show that the risks of these attacks vary greatly based on network structure and size. We propose a novel approach to anonymizing network data that models aggregate network structure and then allows samples to be drawn from that model. The approach guarantees anonymity for network entities while preserving the ability to estimate a wide variety of network measures with relatively little bias.
Conductance and Congestion in Power Law Graphs
, 2003
"... It has been observed that the degrees of the topologies of several communication networks follow heavy tailed statistics. What is the impact of such heavy tailed statistics on the performance of basic communication tasks that a network is presumed to support? How does performance scale with the size ..."
Abstract

Cited by 57 (3 self)
It has been observed that the degrees of the topologies of several communication networks follow heavy-tailed statistics. What is the impact of such heavy-tailed statistics on the performance of basic communication tasks that a network is presumed to support? How does performance scale with the size of the network? We study routing in families of sparse random graphs whose degrees follow heavy-tailed distributions. Instantiations of such random graphs have been proposed as models for the topology of the Internet at the level of Autonomous Systems as well as at the level of routers. Let n be the number of nodes. Suppose that for each pair of nodes with degrees du and dv we have O(du·dv) units of demand. Thus the total demand is O(n²). We argue analytically and experimentally that in the considered random graph model such demand patterns can be routed so that the flow through each link is at most O(n log² n). This is to be compared with a bound of Θ(n²) that holds for arbitrary graphs. Similar results were previously known for sparse random regular graphs, a.k.a. "expander graphs." The significance is that Internet-like topologies, which grow in a dynamic, decentralized fashion and appear highly inhomogeneous, can support routing with performance characteristics comparable to those of their regular counterparts, at least under the assumption of uniform demand and capacities. Our proof uses approximation algorithms for multicommodity flow and establishes strong bounds on a generalization of "expansion," namely "conductance." Besides routing, our bounds on conductance have further implications, most notably on the gap between the first and second eigenvalues of the stochastic normalization of the adjacency matrix of the graph.
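Conductance is the minimum, over all cuts, of the cut size divided by the smaller side's volume (sum of degrees). On toy graphs it can be brute-forced for intuition; a sketch (exponential in n, purely illustrative, not the paper's analytical machinery):

```python
from itertools import combinations

def conductance(n, edges):
    """Brute-force conductance of a small undirected graph on nodes 0..n-1.

    Phi = min over nonempty proper subsets S of cut(S) / min(vol(S), vol(V\\S)),
    where vol(S) is the sum of degrees in S. Exponential in n.
    """
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            vol_s = sum(deg[u] for u in s)
            denom = min(vol_s, sum(deg) - vol_s)
            if denom:
                best = min(best, cut / denom)
    return best

# 4-cycle: the best cut splits it into two paths (2 crossing edges, volume 4),
# giving conductance 1/2.
phi = conductance(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

High conductance is what makes the congestion bound work: no sparse cut can bottleneck the multicommodity flow.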
Compact routing on Internet-like graphs
 In Proc. IEEE INFOCOM
, 2004
"... Abstract — The ThorupZwick (TZ) compact routing scheme is the first generic stretch3 routing scheme delivering a nearly optimal pernode memory upper bound. Using both direct analysis and simulation, we derive the stretch distribution of this routing scheme on Internetlike interdomain topologies. ..."
Abstract

Cited by 54 (7 self)
Abstract — The Thorup-Zwick (TZ) compact routing scheme is the first generic stretch-3 routing scheme delivering a nearly optimal per-node memory upper bound. Using both direct analysis and simulation, we derive the stretch distribution of this routing scheme on Internet-like inter-domain topologies. By investigating the TZ scheme on random graphs with power-law node degree distributions, P(k) ∼ k^(−γ), we find that the average TZ stretch is quite low and virtually independent of γ. In particular, for the Internet inter-domain graph with γ ≈ 2.1, the average TZ stretch is around 1.1, with up to 70% of all pairwise paths being stretch-1 (shortest possible). As the network grows, the average stretch slowly decreases. The routing table is very small, too: it is well below its upper bounds, and its size is around 50 records for 10^4-node networks. Furthermore, we find that both the average shortest-path length (i.e. distance) d and the width of the distance distribution σ observed in the real Internet inter-AS graph have values that are very close to the minimums of the average stretch in the d- and σ-directions. This leads us to the discovery of a unique critical point of the average TZ stretch as a function of d and σ. The Internet distance distribution is located in a close neighborhood of this point. This is remarkable given the fact that the Internet inter-domain topology has evolved without any direct attention paid to properties of the stretch distribution. It suggests the average stretch function may be an indirect indicator of the optimization criteria influencing the Internet’s inter-domain topology evolution.
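Stretch is the ratio of the routed path length to the shortest-path length. A toy landmark-routing sketch in the spirit of (but far simpler than) the TZ scheme illustrates how such ratios are measured; the routing rule here is an assumed simplification, not the actual TZ construction:

```python
from collections import deque

def bfs(adj, src):
    """Hop distances from src in an unweighted graph {node: set_of_neighbors}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def tz_like_stretch(adj, landmarks, u, v):
    """Toy landmark routing: route directly when v is at least as close to u
    as u's nearest landmark (stretch 1); otherwise route u -> L(v) -> v
    through v's nearest landmark L(v)."""
    du, dv = bfs(adj, u), bfs(adj, v)
    shortest = du[v]
    if shortest <= min(du[l] for l in landmarks):
        return 1.0                            # direct shortest-path route
    lv = min(landmarks, key=lambda l: dv[l])  # v's nearest landmark
    return (du[lv] + dv[lv]) / shortest       # detour via lv vs. shortest path

# 5-node example with landmark 0: the shortest 1 -> 4 path has 2 hops, but the
# route via node 4's nearest landmark (node 0) takes 3 hops, so stretch = 1.5.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 4}, 3: {0, 4}, 4: {2, 3}}
stretch = tz_like_stretch(adj, {0}, 1, 4)
```

Averaging this ratio over all pairs, as the paper does at scale, yields the stretch distribution; the triangle inequality bounds this style of detour at stretch 3.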
The Temporal and Topological Characteristics of BGP Path Changes
 In Proc. IEEE ICNP
, 2003
"... BGP has been deployed in Internet for more than a decade. However, the events that cause BGP topological changes are not well understood. Although large traces of routing updates seen in BGP operation are collected by RIPE RIS and University of Oregon RouteViews, previous work examines this data set ..."
Abstract

Cited by 39 (3 self)
BGP has been deployed in the Internet for more than a decade. However, the events that cause BGP topological changes are not well understood. Although large traces of routing updates seen in BGP operation are collected by RIPE RIS and the University of Oregon RouteViews, previous work examines this data set as individual routing updates. This paper describes methods that group routing updates into events. Since one event (a policy change or peering failure) results in many update messages, we cluster updates both temporally and topologically (based on the path vector information). We propose a new approach to analyzing the update traces, classifying the topological impact of routing events, and approximating the distance to the Autonomous System originating the event. Our analysis provides some insight into routing behavior: First, at least 45% of path changes are caused by events on transit peerings. Second, a significant number (23–37%) of path changes are transient, in that routing updates indicate temporary path changes, but they ultimately converge on a path identical to the previously stable path. These observations suggest that a content provider cannot guarantee end-to-end routing stability based solely on its relationship with its immediate ISP, and that better detection of transient changes may improve routing stability.
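The temporal half of such clustering amounts to splitting each prefix's update stream wherever the inter-arrival gap exceeds a threshold. A minimal sketch, with an illustrative 70-second gap chosen here rather than taken from the paper (which also clusters topologically on the path vectors):

```python
def cluster_updates(updates, gap=70):
    """Group BGP updates into events: updates for the same prefix separated
    by less than `gap` seconds belong to one event.

    updates: list of (timestamp_seconds, prefix) pairs.
    Returns {prefix: [event, ...]} where each event is a list of timestamps.
    """
    events = {}
    for t, prefix in sorted(updates):
        runs = events.setdefault(prefix, [])
        if runs and t - runs[-1][-1] < gap:
            runs[-1].append(t)   # close enough in time: extend current event
        else:
            runs.append([t])     # gap exceeded: start a new event
    return events

# Two bursts for 10.0.0.0/8 (separated by ~8 minutes) become two events.
updates = [(0, "10.0.0.0/8"), (30, "10.0.0.0/8"), (500, "10.0.0.0/8"),
           (10, "192.0.2.0/24")]
ev = cluster_updates(updates)
```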
The Markov Chain Simulation Method for Generating Connected Power Law Random Graphs
 In Proc. 5th Workshop on Algorithm Engineering and Experiments (ALENEX). SIAM
, 2003
"... Graph models for realworld complex networks such as the Internet, the WWW and biological networks are necessary for analytic and simulationbased studies of network protocols, algorithms, engineering and evolution. To date, all available data for such networks suggest heavy tailed statistics, most ..."
Abstract

Cited by 33 (6 self)
Graph models for real-world complex networks such as the Internet, the WWW and biological networks are necessary for analytic and simulation-based studies of network protocols, algorithms, engineering and evolution. To date, all available data for such networks suggest heavy-tailed statistics, most notably on the degrees of the underlying graphs. A practical way to generate network topologies that meet the observed data is the following degree-driven approach: first predict the degrees of the graph by extrapolation from the available data, and then construct a graph meeting the degree sequence and additional constraints, such as connectivity and randomness. Within the networking community, this is currently accepted as the most successful approach for modeling the inter-domain topology of the Internet.
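The randomization step behind the title is a Markov chain of degree-preserving double-edge swaps: repeatedly pick two edges (a,b) and (c,d) and rewire them to (a,d) and (c,b). A minimal sketch that rejects swaps creating self-loops or parallel edges; the connectivity test that the connected-generation variant requires is omitted here:

```python
import random

def double_edge_swap(edges, swaps, seed=0):
    """Randomize a simple undirected graph by degree-preserving double-edge
    swaps: (a,b),(c,d) -> (a,d),(c,b). Swaps that would create a self-loop
    or a parallel edge are rejected, so the graph stays simple and every
    node keeps its original degree."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = set(frozenset(e) for e in edges)
    for _ in range(swaps):
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # shared endpoint: swap would create a self-loop
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue  # swap would create a parallel edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

g = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
randomized = double_edge_swap(g, 200)  # same degree sequence, shuffled wiring
```

Run long enough, this chain samples (approximately) uniformly from simple graphs with the prescribed degree sequence.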
Toward an Optimization-Driven Framework for Designing and Generating Realistic Internet Topologies
 In ACM HotNets-I
, 2002
"... We propose a novel approach to the study of Internet topology in which we use an optimization framework to model the mechanisms driving incremental growth. While previous methods of topology generation have focused on explicit replication of statistical properties, such as node hierarchies and node ..."
Abstract

Cited by 30 (8 self)
We propose a novel approach to the study of Internet topology in which we use an optimization framework to model the mechanisms driving incremental growth. While previous methods of topology generation have focused on explicit replication of statistical properties, such as node hierarchies and node degree distributions, our approach addresses the economic tradeoffs, such as cost and performance, and the technical constraints faced by a single ISP in its network design. By investigating plausible objectives and constraints in the design of actual networks, observed network properties such as certain hierarchical structures and node degree distributions can be expected to be the natural by-product of an approximately optimal solution chosen by network designers and operators. In short, we advocate here essentially an approach to network topology design, modeling, and generation that is based on the concept of Highly Optimized Tolerance (HOT). In contrast with purely descriptive topology modeling, this opens up new areas of research that focus on the causal forces at work in network design and aim at identifying the economic and technical drivers responsible for the observed large-scale network behavior. As a result, the proposed approach should have significantly more predictive power than currently pursued efforts and should provide a scientific foundation for the investigation of other important problems, such as pricing, peering, or the dynamics of routing protocols.
Location-aware topology matching in P2P systems
 In Proceedings of IEEE INFOCOM
, 2004
"... Abstract—PeertoPeer (P2P) computing has emerged as a popular model aiming at further utilizing Internet information and resources, complementing the available clientserver services. However, the mechanism of peers randomly choosing logical neighbors without any knowledge about underlying physical ..."
Abstract

Cited by 25 (6 self)
Abstract — Peer-to-Peer (P2P) computing has emerged as a popular model aiming at further utilizing Internet information and resources, complementing the available client-server services. However, the mechanism of peers randomly choosing logical neighbors without any knowledge about the underlying physical topology can cause a serious topology mismatch between the P2P overlay network and the physical underlying network. The topology mismatching problem places great stress on the Internet infrastructure and greatly limits the performance gain from various search or routing techniques. Meanwhile, due to the inefficient overlay topology, flooding-based search mechanisms cause a large volume of unnecessary traffic. Aiming at alleviating the mismatching problem and reducing the unnecessary traffic, we propose a location-aware topology matching (LTM) technique, an algorithm that builds an efficient overlay by disconnecting low-productive connections and choosing physically closer nodes as logical neighbors, while still retaining the search scope and reducing response time for queries. LTM is scalable and completely distributed in the sense that it does not require any global knowledge of the whole overlay network when each node is optimizing the organization of its logical neighbors. The effectiveness of LTM is demonstrated through simulation studies. Keywords: peer-to-peer; topology mismatching; blind flooding; location-aware topology matching; search efficiency
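The core neighbor-selection idea can be caricatured in a few lines. This is a deliberately simplified sketch, not the paper's LTM algorithm: actual LTM probes via flooding and cuts only "low-productive" connections, whereas here each peer just keeps the lowest-latency subset of its current neighbors plus newly discovered candidates:

```python
def ltm_step(neighbors, latency, candidates, keep=4):
    """One toy optimization step in the spirit of location-aware topology
    matching: from current logical neighbors plus discovered candidate peers,
    keep the `keep` physically closest ones (lowest measured latency in ms)."""
    pool = set(neighbors) | set(candidates)
    return sorted(pool, key=lambda p: latency[p])[:keep]

# Hypothetical measured latencies; the peer swaps far neighbors for near ones.
latency = {"a": 10, "b": 50, "c": 5, "d": 80, "e": 20}
best = ltm_step(["b", "d"], latency, ["a", "c", "e"], keep=2)
```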
Network Topologies: Inference, Modelling and Generation
 IEEE COMMUNICATIONS SURVEYS & TUTORIALS
"... Accurate measurement, inference and modelling techniques are fundamental to Internet topology research. Spatial analysis of the Internet is needed to develop network planning, optimal routing algorithms and failure detection measures. A first step towards achieving such goals is the availability of ..."
Abstract

Cited by 24 (9 self)
Accurate measurement, inference and modelling techniques are fundamental to Internet topology research. Spatial analysis of the Internet is needed to develop network planning, optimal routing algorithms and failure detection measures. A first step towards achieving such goals is the availability of network topologies at different levels of granularity, facilitating realistic simulations of new Internet systems. The main objective of this survey is to familiarize the reader with research on network topology over the past decade. We study techniques for inference, modelling and generation of the Internet topology at both the router and administrative levels. We also compare the mathematical models assigned to various topologies and the generation tools based on them. We conclude with a look at emerging areas of research and potential future research directions.