Results 1–10 of 209
Epidemic Thresholds in Real Networks
Cited by 95 (10 self)
How will a virus propagate in a real network? How long does it take to disinfect a network given particular values of infection rate and virus death rate? What is the single best node to immunize? Answering these questions is essential for devising network-wide strategies to counter viruses. In addition, viral propagation is very similar in principle to the spread of rumors, information, and “fads,” implying that the solutions for viral propagation would also offer insights into these other problem settings. We answer these questions by developing a nonlinear dynamical system (NLDS) that accurately models viral propagation in any arbitrary network, including real and synthesized network graphs. We propose a general epidemic threshold condition for the NLDS system: we prove that the epidemic threshold for a network is exactly the inverse of the largest eigenvalue of its adjacency matrix. Finally, we show that below the epidemic threshold, infections die out at an exponential rate. Our epidemic threshold model subsumes many known thresholds for special-case graphs (e.g., Erdős–Rényi, BA power-law, homogeneous). We demonstrate the predictive power of our model with extensive experiments on real and synthesized graphs, and show that our threshold condition holds for arbitrary graphs. Finally, we show how to utilize our threshold condition for practical uses: it can dictate which nodes to immunize; it can assess the effects of a throttling ...
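The threshold condition stated in this abstract is straightforward to check numerically: the threshold is 1/λ_max(A), the inverse of the largest eigenvalue of the adjacency matrix. A minimal sketch, assuming an undirected graph given as a NumPy adjacency matrix (the star graph below is an illustrative choice, not an example from the paper):

```python
import numpy as np

def epidemic_threshold(adj):
    """Epidemic threshold tau = 1 / lambda_max(A), per the NLDS result.

    An infection with birth rate beta and death rate delta dies out
    (exponentially fast) when beta / delta < tau.
    """
    eigvals = np.linalg.eigvals(adj)
    return 1.0 / max(eigvals.real)

# Toy example: a 4-node star graph (one hub connected to three leaves).
# lambda_max of the star K_{1,3} is sqrt(3), so tau = 1/sqrt(3).
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
tau = epidemic_threshold(A)
print(tau)  # ~0.577
```

This also makes the immunization use-case concrete: removing the node whose deletion most reduces λ_max raises the threshold the most.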
Optimal and scalable distribution of content updates over a mobile social network
In Proc. IEEE INFOCOM, 2009
Computing separable functions via gossip
In Proceedings of the Twenty-Fifth Annual ACM Symposium on Principles of Distributed Computing (PODC), 2006
Cited by 73 (6 self)
Motivated by applications to sensor, peer-to-peer, and ad-hoc networks, we study the problem of computing functions of values at the nodes in a network in a totally distributed manner. In particular, we consider separable functions, which can be written as linear combinations or products of functions of individual variables. The main contribution of this paper is the design of a distributed algorithm for computing separable functions based on properties of exponential random variables. We bound the running time of our algorithm in terms of the running time of an information spreading algorithm used as a subroutine by the algorithm. Since we are interested in totally distributed algorithms, we consider a randomized gossip mechanism for information spreading as the subroutine. Combining these algorithms yields a complete and simple distributed algorithm for computing separable functions. The second contribution of this paper is a characterization of the information spreading time of the gossip algorithm, and therefore the computation time for separable functions, in terms of the conductance of an appropriate stochastic matrix. Specifically, we find that for a class of graphs with small spectral gap, this time is of a smaller order than the time required to compute averages for a known iterative gossip scheme [4].
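The exponential-random-variable idea the abstract refers to rests on a standard fact: if each node i draws W_i ~ Exp(rate x_i), then min_i W_i ~ Exp(rate Σ_i x_i), so repeated minima estimate the sum. A toy, centralized sketch of that idea — in the actual algorithm each minimum is computed by distributed gossip, whereas here `min()` stands in for that subroutine, and the node values and round count are hypothetical:

```python
import random

def estimate_sum(values, rounds=2000, rng=None):
    """Estimate sum(values) via the exponential-minimum trick.

    Each round, node i draws W_i ~ Exp(rate = x_i); the minimum over
    nodes is Exp(rate = sum_i x_i), whose mean is 1 / sum_i x_i.
    Averaging the minima and inverting estimates the sum.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    total_min = 0.0
    for _ in range(rounds):
        # In the distributed algorithm this min is found by gossip.
        total_min += min(rng.expovariate(x) for x in values)
    return rounds / total_min

vals = [1.0, 2.0, 3.0, 4.0]  # hypothetical node values; true sum is 10
print(estimate_sum(vals))    # close to 10
```

The relative error shrinks like 1/sqrt(rounds), which is why the paper's running time is driven by the cost of the information-spreading (minimum) subroutine rather than by the estimation step.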
A systematic framework for unearthing the missing links: measurements and impact
In Proc. NSDI, 2007
Cited by 60 (6 self)
The lack of an accurate representation of the Internet topology at the Autonomous System (AS) level is a limiting factor in the design, simulation, and modeling efforts in interdomain routing protocols. In this paper, we design and implement a framework for identifying AS links that are missing from the commonly used Internet topology snapshots. We apply our framework and show that the new links that we find change the current Internet topology model in a nontrivial way. First, in more detail, our framework provides a large-scale comprehensive synthesis of the available sources of information. We cross-validate and compare BGP routing tables, Internet Routing Registries, and traceroute data, while we extract significant new information from the less-studied Internet Exchange Points (IXPs). We identify 40% more edges and approximately 300% more peer-to-peer edges compared to commonly used data sets. Second, we identify properties of the new edges and quantify their effects on important topological properties. Given the new peer-to-peer edges, we find that for some ASes more than 50% of their paths stop going through their ISP providers assuming policy-aware routing. A surprising observation is that the degree of a node may be a poor indicator of which ASes it will peer with: the two degrees differ by a factor of four or more in 50% of the peer-to-peer links. Finally, we attempt to estimate the number of edges we may still be missing.
Fast distributed algorithms for computing separable functions
 IEEE Trans. Inform. Theory
Cited by 55 (5 self)
Abstract—The problem of computing functions of values at the nodes in a network in a fully distributed manner, where nodes do not have unique identities and make decisions based only on local information, has applications in sensor, peer-to-peer, and ad-hoc networks. The task of computing separable functions, which can be written as linear combinations of functions of individual variables, is studied in this context. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions. The running time of the algorithm is shown to depend on the running time of a minimum computation algorithm used as a subroutine. Using a randomized gossip mechanism for minimum computation as the subroutine yields a complete fully distributed algorithm for computing separable functions. For a class of graphs with small spectral gap, such as grid graphs, the time used by the algorithm to compute averages is of a smaller order than the time required by a known iterative averaging scheme. Index Terms—Data aggregation, distributed algorithms, gossip algorithms, randomized algorithms.
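The minimum-computation subroutine this abstract relies on is simple to simulate: in each contact, a random node and a random neighbor both keep the smaller of their two current estimates, so the global minimum spreads epidemically. A simplified simulation, assuming asynchronous pairwise contacts on a hypothetical 6-node ring (the paper's contact model and timing analysis are more refined than this sketch):

```python
import random

def gossip_minimum(adjacency, values, max_rounds=10000, rng=None):
    """Simulate pairwise min-gossip until every node holds the minimum.

    adjacency[i] lists the neighbors of node i; values[i] is node i's
    initial value. Returns each node's final estimate.
    """
    rng = rng or random.Random(1)  # fixed seed for reproducibility
    est = list(values)
    n = len(values)
    true_min = min(values)
    for _ in range(max_rounds):
        i = rng.randrange(n)              # a random node wakes up...
        j = rng.choice(adjacency[i])      # ...and contacts a neighbor
        est[i] = est[j] = min(est[i], est[j])
        if all(v == true_min for v in est):
            break
    return est

# Hypothetical ring of 6 nodes: node k is adjacent to k-1 and k+1 (mod 6).
ring = [[(k - 1) % 6, (k + 1) % 6] for k in range(6)]
print(gossip_minimum(ring, [5, 3, 8, 1, 9, 7]))  # every entry becomes 1
```

Note that, unlike iterative averaging, this converges to the exact minimum in finitely many contacts, which is what makes it usable as the subroutine for the separable-function estimator.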
Graph theory and networks in biology
IET Systems Biology, 1:89–119, 2007
Cited by 44 (0 self)
In this paper, we present a survey of the use of graph theoretical techniques in biology. In particular, we discuss recent work on identifying and modelling the structure of biomolecular networks, as well as the application of centrality measures to interaction networks and research on the hierarchical structure of such networks and network motifs. Work on the link between structural network properties and dynamics is also described, with emphasis on synchronization and disease propagation.
Protecting against network infections: A game theoretic perspective
In IEEE INFOCOM, 2009
Cited by 44 (3 self)
Abstract — Security breaches and attacks are critical problems in today’s networking. A key point is that the security of each host depends not only on the protection strategies it chooses to adopt but also on those chosen by other hosts in the network. The spread of Internet worms and viruses is only one example. This class of problems has two aspects. First, it deals with epidemic processes, and as such calls for the employment of epidemic theory. Second, the distributed and autonomous nature of decision-making in major classes of networks (e.g., P2P, ad-hoc, and most notably the Internet) calls for the employment of game theoretical approaches. Accordingly, we propose a unified framework that combines the N-intertwined, SIS epidemic model with a non-cooperative game model. We determine the existence of a Nash equilibrium of the respective game and characterize its properties. We show that its quality, in terms of overall network security, largely depends on the underlying topology. We then provide a bound on the level of system inefficiency due to the non-cooperative behavior, namely, the “price of anarchy” of the game. We observe that the price of anarchy may be prohibitively high, hence we propose a scheme for steering users towards socially efficient behavior.
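The N-intertwined SIS model underlying this framework has a simple mean-field form: the infection probability v_i of node i evolves as dv_i/dt = β(1−v_i)Σ_j A_ij v_j − δ v_i, and the epidemic dies out when β/δ is below 1/λ_max(A). A sketch using Euler integration on a complete graph, with illustrative parameter values (not taken from the paper):

```python
import numpy as np

def sis_mean_field(A, beta, delta, v0, steps=5000, dt=0.01):
    """N-intertwined mean-field SIS dynamics.

    dv_i/dt = beta * (1 - v_i) * sum_j A_ij v_j - delta * v_i
    """
    v = np.array(v0, dtype=float)
    for _ in range(steps):
        v += dt * (beta * (1 - v) * (A @ v) - delta * v)
        v = np.clip(v, 0.0, 1.0)  # keep probabilities in [0, 1]
    return v

# Complete graph on 5 nodes: lambda_max = 4, so the threshold is
# beta/delta = 1/4.
A = np.ones((5, 5)) - np.eye(5)
v0 = [0.5] * 5
below = sis_mean_field(A, beta=0.1, delta=1.0, v0=v0)  # below: dies out
above = sis_mean_field(A, beta=0.5, delta=1.0, v0=v0)  # above: endemic
print(below.max(), above.max())
```

In the game-theoretic setting of the paper, each node's protection choice effectively modifies this underlying dynamical system, which is why the equilibrium quality depends so strongly on topology.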
Modeling Cyber-Insurance: Towards A Unifying Framework, 2010
Cited by 43 (3 self)
We propose a comprehensive formal framework to classify all market models of cyber-insurance we are aware of. The framework features a common terminology and deals with the specific properties of cyber-risk in a unified way: interdependent security, correlated risk, and information asymmetries. A survey of existing models, tabulated according to our framework, reveals a discrepancy between informal arguments in favor of cyber-insurance as a tool to align incentives for better network security, and analytical results questioning the viability of a market for cyber-insurance. Using our framework, we show which parameters should be considered and endogenized in future models to close this gap.
Rumors in a Network: Who’s the Culprit?
Cited by 37 (3 self)
Motivated by applications such as the detection of sources of worms or viruses in computer networks, identification of the origin of infectious diseases, or determining the causes of cascading failures in large systems such as financial markets, we study the question of inferring the source of a rumor in a network. We start by proposing a natural, effective model for the spread of the rumor in a network based on the classical SIR model. We obtain an estimator for the rumor source based on the infected nodes and the underlying network structure – it assigns each node a likelihood, which we call the rumor centrality. We show that the node with the maximal rumor centrality is indeed the maximum likelihood estimator for regular trees. Rumor centrality is a complex combinatorial quantity, but we provide a simple linear-time message-passing algorithm for evaluating it, allowing for fast estimation of the rumor source in large networks. For general trees, we find the following surprising phase transition: asymptotically in the size of the network, the estimator finds the rumor source with probability 0 if the tree grows like a line and it finds the rumor source with probability strictly greater than 0 if the tree grows at a rate quicker than a line. Our notion of rumor centrality naturally extends to arbitrary graphs. With extensive simulations, we establish the effectiveness of our rumor source estimator in different network topologies, such as the popular small-world and scale-free networks.
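On a tree, rumor centrality has a closed form: R(v) = n! / ∏_u T_u(v), where T_u(v) is the size of the subtree rooted at u when the whole tree is rooted at v. A sketch computing it via recursive subtree sizes rather than the paper's message-passing formulation (the 5-node path below is a hypothetical example; nodes are assumed to be labeled 0..n−1):

```python
from math import factorial

def rumor_centrality(tree_adj, v):
    """Rumor centrality R(v) = n! / prod_u T_u(v) for a tree.

    tree_adj maps each node to its list of neighbors; T_u(v) is the
    size of the subtree rooted at u when the tree is rooted at v.
    """
    n = len(tree_adj)
    sizes = {}

    def subtree(u, parent):
        s = 1
        for w in tree_adj[u]:
            if w != parent:
                s += subtree(w, u)
        sizes[u] = s
        return s

    subtree(v, None)  # root the tree at v and record all subtree sizes
    prod = 1
    for u in range(n):
        prod *= sizes[u]
    return factorial(n) // prod  # always an integer: it counts orderings

# Hypothetical 5-node path 0-1-2-3-4; the center node maximizes R.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
scores = [rumor_centrality(path, v) for v in range(5)]
print(scores)  # [1, 4, 6, 4, 1]
```

R(v) counts the spreading orders that start at v and stay connected, which is why it peaks at the "center" of the infected subgraph — consistent with the line-graph phase transition described in the abstract.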