Results 1–10 of 49
Bipartite Graphs as Models of Complex Networks
Aspects of Networking, 2004
Cited by 50 (7 self)
Abstract: It appeared recently that the classical random graph model used to represent real-world complex networks does not capture their main properties. Since then, various attempts have been made to provide accurate models. We study here the first model which achieves the following challenges: it produces graphs which have the three main wanted properties (clustering, degree distribution, average distance), it is based on some real-world observations, and it is sufficiently simple to make it possible to prove its main properties. This model consists of sampling a random bipartite graph with a prescribed degree distribution. Indeed, we show that any complex network can be viewed as a bipartite graph with some specific characteristics, and that its main properties can be viewed as consequences of this underlying structure. We also propose a growing model based on this observation.
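As a rough illustration of the model sketched in this abstract (illustrative helper names; the paper's exact construction may differ), the following sketch samples a bipartite graph with prescribed degree sequences via a configuration model, projects it onto one side, and measures the clustering that projection induces:

```python
import random
from collections import defaultdict

def sample_bipartite(top_degrees, bottom_degrees, rng):
    # Configuration-model pairing of degree "stubs"; parallel pairings
    # collapse into simple edges, so this only approximates the
    # prescribed degrees (an illustrative simplification).
    assert sum(top_degrees) == sum(bottom_degrees)
    top_stubs = [i for i, d in enumerate(top_degrees) for _ in range(d)]
    bot_stubs = [j for j, d in enumerate(bottom_degrees) for _ in range(d)]
    rng.shuffle(bot_stubs)
    adj = defaultdict(set)  # top node -> set of bottom neighbours
    for t, b in zip(top_stubs, bot_stubs):
        adj[t].add(b)
    return adj

def project_bottom(adj):
    # Two bottom nodes are linked when they share a top neighbour
    # (e.g. two actors appearing in the same movie).
    edges = set()
    for bottoms in adj.values():
        bl = sorted(bottoms)
        for i in range(len(bl)):
            for j in range(i + 1, len(bl)):
                edges.add((bl[i], bl[j]))
    return edges

def avg_clustering(edges, n):
    # Average local clustering coefficient over the n bottom nodes.
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    total = 0.0
    for v in range(n):
        k = len(nbrs[v])
        if k < 2:
            continue
        links = sum(1 for a in nbrs[v] for b in nbrs[v]
                    if a < b and b in nbrs[a])
        total += 2.0 * links / (k * (k - 1))
    return total / n
```

A single top node of degree three already projects to a triangle, which is why such bipartite models reproduce the high clustering that the classical random graph model misses.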
Relevance of Massively Distributed Explorations of the Internet Topology: Simulation Results
2005
Cited by 42 (14 self)
Abstract: Internet maps are generally constructed using the traceroute tool from a few sources to many destinations. It appeared recently that this exploration process gives a partial and biased view of the real topology, which leads to the idea of increasing the number of sources to improve the quality of the maps. In this paper, we present a set of experiments we have conducted to evaluate the relevance of this approach. It appears that the statistical properties of the underlying network have a strong influence on the quality of the obtained maps, which can be improved using massively distributed explorations. Conversely, we show that the exploration process itself induces some properties on the maps. We validate our analysis using real-world data and experiments, and we discuss its implications.
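A minimal simulation of this kind of experiment (illustrative code, not the authors' setup) models each traceroute measurement as a path in a BFS shortest-path tree, and shows that adding sources uncovers more of the true edge set:

```python
from collections import deque

def bfs_tree(adj, src):
    # Shortest-path tree from src: a crude stand-in for traceroute,
    # which records one (roughly shortest) path per destination.
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def explored_edges(adj, sources, destinations):
    # Union of the edges seen on source -> destination tree paths:
    # the "map" obtained by the exploration.
    seen = set()
    for s in sources:
        parent = bfs_tree(adj, s)
        for d in destinations:
            v = d
            while v in parent and parent[v] is not None:
                seen.add(frozenset((v, parent[v])))
                v = parent[v]
    return seen

# On a 4-cycle, one source misses half the edges; two sources see them all.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
one_source = explored_edges(cycle, [0], [1, 3])
two_sources = explored_edges(cycle, [0, 2], [1, 3])
```

Even this toy example exhibits the bias the paper studies: the observed map depends on where the sources sit, not only on the underlying topology.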
Data Reduction and Exact Algorithms for Clique Cover
2007
Cited by 14 (2 self)
Abstract: Covering the edges of a graph with a minimum number of cliques is an NP-hard problem with many applications. We develop for this problem efficient and effective polynomial-time data reduction rules that, combined with a search tree algorithm, allow for exact problem solutions in competitive time. This is confirmed by experiments with real-world and synthetic data. Moreover, we prove the fixed-parameter tractability of covering edges by cliques.
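The flavour of the problem can be sketched with a simple greedy heuristic (for illustration only; it is not the paper's reduction rules or search tree algorithm): repeatedly take an uncovered edge, grow it into a maximal clique, and mark all of that clique's edges as covered.

```python
from collections import defaultdict

def greedy_clique_cover(edges):
    # Heuristic edge clique cover: pick any uncovered edge, extend it
    # greedily to a maximal clique, cover the clique's edges.
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    uncovered = {frozenset(e) for e in edges}
    cover = []
    while uncovered:
        u, v = sorted(next(iter(uncovered)))
        clique = {u, v}
        for w in sorted(nbrs[u] & nbrs[v]):
            if all(w in nbrs[x] for x in clique):
                clique.add(w)
        cover.append(clique)
        for a in clique:
            for b in clique:
                if a < b:
                    uncovered.discard(frozenset((a, b)))
    return cover
```

On a triangle this returns a single clique; on a path it needs one clique per edge. An exact algorithm instead branches on which clique covers a given edge, with the data reduction rules shrinking the instance first.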
Generalized Preferential Attachment: Towards Realistic Socio-Semantic Network Models
Cited by 12 (4 self)
Abstract: The mechanism of preferential attachment underpins most recent social network formation models. Yet few authors attempt to check or quantify the assumptions behind this mechanism. We call generalized preferential attachment any kind of preference to interact with other agents with respect to any node property. We introduce tools for empirically measuring and comprehensively characterizing such phenomena, suggest significant implications for model design, and apply these tools to a socio-semantic network of scientific collaborations, investigating in particular homophilic behavior. This opens the way to a whole class of realistic and credible social network morphogenesis models.
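A toy version of such a model can be sketched as follows (illustrative only: the paper considers preference with respect to arbitrary node properties, and degree is used here merely as the simplest example). Each arriving node attaches to an existing node with probability proportional to degree raised to a tunable exponent:

```python
import random
from collections import Counter

def grow(n, rng, alpha=1.0):
    # Start from a single edge 0-1; each new node attaches to one
    # existing node chosen with probability ~ degree**alpha.
    # alpha=1 is classical preferential attachment; other exponents
    # give other "generalized" preference kernels.
    deg = Counter({0: 1, 1: 1})
    picks = []  # degree of the chosen target at attachment time
    for new in range(2, n):
        nodes = list(deg)
        weights = [deg[v] ** alpha for v in nodes]
        target = rng.choices(nodes, weights=weights)[0]
        picks.append(deg[target])
        deg[target] += 1
        deg[new] = 1
    return deg, picks

deg, picks = grow(200, random.Random(3))
```

Binning `picks` by degree class and normalizing by how many nodes of each class were available at each step gives an empirical attachment propensity, the kind of measurement the abstract advocates for checking modelling assumptions.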
Degree and clustering coefficient in sparse random intersection graphs
The Annals of Applied Probability, 2013
Complex Network Metrology
Cited by 4 (1 self)
Abstract: In order to study some complex networks, such as the Internet, the Web, social networks, or biological networks, one first has to explore them. This gives a partial and biased view of the real object, which is generally assumed to be representative of the whole. However, up to now nobody knows how, and how much, the measurement influences the results. Using the example of the Internet and a rough model of its exploration process, we show that the way a given complex network is explored may strongly influence the observed properties. This leads us to argue for the necessity of developing a science of metrology of complex networks. Its aim would be to study how the partial and biased view of a network relates to the properties of the whole network.
Impact of Random Failures and Attacks on Poisson and Power-Law Random Networks
2009
Cited by 4 (1 self)
Abstract: It appeared recently that the underlying degree distribution of networks may play a crucial role in their robustness. Empirical and analytic results have been obtained, based on asymptotic and mean-field approximations. Previous work insisted on the fact that power-law degree distributions induce high resilience to random failures but high sensitivity to attack strategies, while Poisson degree distributions are quite sensitive in both cases. Since then, much work has been done to extend these results. We aim here at studying these results, their origin, and their limitations in depth. We review previous contributions in detail, give full proofs in a unified framework, and identify the approximations on which these results rely. We then present new results that clarify some important aspects. We also provide extensive rigorous experiments which help evaluate the relevance of the analytic results. We reach the conclusion that, even if the basic results of the field are clearly true and important, they are in practice much less striking than generally thought. The differences between random failures and attacks are not as large as often stated and can be explained by simple facts. Likewise, the differences in the behaviors induced by power-law and Poisson distributions are not as striking as often claimed.
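The experimental side of such robustness studies can be sketched in a few lines (an illustrative simulation, not the authors' protocol): remove a fraction of the nodes, either uniformly at random (failures) or in decreasing degree order (attacks), and compare the size of the largest surviving component.

```python
import random
from collections import deque

def largest_component(nodes, adj):
    # Size of the largest connected component restricted to `nodes`.
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        size, queue = 0, deque([s])
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def damage(adj, fraction, targeted, rng):
    # Remove `fraction` of the nodes and return the largest component
    # among the survivors.
    if targeted:
        order = sorted(adj, key=lambda v: -len(adj[v]))
    else:
        order = rng.sample(sorted(adj), len(adj))
    removed = set(order[: int(fraction * len(adj))])
    return largest_component(set(adj) - removed, adj)

# A star graph caricatures a power-law hub: attacking 10% of the nodes
# removes the hub and shatters the graph, while a random failure
# usually hits a leaf and leaves the rest connected.
star = {0: list(range(1, 11))}
star.update({i: [0] for i in range(1, 11)})
```

Running `damage` over a range of removal fractions, on Poisson versus power-law graphs, reproduces the kind of curves the paper re-examines.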
Known algorithms for edge clique cover are probably optimal. Manuscript, arXiv:1203.1754v1
2012
Cited by 4 (0 self)
Abstract: In the EDGE CLIQUE COVER (ECC) problem, given a graph G and an integer k, we ask whether the edges of G can be covered with k complete subgraphs of G or, equivalently, whether G admits an intersection model on a k-element universe. Gramm et al. [JEA 2008] have shown a set of simple rules that reduce the number of vertices of G to 2^k, and no algorithm is known with a significantly better running time bound than a brute-force search on this reduced instance. In this paper we show that the approach of Gramm et al. is essentially optimal: we present a polynomial-time algorithm that reduces an arbitrary 3-CNF-SAT formula with n variables and m clauses to an equivalent ECC instance (G, k) with k = O(log n) and |V(G)| = O(n + m). Consequently, there is no 2^(2^(o(k))) · poly(n) time algorithm for the ECC problem, unless the Exponential Time Hypothesis fails. To the best of our knowledge, these are the first results proving that a doubly-exponential dependency on the parameter is essentially necessary for a natural, fixed-parameter tractable problem.
Basic Notions for the Analysis of Large Affiliation Networks / Bipartite Graphs
2008
Cited by 3 (0 self)
Abstract: Many real-world complex networks actually have a bipartite nature: their nodes may be separated into two classes, with links only between nodes of different classes. Despite this, and despite the fact that many ad hoc tools have been designed for the study of special cases, very few exist to analyse (describe, extract relevant information from) such networks in a systematic way. We propose here an extension of the most basic notions used nowadays to analyse classical complex networks to the bipartite case. To achieve this, we introduce a set of simple statistics, which we discuss by comparing their values on a representative set of real-world networks and on their random versions.
Colouring random intersection graphs and complex networks
2005
Cited by 3 (0 self)
Abstract: Random intersection graphs naturally exhibit a certain amount of transitivity and hence can be used to model real-world networks. We study the evolution of the chromatic number of a random intersection graph and show that, in a certain range of parameters, these random graphs can be coloured optimally with high probability using various greedy algorithms. Experiments on real network data confirm the positive theoretical predictions and suggest that heuristics for the clique number and the chromatic number can work hand in hand, proving mutual optimality.
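A first-fit greedy colouring of the kind analysed in this line of work can be written in a few lines (a generic sketch; the paper studies specific vertex orderings on random intersection graphs):

```python
def greedy_colouring(adj, order):
    # First-fit: each vertex, taken in the given order, receives the
    # smallest colour not used by its already-coloured neighbours.
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour
```

The number of colours used is always an upper bound on the chromatic number, while any clique gives a lower bound; the abstract's point is that in the right parameter range these two bounds meet with high probability, certifying optimality.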