Results 1–10 of 17
Graphs over Time: Densification Laws, Shrinking Diameters and Possible Explanations
, 2005
"... How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include hea ..."
Abstract

Cited by 436 (43 self)
How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time. Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log log n)). Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a “forest fire” spreading process, that has a simple, intuitive justification, requires very few parameters (like the “flammability” of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study.
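The densification law described in this abstract (edges growing superlinearly in nodes, roughly e(t) ∝ n(t)^a with a > 1) can be checked on snapshot data by fitting a line in log-log space. A minimal sketch, assuming the snapshots are available as (node count, edge count) pairs; the function name is illustrative:

```python
import math

def densification_exponent(snapshots):
    """Least-squares slope of log(edges) versus log(nodes).

    snapshots: list of (num_nodes, num_edges) pairs, one per time step.
    A slope a > 1 indicates densification (superlinear edge growth);
    a = 1 would mean a constant average degree.
    """
    xs = [math.log(n) for n, _ in snapshots]
    ys = [math.log(e) for _, e in snapshots]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic snapshots that obey e = n**1.5 exactly:
snaps = [(n, int(n ** 1.5)) for n in (100, 200, 400, 800)]
a = densification_exponent(snaps)
print(round(a, 2))
```

A slope above 1 on a real sequence of snapshots is what the abstract calls densification; the data here are synthetic, built only to exercise the fit.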
Graph evolution: Densification and shrinking diameters
 ACM TKDD
, 2007
"... How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include hea ..."
Abstract

Cited by 190 (14 self)
How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time. Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log log n)). Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a “forest fire” spreading process, that has a simple, intuitive justification, requires very few parameters (like the “flammability” of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study. We also notice that the “forest fire” model exhibits a sharp transition between sparse graphs and graphs that are densifying. Graphs with decreasing distance between the nodes are generated around this transition point. Last, we analyze the connection between the temporal evolution of the degree distribution and densification of a graph. We find that the two are fundamentally related. We also observe that real networks exhibit this type of relation between densification and the degree distribution.
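The “forest fire” process this abstract describes can be approximated in a few lines. This is a simplified, undirected toy version under assumed mechanics (random ambassador, geometric burning governed by a single probability p); the published model is directed and distinguishes forward from backward burning:

```python
import random

def forest_fire_graph(num_nodes, p=0.35, seed=0):
    """Toy undirected forest-fire generator (simplified; illustrative only).

    Each arriving node picks a random existing 'ambassador', then fire
    spreads: from every burned node, a geometric number (mean p/(1-p)) of
    its unburned neighbours is burned in turn. The newcomer links to all
    burned nodes. p plays the role of the 'flammability' parameter.
    """
    rng = random.Random(seed)
    adj = {0: set()}
    for v in range(1, num_nodes):
        amb = rng.randrange(v)
        burned, frontier = {amb}, [amb]
        while frontier:
            u = frontier.pop()
            k = 0
            while rng.random() < p:   # geometric draw: how many neighbours burn
                k += 1
            unburned = [w for w in adj[u] if w not in burned]
            rng.shuffle(unburned)
            for w in unburned[:k]:
                burned.add(w)
                frontier.append(w)
        adj[v] = set()
        for u in burned:              # newcomer links to every burned node
            adj[v].add(u)
            adj[u].add(v)
    return adj

g = forest_fire_graph(200)
num_edges = sum(len(nbrs) for nbrs in g.values()) // 2
print(len(g), num_edges)
```

Sweeping p in a sketch like this is one way to look for the sparse-to-densifying transition the abstract mentions, though this toy variant makes no claim to reproduce the paper's quantitative results.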
Scale-Independent Bibliometric Indicators
 Measurement: Interdisciplinary Research and Perspectives
, 2005
"... Van Raan (this issue) makes an excellent case for using bibliometric data to measure some central aspects of scientific research and to construct indicators of groups: research groups, university departments, and institutes. He claims that, next to peer review, these indicators are indispensable for ..."
Abstract

Cited by 16 (4 self)
Van Raan (this issue) makes an excellent case for using bibliometric data to measure some central aspects of scientific research and to construct indicators of groups: research groups, university departments, and institutes. He claims that, next to peer review, these indicators are indispensable for evaluating research and can be used in parallel with peer review processes. By way of an example, van Raan provides a table containing nine indicators for a German medical research institute. Two of these indicators—articles (P) and citations (C)—are established proxy measures for the size of a group and the impact of its published research (Katz & Hicks, 1997). The ratio between citations and articles (CPP) and the ratios between CPP and the mean Journal Citation Score and between the field-based world average and the Germany-specific world average—which are uniquely defined CPP reference values—are used to construct a set of indicators that van Raan suggests can be used to assess international research performance. This commentary focuses solely on the use of bibliometric indicators to compare international research performance. It addresses the fundamental question of whether CPP or measures like CPP can be used to accurately compare the performance of groups of different sizes. SCALING RELATIONS A scaling relation exists between two entities, x and y, if they are correlated by a power law given by the equation y = k x^n, where n is the scaling factor and k is a constant. There is evidence to suggest that C and P have a scaling relation when
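The commentary's central worry can be made concrete: if C = k·P^n with n > 1, then CPP = C/P = k·P^(n-1) grows with group size, so raw CPP comparisons favour larger groups even when the underlying scaling parameters are identical. A hypothetical numeric sketch (k and n are made-up illustrative values, not measured constants):

```python
def cpp(papers, k=2.0, n=1.27):
    """Citations per paper under an assumed scaling law C = k * P**n."""
    citations = k * papers ** n
    return citations / papers

# Two groups with identical scaling parameters but different sizes:
small = cpp(50)
large = cpp(500)
print(round(small, 2), round(large, 2))
# With n > 1, the tenfold-larger group shows CPP higher by a factor of
# 10**(n - 1), purely as a size effect.
```

This is exactly why a ratio indicator is scale-independent only when n = 1; for any other exponent the ratio encodes group size.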
Classification and Power-laws: The logarithmic transformation
 Journal of the American Society for Information Science and Technology
, 2006
"... (forthcoming) Logarithmic transformation of the data has been recommended by the literature in the case of highly skewed distributions such as those commonly found in information science. The purpose of the transformation is to make the data conform to the lognormal law of error for inferential purp ..."
Abstract

Cited by 13 (9 self)
(forthcoming) Logarithmic transformation of the data has been recommended by the literature in the case of highly skewed distributions such as those commonly found in information science. The purpose of the transformation is to make the data conform to the lognormal law of error for inferential purposes. How does this transformation affect the analysis? We factor-analyze and visualize the citation environment of the Journal of the American Chemical Society (JACS) before and after a logarithmic transformation. The transformation strongly reduces the variance necessary for classificatory purposes and therefore is counterproductive to the purposes of the descriptive statistics. We recommend against the logarithmic transformation when sets cannot be defined unambiguously. The intellectual organization of the sciences is reflected in the curvilinear parts of the citation distributions, while negative power-laws fit excellently to the tails of the distributions.
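The variance compression this abstract warns about is easy to demonstrate: log-transforming a heavy-tailed sample collapses exactly the spread that separates the tail from the bulk. A small sketch on synthetic skewed data (the Pareto-like draw and its exponent are illustrative assumptions, not the JACS data):

```python
import math
import random

rng = random.Random(1)
# Synthetic heavy-tailed 'citation counts' via an inverse-transform,
# Pareto-like draw; every value is >= 1.
data = [int(1 / (1 - rng.random()) ** 1.2) for _ in range(10_000)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

raw_var = variance(data)
log_var = variance([math.log(x + 1) for x in data])
# The raw variance is dominated by a handful of tail values; after the
# logarithm the spread shrinks by orders of magnitude.
print(raw_var > 100 * log_var)
```

Whether that compression is a bug or a feature depends on the purpose: it helps inference under a lognormal error model, but, as the abstract argues, it discards the very variance that classification relies on.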
Graphs over time: Densification and shrinking diameters
 ACM Transactions on Knowledge Discovery from Data (TKDD)
"... How do real graphs evolve over time? What are “normal ” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include he ..."
Abstract

Cited by 1 (0 self)
How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time. Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log log n)). Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a “forest fire” spreading process, that has a simple, intuitive justification, requires very few parameters (like the “flammability” of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study. We also notice that the “forest fire” model exhibits a sharp transition between sparse graphs and graphs that are densifying. Graphs with decreasing distance between the nodes are generated around this transition point. Last, we analyze the connection between the temporal evolution of the degree distribution and densification of a graph. We find that the two are fundamentally related. We also observe that real networks exhibit this type of relation between densification and the degree distribution.
Institutional recognition: Scale-independent indicators and research evaluation
Graphs over Time: Densification Laws, Shrinking Diameters and Possible Explanations
"... How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include hea ..."
Abstract
How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time. Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log log n)). Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a “forest fire” spreading process, that has a simple, intuitive justification, requires very few parameters (like the “flammability” of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study.
Book: Models of Science Dynamics
, 2012
"... encounters between complexity theory and information sciences CHAPTER 3 Knowledge epidemics and population dynamics models for describing idea diffusion ..."
Abstract
Encounters between complexity theory and information sciences. Chapter 3: Knowledge epidemics and population dynamics models for describing idea diffusion.
unknown title
"... In this article we present an empirical approach to the study of the statistical properties of bibliometric indicators on a very relevant but not simply “available” aggregation level: the research group. We focus on the distribution functions of a coherent set of indicators that are used frequently ..."
Abstract
In this article we present an empirical approach to the study of the statistical properties of bibliometric indicators on a very relevant but not simply “available” aggregation level: the research group. We focus on the distribution functions of a coherent set of indicators that are used frequently in the analysis of research performance. In this sense, the coherent set of indicators acts as a measuring instrument. Better insight into the statistical properties of a measuring instrument is necessary to enable assessment of the instrument itself. The most basic distribution in bibliometric analysis is the distribution of citations over publications, and this distribution is very skewed. Nevertheless, we clearly observe the working of the central limit theorem and find that at the level of research groups the distribution functions of the main indicators, particularly the journal-normalized and the field-normalized indicators, approach normal distributions. The results of our study underline the importance of the idea of “group oeuvre,” that is, the role of sets of related publications as a unit of analysis.
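The central-limit behaviour the authors report can be mimicked by simulation: per-paper citation counts are drawn from a skewed distribution, yet the group-level mean indicator is nearly symmetric once groups are reasonably large. A sketch under assumed, purely illustrative distribution parameters:

```python
import random

rng = random.Random(42)

def group_mean_citations(group_size):
    """Mean citations over one simulated group's papers; per-paper counts
    come from an illustrative skewed (truncated-exponential) draw."""
    counts = [int(rng.expovariate(0.2)) for _ in range(group_size)]
    return sum(counts) / group_size

# Sampling distribution of the group-level indicator:
means = sorted(group_mean_citations(100) for _ in range(2000))
median = means[len(means) // 2]
grand_mean = sum(means) / len(means)
# For a near-normal distribution the mean and median nearly coincide,
# unlike in the heavily skewed per-paper counts underneath.
print(abs(grand_mean - median) < 0.1)
```

The same averaging logic is what lets group-level indicators be treated with normal-theory statistics even though individual citation counts never are.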
The impact factor’s Matthew effect: a natural experiment in bibliometrics
"... Since the publication of Robert K. Merton’s theory of cumulative advantage in science (Matthew Effect), several empirical studies have tried to measure its presence at the level of papers, individual researchers, institutions or countries. However, these studies seldom control for the intrinsic “qua ..."
Abstract
Since the publication of Robert K. Merton’s theory of cumulative advantage in science (Matthew Effect), several empirical studies have tried to measure its presence at the level of papers, individual researchers, institutions or countries. However, these studies seldom control for the intrinsic “quality” of papers or of researchers—“better” (however defined) papers or researchers could receive higher citation rates because they are indeed of better quality. Using an original method for controlling the intrinsic value of papers—identical duplicate papers published in different journals with different impact factors—this paper shows that the journal in which a paper is published has a strong influence on its citation rate, as duplicate papers published in high-impact journals obtain, on average, twice as many citations as their identical counterparts published in journals with lower impact factors. The intrinsic value of a paper is thus not the only reason a given paper gets cited or not; there is a specific Matthew effect attached to journals, and this gives papers published there an added value over and above their intrinsic quality.