Results 1–10 of 121
Spectra of random graphs with given expected degrees
, 2003
Abstract

Cited by 104 (17 self)
In the study of the spectra of power law graphs, there are basically two competing approaches. One is to prove analogues of Wigner’s semicircle law, while the other predicts that the eigenvalues follow a power law distribution. Although the semicircle law and the power law have nothing in common, we will show that both approaches are essentially correct if one considers the appropriate matrices. We will prove that (under certain mild conditions) the eigenvalues of the (normalized) Laplacian of a random power law graph follow the semicircle law, while the spectrum of the adjacency matrix of a power law graph obeys the power law. Our results are based on the analysis of random graphs with given expected degrees and their relations to several key invariants. Of interest are a number of (new) values for the exponent β where phase transitions for eigenvalue distributions occur. The spectral distributions have direct implications for numerous graph algorithms, such as randomized algorithms that involve rapidly mixing Markov chains.
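As a quick numerical companion, the random-graph model the paper analyzes (edges drawn independently with probability proportional to the product of expected degrees) can be sampled and its normalized Laplacian spectrum inspected. This is only a sketch with illustrative uniform weights, not the paper's construction; the paper's power-law weights would replace the weight vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chung-Lu random graph with given expected degrees w_i:
# edge {i, j} appears independently with probability w_i * w_j / sum(w).
# Uniform weights here for simplicity (illustrative parameters).
n = 400
w = np.full(n, 20.0)
P = np.outer(w, w) / w.sum()

A = (rng.random((n, n)) < np.triu(P, 1)).astype(float)
A = A + A.T                                   # symmetric, no self-loops

d = A.sum(axis=1)
d[d == 0] = 1.0                               # guard isolated vertices
s = 1.0 / np.sqrt(d)
L = np.eye(n) - s[:, None] * A * s[None, :]   # I - D^{-1/2} A D^{-1/2}

eig = np.linalg.eigvalsh(L)
# Semicircle prediction: the bulk concentrates around 1 with radius
# roughly 2 / sqrt(average degree).
print("spectrum range:", eig.min(), eig.max())
```

The normalized-Laplacian eigenvalues always lie in [0, 2]; the semicircle phenomenon is that the bulk clusters tightly around 1.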
Statistical Region Merging
 IEEE Trans. on Pattern Analysis and Machine Intelligence
, 2004
Abstract

Cited by 73 (8 self)
This paper explores a statistical basis for a process often described in computer vision: image segmentation by region merging following a particular order in the choice of regions. We exhibit a particular blend of algorithmics and statistics whose segmentation error is, as we show, limited from both the qualitative and quantitative standpoints. This approach can be efficiently approximated in linear time/space, leading to a fast segmentation algorithm tailored to processing images described using most common numerical pixel attribute spaces. The conceptual simplicity of the approach makes it simple to modify and to cope with hard noise corruption, handle occlusion, allow control of the segmentation scale, and process unconventional data such as spherical images. Experiments on gray-level and color images, obtained with a short, readily available C code, display the quality of the segmentations obtained.
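A minimal one-dimensional sketch of the idea of order-driven region merging with a statistical predicate: adjacent pixel pairs are visited by increasing contrast, and two regions merge when their means differ by less than a per-region deviation bound. The bound `b2` and the parameters `g`, `Q` below are simplified assumptions for illustration, not the paper's exact predicate.

```python
import numpy as np

rng = np.random.default_rng(1)
g, Q = 256.0, 32.0   # gray-level range; Q is a hypothetical scale parameter

# Toy 1-D "image": two flat regions plus mild noise.
img = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
img += rng.uniform(-2, 2, size=img.size)

parent = list(range(img.size))
size = [1] * img.size
total = img.copy()   # running sum of intensities per region root

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def b2(n):           # simplified squared deviation bound b(R)^2 for |R| = n
    return g * g * np.log(6.0) / (2.0 * Q * n)

# The "particular order": adjacent pixel pairs by increasing contrast.
pairs = sorted(range(img.size - 1), key=lambda i: abs(img[i + 1] - img[i]))
for i in pairs:
    a, c = find(i), find(i + 1)
    if a == c:
        continue
    # Merge when the region means are statistically indistinguishable.
    if abs(total[a] / size[a] - total[c] / size[c]) <= np.sqrt(b2(size[a]) + b2(size[c])):
        parent[c] = a
        size[a] += size[c]
        total[a] += total[c]

regions = {find(i) for i in range(img.size)}
print("regions found:", len(regions))
```

On this toy signal the low-contrast pairs merge first, so the two flat regions coalesce internally before the high-contrast boundary pair is tested and rejected.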
ON THE COVERINGS OF GRAPHS
, 1980
Abstract

Cited by 69 (6 self)
Let p(n) denote the smallest integer with the property that any graph with n vertices can be covered by p(n) complete bipartite subgraphs. We prove a conjecture of J.C. Bermond by showing p(n) = n + o(n^{11/14+ε}) for any positive ε.
BALANCED ALLOCATIONS: THE HEAVILY LOADED CASE
, 2006
Abstract

Cited by 57 (7 self)
We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of d ≥ 2 randomly selected bins. It is known that in many scenarios having more than one choice for each ball can improve the load balance significantly. Formal analyses of this phenomenon prior to this work considered mostly the lightly loaded case, that is, when m ≈ n. In this paper we present the first tight analysis in the heavily loaded case, that is, when m ≫ n rather than m ≈ n. The best previously known results for multiple-choice processes in the heavily loaded case were obtained using majorization by the single-choice process. This yields an upper bound on the maximum bin load of m/n + O(√(m ln n/n)) with high probability. We show, however, that multiple-choice processes are fundamentally different from the single-choice variant in that they have “short memory.” The great consequence of this property is that the deviation of multiple-choice processes from the optimal allocation (that is, the allocation in which each bin has either ⌊m/n⌋ or ⌈m/n⌉ balls) does not increase with the number of balls, as in the case of the single-choice process. In particular, we investigate the allocation obtained by two different multiple-choice allocation schemes, …
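The contrast between the two regimes is easy to see in simulation. The sketch below (with illustrative parameters, not the paper's) compares the single-choice process against the greedy d = 2 process in the heavily loaded case m ≫ n:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 100_000                      # heavily loaded: m >> n

# Single choice: every ball lands in a uniformly random bin.
single = np.bincount(rng.integers(0, n, size=m), minlength=n)

# Greedy two-choice: each ball picks 2 random bins, joins the lighter one.
multi = np.zeros(n, dtype=np.int64)
for a, b in rng.integers(0, n, size=(m, 2)):
    multi[a if multi[a] <= multi[b] else b] += 1

print("single-choice gap above m/n:", single.max() - m // n)
print("two-choice gap above m/n:  ", multi.max() - m // n)
```

The single-choice gap grows like √(m ln n/n) with the number of balls, while the two-choice gap stays a small constant independent of m — the "short memory" effect the abstract describes.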
On the bias of traceroute sampling: or, power-law degree distributions in regular graphs
 In ACM STOC
, 2005
Abstract

Cited by 54 (1 self)
Understanding the graph structure of the Internet is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining this graph structure can be a surprisingly difficult task, as edges cannot be explicitly queried. For instance, empirical studies of the network of Internet Protocol (IP) addresses typically rely on indirect methods like traceroute to build what are approximately single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network’s edges, and a recent paper by Lakhina et al. found empirically that the resulting sample is intrinsically biased. Further, in simulations, they observed that the degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson. In this paper, we study the bias of traceroute sampling mathematically and, for a very general class of underlying degree distributions, explicitly calculate the distribution that will be observed. As example applications of our machinery, we prove that traceroute sampling finds power-law degree distributions in both δ-regular and Poisson-distributed random graphs. Thus, our work puts the observations of Lakhina et al. on a rigorous footing, and extends them to nearly arbitrary degree distributions.
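The bias is visible even in a crude simulation: take a random graph with Poisson-like degrees, build a single-source shortest-path (BFS) tree as a stand-in for a traceroute sample, and compare observed tree degrees against the true degrees. Parameters are illustrative, and a BFS tree is only a simplification of the union of traceroute paths the paper models.

```python
import random
from collections import deque

random.seed(3)
n, c = 2000, 10.0                # G(n, p) with Poisson-like degrees, mean ~ c
p = c / n

adj = [[] for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].append(v)
            adj[v].append(u)

# Single-source shortest-path (BFS) tree from one "monitor".
parent = {0: None}
queue = deque([0])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in parent:
            parent[v] = u
            queue.append(v)

# Observed (tree) degree vs. true degree for each sampled vertex.
tree_deg = {v: 0 for v in parent}
for v, u in parent.items():
    if u is not None:
        tree_deg[v] += 1
        tree_deg[u] += 1

true_degs = [len(adj[v]) for v in parent]
leaf_frac = sum(1 for d in tree_deg.values() if d == 1) / len(tree_deg)
print("mean true degree:", sum(true_degs) / len(true_degs))
print("fraction of sampled vertices seen as degree-1:", leaf_frac)
```

Although the true degrees concentrate around 10, most vertices appear in the sample with degree 1 — the heavy skew toward low observed degrees that makes the sampled distribution look power-law-like.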
Fast Concurrent Access to Parallel Disks
Abstract

Cited by 49 (11 self)
High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is perhaps the main inhibitor for the efficient adaptation of single-disk external memory algorithms to multiple disks. We solve this problem for arbitrary access patterns by randomly mapping blocks of a logical address space to the disks. We show that a shared buffer of O(D) blocks suffices to support efficient writing. The analysis uses the properties of negative association to handle dependencies between the random variables involved. This approach might be of independent interest for probabilistic analysis in general. If two randomly allocated copies of each block exist, N arbitrary blocks can be read within ⌈N/D⌉ + 1 I/O steps with high probability. The redundancy can be further reduced from 2 to 1 + 1/r for any integer r without a big impact on reading efficiency. From the point of view of external memory models, these results rehabilitate Aggarwal and Vitter's "single-disk multi-head" model [1] that allows access to D arbitrary blocks in each I/O step. This powerful model can be emulated on the physically more realistic independent disk model [2] with small constant overhead factors. Parallel disk external memory algorithms can therefore be developed in the multi-head model first. The emulation result can then be applied directly or further refinements can be added.
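The benefit of two random copies per block can be sketched with a greedy read scheduler: fetch each requested block from whichever of its two copy disks currently has the shorter queue. This greedy rule is a simplification of the paper's matching-based scheduling (and the parameters are illustrative), but it already keeps the step count close to the ⌈N/D⌉ lower bound.

```python
import numpy as np

rng = np.random.default_rng(4)
D, N = 16, 256                    # D disks, N requested blocks

# Redundancy 2: each logical block has two independently random copies.
copies = rng.integers(0, D, size=(N, 2))

# Greedy scheduling: read each block from the copy disk with the
# shorter queue (cf. the two-choice allocation idea).
queue = np.zeros(D, dtype=np.int64)
for a, b in copies:
    queue[a if queue[a] <= queue[b] else b] += 1

# One block per disk per I/O step, so the step count is the longest queue.
print("I/O steps:", queue.max(), "lower bound:", -(-N // D))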
Eigenvalues of Random Power Law Graphs
, 2003
Abstract

Cited by 44 (7 self)
Many graphs arising in various information networks exhibit the “power law” behavior – the number of vertices of degree k is proportional to k^{−β} for some positive β. We show that if β > 2.5, the largest eigenvalue of a random power law graph is almost surely (1 + o(1))√m, where m is the maximum degree. When 2 < β < 2.5, the largest eigenvalue is heavily concentrated at cm^{3−β} for some constant c depending on β and the average degree. This result follows from a more general theorem which shows that the largest eigenvalue of a random graph with a given expected degree sequence is determined by m, the maximum degree, and d̃, the weighted average of the squares of the expected degrees. We show that λ is almost surely (1 + o(1)) max{d̃, √m} provided some minor condition is satisfied. Our results have implications for the use of spectral techniques in many areas related to pattern detection and information retrieval.
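The quantities in the theorem are easy to compute on a sample. The sketch below draws a power-law expected-degree graph with β = 3 (so β > 2.5, where the √m regime applies) and compares the largest adjacency eigenvalue with max{d̃, √m}. Parameters are illustrative, and at this small size the asymptotic prediction is only approximate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Chung-Lu graph with power-law expected degrees, beta = 3 (> 2.5).
n, beta, wmax = 500, 3.0, 30.0
w = wmax * np.arange(1, n + 1) ** (-1.0 / (beta - 1.0))
P = np.minimum(np.outer(w, w) / w.sum(), 1.0)

A = (rng.random((n, n)) < np.triu(P, 1)).astype(float)
A = A + A.T

deg = A.sum(axis=1)
m = deg.max()                       # maximum (realized) degree
d2 = (w ** 2).sum() / w.sum()       # d~: weighted average of squared degrees
lam1 = np.linalg.eigvalsh(A)[-1]    # largest adjacency eigenvalue
print("lambda_1 =", lam1, " max{d~, sqrt(m)} =", max(d2, np.sqrt(m)))
```

Two bounds hold deterministically and bracket the prediction: λ₁ is at least √m (the star around the maximum-degree vertex) and at most m.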
Concentration inequalities and martingale inequalities – a survey
 Internet Math
Abstract

Cited by 40 (1 self)
We examine a number of generalized and extended versions of concentration inequalities and martingale inequalities. These inequalities are effective for analyzing processes under quite general conditions, as illustrated by an example involving an infinite Pólya process and web graphs.
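For reference, the classical baseline that such surveys generalize is the Azuma–Hoeffding martingale inequality (stated here in its standard two-sided form):

```latex
% Azuma--Hoeffding: for a martingale X_0, X_1, \dots, X_n
% with bounded differences |X_i - X_{i-1}| \le c_i,
\Pr\bigl[\, |X_n - X_0| \ge \lambda \,\bigr]
   \;\le\; 2\exp\!\left( -\frac{\lambda^2}{2\sum_{i=1}^{n} c_i^2} \right).
```

The generalized versions relax the bounded-difference condition, which is what makes them applicable to processes like preferential-attachment web graphs.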
Shannon sampling and function reconstruction from point values
 Bull. Am. Math. Soc
, 2004
Abstract

Cited by 32 (8 self)
then came to the University of Chicago, where I was starting my job as instructor for the fall of 1956. He, Suzanne, Clara and I became good friends and saw much of each other for many decades, especially at IHES in Paris. Thom’s encouragement and support were important for me, especially in my first years after my Ph.D. I studied his work in cobordism, singularities of maps, and transversality, gaining many insights. I also enjoyed listening to his provocations, for example his disparaging remarks on complex analysis, 19th century mathematics, and Bourbaki. There was also a stormy side in our relationship. Neither of us could hide the pain that our public conflicts over “catastrophe theory” caused. René Thom was a great mathematician, leaving his impact on a wide part of mathematics. I will always treasure my memories of him.
A Probabilistic and RIPless Theory of Compressed Sensing
, 2010
Abstract

Cited by 32 (1 self)
This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models — e.g. Gaussian, frequency measurements — discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) — they make use of a much weaker notion — or a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.