Results 1 - 10 of 28
The Laplacian spectrum of graphs
 Graph Theory, Combinatorics, and Applications
, 1991
Cited by 151 (1 self)
Abstract. The paper is essentially a survey of known results about the spectrum of the Laplacian matrix of graphs with special emphasis on the second smallest Laplacian eigenvalue λ2 and its relation to numerous graph invariants, including connectivity, expanding properties, isoperimetric number, maximum cut, independence number, genus, diameter, mean distance, and bandwidth-type parameters of a graph. Some new results and generalizations are added. † This article appeared in “Graph Theory, Combinatorics, and Applications”, Vol. 2,
Models of Computation: Exploring the Power of Computing
Cited by 57 (7 self)
Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post, Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter computational complexity. The focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, Formal Languages and Their Relation to Automata by John Hopcroft and Jeffrey Ullman. This influential book led to the creation of many language-centered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book and the interests of theoreticians of the 1960s and early 1970s. Although
Eigenvalues, geometric expanders, sorting in rounds, and Ramsey Theory
 Combinatorica
, 1986
Cited by 47 (12 self)
Expanding graphs are relevant to theoretical computer science in several ways. Here we show that the points versus hyperplanes incidence graphs of finite geometries form highly (nonlinear) expanding graphs with essentially the smallest possible number of edges. The expansion properties of the graphs are proved using the eigenvalues of their adjacency matrices. These graphs enable us to improve previous results on a parallel sorting problem that arises in structural modeling, by describing an explicit algorithm to sort n elements in k time units using O(n^{α_k}) parallel processors, where, e.g., α_2 = 7/4, α_3 = 8/5, α_4 = 26/17 and α_5 = 22/15. Our approach also yields several applications to Ramsey Theory and other extremal problems.
Improvements on Bottleneck Matching and Related Problems Using Geometry
, 1996
Cited by 34 (7 self)
Let A and B be two sets of n objects in R^d, and let M be a (one-to-one) matching between A and B. Let min(M), max(M), and Σ(M) denote the length of the shortest edge, the length of the longest edge, and the sum of the lengths of the edges of M, respectively. Bottleneck matching (a matching that minimizes max(M)) is suggested as a convenient way for measuring the resemblance between A and B. Several algorithms for computing, as well as approximating, this resemblance are proposed. The running time of all the algorithms involving planar objects is close to O(n^1.5). For instance, if the objects are points in the plane, the running time of the exact algorithm is O(n^1.5 log n). A semi-dynamic data structure for answering containment problems for a set of congruent disks in the plane is developed. This data structure may be of independent interest. Next, the problem of finding a translation of B that maximizes the resemblance to A under the bottleneck matching criterion...
What Do We Know About The Product Replacement Algorithm?
 in: Groups and Computation III
, 2000
Cited by 30 (7 self)
The product replacement algorithm is a commonly used heuristic to generate random group elements in a finite group G, by running a random walk on generating k-tuples of G. While experiments showed outstanding performance, until recently there was little theoretical explanation. We give an extensive review of both positive and negative theoretical results in the analysis of the algorithm. Introduction In the past few decades the study of groups by means of computations has become a wonderful success story. The whole new field, Computational Group Theory, was developed out of the need to discover and prove new results on finite groups. More recently, the probabilistic method became an important tool for creating faster and better algorithms. A number of applications were developed which assume fast access to (nearly) uniform group elements. This led to the development of the so-called "product replacement algorithm", which is a commonly used heuristic to generate random group elemen...
Implementations of Randomized Sorting on Large Parallel Machines
Cited by 28 (1 self)
Flashsort [RV83,86] and Samplesort [HC83] are related parallel sorting algorithms proposed in the literature. Both utilize a sophisticated randomized sampling technique to form a splitter set, but Samplesort distributes the splitter set to each processor while Flashsort uses splitter-directed routing. In this
Geometry helps in bottleneck matching and related problems
 Algorithmica
, 2001
Cited by 27 (4 self)
This paper is accepted for publication in ALGORITHMICA. Let A and B be two sets of n objects in R^d, and let Match be a (one-to-one) matching between A and B. Let min(Match), max(Match), and Σ(Match) denote the length of the shortest edge, the length of the longest edge, and the sum of the lengths of the edges of Match, respectively. Bottleneck matching (a matching that minimizes max(Match)) is suggested as a convenient way for measuring the resemblance between A and B. Several algorithms for computing, as well as approximating, this resemblance are proposed. The running time of all the algorithms involving planar objects is roughly O(n^1.5). For instance, if the objects are points in the plane, the running time of the exact algorithm is O(n^1.5 log n). A semi-dynamic data structure for answering containment problems for a set of congruent disks in the plane is developed. This data structure may be of independent interest. Next, the problem of finding a translation of B that maximizes the resemblance to A under the bottleneck matching criterion is considered. When A and B are point sets in the plane, an O(n^5 log n) time algorithm for determining whether for some translated copy the resemblance gets below a given ε is presented, thus improving the previous result of Alt, Mehlhorn, Wagener and Welzl by a factor of almost n. This result is used to compute the smallest such ε in time O(n^5 log^2 n), and an efficient approximation scheme for this problem is also given. The uniform matching problem (also called the balanced assignment problem, or the fair matching problem) is to find Match*_U, a matching that minimizes max(Match) - min(Match). A minimum deviation matching Match*_D is a matching that minimizes (1/n)Σ(Match) - min(Match). Algorithms for computing Match*_U and Match*_D in roughly O(n^{10/3}) time are presented. These algorithms are more efficient than the previous...
Analysis of Shellsort and related algorithms
 ESA ’96: Fourth Annual European Symposium on Algorithms
, 1996
Cited by 26 (0 self)
This is an abstract of a survey talk on the theoretical and empirical studies that have been done over the past four decades on the Shellsort algorithm and its variants. The discussion includes: upper bounds, including linkages to number-theoretic properties of the algorithm; lower bounds on Shellsort and Shellsort-based networks; average-case results; proposed probabilistic sorting networks based on the algorithm; and a list of open problems. 1 Shellsort The basic Shellsort algorithm is among the earliest sorting methods to be discovered (by D. L. Shell in 1959 [36]) and is among the easiest to implement, as exhibited by the following C code for sorting an array a[l],..., a[r]: shellsort(itemType a[], int l, int r) { int i, j, h; itemType v;
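The C code above is cut off by the result listing. A standard complete version, using the 1, 4, 13, 40, ... increments generated by h = 3h + 1 (one common choice; the talk's bounds depend on which increment sequence is analyzed), might read:

```c
typedef int itemType;

/* Shellsort for a[l..r]: repeatedly h-sort the array by stride-h
   insertion sort, ending with h = 1 (plain insertion sort) */
void shellsort(itemType a[], int l, int r) {
    int i, j, h;
    itemType v;
    for (h = 1; h <= (r - l) / 9; h = 3 * h + 1)
        ;                               /* find the starting increment */
    for (; h > 0; h /= 3)               /* descend through the increments */
        for (i = l + h; i <= r; i++) {  /* insertion sort with stride h */
            v = a[i];
            for (j = i; j >= l + h && a[j - h] > v; j -= h)
                a[j] = a[j - h];
            a[j] = v;
        }
}
```
Early h-sorting passes move elements long distances cheaply, so the final h = 1 pass does little work; how little, as a function of the increments, is exactly the open analytical question the survey describes.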
Optimal slope selection via cuttings
 Computational Geometry: Theory and Applications
, 1998
Cited by 23 (0 self)
We give an optimal deterministic O(n log n)-time algorithm for slope selection. The algorithm borrows from the optimal solution given in [?], but avoids the complicated machinery of the AKS sorting network and parametric searching. This is achieved by redesigning and refining the O(n log^2 n)-time algorithm of [?] with the help of additional approximation tools. 1 Optimal Slope Selection The problem is computing the line defined by two of n given points that has the median slope among all (n choose 2) such lines. Equivalently, the problem can be stated as that of selecting the median-abscissa vertex of the arrangement A(L) of a set L of n lines [?]. For generality, we set out to compute the vertex with rank I* from left to right, for any given 1 ≤ I* ≤ (n choose 2).
Expander Graphs for Digital Stream Authentication and Robust Overlay Networks
 in Proceedings of the 2002 IEEE Symposium on Security and Privacy
, 2002
Cited by 19 (0 self)
We use expander graphs to provide efficient new constructions for two security applications: authentication of long digital streams over lossy networks and building scalable, robust overlay networks. Here is a summary of our contributions: (1) To authenticate long digital streams over lossy networks, we provide a construction with a provable lower bound on the ability to authenticate a packet, and that lower bound is independent of the size of the graph. To achieve this, we present an authentication expander graph with constant degree. (Previous work, such as [MS01], used authentication graphs but required graphs with degree linear in the number of vertices.) (2) To build efficient, robust, and scalable overlay networks, we provide a construction using undirected expander graphs with a provable lower bound on the ability of a broadcast message to successfully reach any receiver. This also gives us a new, more efficient solution to the decentralized certificate revocation problem [WLM00].