Results 1–10 of 57
A reliable multicast framework for lightweight sessions and application level framing
IEEE/ACM Transactions on Networking, 1995
Cited by 997 (46 self)
Abstract — This paper describes Scalable Reliable Multicast (SRM), a reliable multicast framework for lightweight sessions and application level framing. The algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. The SRM framework has been prototyped in wb, a distributed whiteboard application, which has been used on a global scale with sessions ranging from a few to a few hundred participants. The paper describes the principles that have guided the SRM design, including the IP multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. As with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. We investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. With the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies. Index Terms — Computer networks, computer network performance, Internetworking.
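The loss recovery described in the abstract rests on randomized, distance-scaled timers that suppress duplicate repair requests. The sketch below is a minimal illustrative model of that idea only; the function names, the fixed defaults for the control parameters c1 and c2, and the simplification that the winning request suppresses everyone instantly are our assumptions, not the paper's protocol.

```python
import random

def schedule_request(dist_to_source, c1=2.0, c2=2.0, rng=random):
    """Pick a repair-request delay, SRM-style.

    A receiver that detects a loss waits a time drawn uniformly from
    [c1*d, (c1+c2)*d], where d is its estimated one-way delay to the
    source. c1 and c2 stand in for the control parameters that the
    paper's adaptive algorithm tunes from past recovery events.
    """
    return rng.uniform(c1 * dist_to_source, (c1 + c2) * dist_to_source)

def recover(receivers):
    """Simulate one loss event over a dict {name: distance}.

    The receiver whose timer fires first sends the request; the others
    overhear it and suppress their own (illustrative model that ignores
    the propagation delay of the request itself).
    """
    timers = {name: schedule_request(d) for name, d in receivers.items()}
    requester = min(timers, key=timers.get)
    suppressed = [n for n in timers if n != requester]
    return requester, suppressed

random.seed(1)
req, sup = recover({"A": 0.01, "B": 0.05, "C": 0.20})
print(req, sorted(sup))
```

Because the delay window scales with distance, receivers close to the point of loss tend to win the race, which is what keeps the request traffic low in large sessions.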
The hardest constraint problems: A double phase transition
Artif. Intell., 1994
Cited by 88 (2 self)
The distribution of hard graph coloring problems as a function of graph connectivity is shown to have two distinct transition behaviors. The first, previously recognized, is a peak in the median search cost near the connectivity at which half the graphs have solutions. This region contains a high proportion of relatively hard problem instances. However, the hardest instances are in fact concentrated at a second, lower, transition point. Near this point, most problems are quite easy, but there are also a few very hard cases. This region of exceptionally hard problems corresponds to the transition between polynomial and exponential scaling of the average search cost, whose location we also estimate theoretically. These behaviors also appear to arise in other constraint problems. This work also shows the limitations of simple measures of the cost distribution, such as mean or median, for identifying outlying cases.
Random constraint satisfaction: Flaws and structure
Constraints, 2001
"... and Toby Walsh ..."
Network dynamics and field evolution: The growth of interorganizational collaboration in the life sciences
American Journal of Sociology, 2005
Cited by 64 (7 self)
A recursive analysis of network and institutional evolution is offered to account for the decentralized structure of the commercial field of the life sciences. Four alternative logics of attachment—accumulative advantage, homophily, follow-the-trend, and multiconnectivity—are tested to explain the structure and dynamics of interorganizational collaboration in biotechnology. Using multiple novel methods, the authors demonstrate how different rules for affiliation shape network evolution. Commercialization strategies pursued by early corporate entrants are supplanted by universities, research institutes, venture capital, and small firms. As organizations increase their collaborative activities and diversify their ties to others, cohesive subnetworks form, characterized by multiple, independent pathways. These structural components, in turn, condition the choices and opportunities available to members of a field, thereby reinforcing an attachment logic based on differential connections to diverse partners.
Replicator Equations, Maximal Cliques, and Graph Isomorphism
1999
Cited by 53 (11 self)
We present a new energy-minimization framework for the graph isomorphism problem that is based on an equivalent maximum clique formulation. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid-1960s, and recently expanded in various ways, which allows us to formulate the maximum clique problem in terms of a standard quadratic program. The attractive feature of this formulation is that a clear one-to-one correspondence exists between the solutions of the quadratic program and those in the original, combinatorial problem. To solve the program we use the so-called replicator equations—a class of straightforward continuous- and discrete-time dynamical systems developed in various branches of theoretical biology. We show how, despite their inherent inability to escape from local solutions, they nevertheless provide experimental results that are competitive with those obtained using more elaborate mean-field annealing heuristics.
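A minimal sketch of the technique the abstract names, under the Motzkin-Straus formulation: maximize x^T A x (A the adjacency matrix) over the standard simplex with discrete-time replicator updates, then read a candidate clique off the support of the limit point. The step count and support threshold below are arbitrary choices of ours, not values from the paper.

```python
import numpy as np

def replicator_clique(adj, steps=2000, tol=1e-12):
    """Discrete-time replicator dynamics for the Motzkin-Straus
    quadratic program max x^T A x over the simplex.

    Returns the support of the converged point, which for a strict
    local maximizer corresponds to a (maximal) clique of the graph.
    """
    a = np.asarray(adj, dtype=float)
    n = a.shape[0]
    x = np.full(n, 1.0 / n)           # start at the simplex barycenter
    for _ in range(steps):
        ax = a @ x                    # per-vertex "payoff"
        val = x @ ax                  # current objective x^T A x
        if val < tol:
            break                     # no edges among supported vertices
        x = x * ax / val              # replicator update keeps x on simplex
    return sorted(int(i) for i in np.where(x > 1e-6)[0])

# Small 5-vertex example whose unique triangle is {0, 1, 2}.
adj = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 1, 0],
])
print(replicator_clique(adj))
```

The update is a growth transformation for the objective, so x^T A x increases monotonically; as the abstract notes, the dynamics cannot escape local solutions, which is why the returned clique is maximal but not necessarily maximum on harder graphs.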
A Switching Lemma for Small Restrictions and Lower Bounds for k-DNF Resolution (Extended Abstract)
SIAM J. Comput., 2002
Cited by 45 (7 self)
We prove a new switching lemma that works for restrictions that set only a small fraction of the variables and is applicable to DNFs with small conjunctions. We use this to prove lower bounds for the Res(k) propositional proof system, an extension of resolution which works with k-DNFs instead of clauses. We also obtain an exponential separation between depth-d circuits of bottom fan-in k and k + 1.
Neighborhood Effects
Prepared for the Handbook of Regional and Urban Economics, Volume 4, 2003
Cited by 37 (0 self)
This paper surveys the modern economics literature on the role of neighborhoods in influencing socioeconomic outcomes. Neighborhood effects have been analyzed in a range of theoretical and applied contexts and have proven to be of interest in understanding questions ranging from the asymptotic properties of various evolutionary games to explaining the persistence of poverty in inner cities. As such, the survey covers a range of theoretical, econometric and empirical topics. One conclusion from the survey is that there is a need to better integrate findings from theory and econometrics into empirical studies; until this is done, empirical studies of the nature and magnitude of neighborhood effects are unlikely to persuade those skeptical about their importance.
Worst-Case Interactive Communication I: Two Messages are Almost Optimal
IEEE Transactions on Information Theory, 1990
Cited by 34 (6 self)
X and Y are random variables. Person P_X knows X, person P_Y knows Y, and both know the joint probability distribution of the pair (X, Y). Using a predetermined protocol, they communicate over a binary, error-free channel in order for P_Y to learn X. P_X may or may not learn Y. How many information bits must be transmitted (by both persons) in the worst case if only m messages are allowed? C_1(X|Y) is the number of bits required when at most one message is allowed, necessarily from P_X to P_Y. C_2(X|Y) is the number of bits required when at most two messages are permitted: P_Y transmits a message to P_X, then P_X responds with a message to P_Y. C_∞(X|Y) is the number of bits required when communication is unrestricted: P_X and P_Y can communicate back and forth. The maximum reduction in communication achievable via interaction is almost logarithmic. For all (X, Y) pairs, C_∞(X|Y) ≥ ⌈log C_1(X|Y)⌉ + 1, whereas, for a class of (X, Y) pairs, C_∞(X|Y) = ⌈log C_1(...
Frozen Development in Graph Coloring
Theoretical Computer Science, 2000
Cited by 34 (5 self)
We define the `frozen development' of coloring random graphs. We identify two nodes in a graph as frozen if they are the same color in all legal colorings. This is analogous to studies of the development of a backbone or spine in SAT (the Satisfiability problem). We first describe in detail the algorithmic techniques used to study frozen development. We present strong empirical evidence that freezing in 3-coloring is sudden. A single edge typically causes the size of the graph to collapse by 28%. We also use the frozen development to calculate unbiased estimates of the probability of colorability in random graphs, even where this probability is as low as 10^-300. We investigate the links between frozen development and the solution cost of graph coloring. In SAT, a discontinuity in the order parameter has been correlated with the hardness of SAT instances, and our data for coloring is suggestive of an asymptotic discontinuity. The uncolorability threshold is known to give rise to har...
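The definition of a frozen pair can be checked directly by brute force on small graphs: enumerate every legal coloring and keep the vertex pairs that receive the same color in all of them. This enumeration sketch is our own illustration of the definition, not the paper's specialized algorithmic techniques (which must scale far beyond what enumeration allows).

```python
from itertools import product

def frozen_pairs(n, edges, k=3):
    """Return (frozen, num_colorings) for an n-vertex graph.

    A pair (u, v) is frozen when u and v receive the same color in
    every legal k-coloring. Brute force: feasible only for tiny n.
    """
    legal = [c for c in product(range(k), repeat=n)
             if all(c[u] != c[v] for u, v in edges)]
    frozen = {(u, v)
              for u in range(n) for v in range(u + 1, n)
              if legal and all(c[u] == c[v] for c in legal)}
    return frozen, len(legal)

# Two triangles sharing the edge (1, 2): in every 3-coloring the two
# apex vertices 0 and 3 are forced to the same color, so (0, 3) is frozen.
print(frozen_pairs(4, [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]))
```

Adding edges one at a time and re-running such a check is the conceptual picture behind "frozen development": the frozen set stays small until, quite suddenly, a single edge forces many pairs at once.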
A Fast, Parallel Spanning Tree Algorithm for Symmetric Multiprocessors (SMPs) (Extended Abstract)
2004
Cited by 31 (11 self)
Our study in this paper focuses on implementing parallel spanning tree algorithms on SMPs. Spanning tree is an important problem in the sense that it is the building block for many other parallel graph algorithms and also because it is representative of a large class of irregular combinatorial problems that have simple and efficient sequential implementations and fast PRAM algorithms, but often have no known efficient parallel implementations. In this paper we present a new randomized algorithm and implementation with superior performance that, for the first time, achieves parallel speedup on arbitrary graphs (both regular and irregular topologies) when compared with the best sequential implementation for finding a spanning tree. This new algorithm uses several techniques to give an expected running time that scales linearly with the number p of processors for suitably large inputs (n > p^2). As the spanning tree problem is notoriously hard for any parallel implementation to achieve reasonable speedup, our study may shed new light on implementing PRAM algorithms for shared-memory parallel computers. The main results of this paper are:
1. A new and practical spanning tree algorithm for symmetric multiprocessors that exhibits parallel speedups on graphs with regular and irregular topologies; and
2. An experimental study of parallel spanning tree algorithms that reveals the superior performance of our new approach compared with the previous algorithms.
The source code for these algorithms is freely available from our web site hpc.ece.unm.edu.
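For context, the "best sequential implementation" that such parallel codes are measured against can be as simple as a single union-find pass over the edge list. This sketch is a generic sequential baseline of ours, not the paper's randomized SMP algorithm.

```python
def spanning_tree(n, edges):
    """Grow a spanning forest with union-find (sequential baseline).

    Each edge is kept iff it joins two previously disconnected
    components; path halving keeps find() nearly constant time.
    This is the kind of simple linear-work sequential code that
    parallel SMP implementations must beat to show real speedup.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

# A 4-cycle: any spanning tree keeps n - 1 = 3 of its 4 edges.
print(spanning_tree(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```

The difficulty the abstract alludes to is that this loop is inherently sequential in its component merges; parallel versions must partition the work so that concurrent merges on irregular graphs do not serialize on shared structures.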