Results 1–10 of 147
Efficient erasure correcting codes
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 2001
Abstract

Cited by 360 (26 self)
We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both sides of the graph which is necessary and sufficient for the decoding process to finish successfully with high probability. By carefully designing these graphs we can construct for any given rate R and any given real number ε a family of linear codes of rate R which can be encoded in time proportional to ln(1/ε) times their block length. Furthermore, a codeword can be recovered with high probability from a portion of its entries of length (1 + ε)Rn or more. The recovery algorithm also runs in time proportional to n ln(1/ε). Our algorithms have been implemented and work well in practice; various implementation issues are discussed.
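The iterative recovery step this abstract describes can be sketched as a simple peeling process: repeatedly find a check equation with exactly one erased neighbour and solve for the missing symbol. The toy graph and received values below are illustrative assumptions, not the paper's cascade construction.

```python
# Minimal sketch of iterative ("peeling") erasure decoding on a bipartite
# graph of XOR checks. Each check constrains its variables to XOR to zero.

def peel_decode(checks, values):
    """checks: list of tuples of variable indices; values: dict index -> bit,
    with erased positions absent. Repeatedly find a check with exactly one
    erased neighbour and recover it as the XOR of the known neighbours."""
    values = dict(values)
    progress = True
    while progress:
        progress = False
        for vars_ in checks:
            missing = [v for v in vars_ if v not in values]
            if len(missing) == 1:          # degree-one check: solvable now
                acc = 0
                for v in vars_:
                    if v != missing[0]:
                        acc ^= values[v]
                values[missing[0]] = acc   # whole check XORs to zero
                progress = True
    return values

# Toy example: 4 data bits plus 3 parity bits (bit 4 = b0^b1, and so on).
checks = [(0, 1, 4), (1, 2, 5), (2, 3, 6)]
received = {0: 1, 2: 0, 4: 0, 5: 1, 6: 1}      # bits 1 and 3 erased
recovered = peel_decode(checks, received)       # fills in bits 1 and 3
```

Whether the process finishes depends, exactly as the abstract says, on the degree distribution: a check with two or more erased neighbours can never fire on its own.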
Local Search Strategies for Satisfiability Testing
 DIMACS SERIES IN DISCRETE MATHEMATICS AND THEORETICAL COMPUTER SCIENCE
, 1995
Abstract

Cited by 311 (28 self)
It has recently been shown that local search is surprisingly good at finding satisfying assignments for certain classes of CNF formulas [24]. In this paper we demonstrate that the power of local search for satisfiability testing can be further enhanced by employing a new strategy, called "mixed random walk", for escaping from local minima. We present experimental results showing how this strategy allows us to handle formulas that are substantially larger than those that can be solved with basic local search. We also present a detailed comparison of our random walk strategy with simulated annealing. Our results show that mixed random walk is the superior strategy on several classes of computationally difficult problem instances. Finally, we present results demonstrating the effectiveness of local search with walk for solving circuit synthesis and diagnosis problems.
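The "mixed random walk" idea can be sketched as follows: when a clause is unsatisfied, with probability p flip a random variable of that clause (the walk move), otherwise flip greedily. This is a minimal illustration under those assumptions, not the authors' implementation; the formula and parameters below are made up.

```python
import random

def mixed_walk_sat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """clauses: tuples of nonzero ints, DIMACS-style (negative = negated).
    Returns a 1-indexed assignment list, or None if no model was found."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]
    def sat(lit):
        return assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                      # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:                   # random-walk move
            var = abs(rng.choice(clause))
        else:                                  # greedy move: best resulting score
            def score(v):
                assign[v] = not assign[v]
                s = sum(any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return s
            var = max((abs(l) for l in clause), key=score)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = mixed_walk_sat([(1, 2), (-1, 3), (-2, -3)], 3)
```

The walk move is what lets the search leave a local minimum that the greedy move alone would never escape.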
Sharp Thresholds of Graph Properties, and the k-SAT Problem
 J. Amer. Math. Soc
, 1998
Abstract

Cited by 210 (7 self)
Given a monotone graph property P, consider µ_p(P), the probability that a random graph with edge probability p will have P. The function dµ_p(P)/dp is the key to understanding the threshold behavior of the property P. We show that if dµ_p(P)/dp is small (corresponding to a non-sharp threshold), then there is a list of graphs of bounded size such that P can be approximated by the property of having one of the graphs as a subgraph. One striking consequence of this result is that a coarse threshold for a random graph property can only happen when the value of the critical edge probability is a rational power of n.
The Constrainedness of Search
 In Proceedings of AAAI-96
, 1999
Abstract

Cited by 128 (29 self)
We propose a definition of 'constrainedness' that unifies two of the most common but informal uses of the term. These are that branching heuristics in search algorithms often try to make the most "constrained" choice, and that hard search problems tend to be "critically constrained". Our definition of constrainedness generalizes a number of parameters used to study phase transition behaviour in a wide variety of problem domains. As well as predicting the location of phase transitions in solubility, constrainedness provides insight into why problems at phase transitions tend to be hard to solve. Such problems are on a constrainedness "knife-edge", and we must search deep into the problem before they look more or less soluble. Heuristics that try to get off this knife-edge as quickly as possible by, for example, minimizing the constrainedness are often very effective. We show that heuristics from a wide variety of problem domains can be seen as minimizing the constrainedness (or proxies ...
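One common statement of the constrainedness parameter in this line of work is κ = 1 − log2⟨Sol⟩/N, where ⟨Sol⟩ is the expected number of solutions and N the size of the state space in bits; the formula here is a recollection of that definition, not quoted from the paper. Applied to random 3-SAT it reproduces the familiar clause-to-variable ratio picture:

```python
import math

# Hedged sketch: kappa = 1 - log2(<Sol>) / N. For random 3-SAT with n
# variables and m clauses, <Sol> = 2^n * (7/8)^m (each clause is satisfied
# by 7 of 8 literal settings) and N = n bits of state space.
def kappa_3sat(n_vars, n_clauses):
    log2_sol = n_vars + n_clauses * math.log2(7 / 8)
    return 1 - log2_sol / n_vars

# kappa < 1: underconstrained; kappa > 1: overconstrained.
# kappa = 1 exactly when m/n = 1 / log2(8/7) ~ 5.19, the annealed estimate
# of the transition (the empirical 3-SAT threshold sits lower, near 4.26).
critical_ratio = 1 / math.log2(8 / 7)
```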
Analysis of Random Processes via And-Or Tree Evaluation
 In Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms
, 1998
Abstract

Cited by 113 (23 self)
We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, solving random k-SAT formulae using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. We illustrate our methodology on the three problems listed above.
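Analyses of this kind typically reduce the random process to a one-dimensional recursion that can be iterated numerically. The sketch below shows the recursion for iterative erasure decoding, y ← p·λ(1 − ρ(1 − y)), where λ and ρ are the edge-degree generating polynomials; the (3,6)-regular degree choice and the channel values are illustrative assumptions, not taken from the paper.

```python
# Iterate the erasure-probability recursion y <- p * lambda(1 - rho(1 - y)).
# Coefficient lists give polynomial coefficients by power of x.

def evolve(p, lam, rho, rounds=200):
    poly = lambda coeffs, x: sum(c * x**k for k, c in enumerate(coeffs))
    y = p
    for _ in range(rounds):
        y = p * poly(lam, 1 - poly(rho, 1 - y))
    return y

lam = [0, 0, 1]           # lambda(x) = x^2 : every variable node has degree 3
rho = [0, 0, 0, 0, 0, 1]  # rho(x)    = x^5 : every check node has degree 6

low  = evolve(0.40, lam, rho)  # below the (3,6) threshold (~0.429): dies out
high = evolve(0.45, lam, rho)  # above it: stalls at a positive fixed point
```

The decoder succeeds with high probability exactly when the recursion is driven to zero, which is the kind of necessary-and-sufficient criterion these tools deliver.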
Typical random 3-SAT formulae and the satisfiability threshold
 in Proceedings of the Eleventh ACM-SIAM Symposium on Discrete Algorithms
, 2000
Abstract

Cited by 97 (4 self)
We present a new structural (or syntactic) approach for estimating the satisfiability threshold of random 3-SAT formulae. We show its efficiency in obtaining a jump from the previous upper bounds, lowering them to 4.506. The method combines well with other techniques, and also applies to other problems, such as the 3-colourability of random graphs.
Approximating the unsatisfiability threshold of random formulas
, 1998
Abstract

Cited by 88 (15 self)
Let φ be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number κ such that if the ratio of the number of clauses over the number of variables of φ strictly exceeds κ, then φ is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that κ ≤ 5.191. This upper bound was improved by Kamath et al. to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of κ is around 4.2. In this work, we define, in terms of the random formula φ, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then φ is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for κ equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601. In general, by letting the ...
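The "well-known and more or less straightforward argument" behind the 5.191 bound is the first moment method: a random 3-SAT formula with n variables and m = rn clauses has E[#satisfying assignments] = 2^n (7/8)^m, which tends to zero whenever r > log 2 / log(8/7). That constant is a two-line check:

```python
import math

# First moment bound for random 3-SAT: each fixed assignment satisfies a
# random 3-clause with probability 7/8, so E[#models] = 2^n * (7/8)^(r*n).
# This vanishes as n grows exactly when 2 * (7/8)^r < 1, i.e. when
# r exceeds log(2) / log(8/7).
r_star = math.log(2) / math.log(8 / 7)   # ~ 5.191
```

Markov's inequality then turns "expected number of models tends to 0" into "almost certainly unsatisfiable", which is why sharper bounds such as 4.667 come from replacing the plain count with cleverer decreasing random variables.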
Random Constraint Satisfaction: A More Accurate Picture
, 1997
Abstract

Cited by 85 (7 self)
Recently there has been a great amount of interest in Random Constraint Satisfaction Problems, both from an experimental and a theoretical point of view. Rather intriguingly, experimental results with various models for generating random CSP instances suggest a "threshold-like" behaviour, and some theoretical work has been done in analyzing these models when the number of variables is asymptotic. In this paper we show that the models commonly used for generating random CSP instances suffer from a wrong parameterization which makes them unsuitable for asymptotic analysis. In particular, when the number of variables becomes large, almost all instances they generate are, trivially, overconstrained. We then present a new model that is suitable for asymptotic analysis and, in the spirit of random SAT, we derive lower and upper bounds for its parameters so that the instances generated are "almost surely" overconstrained and underconstrained, respectively. Finally, we apply the technique introduced in [19] to one of the popular models in Artificial Intelligence and derive sharper estimates for the probability of being overconstrained as a function of the number of variables.
Lower bounds for random 3-SAT via differential equations
 THEORETICAL COMPUTER SCIENCE
, 2001