Results 11–20 of 77
Analysis of an Asymmetric Leader Election Algorithm
 Electronic J. Combin
, 1996
Cited by 27 (9 self)
We consider a leader election algorithm in which a set of distributed objects (people, computers, etc.) try to identify one object as their leader. The election process is randomized, that is, at every stage of the algorithm those objects that have survived so far flip a biased coin, and those who received, say, a tail survive for the next round. The process continues until only one object remains. Our interest is in evaluating the limiting distribution and the first two moments of the number of rounds needed to select a leader. We establish precise asymptotics for the first two moments, and show that the asymptotic expression for the duration of the algorithm exhibits some periodic fluctuations and consequently no limiting distribution exists. These results are proved by analytical techniques of the precise analysis of algorithms, such as analytical poissonization and depoissonization, the Mellin transform, and complex analysis.
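The elimination process described in this abstract is easy to simulate. The sketch below (helper name and replay rule are our own reading, not the paper's code) flips a coin with tail-probability p for each survivor and replays a round whenever every contender is eliminated:

```python
import random

def leader_election_rounds(n, p=0.5, rng=None):
    """Simulate the elimination process: each survivor flips a coin with
    tail-probability p; those who get tails survive. A round in which
    everyone is eliminated is replayed with the same contenders."""
    rng = rng or random.Random()
    rounds = 0
    while n > 1:
        rounds += 1
        survivors = sum(1 for _ in range(n) if rng.random() < p)
        if survivors > 0:          # if all flipped heads, replay the round
            n = survivors
    return rounds
```

With p = 1/2 the expected duration grows like log2 n, and averaging many runs for varying n exhibits the small periodic fluctuations the paper analyzes.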
Multidigit Multiplication For Mathematicians
Cited by 27 (9 self)
This paper surveys techniques for multiplying elements of various commutative rings. It covers Karatsuba multiplication, dual Karatsuba multiplication, Toom multiplication, dual Toom multiplication, the FFT trick, the twisted FFT trick, the split-radix FFT trick, Good's trick, the Schonhage-Strassen trick, Schonhage's trick, Nussbaumer's trick, the cyclic Schonhage-Strassen trick, and the Cantor-Kaltofen theorem. It emphasizes the underlying ring homomorphisms.
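Of the techniques surveyed, Karatsuba multiplication is the simplest to illustrate: it trades four half-size products for three. A minimal integer version (a standard textbook sketch, not the paper's presentation):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using three recursive half-size
    products instead of four (about O(n^1.585) word operations)."""
    if x < 2**32 or y < 2**32:       # base case: machine-size multiply
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)            # product of high halves
    b = karatsuba(xl, yl)            # product of low halves
    c = karatsuba(xh + xl, yh + yl)  # cross terms recovered from one product
    return (a << 2 * n) + ((c - a - b) << n) + b
```

The survey's emphasis on ring homomorphisms shows up even here: the split x = xh·2^n + xl is evaluation of a polynomial, and c − a − b recovers the middle coefficient.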
A discipline of dynamic programming over sequence data
 Science of Computer Programming
, 2004
Cited by 26 (12 self)
Dynamic programming is a classical programming technique, applicable in a wide variety of domains such as stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing of ambiguous languages, and biosequence analysis. Little methodology has hitherto been available to guide the design of such algorithms. The matrix recurrences that typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and, in nontrivial applications, almost impossible to debug completely. This article introduces a discipline designed to alleviate this problem. We describe an algebraic style of dynamic programming over sequence data. We define its formal framework, based on a combination of grammars and algebras, and including a formalization of Bellman's Principle. We suggest a language used for algorithm design on a convenient level of abstraction. We outline three ways of implementing this language, including an embedding in a lazy functional language. The workings of the ...
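As a concrete instance of the matrix recurrences the paper abstracts over, here is the classic edit-distance DP over two sequences (this is just the recurrence style being formalized, not the paper's algebraic framework):

```python
def edit_distance(s, t):
    """Sequence DP: d[i][j] = minimum number of insertions, deletions,
    and substitutions turning s[:i] into t[:j]."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                          # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (s[i - 1] != t[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[m][n]
```

The paper's point is that recurrences like this are easy to get subtly wrong; its grammar-plus-algebra discipline separates the search space from the scoring so each part can be checked on its own.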
The Generation of Random Numbers That Are Probably Prime
 Journal of Cryptology
, 1988
Cited by 22 (0 self)
In this paper we make two observations on Rabin's probabilistic primality test. The first is a provocative reason why Rabin's test is so good. It turned out that a single iteration has a nonnegligible probability of failing _only_ on composite numbers that can actually be split in expected polynomial time. Therefore, factoring would be easy if Rabin's test systematically failed with a 25% probability on each composite integer (which, of course, it does not). The second observation is more fundamental because it is _not_ restricted to primality testing: it has consequences for the entire field of probabilistic algorithms. The failure probability when using a probabilistic algorithm for the purpose of testing some property is compared with that when using it for the purpose of obtaining a random element hopefully having this property. More specifically, we investigate the question of how reliable Rabin's test is when used to _generate_ a random integer that is probably prime, rather than to _test_ a specific integer for primality.
Key words: factorization, false witnesses, primality testing, probabilistic algorithms, Rabin's test.
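A single iteration of Rabin's test, the object of the abstract's first observation, can be sketched as the standard Miller-Rabin witness check (function and parameter names are ours):

```python
import random

def rabin_witness(a, n):
    """One iteration of Rabin's test: return True if base a witnesses
    that n is composite (n odd, n > 2, 1 < a < n)."""
    d, s = n - 1, 0
    while d % 2 == 0:                # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False                 # a is not a witness; n may be prime
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return False
    return True                      # n is certainly composite

def probably_prime(n, k=20, rng=random):
    """Declare n probably prime if k random bases fail to witness."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return not any(rabin_witness(rng.randrange(2, n - 1), n)
                   for _ in range(k))
```

Each composite n has at least a 3/4 fraction of witnesses, which is the 25% failure bound the abstract alludes to; the paper's question is how this bound behaves when the loop above is wrapped inside a random-integer generator.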
Stochastic Problem Solving by Local Computation Based on a Self-organization Paradigm
 27th Hawaii International Conference on System Sciences
, 1994
Cited by 18 (10 self)
We are developing a new problem-solving methodology based on a self-organization paradigm. To realize our future goal of self-organizing computational systems, we have to study computation based on local information and its emergent behavior, which are considered essential in self-organizing systems. This paper presents a stochastic (or nondeterministic) problem-solving method using local operations and local evaluation functions. Several constraint satisfaction problems are solved, and approximate solutions of several optimization problems are found, by this method in polynomial time on average. Major features of this method are as follows. Problems can be solved using one or a few simple production rules and evaluation functions, both of which work locally, i.e., on a small number of objects. Local maxima of the sum of evaluation function values can sometimes be avoided. Limit cycles of execution can also be avoided. There are two methods for changing the locality of rules. The efficiency of searches and the possibility of falling into local maxima can be controlled by changing the locality.
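The flavor of stochastic problem solving with local rules and local evaluation functions can be illustrated with the standard min-conflicts heuristic on n-queens (a related textbook method, not the authors' exact rules): each repair looks only at one queen and its conflicts, yet repeated local repairs reach a global solution.

```python
import random

def min_conflicts_queens(n, max_steps=100000, rng=None):
    """Stochastic local repair for n-queens. col[i] = row of the queen
    in column i; each step re-places one conflicted queen in the row
    that locally minimizes its conflicts (random tie-breaking)."""
    rng = rng or random.Random()
    col = [rng.randrange(n) for _ in range(n)]

    def conflicts(c, r):
        # local evaluation: attacks on square (c, r) from other columns
        return sum(1 for c2 in range(n) if c2 != c and
                   (col[c2] == r or abs(col[c2] - r) == abs(c2 - c)))

    for _ in range(max_steps):
        bad = [c for c in range(n) if conflicts(c, col[c]) > 0]
        if not bad:
            return col               # local rules reached a global solution
        c = rng.choice(bad)          # pick a conflicted queen at random
        col[c] = min(range(n), key=lambda r: (conflicts(c, r), rng.random()))
    return None                      # step budget exhausted
```

The random choices play the role the abstract assigns to stochasticity: they let the search escape some local maxima and avoid limit cycles of execution.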
Parallel Real-Time Numerical Computation: Beyond Speedup III
 International Journal of Computers and their Applications, Special Issue on High Performance Computing Systems
Cited by 16 (15 self)
Parallel computers can do more than simply speed up sequential computations. They are capable of finding solutions that are far better in quality than those obtained by sequential computers. This fact is demonstrated by analyzing sequential and parallel solutions to numerical problems in a real-time paradigm. In this setting, numerical data required to solve a problem are received as input by a computer system, at regular intervals. The computer must process its inputs as soon as they arrive. It must also produce its outputs at regular intervals, as soon as they are available. We show that for some real-time numerical problems a parallel computer can deliver a solution that is significantly more accurate than when computed by a sequential computer. Similar results were derived recently in the areas of real-time optimization and real-time cryptography. Key words and phrases: Parallelism, real-time computation, numerical analysis. This research was supported by the Natural Sciences a...
Parallel Real-Time Computation: Sometimes Quantity Means Quality
 Computing and Informatics
, 2000
Cited by 15 (14 self)
The primary purpose of parallel computation is the fast execution of computational tasks that are too slow to perform sequentially. As a consequence, interest in parallel computation to date has naturally focused on the speedup provided by parallel algorithms over their sequential counterparts. The thesis of this paper is that a second, equally important motivation for using parallel computers exists. Specifically, the following question is posed: Can parallel computers, thanks to their multiple processors, do more than simply speed up the solution to a problem? We show that within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class, when computed in parallel, is far superior in quality to the best one obtained on a sequential computer. What constitutes a better solution depends on the problem under consideration. Thus, `better' means `closer to optimal' for optimization problems, `more secure' for crypto...
Self-Stabilizing Dynamic Programming Algorithms on Trees
 in Proceedings of the Second Workshop on Self-Stabilizing Systems
, 1995
Cited by 12 (1 self)
Dynamic programming is a bottom-up approach that is typically used for designing algorithms for optimization problems. Many graph-theoretic optimization problems that are NP-hard in general can be efficiently solved, using dynamic programming, when restricted to trees. Examples of such problems include maximum weighted independent set and minimum weighted edge covering. In this paper, we present a technique to translate certain dynamic programming algorithms into distributed, self-stabilizing algorithms that run on trees. The resulting self-stabilizing algorithms are deterministic and uniform. We prove the correctness of the algorithms produced by our translator assuming a distributed scheduler with read-write atomicity. We also show that on a tree with radius r, our algorithms stabilize in no more than 2r + 3 rounds. Keywords: Centers, Distributed algorithms, Dynamic programming, Self-stabilization, Trees.
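The maximum weighted independent set problem named in the abstract has a textbook bottom-up tree DP, the kind of recurrence the paper's translator turns into a self-stabilizing algorithm. A sequential sketch (our own illustration, not the paper's distributed version):

```python
def mwis(children, weights, root):
    """Maximum weighted independent set on a rooted tree.
    children: node -> list of child nodes; weights: node -> weight.
    For each node v, inc = best set in v's subtree that includes v,
    exc = best set that excludes v."""
    def solve(v):
        inc, exc = weights[v], 0
        for c in children.get(v, []):
            ci, ce = solve(c)
            inc += ce                # children of an included node are excluded
            exc += max(ci, ce)       # an excluded node frees each child
        return inc, exc
    return max(solve(root))
```

In the self-stabilizing version, each node would repeatedly recompute its (inc, exc) pair from its children's current values; the 2r + 3 round bound reflects how far such local updates must propagate up a tree of radius r.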
Parallel Block-Diagonal-Bordered Sparse Linear Solvers for Electrical Power System Applications
, 1995
Cited by 11 (3 self)
This thesis presents research into parallel linear solvers for block-diagonal-bordered sparse matrices. The block-diagonal-bordered form identifies parallelism that can be exploited for both direct and iterative linear solvers. We have developed efficient parallel block-diagonal-bordered sparse direct methods based on both LU factorization and Cholesky factorization algorithms, and we have also developed a parallel block-diagonal-bordered sparse iterative method based on the Gauss-Seidel method. Parallel factorization algorithms for block-diagonal-bordered form matrices require a specialized ordering step coupled to an explicit load balancing step in order to generate this matrix form and to distribute the computational workload uniformly for an irregular matrix throughout a distributed-memory multiprocessor. Matrix orderings are performed using a diakoptic technique based on node-tearing nodal analysis. Parallel Gauss-Seidel algorithms for block-diagonal-bordered form matrices require a two-part matrix ordering technique: first to partition the matrix into block-diagonal-bordered form, again using the node-tearing diakoptic techniques, and then to multicolor the data in the last diagonal block using graph coloring techniques. The ordered matrices have extensive parallelism, while maintaining the strict precedence relationships in the Gauss-Seidel algorithm. Empirical ...
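The base recurrence that the thesis parallelizes via ordering and multicoloring is plain Gauss-Seidel. A minimal sequential sketch for a diagonally dominant system (dense, for clarity; the thesis operates on sparse block-diagonal-bordered matrices):

```python
def gauss_seidel(A, b, iters=100):
    """Gauss-Seidel iteration for Ax = b: sweep the unknowns in order,
    updating x[i] in place so later updates in the same sweep already
    use the new values (the precedence the multicoloring must preserve)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

The in-place update is what makes naive parallelization unsafe; coloring the unknowns so that same-color entries do not couple lets each color class be updated concurrently without violating these precedence relationships.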
Analysis Of A Splitting Process Arising In Probabilistic Counting And Other Related Algorithms
, 1996
Cited by 11 (8 self)
We present an analytical method of analyzing a class of "splitting algorithms" that include probabilistic counting, selecting the leader, estimating the number of questions necessary to identify distinct objects, searching algorithms based on digital tries, approximate counting, and so forth. In our discussion we concentrate on the analysis of a generalized probabilistic counting algorithm. Our technique belongs to the toolkit of the analytic analysis of algorithms, and it involves solutions of functional equations, analytical poissonization and depoissonization, as well as the Mellin transform. In particular, we deal with an instance of the functional equation g(z) = βa(z)g(z/2) + b(z), where a(z) and b(z) are given functions and β < 1 is a constant. With respect to our generalized probabilistic counting algorithm, we obtain asymptotic expansions of the first two moments of an estimate of the cardinality of a set that is computed by the algorithm. We also derive the asymptotic distrib...
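Formally iterating the functional equation g(z) = βa(z)g(z/2) + b(z) halves the argument at each step; unwinding the recursion (and assuming the resulting sum and products converge, which the condition β < 1 helps ensure) gives the standard series solution:

```latex
g(z) = b(z) + \beta\, a(z)\, g\!\left(\tfrac{z}{2}\right)
     = \sum_{k \ge 0} \beta^{k}\, b\!\left(\frac{z}{2^{k}}\right)
       \prod_{j=0}^{k-1} a\!\left(\frac{z}{2^{j}}\right).
```

The asymptotic behavior of such sums as z grows is exactly what the Mellin transform machinery mentioned in the abstract is designed to extract.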