Results 11–20 of 119
Path-based depth-first search for strong and biconnected components
 Information Processing Letters
, 2000
Abstract

Cited by 36 (0 self)
Key words: Graph, depth-first search, strongly connected component, biconnected component, stack.
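The two-stack, path-based scheme named in this entry's key words can be sketched as follows. This is an illustrative Gabow-style rendering in Python; the function name and the dict-of-lists graph representation are assumptions, not taken from the paper:

```python
def strongly_connected_components(graph):
    """Path-based DFS computation of strongly connected components.

    graph: dict mapping each vertex to an iterable of its successors.
    Returns a list of SCCs, each a list of vertices.
    """
    preorder = {}          # vertex -> DFS preorder number
    counter = 0
    S = []                 # stack of vertices not yet assigned to an SCC
    P = []                 # stack of possible SCC root candidates
    assigned = set()
    sccs = []

    def dfs(v):
        nonlocal counter
        preorder[v] = counter
        counter += 1
        S.append(v)
        P.append(v)
        for w in graph.get(v, ()):
            if w not in preorder:
                dfs(w)
            elif w not in assigned:
                # Edge back into the current path: contract the cycle
                # by popping root candidates newer than w.
                while preorder[P[-1]] > preorder[w]:
                    P.pop()
        if P and P[-1] == v:
            # v is the root of an SCC: pop its whole component from S.
            P.pop()
            component = []
            while True:
                w = S.pop()
                component.append(w)
                assigned.add(w)
                if w == v:
                    break
            sccs.append(component)

    for v in graph:
        if v not in preorder:
            dfs(v)
    return sccs
```

The second stack of root candidates is what distinguishes the path-based variant from Tarjan's lowlink bookkeeping: cycles are contracted eagerly as soon as a back edge is seen.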
Packet Routing In Fixed-Connection Networks: A Survey
, 1998
Abstract

Cited by 34 (3 self)
We survey routing problems on fixed-connection networks. We consider many aspects of the routing problem and provide known theoretical results for various communication models. We focus on (partial) permutation, k-relation routing, routing to random destinations, dynamic routing, isotonic routing, fault-tolerant routing, and related sorting results. We also provide a list of unsolved problems and numerous references.
Multidigit Multiplication For Mathematicians
, 2001
Abstract

Cited by 31 (9 self)
This paper surveys techniques for multiplying elements of various commutative rings. It covers Karatsuba multiplication, dual Karatsuba multiplication, Toom multiplication, dual Toom multiplication, the FFT trick, the twisted FFT trick, the split-radix FFT trick, Good's trick, the Schönhage-Strassen trick, Schönhage's trick, Nussbaumer's trick, the cyclic Schönhage-Strassen trick, and the Cantor-Kaltofen theorem. It emphasizes the underlying ring homomorphisms.
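Of the techniques listed in this entry, Karatsuba multiplication is the simplest to illustrate: four half-size products are replaced by three. A minimal sketch, splitting at a power of two rather than a decimal digit boundary (an implementation choice for this example, not the paper's):

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers.

    Uses the identity
      (a*B + b)(c*B + d) = ac*B^2 + ((a+b)(c+d) - ac - bd)*B + bd,
    so each level needs 3 recursive products instead of 4.
    """
    if x < 10 or y < 10:
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    B = 1 << half
    a, b = divmod(x, B)
    c, d = divmod(y, B)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd   # equals a*d + b*c
    return (ac << (2 * half)) + (mid << half) + bd
```

The recurrence T(n) = 3 T(n/2) + O(n) gives the familiar O(n^1.585) bound, versus O(n^2) for schoolbook multiplication.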
Parallel Real-Time Optimization: Beyond Speedup
 PARALLEL PROCESSING LETTERS
, 1999
Abstract

Cited by 25 (23 self)
Traditionally, interest in parallel computation centered around the speedup provided by parallel algorithms over their sequential counterparts. In this paper, we ask a different type of question: Can parallel computers, due to their speed, do more than simply speed up the solution to a problem? We show that for real-time optimization problems, a parallel computer can obtain a solution that is better than that obtained by a sequential one. Specifically, a sequential and a parallel algorithm are exhibited for the problem of computing the best-possible approximation to the minimum-weight spanning tree of a connected, undirected and weighted graph whose vertices and edges are not all available at the outset, but instead arrive in real time. While the parallel algorithm succeeds in computing the exact minimum-weight spanning tree, the sequential algorithm can only manage to obtain an approximate solution. In the worst case, the ratio of the weight of the solution obtained seque...
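For reference, the offline version of the problem studied here, computing a minimum-weight spanning tree once the whole graph is known, can be sketched with Kruskal's algorithm and a union-find structure; the real-time arrival model analysed in the paper is not modelled in this sketch:

```python
def kruskal_mst(n, edges):
    """Minimum-weight spanning tree via Kruskal's algorithm.

    n: number of vertices, labelled 0..n-1.
    edges: iterable of (weight, u, v) tuples.
    Returns (total_weight, list of chosen (u, v, weight) edges).
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):          # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep the edge iff it joins two trees
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen
```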
Phase Transitions in Relational Learning
, 2000
Abstract

Cited by 23 (2 self)
One of the major limitations of relational learning is due to the complexity of verifying hypotheses on examples. In this paper we investigate this task in light of recently published results, which show that many hard problems exhibit a narrow “phase transition” with respect to some order parameter, coupled with a large increase in computational complexity. First we show that matching a class of artificially generated Horn clauses on ground instances presents a typical phase transition in solvability with respect to both the number of literals in the clause and the number of constants occurring in the instance to match. Then, we demonstrate that phase transitions also appear in real-world learning problems, and that learners tend to generate inductive hypotheses lying exactly on the phase transition. On the other hand, extensive experimentation revealed that not every matching problem inside the phase transition region is intractable. Unfortunately, however, identifying those that are feasible cannot be done solely on the basis of the order parameters. To face this problem, we propose a method, based on a Monte Carlo algorithm, to estimate online the likelihood that the current matching problem will exceed a given amount of computational resources. The impact of the above findings on relational learning is discussed.
The Generation of Random Numbers That Are Probably Prime
 Journal of Cryptology
, 1988
Abstract

Cited by 23 (0 self)
In this paper we make two observations on Rabin's probabilistic primality test. The first is a provocative reason why Rabin's test is so good. It turned out that a single iteration has a nonnegligible probability of failing _only_ on composite numbers that can actually be split in expected polynomial time. Therefore, factoring would be easy if Rabin's test systematically failed with a 25% probability on each composite integer (which, of course, it does not). The second observation is more fundamental because it is _not_ restricted to primality testing: it has consequences for the entire field of probabilistic algorithms. The failure probability when using a probabilistic algorithm for the purpose of testing some property is compared with that when using it for the purpose of obtaining a random element hopefully having this property. More specifically, we investigate the question of how reliable Rabin's test is when used to _generate_ a random integer that is probably prime, rather than to _test_ a specific integer for primality.
Key words: factorization, false witnesses, primality testing, probabilistic algorithms, Rabin's test.
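The distinction drawn in this entry, using Rabin's test to test a given integer versus to generate a random probable prime, can be sketched as follows. This is a standard Miller-Rabin rendering; the function names, witness count, and small-prime pre-filter are illustrative choices, not the paper's:

```python
import random

def miller_rabin(n, rounds=20):
    """Rabin's probabilistic primality test (one-sided).

    Returns False if n is certainly composite; True if n passed
    `rounds` independent witness tests, i.e. n is probably prime.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):        # cheap trial-division filter
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)                   # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                   # a witnesses that n is composite
    return True

def random_probable_prime(bits, rounds=20):
    """The *generation* setting analysed in the entry above:
    draw random odd integers until one passes the test."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if miller_rabin(n, rounds):
            return n
```

The paper's point is that the error analysis differs between the two uses: per-call failure probability for testing a fixed n is not the same quantity as the probability that a generated output is composite.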
Stochastic Problem Solving by Local Computation based on Self-organization Paradigm
 27th Hawaii International Conference on System Sciences
, 1994
Abstract

Cited by 18 (10 self)
We are developing a new problem-solving methodology based on a self-organization paradigm. To realize our future goal of self-organizing computational systems, we have to study computation based on local information and its emergent behavior, which are considered essential in self-organizing systems. This paper presents a stochastic (or nondeterministic) problem-solving method using local operations and local evaluation functions. Several constraint satisfaction problems are solved and approximate solutions of several optimization problems are found by this method in polynomial time on average. Major features of this method are as follows. Problems can be solved using one or a few simple production rules and evaluation functions, both of which work locally, i.e., on a small number of objects. Local maxima of the sum of evaluation function values can sometimes be avoided. Limit cycles of execution can also be avoided. There are two methods for changing the locality of rules. The efficiency of searches and the possibility of falling into local maxima can be controlled by changing the locality.
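The style of method this entry describes, a stochastic local rule driven by a local evaluation function, can be illustrated by min-conflicts search on the n-queens constraint satisfaction problem. This is a hypothetical example in that spirit, not the paper's actual rule set:

```python
import random

def min_conflicts_nqueens(n, max_steps=100_000, rng=None):
    """Stochastic local search for the n-queens CSP.

    Local rule: pick a conflicted queen at random and move it to a
    column that minimises its conflict count (the local evaluation
    function). Random tie-breaking helps avoid limit cycles.
    """
    rng = rng or random.Random()
    cols = [rng.randrange(n) for _ in range(n)]   # cols[r] = column of row r's queen

    def conflicts(r, c):
        # Local evaluation: attacks on a queen placed at (r, c).
        return sum(
            1
            for r2 in range(n)
            if r2 != r
            and (cols[r2] == c or abs(cols[r2] - c) == abs(r2 - r))
        )

    for _ in range(max_steps):
        bad = [r for r in range(n) if conflicts(r, cols[r]) > 0]
        if not bad:
            return cols                            # all constraints satisfied
        r = rng.choice(bad)
        cols[r] = min(range(n), key=lambda c: (conflicts(r, c), rng.random()))
    return None                                    # step budget exhausted
```

Each step touches only one queen and evaluates only its own row's conflicts, matching the entry's theme of production rules and evaluation functions that work on a small number of objects.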
Statistical Model Checking for Markov Decision Processes
Abstract

Cited by 17 (1 self)
Statistical Model Checking (SMC) is a computationally very efficient verification technique based on selective system sampling. One well-identified shortcoming of SMC is that, unlike probabilistic model checking, it cannot be applied to systems featuring nondeterminism, such as Markov Decision Processes (MDP). We address this limitation by developing an algorithm that resolves nondeterminism probabilistically, and then uses multiple rounds of sampling and Reinforcement Learning to provably improve resolutions of nondeterminism with respect to satisfying a Bounded Linear Temporal Logic (BLTL) property. Our algorithm thus reduces an MDP to a fully probabilistic Markov chain on which SMC may be applied to give an approximate solution to the problem of checking the probabilistic BLTL property. We integrate our algorithm in a parallelised modification of the PRISM simulation framework. Extensive validation with both new and PRISM benchmarks demonstrates that the approach scales very well in scenarios where symbolic algorithms fail to do so.
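Once nondeterminism has been resolved, the pipeline in this entry reduces to SMC on a fully probabilistic Markov chain: estimating the probability of a bounded property from sampled traces. A crude sketch of that final stage (the toy chain, function names, and sample counts are all illustrative, not the paper's algorithm):

```python
import random

def smc_estimate(step, init, holds, horizon, samples=10_000, rng=None):
    """Monte Carlo estimate of a bounded reachability probability.

    Samples `samples` traces of at most `horizon` steps from a fully
    probabilistic system (`step` maps a state and an RNG to a random
    successor) and returns the fraction of traces on which the state
    predicate `holds` became true within the horizon.
    """
    rng = rng or random.Random()
    hits = 0
    for _ in range(samples):
        s = init
        for _ in range(horizon):
            if holds(s):
                hits += 1
                break
            s = step(s, rng)
    return hits / samples

# Toy Markov chain: a biased random walk on {0, ..., 5}.
def step(s, rng):
    return min(s + 1, 5) if rng.random() < 0.7 else max(s - 1, 0)
```

A usage example: `smc_estimate(step, 0, lambda s: s == 5, horizon=10)` estimates the probability of reaching state 5 within 10 steps; the standard Chernoff-Hoeffding bound then relates the sample count to the estimate's confidence.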
Parallel Real-Time Computation: Sometimes Quantity Means Quality
 Computing and Informatics
, 2000
Abstract

Cited by 15 (13 self)
The primary purpose of parallel computation is the fast execution of computational tasks that are too slow to perform sequentially. As a consequence, interest in parallel computation to date has naturally focused on the speedup provided by parallel algorithms over their sequential counterparts. The thesis of this paper is that a second, equally important motivation for using parallel computers exists. Specifically, the following question is posed: Can parallel computers, thanks to their multiple processors, do more than simply speed up the solution to a problem? We show that within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class, when computed in parallel, is far superior in quality to the best one obtained on a sequential computer. What constitutes a better solution depends on the problem under consideration. Thus, `better' means `closer to optimal' for optimization problems, `more secure' for crypto...