Results 11–20 of 420
Space-Efficiency for Routing Schemes of Stretch Factor Three
Journal of Parallel and Distributed Computing, 1997
Cited by 64 (6 self)
We deal with routing algorithms on arbitrary n-node networks. A routing algorithm is a deterministic distributed algorithm which routes messages from any source to any destination. It includes not only the classical routing tables, but also routing algorithms that generate paths with loops. Our goal is to design routing algorithms which minimize, for each router of the network, the amount of routing information that needs to be stored by the router in order to implement its own local routing algorithm. To simplify the implementation of a routing algorithm, names of the routers can be chosen in advance. We also take into account the efficiency of the routing, i.e., the length of the routing paths. The stretch factor is the maximum ratio, taken over all source-destination pairs, between the length of the paths computed by the routing algorithm and the distance between the source and the destination. We show that there exists an n-node network on which every routing algorithm o...
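The stretch-factor definition above is easy to check on a toy instance. The sketch below (the 4-cycle graph and the `clockwise` routing scheme are invented for illustration, not taken from the paper) compares routed path lengths against BFS distances:

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path (hop) distances from src via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def stretch_factor(adj, route_len):
    """Max over source-destination pairs of routed length / distance.
    route_len[(s, t)] is the hop count of the path the scheme produces."""
    worst = 1.0
    for s in adj:
        dist = bfs_dist(adj, s)
        for t in adj:
            if t != s:
                worst = max(worst, route_len[(s, t)] / dist[t])
    return worst

# A 4-cycle with a scheme that always routes clockwise: a neighbour in the
# "wrong" direction is reached in 3 hops instead of 1, so the stretch is 3.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
clockwise = {(s, t): (t - s) % 4 for s in adj for t in adj if s != t}
print(stretch_factor(adj, clockwise))  # 3.0
```

Note that the scheme's memory/stretch trade-off studied in the paper is about what each router must store to produce `route_len`; the sketch only evaluates a given scheme.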
Routing in networks with low doubling dimension
In 26th International Conference on Distributed Computing Systems (ICDCS), IEEE Computer, 2006
Cited by 63 (8 self)
This paper studies compact routing schemes for networks with low doubling dimension. Two variants are explored, name-independent routing and labeled routing. The key results obtained for this model are the following. First, we provide the first name-independent solution. Specifically, we achieve constant stretch and polylogarithmic storage. Second, we obtain the first truly scale-free solutions, namely, the network’s aspect ratio is not a factor in the stretch. Scale-free schemes are given for three problem models: name-independent routing on graphs, labeled routing on metric spaces, and labeled routing on graphs. Third, we prove a lower bound requiring linear storage for stretch < 3 schemes. This has the important ramification of separating for the first time the name-independent problem model from the labeled model for these networks, since compact stretch-(1+ε) labeled schemes are known to be possible.
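The doubling dimension that parameterizes these results can be computed by brute force for tiny finite metrics. This sketch (my own discrete, centre-restricted formulation, only for illustration; real definitions allow arbitrary ball centres) finds the smallest λ such that every ball of radius r is covered by λ balls of radius r/2:

```python
from itertools import combinations

def ball(points, dist, c, r):
    return {p for p in points if dist(c, p) <= r}

def doubling_constant(points, dist):
    """Smallest lambda s.t. every ball of radius r is covered by lambda
    balls of radius r/2, with centres drawn from the point set.
    Brute-force set cover; only feasible for tiny metrics."""
    radii = sorted({dist(a, b) for a in points for b in points if a != b})
    lam = 1
    for c in points:
        for r in radii:
            target = ball(points, dist, c, r)
            for k in range(1, len(points) + 1):  # smallest cover size
                if any(set().union(*(ball(points, dist, q, r / 2) for q in centres)) >= target
                       for centres in combinations(points, k)):
                    lam = max(lam, k)
                    break
    return lam

# Four equally spaced points on a line: under this discrete definition the
# unit-radius balls force lambda = 3, i.e. dimension log2(3) ~ 1.6.
pts = [0, 1, 2, 3]
d = lambda a, b: abs(a - b)
print(doubling_constant(pts, d))  # 3
```

The doubling dimension is then log₂ of this constant; "low doubling dimension" networks are those where it stays bounded as the network grows.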
A Mathematica q-Analogue of Zeilberger's Algorithm for Proving q-Hypergeometric Identities
1995
Cited by 62 (11 self)
Besides an elementary introduction to q-identities and basic hypergeometric series, a newly developed Mathematica implementation of a q-analogue of Zeilberger's fast algorithm for proving terminating q-hypergeometric identities, together with its theoretical background, is described. To illustrate the usage of the package and its range of applicability, nontrivial examples are presented, as well as additional features like the computation of companion and dual identities.
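For readers new to q-identities, the basic objects are polynomials in q that specialize to ordinary quantities at q = 1. A minimal sketch (not from the package; helper names are mine) computing Gaussian binomial coefficients via the q-Pascal recurrence [n,k] = [n−1,k−1] + qᵏ[n−1,k]:

```python
def poly_add(a, b, shift=0):
    """a + q^shift * b, polynomials as coefficient lists (low degree first)."""
    out = list(a) + [0] * max(0, shift + len(b) - len(a))
    for i, c in enumerate(b):
        out[shift + i] += c
    return out

def q_binomial(n, k):
    """Gaussian binomial [n choose k]_q via the q-Pascal recurrence."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    return poly_add(q_binomial(n - 1, k - 1), q_binomial(n - 1, k), shift=k)

# [4 choose 2]_q = 1 + q + 2q^2 + q^3 + q^4; at q = 1 it is C(4,2) = 6.
print(q_binomial(4, 2))  # [1, 1, 2, 1, 1]
```

Zeilberger-style algorithms certify identities among such q-hypergeometric terms automatically, rather than expanding them as done here.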
Greatest Factorial Factorization and Symbolic Summation
J. Symbolic Comput., 1995
Cited by 58 (7 self)
This paper is self-contained; no difference field knowledge but only basic facts from algebra are required. In the following we briefly review its sections. Section 2 presents the basic GFF notions, in particular the Fundamental Lemma and an algorithm for computing the GFF-form of a polynomial. In Section 3 we investigate the relation to the dispersion function (Abramov, 1971) and discuss "shift-saturated" polynomials, which are polynomials with sufficiently nice GFF-form. Due to lattice properties of K[x] with respect to gcd, a minimal shift-saturated polynomial sat(p) can be assigned to each p ∈ K[x]. The canonical S-form of a rational function is introduced as the quotient of two polynomials with denominator of type sat(p). In Section 4 rational telescoping is treated; based on S-form representation, Theorem 4.1 explains why factorials rather than powers play the essential role in summation. Section 5 presents a new and algebraically motivated approach to Gosper's algorithm; together with the basic notions of Section 2, this section can be read independently from the rest of the paper. In Section 6 we consider the general rational summation problem from the GFF point of view. Two new algorithms are given. The first one works iteratively, similar to the approach sketched by Moenck (1977). His approach is implemented in the computer algebra system Maple to sum rational functions, but due to several gaps in Moenck's original description the Maple algorithm fails on certain rational function inputs, as observed by the author of this paper; see Example 6.6. The second algorithm provides an analogue to what is called "Horowitz' Method" or the "Hermite-Ostrogradsky Formula" for rational function integration. In addition, discussing minimal-degree answers to...
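The rational telescoping idea referred to above can be checked concretely: summing f reduces to finding an antidifference g with g(k+1) − g(k) = f(k). A minimal sketch (the example sum and helper are mine, not the paper's algorithms) using exact rational arithmetic:

```python
from fractions import Fraction

def telescope_check(f, g, lo, hi):
    """Verify that g is an antidifference of f, i.e. g(k+1) - g(k) = f(k),
    and hence that sum_{k=lo}^{hi} f(k) = g(hi+1) - g(lo)."""
    assert all(g(k + 1) - g(k) == f(k) for k in range(lo, hi + 1))
    return g(hi + 1) - g(lo)

# f(k) = 1/(k(k+1)) has the rational antidifference g(k) = -1/k,
# so the sum from 1 to n telescopes to 1 - 1/(n+1).
f = lambda k: Fraction(1, k * (k + 1))
g = lambda k: Fraction(-1, k)
print(telescope_check(f, g, 1, 9))  # 9/10
```

The paper's GFF machinery is about deciding when such a rational g exists and constructing it; the sketch only verifies a given candidate.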
Fast Algorithms to Enumerate All Common Intervals of Two Permutations
Algorithmica, 2000
Cited by 51 (1 self)
Given two permutations of n elements, a pair of intervals of these permutations consisting of the same set of elements is called a common interval. Some genetic algorithms based on such common intervals have been proposed for sequencing problems and have exhibited good prospects. In this paper, we propose three types of fast algorithms to enumerate all common intervals: i) a simple O(n²) time algorithm (LHP), whose expected running time becomes O(n) for two randomly generated permutations, ii) a practically fast O(n²) time algorithm (MNG) using the reverse Monge property, and iii) an O(n + K) time algorithm (RC), where K (0 ≤ K ≤ n(n−1)/2) is the number of common intervals. It will also be shown that the expected number of common intervals for two random permutations is O(1). This result gives a reason for the phenomenon that the expected time complexity O(n) of the algorithm LHP is independent of K. Among the proposed algorithms, RC is most desirable from the theoretical point ...
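The definition of a common interval is direct to implement by brute force, which is useful as a reference against the fast algorithms. A sketch (my own, slower than any of LHP/MNG/RC; I restrict to intervals of length ≥ 2, one common convention):

```python
def common_intervals(p, q):
    """All pairs of index intervals of permutations p and q holding the
    same element set; brute force, for cross-checking fast algorithms."""
    n = len(p)
    pos_q = {v: i for i, v in enumerate(q)}
    result = []
    for i in range(n):
        for j in range(i + 1, n):           # intervals of length >= 2
            positions = [pos_q[v] for v in p[i:j + 1]]
            lo, hi = min(positions), max(positions)
            if hi - lo == j - i:            # same element set as q[lo..hi]
                result.append(((i, j), (lo, hi)))
    return result

print(common_intervals([2, 0, 1, 3], [0, 1, 2, 3]))
```

The test `hi - lo == j - i` works because an interval of a permutation is a common interval exactly when its elements occupy a contiguous block of positions in the other permutation.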
On the complexity of domainindependent planning
In Proc. AAAI-92, 381–386, 1992
Cited by 49 (7 self)
In this paper, we examine how the complexity of domain-independent planning with STRIPS-style operators depends on the nature of the planning operators. We show how the time complexity varies depending on a wide variety of conditions:
• whether or not delete lists are allowed;
• whether or not negative preconditions are allowed;
• whether or not the predicates are restricted to be propositions (i.e., 0-ary);
• whether the planning operators are given as part of the input to the planning problem, or instead are fixed in advance.
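To make the classification concrete: a propositional STRIPS operator is a triple of precondition, add, and delete lists. The toy sketch below (the domain and names are invented; it sits in the cell with delete lists and without negative preconditions) plans by breadth-first search over world states:

```python
from collections import deque

# Propositional STRIPS operators: (preconditions, add list, delete list).
OPS = {
    "unlock": ({"key"}, {"unlocked"}, set()),
    "open":   ({"unlocked"}, {"open"}, {"unlocked"}),
}

def plan(init, goal, ops):
    """Breadth-first search over world states; returns a shortest plan."""
    start = frozenset(init)
    seen, q = {start}, deque([(start, [])])
    while q:
        state, steps = q.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, dele) in ops.items():
            if pre <= state:                       # operator applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    q.append((nxt, steps + [name]))
    return None

print(plan({"key"}, {"open"}, OPS))  # ['unlock', 'open']
```

The paper's point is how the worst-case cost of this decision problem swings (from polynomial to EXPSPACE-hard territory) as the conditions in the list above are toggled; the search itself is exponential in the number of propositions.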
General Bounds on Statistical Query Learning and PAC Learning with Noise via Hypothesis Boosting
In Proceedings of the 34th Annual Symposium on Foundations of Computer Science, 1993
Cited by 45 (5 self)
We derive general bounds on the complexity of learning in the Statistical Query model and in the PAC model with classification noise. We do so by considering the problem of boosting the accuracy of weak learning algorithms which fall within the Statistical Query model. This new model was introduced by Kearns [12] to provide a general framework for efficient PAC learning in the presence of classification noise. We first show a general scheme for boosting the accuracy of weak SQ learning algorithms, proving that weak SQ learning is equivalent to strong SQ learning. The boosting is efficient and is used to show our main result of the first general upper bounds on the complexity of strong SQ learning. Specifically, we derive simultaneous upper bounds with respect to ε on the number of queries, O(log²(1/ε)), the Vapnik-Chervonenkis dimension of the query space, O(log(1/ε) log log(1/ε)), and the inverse of the minimum tolerance, O((1/ε) log(1/ε)). In addition, we show that these general upper bounds are nearly optimal by describing a class of learning problems for which we simultaneously lower bound the number of queries by Ω(log(1/ε)) and the inverse of the minimum tolerance by Ω(1/ε). We further apply our boosting results in the SQ model to learning in the PAC model with classification noise. Since nearly all PAC learning algorithms can be cast in the SQ model, we can apply our boosting techniques to convert these PAC algorithms into highly efficient SQ algorithms. By simulating these efficient SQ algorithms in the PAC model with classification noise, we show that nearly all PAC algorithms can be converted into highly efficient PAC algorithms which ...
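For intuition about the model: an SQ learner never sees examples, only estimates of expectations E[χ(x, label)] accurate to an additive tolerance τ. A toy simulation of such an oracle by sampling (the target concept, query, and Hoeffding-based sample size are my illustrative assumptions, not the paper's construction):

```python
import math
import random

def sq_oracle(draw, chi, tau, delta=1e-3):
    """Simulate STAT(chi, tau): estimate E[chi(x, label)] to within
    additive error tau, except with probability delta (Hoeffding bound)."""
    m = math.ceil(math.log(2 / delta) / (2 * tau ** 2))
    return sum(chi(*draw()) for _ in range(m)) / m

# Toy distribution: two random bits, labelled by their parity.  The query
# asks how often the first bit equals the label (true answer: 1/2).
random.seed(0)
def draw():
    x = (random.randint(0, 1), random.randint(0, 1))
    return x, x[0] ^ x[1]

est = sq_oracle(draw, lambda x, y: int(x[0] == y), tau=0.05)
print(est)  # close to 0.5
```

Because tolerant queries can also be answered from noisily labeled examples, algorithms phrased this way inherit noise tolerance, which is the bridge to the PAC-with-noise results above.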
Tractability of Parameterized Completion Problems on Chordal, Strongly Chordal and Proper Interval Graphs
1994
Cited by 40 (5 self)
We study the parameterized complexity of three NP-hard graph completion problems. The MINIMUM FILL-IN problem is to decide if a graph can be triangulated by adding at most k edges. We develop O(c^k m) and O(k² mn + f(k)) algorithms for this problem on a graph with n vertices and m edges. Here f(k) is exponential in k and the constants hidden by the big-O notation are small and do not depend on k. In particular, this implies that the problem is fixed-parameter tractable (FPT). The PROPER ...
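The problem statement is easy to express by brute force, which helps when sanity-checking parameterized algorithms on small inputs. A sketch (mine, exponential in n rather than the paper's FPT bounds) that tests chordality by simplicial elimination and tries every way to add at most k edges:

```python
from itertools import combinations

def is_chordal(adj):
    """Chordality test: repeatedly delete simplicial vertices
    (vertices whose neighbourhood induces a clique)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    while adj:
        simp = next((v for v, ns in adj.items()
                     if all(b in adj[a] for a, b in combinations(ns, 2))), None)
        if simp is None:
            return False
        for u in adj[simp]:
            adj[u].discard(simp)
        del adj[simp]
    return True

def min_fill_in_leq_k(adj, k):
    """Can at most k added edges triangulate the graph?  Brute force
    over non-edges; only for tiny instances."""
    non_edges = [(a, b) for a, b in combinations(sorted(adj), 2)
                 if b not in adj[a]]
    for j in range(k + 1):
        for extra in combinations(non_edges, j):
            g = {v: set(ns) for v, ns in adj.items()}
            for a, b in extra:
                g[a].add(b); g[b].add(a)
            if is_chordal(g):
                return True
    return False

# A 4-cycle needs exactly one chord.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(min_fill_in_leq_k(c4, 0), min_fill_in_leq_k(c4, 1))  # False True
```

The elimination-based test is sound because a graph is chordal iff it has a perfect elimination ordering, and any simplicial vertex can start one.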
Improved Probabilistic Verification by Hash Compaction
In Advanced Research Working Conference on Correct Hardware Design and Verification Methods, 1995
Cited by 35 (7 self)
We present and analyze a probabilistic method for verification by explicit state enumeration, which improves on the "hash-compact" method of Wolper and Leroy. The hash-compact method maintains a hash table in which compressed values for states, instead of full state descriptors, are stored. This method saves space but allows a nonzero probability of omitting states during verification, which may cause verification to miss design errors (i.e., verification may produce "false positives"). Our method improves on Wolper and Leroy's by calculating the hash and compressed values independently, and by using a specific hashing scheme that requires a low number of probes in the hash table. The result is a large reduction in the probability of omitting a state. Hence, we can achieve a given upper bound on the probability of omitting a state using fewer bits per compressed state. For example, we can reduce the number of bytes stored for each state from the eight recommended by Wolper and Leroy to o...
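The core space/accuracy trade-off is easy to demonstrate: storing only b-bit compressed values risks treating a new state as already visited when its compressed value collides. A sketch (mine; SHA-256 stands in for the paper's hashing scheme, and the collision estimate is the usual birthday approximation, not the paper's sharper bound):

```python
import hashlib

def compact_reachable(states, bits):
    """Store only `bits`-bit compressions of state descriptors; a state
    whose compressed value is already present is (wrongly) skipped."""
    table, omitted = set(), 0
    for s in states:
        h = int.from_bytes(hashlib.sha256(s.encode()).digest(), "big") % (1 << bits)
        if h in table:
            omitted += 1        # possible false "already visited"
        else:
            table.add(h)
    return omitted

states = [f"state-{i}" for i in range(1000)]
# 16-bit values: roughly n^2 / 2^(b+1) ~ 8 expected collisions for n = 1000;
# 48-bit values make omission vanishingly unlikely at the cost of 6 bytes.
print(compact_reachable(states, 16), compact_reachable(states, 48))
```

The paper's contribution is squeezing the omission probability further for the same number of bits by decoupling the table-probing hash from the stored compressed value.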