Results 1–8 of 8
Permutations Which Are the Union of an Increasing and a Decreasing Subsequence
, 1998
Abstract

Cited by 18 (5 self)
It is shown that there are C(2n, n) − Σ_{m=0}^{n−1} 2^(n−m−1) C(2m, m) permutations which are the union of an increasing sequence and a decreasing sequence. 1991 Mathematics Subject Classification 05A15. Submitted: December 1, 1997; Accepted: January 10, 1998. 1 Introduction. Merge permutations, permutations which are the union of two increasing subsequences, have been studied for many years [3]. It is known that they are characterised by the property of having no decreasing subsequence of length 3 and that there are C(2n, n)/(n + 1) such permutations of length n. Recently there has been some interest in permutations which are the union of an increasing subsequence with a decreasing subsequence. We call such permutations skew-merged. Stankova [4] proved that a permutation is skew-merged if and only if it has no subsequence abcd ordered in the same way as 2143 or 3412. In [1, 2] the more general problem of partitioning a permutation into given numbers of increasing and decreasing ...
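As a quick sanity check on the closed form quoted in this abstract, one can compare it against a brute-force count based on Stankova's pattern characterisation (avoid 2143 and 3412). The sketch below does this for small n; all function names are ours, not the paper's.

```python
from itertools import combinations, permutations
from math import comb

def skew_merged_count(n):
    # Closed form from the abstract: C(2n, n) - sum_{m=0}^{n-1} 2^(n-m-1) * C(2m, m)
    return comb(2 * n, n) - sum(2 ** (n - m - 1) * comb(2 * m, m) for m in range(n))

def contains_pattern(perm, pat):
    # True if some subsequence of perm is order-isomorphic to pat
    k = len(pat)
    for idx in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idx]
        order = sorted(vals)
        if tuple(order.index(v) + 1 for v in vals) == pat:
            return True
    return False

def is_skew_merged(perm):
    # Stankova's characterisation: skew-merged iff 2143 and 3412 are both avoided
    return not (contains_pattern(perm, (2, 1, 4, 3))
                or contains_pattern(perm, (3, 4, 1, 2)))

# Brute-force count for n = 4: 2143 and 3412 themselves are the only
# non-skew-merged permutations of length 4, so the count should be 24 - 2 = 22.
brute = sum(is_skew_merged(p) for p in permutations(range(1, 5)))
```

For n = 4 both the formula and the brute-force count give 22, matching the characterisation.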
Graph and Hashing Algorithms for Modern Architectures: Design and Performance
 PROC. 2ND WORKSHOP ON ALGORITHM ENG. (WAE '98), MAX-PLANCK-INSTITUT FÜR INFORMATIK, 1998, IN TR MPI-I-98-1-019
, 1998
Abstract

Cited by 12 (0 self)
We study the effects of caches on basic graph and hashing algorithms and show how cache effects influence the best solutions to these problems. We study the performance of basic data structures for storing lists of values and use these results to design and evaluate algorithms for hashing, Breadth-First Search (BFS) and Depth-First Search (DFS). For the basic
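A common cache-conscious choice for graph algorithms like the BFS mentioned here is to store the graph in compressed sparse row (CSR) form, so that each vertex's neighbour list is contiguous in memory. The sketch below is a generic illustration of BFS over such a layout, not the paper's implementation.

```python
from collections import deque

def bfs_csr(offsets, targets, src):
    # Graph in CSR form: node u's neighbours are targets[offsets[u]:offsets[u+1]].
    # The contiguous neighbour arrays are what makes this layout cache-friendly.
    n = len(offsets) - 1
    dist = [-1] * n          # -1 marks unvisited nodes
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in targets[offsets[u]:offsets[u + 1]]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Path graph 0 - 1 - 2 encoded in CSR form:
distances = bfs_csr([0, 1, 3, 4], [1, 0, 2, 1], 0)
```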
Efficient Algorithms for Finding A Longest Common Increasing Subsequence
 In 16th Annual International Symposium on Algorithms and Computation (ISAAC)
, 2005
Abstract

Cited by 8 (0 self)
We study the problem of finding a longest common increasing subsequence (LCIS) of multiple sequences of numbers. The LCIS problem is a fundamental issue in various application areas, including whole genome alignment. In this paper we give an efficient algorithm to find the LCIS of two sequences in O(min(r log ℓ, nℓ + r) · log log n + Sort(n)) time, where n is the length of each sequence, r is the number of ordered pairs of positions at which the two sequences match, ℓ is the length of the LCIS, and Sort(n) is the time to sort n numbers. For m sequences where m ≥ 3, we find the LCIS in O(min(mr^2, r log ℓ log^m r) + m·Sort(n)) time, where r is the total number of m-tuples of positions at which the m sequences match. The previous results find the LCIS of two sequences in O(n^2) and O(nℓ log log n + Sort(n)) time. Our algorithm is faster when r is relatively small, e.g., for r < min(n^2/(log ℓ log log n), nℓ/log ℓ).
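For context, the O(n^2) "previous result" baseline the abstract refers to is the classic dynamic program for the LCIS of two sequences. A minimal sketch of that textbook DP (not the paper's faster algorithm):

```python
def lcis_length(a, b):
    # Classic O(len(a) * len(b)) DP for the longest common increasing subsequence.
    # dp[j] = length of an LCIS of the prefix of `a` processed so far and b,
    #         ending exactly at b[j].
    dp = [0] * len(b)
    for x in a:
        best = 0                      # best dp[j] among positions with b[j] < x
        for j, y in enumerate(b):
            if y == x and best + 1 > dp[j]:
                dp[j] = best + 1      # extend an LCIS ending below x with x itself
            elif y < x and dp[j] > best:
                best = dp[j]
        # `best` resets each row, so dp stays consistent with increasing order
    return max(dp, default=0)
```

For example, `lcis_length([2, 3, 1, 6, 5, 4, 6], [1, 3, 5, 6])` finds the common increasing subsequence [3, 5, 6] of length 3.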
Design and Analysis of Hashing Algorithms with Cache Effects
, 1998
Abstract

Cited by 4 (0 self)
This paper investigates the performance of hashing algorithms by both an experimental and an analytical approach. We examine the performance of three classical hashing algorithms: chaining, double hashing and linear probing. Our experimental results show that, despite the theoretical superiority of chaining and double hashing, linear probing outperforms both for random lookups. We explore variations on the data structures used by these traditional algorithms to improve their spatial locality and hence cache performance. Our results also help determine the optimal table size for a given key set size. In addition to time, we also study the average number of probes and cache misses incurred by these algorithms. For most of the algorithms studied in this paper, our analysis agrees with the experimental results. As a supplementary result, we examine the behavior of random lookups to a hash table. This provides a simple way to estimate the cache miss penalties of different machines. Two c...
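The spatial-locality advantage of linear probing that the abstract highlights comes from its probe sequence scanning consecutive slots. A minimal open-addressing sketch (our own illustration, assuming the table never fills, with no resizing or deletion):

```python
class LinearProbingTable:
    # Minimal open-addressing hash table with linear probing.
    # The probe sequence visits consecutive slots, so probes after the first
    # tend to hit the same cache line -- the effect studied in the paper.
    EMPTY = object()                      # sentinel distinct from any stored value

    def __init__(self, size=16):
        self.slots = [self.EMPTY] * size  # assumption: never completely filled

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not self.EMPTY and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # linear scan of consecutive slots
        return i

    def insert(self, key, val):
        self.slots[self._probe(key)] = (key, val)

    def lookup(self, key):
        s = self.slots[self._probe(key)]
        return None if s is self.EMPTY else s[1]
```

By contrast, chaining follows pointers to separately allocated nodes, and double hashing jumps across the table, both of which scatter memory accesses.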
LZW Text Compression in Haskell
 Proc. of the 1992 Glasgow Workshop on Functional Programming
, 1992
Abstract

Cited by 1 (0 self)
Functional programming is largely untested in the industrial environment. This paper summarises the results of a study into the suitability of Haskell in the area of text compression, an area with definite commercial interest. Our program initially performs very poorly in comparison with a version written in C. Experiments reveal the cause of this to be the large disparity in the relative speed of I/O and bit-level operations, and also a space leak inherent in the Haskell definition. 1 Introduction. Claims for the advantages of functional programming languages abound [1] but industrial take-up of the paradigm has been almost non-existent. Two of the main reasons for this are: a lack of proof that the advantages apply to large-scale developments; and a lack of industry-strength compilers and environments, particularly for high-speed computation. The FLARE project, managed by BT, sets out to address these issues: "The effectiveness of functional programming has been amply ...
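For readers unfamiliar with the LZW scheme the paper implements in Haskell, the core encoding loop is short enough to sketch here (in Python, for uniformity with the other examples in this listing; this is the textbook algorithm, not the paper's Haskell code):

```python
def lzw_encode(data: bytes):
    # Textbook LZW: maintain a dictionary of previously seen byte strings,
    # emit the code of the longest known prefix, then extend the dictionary.
    table = {bytes([i]): i for i in range(256)}  # codes 0-255 = single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                       # keep extending the current match
        else:
            out.append(table[w])         # emit code for longest known prefix
            table[wc] = len(table)       # register the new string
            w = bytes([byte])
    if w:
        out.append(table[w])             # flush the final match
    return out
```

The repeated dictionary growth and per-bit output packing are exactly where the paper found Haskell's I/O and bit-level operations lagging behind C.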
A Bijective Approach to the Permutational Power of a Priority Queue
Abstract
A priority queue transforms an input permutation p1 of a totally ordered set of size n into an output permutation p2. Atkinson and Thiyagarajah showed that the number of such pairs (p1, p2) is (n+1)^(n−1), which is well known to be the number of labeled trees on n + 1 vertices. We give a new proof of this result by finding a bijection from such pairs of permutations to labeled trees.
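The count (n+1)^(n−1) can be verified for small n by direct simulation: enumerate every valid interleaving of n inserts and n delete-mins, run each against every input permutation, and collect the distinct (input, output) pairs. A brute-force sketch (our own, with hypothetical helper names):

```python
import heapq
from itertools import permutations

def pq_pairs(n):
    # Enumerate all valid insert ("I") / delete-min ("D") interleavings,
    # i.e. Dyck words: a delete is only legal when the queue is non-empty.
    seqs = []
    def gen(ops, ins, dels):
        if ins == n and dels == n:
            seqs.append(ops)
            return
        if ins < n:
            gen(ops + "I", ins + 1, dels)
        if dels < ins:
            gen(ops + "D", ins, dels + 1)
    gen("", 0, 0)

    pairs = set()
    for p in permutations(range(1, n + 1)):
        for ops in seqs:
            heap, out, it = [], [], iter(p)
            for op in ops:
                if op == "I":
                    heapq.heappush(heap, next(it))   # insert next input element
                else:
                    out.append(heapq.heappop(heap))  # delete-min to the output
            pairs.add((p, tuple(out)))
    return len(pairs)
```

For n = 2 this yields 3 = 3^1 pairs, and for n = 3 it yields 16 = 4^2, matching the theorem.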
How Can Manage Balance Tree (B-Tree)
Abstract
Abstract — In B-trees, internal (non-leaf) nodes can have a variable number of child nodes within some predefined range. When data is inserted or removed from a node, its number of child nodes changes. In order to maintain the predefined range, internal nodes may be joined or split. Because a range of child nodes is permitted, B-trees do not need rebalancing as frequently as other self-balancing search trees, but may waste some space, since nodes are not entirely full. The lower and upper bounds on the number of child nodes are typically fixed for a particular implementation. For example, in a 2-3 B-tree (often simply referred to as a 2-3 tree), each internal node may have only 2 or 3 child nodes.
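The split operation the abstract describes can be illustrated in isolation: when a node's key list exceeds its capacity, the keys are divided around the median, which moves up into the parent. A minimal sketch (our own simplification, keys only, no child pointers):

```python
def split_node(keys, max_keys):
    # If the node fits, leave it alone (no key is promoted).
    # Otherwise split around the median: the left and right halves become
    # sibling nodes and the median key is promoted to the parent.
    if len(keys) <= max_keys:
        return keys, None, None
    mid = len(keys) // 2
    return keys[:mid], keys[mid], keys[mid + 1:]

# A node overflowing a 3-key capacity splits, promoting the median key 3:
left, promoted, right = split_node([1, 2, 3, 4], 3)
```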