Results 11–20 of 70
Cache Performance Analysis of Traversals and Random Accesses
 In Proceedings of the Tenth Annual ACM-SIAM Symposium on Discrete Algorithms
, 1999
"... This paper describes a model for studying the cache performance of algorithms in a directmapped cache. Using this model, we analyze the cache performance of several commonly occurring memory access patterns: (i) sequential and random memory traversals, (ii) systems of random accesses, and (iii) com ..."
Abstract

Cited by 25 (0 self)
This paper describes a model for studying the cache performance of algorithms in a direct-mapped cache. Using this model, we analyze the cache performance of several commonly occurring memory access patterns: (i) sequential and random memory traversals, (ii) systems of random accesses, and (iii) combinations of each. For each of these, we give exact expressions for the number of cache misses per memory access in our model. We illustrate the application of these analyses by determining the cache performance of two algorithms: the traversal of a binary search tree and the counting of items in a large array. Trace-driven cache simulations validate that our analyses accurately predict cache performance.
1 Introduction
The concrete analysis of algorithms has a long and rich history. It has played an important role in understanding the performance of algorithms in practice. Traditional concrete analysis of algorithms aims to approximate as closely as possible the number of "cost...
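The miss-per-access behavior the abstract describes can be illustrated with a toy trace-driven simulation (our own sketch, not the paper's model; the cache parameters and function names are illustrative):

```python
import random

def simulate_direct_mapped(addresses, num_blocks=256, block_size=8):
    """Count misses for a word-address trace in a direct-mapped cache.

    num_blocks and block_size (in words) are illustrative parameters,
    not values taken from the paper.
    """
    cache = [None] * num_blocks          # one tag stored per cache line
    misses = 0
    for addr in addresses:
        block = addr // block_size       # memory block containing this word
        line = block % num_blocks        # direct mapping: one candidate line
        if cache[line] != block:         # tag mismatch -> miss, then fill
            cache[line] = block
            misses += 1
    return misses

n = 100_000
seq = list(range(n))                           # sequential traversal
rnd = [random.randrange(n) for _ in range(n)]  # independent random accesses

# A sequential scan misses once per block: exactly n / block_size misses.
print(simulate_direct_mapped(seq) / n)   # 0.125
# Random accesses hit only if the block happens to be resident, so the
# miss rate is close to 1 when the data far exceeds the cache capacity.
print(simulate_direct_mapped(rnd) / n)
```

The contrast between the two printed miss rates is exactly the kind of gap the paper's exact expressions quantify.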
Memory-Adaptive External Sorting
 19th International Conference on Very Large Data Bases
, 1993
"... In realtime and goaloriented database systems, the amount of memory assigned to queries that sort or join large relations may fluctuate due to contention from other higherpriority transactions. This study focuses on techniques that enable external sorts both to reduce their buffer usage when they ..."
Abstract

Cited by 24 (1 self)
In real-time and goal-oriented database systems, the amount of memory assigned to queries that sort or join large relations may fluctuate due to contention from other higher-priority transactions. This study focuses on techniques that enable external sorts both to reduce their buffer usage when they lose memory and to effectively utilize any additional buffers that are given to them. We also show how these techniques can be extended to work with sort-merge joins. A series of experiments confirms that our proposed techniques are useful for sorting and joining large relations in the face of memory fluctuations.
This work was partially supported by a scholarship from the Institute of Systems Science, National University of Singapore, and by an IBM Research Initiation Grant. An abridged version of this paper appears in the proceedings of the 19th International Conference on Very Large Data Bases, August 1993.
1. Introduction
Database management sys...
Pruning conformant plans by counting models on compiled d-DNNF representations
 In Proceedings of the 15th International Conference on Automated Planning and Scheduling (ICAPS)
, 2005
"... Optimal planners in the classical setting are built around two notions: branching and pruning. SATbased planners for example branch by trying the values of a selected variable, and prune by propagating constraints and checking consistency. In the conformant setting, a similar branching scheme can b ..."
Abstract

Cited by 22 (9 self)
Optimal planners in the classical setting are built around two notions: branching and pruning. SAT-based planners, for example, branch by trying the values of a selected variable, and prune by propagating constraints and checking consistency. In the conformant setting, a similar branching scheme can be used if restricted to action variables, but the pruning scheme must be modified. Indeed, pruning branches that encode inconsistent partial plans is not sufficient, since a partial plan may be consistent and complete (covering all the action variables) and still fail to be a conformant plan. This happens when the plan does not conform to some possible initial state or transition. A remedy to this problem is to use a criterion stronger than consistency for pruning. This is actually what we do in this paper, where the consistency-based ...
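The model counting the title refers to is the standard linear-time count on a d-DNNF: determinism makes OR-children disjoint (so counts add) and decomposability makes AND-children independent (so counts multiply). A minimal sketch, using our own hypothetical tuple encoding of d-DNNF nodes rather than any standard file format:

```python
# Node encoding (illustrative, not a standard format):
#   ('lit', v) or ('lit', -v)    literal over variable |v|
#   ('and', c1, c2, ...)         decomposable conjunction (disjoint var sets)
#   ('or', c1, c2, ...)          deterministic disjunction (disjoint models)

def vars_of(node):
    if node[0] == 'lit':
        return {abs(node[1])}
    return set().union(*(vars_of(c) for c in node[1:]))

def count_models(node):
    """Count models over vars_of(node), smoothing OR children on the fly."""
    if node[0] == 'lit':
        return 1
    if node[0] == 'and':
        prod = 1
        for c in node[1:]:
            prod *= count_models(c)  # decomposability: independent factors
        return prod
    # 'or': determinism lets us sum; a child mentioning fewer variables is
    # padded by 2^(missing vars), since those variables are unconstrained.
    nv = vars_of(node)
    return sum(count_models(c) * 2 ** (len(nv) - len(vars_of(c)))
               for c in node[1:])

# (x1 AND x2) OR (NOT x1): over {x1, x2} this has 1 + 2 = 3 models.
f = ('or', ('and', ('lit', 1), ('lit', 2)), ('lit', -1))
print(count_models(f))   # 3
```

This count is what lets the planner in the abstract prune a branch whose compiled representation admits too few models to cover every possible initial state.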
On the Multiplicity of Parts in a Random Composition of a Large Integer
 SIAM J. Discrete Math
, 1999
"... In this paper we study the following question posed by H. S. Wilf: What is, asymptotically as n ! 1, the probablility that a randomly chosen part size in a random composition of an integer n has multiplicity m ? More specifically, given positive integers n and m, suppose that a composition of n is ..."
Abstract

Cited by 19 (4 self)
In this paper we study the following question posed by H. S. Wilf: What is, asymptotically as n → ∞, the probability that a randomly chosen part size in a random composition of an integer n has multiplicity m? More specifically, given positive integers n and m, suppose that a composition of n is selected uniformly at random and then, out of its set of part sizes, a part size j is chosen uniformly at random. Let P(A_n^(m)) be the probability that j has multiplicity m. We show that for fixed m, P(A_n^(m)) goes to 0 at the rate 1/ln n. A more careful analysis uncovers an unexpected result: (ln n)·P(A_n^(m)) does not have a limit but instead oscillates around the value 1/m as n → ∞. This work is a counterpart of a recent paper of Corteel, Pittel, Savage and Wilf, who studied the same problem in the case of partitions rather than compositions.
1 Introduction
In this paper we consider the multiplicity of a randomly chosen part size in a random composition of an integer n. L...
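The random experiment the abstract describes is easy to simulate: a uniform composition of n corresponds to cutting each of the n−1 internal gaps independently with probability 1/2 (there are 2^(n−1) compositions). A Monte Carlo sketch of P(A_n^(m)) (our own illustration; names and parameters are ours):

```python
import random
from collections import Counter

def random_composition(n):
    """Uniform composition of n via independent coin flips on the n-1 gaps."""
    parts, run = [], 1
    for _ in range(n - 1):
        if random.random() < 0.5:
            parts.append(run)   # cut here: close the current part
            run = 1
        else:
            run += 1
    parts.append(run)
    return parts

def multiplicity_of_random_part_size(n):
    counts = Counter(random_composition(n))   # part size -> multiplicity
    size = random.choice(list(counts))        # uniform over distinct sizes
    return counts[size]

# Estimate P(A_n^(1)); by the paper's result it decays like 1/ln n.
n, trials = 1000, 20_000
p = sum(multiplicity_of_random_part_size(n) == 1
        for _ in range(trials)) / trials
print(p)
```

Running this for a range of n and plotting (ln n)·p is one way to see the oscillation around 1/m that the abstract reports.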
The Continuous Reactive Tabu Search: Blending Combinatorial Optimization and Stochastic Search for Global Optimization
, 1995
"... A novel algorithm for the global optimization of functions (CRTS) is presented, in which a combinatorial optimization method cooperates with a stochastic local minimizer. The combinatorial optimization component, based on the Reactive Tabu Search recently proposed by the authors, locates the most p ..."
Abstract

Cited by 17 (2 self)
A novel algorithm for the global optimization of functions (CRTS) is presented, in which a combinatorial optimization method cooperates with a stochastic local minimizer. The combinatorial optimization component, based on the Reactive Tabu Search recently proposed by the authors, locates the most promising "boxes," where starting points for the local minimizer are generated. In order to cover a wide spectrum of possible applications with no user intervention, the method is designed with adaptive mechanisms: the box size is adapted to the local structure of the function to be optimized, the search parameters are adapted to obtain a proper balance of diversification and intensification. The algorithm is compared with some existing algorithms, and the experimental results are presented for a suite of benchmark tasks.
Planning and Control in Artificial Intelligence: A Unifying Perspective
 Applied Intelligence
, 2001
"... The problem of selecting actions in environments that are dynamic and not completely predictable or observable is a central problem in intelligent behavior. In AI, this translates into the problem of designing controllers that can map sequences of observations into actions so that certain goals ..."
Abstract

Cited by 17 (1 self)
The problem of selecting actions in environments that are dynamic and not completely predictable or observable is a central problem in intelligent behavior. In AI, this translates into the problem of designing controllers that can map sequences of observations into actions so that certain goals are achieved. Three main approaches have been used in AI for designing such controllers: the programming approach, where the controller is programmed by hand in a suitable high-level procedural language; the planning approach, where the control is automatically derived from a suitable description of actions and goals; and the learning approach, where the control is derived from a collection of experiences. Each of the three approaches has its own successes and limitations. The focus of this paper is on the planning approach. More specifically, we present an approach to planning based on various state models that can handle various types of action dynamics (deterministic and probabilistic) ...
Sorting with Fixed-Length Reversals
 Discrete Applied Mathematics
, 1996
"... this paper, we study the problem of sorting permutations and circular permutations using as few fixedlength reversals as possible. Our problem is implicit in the popular TOPSPIN ..."
Abstract

Cited by 16 (1 self)
In this paper, we study the problem of sorting permutations and circular permutations using as few fixed-length reversals as possible. Our problem is implicit in the popular TOPSPIN ...
Improved Bounds on Sorting with Length-Weighted Reversals (Extended Abstract)
 In: Proc. 15th ACM-SIAM Symposium on Discrete Algorithms (SODA). (2004) 912–921
, 2004
"... Michael A. Bender y Dongdong Ge Simai He Haodong Hu Ron Y. Pinter Steven Skiena Firas Swidan Abstract We study the problem of sorting integer sequences and permutations by lengthweighted reversals. We consider a wide class of cost functions, namely f(`) = ` for all 0, where ` ..."
Abstract

Cited by 13 (3 self)
Michael A. Bender, Dongdong Ge, Simai He, Haodong Hu, Ron Y. Pinter, Steven Skiena, Firas Swidan
We study the problem of sorting integer sequences and permutations by length-weighted reversals. We consider a wide class of cost functions, namely f(ℓ) = ℓ^α for all α ≥ 0, where ℓ is the length of the reversed subsequence. We present tight or nearly tight upper and lower bounds on the worst-case cost of sorting by reversals. Then we develop algorithms to approximate the optimal cost to sort a given input. Furthermore, we give polynomial-time algorithms to determine the optimal reversal sequence for a restricted but interesting class of sequences and cost functions. Our results have direct application in computational biology to the field of comparative genomics.
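The cost model f(ℓ) = ℓ^α charges each reversal by the length of the reversed span. A minimal sketch of the accounting (the naive one-reversal-per-position sorter below is our own illustration, not the paper's algorithm):

```python
def reverse_cost(perm, i, j, alpha=1.0):
    """Reverse perm[i..j] in place and return the cost f(l) = l**alpha,
    where l = j - i + 1 is the length of the reversed subsequence."""
    perm[i:j + 1] = perm[i:j + 1][::-1]
    return (j - i + 1) ** alpha

def naive_sort_cost(perm, alpha=1.0):
    """Place each value 0..n-1 with one reversal; return the total cost.

    This is a naive baseline, far from the optimal bounds in the paper;
    it only demonstrates how length-weighted costs accumulate.
    """
    perm = list(perm)
    total = 0.0
    for i in range(len(perm)):
        j = perm.index(i, i)            # locate value i at or after slot i
        if j != i:
            total += reverse_cost(perm, i, j, alpha)
    return total

# alpha = 1 charges linear cost: reversals of length 3 then 2 cost 5 total.
print(naive_sort_cost([3, 1, 0, 2], alpha=1.0))   # 5.0
```

Setting alpha = 0 recovers the classical unit-cost model, where the same run counts the two reversals performed.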
How to Avoid Building DataBlades That Know the Value of Everything and the Cost of Nothing
 Proc. of SSDBM
, 1999
"... The objectrelational database management system (ORDBMS) offers many potential benefits for scientific, multimedia and financial applications. However, work remains in the integration of domainspecific class libraries into ORDBMS query processing. A major problem is that the standard mechanisms fo ..."
Abstract

Cited by 12 (0 self)
The object-relational database management system (ORDBMS) offers many potential benefits for scientific, multimedia and financial applications. However, work remains in the integration of domain-specific class libraries into ORDBMS query processing. A major problem is that the standard mechanisms for query selectivity estimation, taken from relational database systems, rely on properties specific to the standard data types; creation of new mechanisms remains extremely difficult because the software interfaces provided by vendors are relatively low-level. In this paper, we discuss extensions of the generalized search tree, or GiST, to support a higher-level but less type-specific approach. Specifically, we discuss the computation of selectivity estimates with confidence intervals using a variety of index-based approaches and present results from an experimental comparison of these methods with several estimators from the literature.
Scheduling Non-Uniform Traffic In High Speed Packet Switches And Routers
, 1998
"... Until recently, Internet routers and ATM switches were generally built around a central pool of shared memory buffers and a fast, sharedbus backplane. However, limitations in both memory and bus bandwidth have led to the use of input queues and switched backplanes. Input queues relieve the bottlene ..."
Abstract

Cited by 12 (0 self)
Until recently, Internet routers and ATM switches were generally built around a central pool of shared memory buffers and a fast, shared-bus backplane. However, limitations in both memory and bus bandwidth have led to the use of input queues and switched backplanes. Input queues relieve the bottleneck by distributing the memory over each switch input, and a switched backplane allows packet transfers to take place simultaneously. This thesis focuses on the design of switched backplanes with input queues. In particular, we focus on the design of schedulers for switched backplanes. The scheduler decides the order in which packets, or cells, may traverse the backplane. Studies have shown that existing scheduling algorithms are either too complex to operate at high speed or lack the intelligence to perform well over a wide range of traffic patterns. In this thesis, we present two new algorithms that are fast, simple and efficient. Using the methods of Lyapunov, we prove that both algorithms can achieve 100% throughput for all traffic patterns with independent arrivals. We also demonstrate heuristics that can be implemented in fast and relatively simple hardware. Our exploratory design work shows that the heuristics can make a scheduling decision within 10–20 nanoseconds when implemented using 0.25 µm CMOS technology. At this scheduling speed, it is possible to design switches or routers with more than one terabit per second of aggregate bandwidth.