Results 1–10 of 29
Mesh Generation And Optimal Triangulation, 1992
Cited by 188 (7 self)
Abstract:
We survey the computational geometry relevant to finite element mesh generation. We especially focus on optimal triangulations of geometric domains in two and three dimensions. An optimal triangulation is a partition of the domain into triangles or tetrahedra that is best according to some criterion that measures the size, shape, or number of triangles. We discuss algorithms both for the optimization of triangulations on a fixed set of vertices and for the placement of new vertices (Steiner points). We briefly survey the heuristic algorithms used in some practical mesh generators.
On RAM priority queues, 1996
Cited by 72 (9 self)
Abstract:
Priority queues are some of the most fundamental data structures. They are used directly for, say, task scheduling in operating systems. Moreover, they are essential to greedy algorithms. We study the complexity of priority queue operations on a RAM with arbitrary word size. We present exponential improvements over previous bounds, and we show tight relations to sorting. Our first result is a RAM priority queue supporting insert and extract-min operations in worst-case time O(log log n), where n is the current number of keys in the queue. This is an exponential improvement over the O(√(log n)) bound of Fredman and Willard from STOC'90. Our algorithm is simple, and it only uses AC⁰ operations, meaning that there is no hidden time dependency on the word size. Plugging this priority queue into Dijkstra's algorithm gives an O(m log log m) algorithm for the single-source shortest path problem on a graph with m edges, as compared with the previous O(m √(log m)) bound based on Fredman...
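As a concrete illustration of how the priority queue drives Dijkstra's algorithm, here is a minimal sketch using Python's built-in binary heap (`heapq`, O(log n) per operation) in place of the paper's O(log log n) structure; the example graph is invented:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj maps node -> list of (neighbor, weight)."""
    dist = {source: 0}
    pq = [(0, source)]                  # the priority queue: (distance, node)
    while pq:
        d, u = heapq.heappop(pq)        # extract-min
        if d > dist.get(u, float("inf")):
            continue                    # stale entry, skip
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))  # insert (decrease-key via re-insert)
    return dist

# Tiny illustrative graph
adj = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(adj, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```

With m edges and n nodes, the heap performs O(m) inserts and extract-mins, so a faster queue directly lowers the overall bound, which is the point the abstract makes.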
Near-optimal routing lookups with bounded worst-case performance, in IEEE INFOCOM '00, 2000
Cited by 29 (0 self)
Abstract:
The problem of route address lookup has received much attention recently, and several algorithms and data structures for performing address lookups at high speeds have been proposed. In this paper we consider one such data structure – a binary search tree built on the intervals created by the routing table prefixes. We wish to exploit the difference in the probabilities with which the various leaves of the tree (where the intervals are stored) are accessed by incoming packets in order to speed up the lookup process. More precisely, we seek an answer to the question "How can the search tree be drawn so as to minimize the average packet lookup time while keeping the worst-case lookup time within a fixed bound?" We use ideas from information theory to derive efficient algorithms for computing near-optimal routing lookup trees. Finally, we consider the practicality of our algorithms through analysis and simulation.
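To illustrate the underlying idea (not the paper's tuned structure): prefixes partition the address space into disjoint intervals, and a lookup becomes a binary search over the interval start points. A toy sketch over a hypothetical 8-bit address space, with invented next-hop labels:

```python
from bisect import bisect_right

# Hypothetical 8-bit routing table, already expanded into disjoint
# (start, end, next_hop) intervals; real tables derive these from CIDR prefixes.
intervals = [(0, 63, "A"), (64, 127, "B"), (128, 255, "C")]
starts = [lo for lo, _, _ in intervals]

def lookup(addr):
    """Find the interval containing addr by binary search on interval starts."""
    i = bisect_right(starts, addr) - 1      # rightmost interval starting <= addr
    lo, hi, hop = intervals[i]
    return hop if lo <= addr <= hi else None

print(lookup(70))   # 'B'
```

A balanced search gives the worst-case bound; the paper's contribution is to skew the tree toward frequently hit leaves while capping its depth.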
Nearly Optimal Expected-Case Planar Point Location
Cited by 17 (5 self)
Abstract:
We consider the planar point location problem from the perspective of expected search time. We are given a planar polygonal subdivision S and, for each polygon of the subdivision, the probability that a query point lies within this polygon. The goal is to compute a search structure to determine which cell of the subdivision contains a given query point, so as to minimize the expected search time. This is a generalization of the classical problem of computing an optimal binary search tree for one-dimensional keys. In the one-dimensional case it has long been known that the entropy H of the distribution is the dominant term in the lower bound on the expected-case search time, and further there exist search trees achieving expected search times of at most H + 2. Prior to this work, there has been no known structure for planar point location with an expected search time better than 2H, and this result required strong assumptions on the nature of the query point distribution. Here we present a data structure whose expected search time is nearly equal to the entropy lower bound, namely H + o(H). The result holds for any polygonal subdivision in which the number of sides of each of the polygonal cells is bounded, and there are no assumptions on the query distribution within each cell. We extend these results to subdivisions with convex cells, assuming a uniform query distribution within each cell.
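The entropy lower bound H that anchors these results is straightforward to compute; a minimal sketch with a hypothetical query distribution over four cells:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum p*log2(p): the lower bound on expected search time."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical access probabilities for four cells of a subdivision
probs = [0.5, 0.25, 0.125, 0.125]
H = entropy(probs)
print(H)  # 1.75 -- a 1-D optimal search tree achieves expected time at most H + 2
```

A skewed distribution has low entropy, so a distribution-aware structure can answer typical queries in far fewer steps than the worst-case O(log n) depth.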
Adaptive Statistical Optimization Techniques for Firewall Packet Filtering, in IEEE INFOCOM, 2006
Cited by 12 (6 self)
Abstract:
Packet filtering plays a critical role in the performance of many network devices such as firewalls, IPSec gateways, DiffServ and QoS routers. A tremendous amount of research has been devoted to optimizing packet filters. However, most of the related works use deterministic techniques and do not exploit traffic characteristics in their optimization schemes. In addition, most packet classifiers give no specific consideration to optimizing packet rejection, which is important for many filtering devices like firewalls.
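One elementary example of a traffic-aware (statistical) optimization in this spirit, not taken from the paper: reorder filter rules by observed hit counts so that frequent traffic matches early. The rules and counts below are hypothetical, and the sketch assumes the rules are disjoint so reordering preserves filtering semantics:

```python
def reorder_by_hits(rules, hit_counts):
    """Sort filter rules by descending observed match frequency.
    Assumes rules are pairwise disjoint, so reordering cannot change
    which rule a packet matches, only how fast it is found."""
    return sorted(rules, key=lambda r: hit_counts.get(r, 0), reverse=True)

# Hypothetical rule list and per-rule hit counters gathered from traffic
rules = ["allow tcp/80", "allow tcp/22", "deny udp/53"]
hits = {"allow tcp/80": 9000, "allow tcp/22": 40, "deny udp/53": 700}
print(reorder_by_hits(rules, hits))
# ['allow tcp/80', 'deny udp/53', 'allow tcp/22']
```

With a linear-scan matcher, the expected number of comparisons is minimized exactly when rules are in decreasing hit-probability order, which is the statistical intuition the abstract appeals to.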
Huffman algebras for independent random variables, IBM RC, 1994
Cited by 10 (0 self)
Abstract:
Based on a rearrangement inequality by Hardy, Littlewood and Pólya, we define two-operator algebras for independent random variables. These algebras are called Huffman algebras, since the Huffman algorithm on these algebras produces an optimal binary tree that minimizes the weighted lengths of the leaves. Many examples of such algebras are given. For the case of random leaf weights, we prove the optimality of the tree constructed by the power-of-two rule, i.e., the Huffman algorithm assuming identical weights, when the weights of the leaves are independent and identically distributed.
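For reference, the classical Huffman construction this generalizes — the familiar instance with addition over nonnegative real weights — can be sketched as follows; the weight list is illustrative:

```python
import heapq

def huffman_cost(weights):
    """Total weighted external path length of the Huffman tree.
    Each merged pair's combined weight is added once per level it
    sits below the root, so summing the merge weights gives the cost."""
    heap = list(weights)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)     # two smallest weights...
        b = heapq.heappop(heap)
        cost += a + b               # ...are merged, deepening their leaves
        heapq.heappush(heap, a + b)
    return cost

print(huffman_cost([1, 1, 2, 4]))  # 14
```

Here the optimal tree puts weight 4 at depth 1, weight 2 at depth 2, and the two 1s at depth 3: 4·1 + 2·2 + 1·3 + 1·3 = 14. With identical weights the same greedy merge degenerates into the power-of-two rule mentioned above.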
Splay Trees for Data Compression, 1995
Cited by 10 (0 self)
Abstract:
We present applications of splay trees to two topics in data compression. First is a variant of the move-to-front (mtf) data compression algorithm (of Bentley, Sleator, Tarjan and Wei), where we introduce secondary list(s). This seems to capture higher-order correlations. An implementation of this algorithm with Sleator-Tarjan splay trees runs in time (provably) proportional to the entropy of the input sequence. When tested on some telephony data, compression ratio and run time showed significant improvements over the original mtf algorithm, making it competitive with or better than popular programs. For stationary ergodic sources, we analyse the compression and output distribution of the original mtf algorithm, which suggests why the secondary list is appropriate to introduce. We also derive analytical upper bounds on the average codeword length in terms of stochastic parameters of the source. Secondly, we consider the compression (or coding) of source sequences where the codewords are required ...
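The basic move-to-front transform that this variant builds on (without the secondary lists described above) can be sketched as; the input string and alphabet are illustrative:

```python
def mtf_encode(data, alphabet):
    """Move-to-front: emit each symbol's current table position, then
    move it to the front. Recently seen symbols get small indices,
    which a subsequent entropy coder compresses well."""
    table = list(alphabet)
    out = []
    for s in data:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))   # move the symbol to the front
    return out

print(mtf_encode("banana", "abn"))  # [1, 1, 2, 1, 1, 1]
```

The splay-tree implementation mentioned in the abstract replaces the linear table with a self-adjusting tree, so each access costs time logarithmic in the symbol's recency rather than linear in its index.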
Lossless Compression for Text and Images, International Journal of High Speed Electronics and Systems, 1995
Cited by 7 (0 self)
Abstract:
Most data that is inherently discrete needs to be compressed in such a way that it can be recovered exactly, without any loss. Examples include text of all kinds, experimental results, and statistical databases. Other forms of data may need to be stored exactly, such as images, particularly bi-level ones, ones arising in medical and remote-sensing applications, or ones that may be required to be certified true for legal reasons. Moreover, during the process of lossy compression, many occasions for lossless compression of coefficients or other information arise. This paper surveys techniques for lossless compression. The process of compression can be broken down into modeling and coding. We provide an extensive discussion of coding techniques, and then introduce methods of modeling that are appropriate for text and images. Standard methods used in popular utilities (in the case of text) and international standards (in the case of images) are described. Keywords: Text compression, ima...
Restructuring Ordered Binary Trees
Cited by 6 (0 self)
Abstract:
We consider the problem of restructuring an ordered binary tree T, preserving the inorder sequence of its nodes, so as to reduce its height to some target value h. Such a restructuring necessarily involves the downward displacement of some of the nodes of T. Our results, focusing both on the maximum displacement over all nodes and on the maximum displacement
Parallel Construction Of Optimal Alphabetic Trees, in Proceedings of the 5th ACM Symposium on Parallel Algorithms and Architectures, 1993
Cited by 6 (3 self)
Abstract:
A parallel algorithm is given which constructs an optimal alphabetic tree in O(log³ n) time with n² log n processors. The construction is basically a parallelization of the Garsia-Wachs version [5] of the Hu-Tucker algorithm [8]. The best previous NC algorithm for the problem uses n^6 / log^{O(1)} n processors [15]. Our method is an extension of techniques used first in [3] and later in [13] for the Huffman coding problem, which can be viewed as the alphabetic tree problem for the special case of a monotone weight sequence. In this paper, we extend to the case of certain "almost monotone" sequences, which we call "sorted regular valleys." The processing of such subsequences depends on a quadrangle inequality, while the total number of global iterations depends on a kind of tree contraction. Altogether we can view our algorithmic approach as (quadrangle inequality + tree contraction). An optimal alphabetic tree is a special case of an optimal binary search tree where all...
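For contrast with the parallel construction, a simple sequential O(n³) dynamic program over split points also produces an optimal alphabetic tree (leaves kept in the given order) and makes the cost criterion explicit. This sketch, with an invented weight list, computes only the optimal cost, not the tree itself:

```python
from functools import lru_cache

def alphabetic_tree_cost(w):
    """Minimum weighted external path length over binary trees whose
    leaves carry weights w[0..n-1] in the given (alphabetic) order."""
    n = len(w)
    prefix = [0]
    for x in w:
        prefix.append(prefix[-1] + x)   # prefix sums for O(1) range totals

    @lru_cache(maxsize=None)
    def cost(i, j):                     # optimal cost for leaves w[i..j]
        if i == j:
            return 0                    # single leaf: depth 0 in its subtree
        total = prefix[j + 1] - prefix[i]   # every leaf in [i, j] drops a level
        return total + min(cost(i, k) + cost(k + 1, j) for k in range(i, j))

    return cost(0, n - 1)

print(alphabetic_tree_cost([1, 2, 3, 4]))  # 19
```

For [1, 2, 3, 4] the optimum is the skewed tree (((1, 2), 3), 4) with cost 1·3 + 2·3 + 3·2 + 4·1 = 19. Garsia-Wachs and Hu-Tucker reach the same optimum in O(n log n) sequentially; the point of the paper above is doing it in polylogarithmic parallel time.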