Results 1 - 7 of 7
On Fundamental Tradeoffs between Delay Bounds and Computational Complexity in Packet Scheduling Algorithms
 in Proceedings of ACM SIGCOMM ’02
, 2002
"... concerning the computational complexity for packet scheduling algorithms to achieve tight endtoend delay bounds. We rst focus on the dierence between the time a packet nishes service in a scheduling algorithm and its virtual nish time under a GPS (General Processor Sharing) scheduler, called GPS ..."
Abstract

Cited by 31 (2 self)
concerning the computational complexity for packet scheduling algorithms to achieve tight end-to-end delay bounds. We first focus on the difference between the time a packet finishes service in a scheduling algorithm and its virtual finish time under a GPS (Generalized Processor Sharing) scheduler, called GPS-relative delay. We prove that, under a slightly restrictive but reasonable computational model, the lower bound computational complexity of any scheduling algorithm that guarantees an O(1) GPS-relative delay bound is Ω(log2 n) (widely believed as a "folklore theorem" but never proved). We also discover that, surprisingly, the complexity lower bound remains the same even if the delay bound is relaxed to O(n^a) for 0 < a < 1. This implies that the delay-complexity tradeoff curve is "flat" in the "interval" [O(1), O(n)). We later extend both complexity results (for O(1) and O(n^a) delay) to a much stronger computational model. Finally, we show that the same complexity lower bounds are conditionally applicable to guaranteeing tight end-to-end delay bounds. This is done by untangling the relationship between the GPS-relative delay bound and the end-to-end delay bound.
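The GPS-relative delay above is measured against the virtual finish times that sorted-priority schedulers such as WFQ maintain. A minimal sketch (illustrative only; the function and variable names are mine, not the paper's) of computing virtual finish times and serving packets in that order, where the per-packet heap operations are exactly the O(log n) work the lower bound concerns:

```python
import heapq

def wfq_schedule(packets, weights):
    """Order packets by WFQ virtual finish time (a GPS approximation).

    packets: list of (virtual_arrival_time, flow_id, length) in arrival order.
    weights: dict flow_id -> positive weight.
    Returns the indices of the packets in service order.
    """
    last_finish = {}  # per-flow virtual finish time of that flow's previous packet
    heap = []         # (virtual_finish_time, arrival_index, flow_id)
    for idx, (v_arrival, flow, length) in enumerate(packets):
        # A packet's virtual service starts when it arrives or when the
        # previous packet of the same flow finishes, whichever is later.
        start = max(v_arrival, last_finish.get(flow, 0.0))
        finish = start + length / weights[flow]
        last_finish[flow] = finish
        heapq.heappush(heap, (finish, idx, flow))  # O(log n) per packet
    order = []
    while heap:
        _finish, idx, _flow = heapq.heappop(heap)  # O(log n) per packet
        order.append(idx)
    return order
```

The heap maintains the sorted order of virtual finish times; it is this sorting step that the paper's lower bound shows cannot be done in o(log n) per packet while keeping an O(1) GPS-relative delay.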
Versatile document image content extraction
 In Proc., SPIE/IS&T Document Recognition & Retrieval XIII Conf
, 2006
"... We offer a preliminary report on a research program to investigate versatile algorithms for document image content extraction, that is locating regions containing handwriting, machineprint text, graphics, lineart, logos, photographs, noise, etc. To solve this problem in its full generality require ..."
Abstract

Cited by 8 (8 self)
We offer a preliminary report on a research program to investigate versatile algorithms for document image content extraction, that is, locating regions containing handwriting, machine-print text, graphics, line-art, logos, photographs, noise, etc. To solve this problem in its full generality requires coping with a vast diversity of document and image types. Automatically trainable methods are highly desirable, as well as extremely high speed in order to process large collections. Significant obstacles include the expense of preparing correctly labeled ("ground-truthed") samples, unresolved methodological questions in specifying the domain (e.g. what is a representative collection of document images?), and a lack of consensus among researchers on how to evaluate content-extraction performance. Our research strategy emphasizes versatility first: that is, we concentrate at the outset on designing methods that promise to work across the broadest possible range of cases. This strategy has several important implications: the classifiers must be trainable in reasonable time on vast data sets; and expensive ground-truthed data sets must be complemented by amplification using generative models. These and other design and architectural issues are discussed. We propose a trainable classification methodology that marries k-d trees and hash-driven table lookup and describe preliminary experiments.
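The hash-driven table lookup half of the proposed methodology can be sketched roughly as follows (a toy illustration under my own assumptions: features quantized into a fixed grid, majority label per cell; the paper's k-d tree component and its actual feature design are omitted):

```python
from collections import Counter, defaultdict

def build_table(samples, bins=4):
    """Build a lookup table mapping a quantized feature key to the
    majority class label seen for that key during training.

    samples: list of (feature_vector, label); features assumed in [0, 1).
    """
    table = defaultdict(Counter)
    for feats, label in samples:
        key = tuple(min(int(f * bins), bins - 1) for f in feats)
        table[key][label] += 1
    return {k: c.most_common(1)[0][0] for k, c in table.items()}

def classify(table, feats, bins=4):
    """O(1) hash lookup; returns None for cells never seen in training."""
    key = tuple(min(int(f * bins), bins - 1) for f in feats)
    return table.get(key)
```

The appeal of such a table is constant-time classification regardless of training-set size, which matches the abstract's emphasis on extremely high speed over large collections; a nearest-neighbor structure such as a k-d tree would then handle the unseen cells.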
A Probabilistic Minimum Spanning Tree Algorithm
 Information Processing Letters
, 1978
"... This paper is concerned with the problem of computing spanning tree (MST) for n points in a pdimensional space where the "distance" between each pair of points i and j satisfies the relationship' dq max {Ixti  xtql} , where xki is the coordinate of object i along the ktti dimension. This relatio ..."
Abstract

Cited by 5 (0 self)
This paper is concerned with the problem of computing a minimum spanning tree (MST) for n points in a p-dimensional space where the "distance" between each pair of points i and j satisfies the relationship d_ij >= max_k {|x_ki - x_kj|}, where x_ki is the coordinate of object i along the k-th dimension. This relationship is clearly satisfied by all Minkowski metrics d_ij = [ sum_k |x_ki - x_kj|^r ]^(1/r), r >= 1.
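The point of such a lower bound is that a cheap coordinate comparison can often rule out a candidate edge before the full metric is evaluated. A hedged sketch (Prim's algorithm with L-infinity pruning, using Euclidean distance as the example Minkowski metric; this is an illustration of the idea, not the paper's algorithm):

```python
def cheb_lower_bound(p, q):
    # Max coordinate difference: a lower bound on every Minkowski metric, r >= 1.
    return max(abs(a - b) for a, b in zip(p, q))

def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def mst_prim(points):
    """Prim's MST; the cheap bound skips full distance evaluations that
    cannot improve the best known edge into the growing tree."""
    n = len(points)
    in_tree = [False] * n
    best = [float('inf')] * n   # cheapest known edge connecting i to the tree
    parent = [-1] * n
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            # Only pay for the full metric if the lower bound leaves hope.
            if not in_tree[v] and cheb_lower_bound(points[u], points[v]) < best[v]:
                d = euclidean(points[u], points[v])
                if d < best[v]:
                    best[v] = d
                    parent[v] = u
    return total, parent
```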
Efficient Hierarchical Clustering Algorithms using Partially Overlapping Partitions
 5th Pacific Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2001, HongKong
, 2001
"... . Clustering is an important data exploration task. A prominent clustering algorithm is agglomerative hierarchical clustering. Roughly, in each iteration, it merges the closest pair of clusters. It was first proposed way back in 1951, and since then there have been numerous modifications. Some o ..."
Abstract

Cited by 4 (0 self)
Clustering is an important data exploration task. A prominent clustering algorithm is agglomerative hierarchical clustering. Roughly, in each iteration, it merges the closest pair of clusters. It was first proposed way back in 1951, and since then there have been numerous modifications. Some of its good features are: a natural, simple, and nonparametric grouping of similar objects which is capable of finding clusters of different shapes, both spherical and arbitrary. But large CPU time and high memory requirements limit its use for large data. In this paper we show that geometric metric (centroid, median, and minimum variance) algorithms obey a 90-10 relationship where roughly the first 90% of iterations are spent on merging clusters with distance less than 10% of the maximum merging distance. This characteristic is exploited by partially overlapping partitioning. It is shown with experiments and analyses that different types of existing algorithms benefit excellently by drastical...
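The basic merge loop described above can be sketched as follows (a naive O(n^3) illustration on 1-D points with centroid linkage, recording each merge distance; the paper's partially overlapping partitions are not implemented here):

```python
def agglomerate(points):
    """Centroid-linkage agglomerative clustering on 1-D points.

    Returns the sequence of merge distances, which for real data tends to
    show the 90-10 pattern: most merges happen at small distances.
    """
    clusters = [(p, 1) for p in points]  # (centroid, cluster size)
    merge_dists = []
    while len(clusters) > 1:
        # Find the closest pair of centroids (naive quadratic scan).
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = abs(clusters[i][0] - clusters[j][0])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        (ci, ni), (cj, nj) = clusters[i], clusters[j]
        merged = ((ci * ni + cj * nj) / (ni + nj), ni + nj)  # weighted centroid
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        merge_dists.append(d)
    return merge_dists
```

The quadratic closest-pair scan per iteration is exactly the cost that partitioning schemes aim to cut, since early merges are overwhelmingly local.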
Pyramid Computer Solutions of the Closest Pair Problem
, 1982
"... Given an N x N array of OS and Is, the closest pair problem is to determine the minimum distance between any pair of ones. Let D be this minimum distance (or D = 2N if there are fewer than two Is). Two solutions to this problem are given, one requiring O(log ( N) + D) time and the other O(log ( N)). ..."
Abstract

Cited by 1 (0 self)
Given an N x N array of 0s and 1s, the closest pair problem is to determine the minimum distance between any pair of ones. Let D be this minimum distance (or D = 2N if there are fewer than two 1s). Two solutions to this problem are given, one requiring O(log(N) + D) time and the other O(log(N)). These solutions are for two types of parallel computers arranged in a pyramid fashion with the base of the pyramid containing the matrix. The results improve upon an algorithm of Dyer that requires O(N) time on a more powerful computer. © 1985 Academic Press, Inc.
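A sequential stand-in for the problem (not the pyramid algorithm itself) is multi-source BFS from all 1-entries, reporting where fronts from distinct sources meet; the sketch below assumes a square grid and Manhattan distance, both my own simplifications:

```python
from collections import deque

def closest_pair_distance(grid):
    """Minimum Manhattan distance between two 1-entries of a square 0/1 grid,
    returning 2N when fewer than two 1s exist (the paper's convention)."""
    n = len(grid)  # square N x N grid assumed
    ones = [(r, c) for r in range(n) for c in range(n) if grid[r][c]]
    if len(ones) < 2:
        return 2 * n
    dist = {p: 0 for p in ones}    # BFS distance to the nearest 1
    owner = {p: p for p in ones}   # which 1 that distance is measured from
    q = deque(ones)
    best = 2 * n                   # exceeds any achievable distance in the grid
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < n and 0 <= nc < n:
                if (nr, nc) not in dist:
                    dist[(nr, nc)] = dist[(r, c)] + 1
                    owner[(nr, nc)] = owner[(r, c)]
                    q.append((nr, nc))
                elif owner[(nr, nc)] != owner[(r, c)]:
                    # Two fronts from distinct 1s meet across this edge.
                    best = min(best, dist[(nr, nc)] + dist[(r, c)] + 1)
    return best
```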
Boundary Domination and the Distribution of the Largest Nearest-Neighbor Link in Higher Dimensions
 Applied Probability Trust
, 1986
"... For a sample of points drawn uniformly from either the ddimensional torus or the dcube, d 2, we give limiting distributions for the largest of the nearestneighbor links. For d 3 the behavior in the torus is proved to be different from the behavior in the cube. The results given also settle a con ..."
Abstract
For a sample of points drawn uniformly from either the d-dimensional torus or the d-cube, d >= 2, we give limiting distributions for the largest of the nearest-neighbor links. For d >= 3 the behavior in the torus is proved to be different from the behavior in the cube. The results given also settle a conjecture of Henze (1982) and throw light on the choice of the cube or torus in some probabilistic models of computational complexity of geometrical algorithms.
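The quantity studied, the largest nearest-neighbor link, is easy to compute for a finite sample; the sketch below (my own illustration, not from the paper) contrasts the wrap-around torus metric with the plain cube metric on the unit square, where boundary points inflate the cube statistic:

```python
def torus_dist(p, q):
    # Euclidean distance with per-coordinate wrap-around on the unit d-torus.
    return sum(min(abs(a - b), 1 - abs(a - b)) ** 2 for a, b in zip(p, q)) ** 0.5

def cube_dist(p, q):
    # Plain Euclidean distance on the unit d-cube (no wrap-around).
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def largest_nn_link(points, dist):
    """Max over all points of the distance to that point's nearest neighbor."""
    return max(
        min(dist(p, q) for q in points if q is not p)
        for p in points
    )
```

For the same sample, the torus value is never larger than the cube value, since wrapping can only bring neighbors closer; the paper's result concerns how the limiting distributions of these two statistics differ.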