Results 1–10 of 43
Filling Gaps in the Boundary of a Polyhedron
 Computer Aided Geometric Design, 1993
Abstract

Cited by 41 (4 self)
In this paper we present an algorithm for detecting and repairing defects in the boundary of a polyhedron. These defects, usually caused by problems in CAD software, consist of small gaps bounded by edges that are incident to only one polyhedron face. The algorithm uses a partial curve matching technique for matching parts of the defects, and an optimal triangulation of 3D polygons for resolving the unmatched parts. It is also shown that finding a consistent set of partial curve matches with maximum score, a subproblem which is related to our repairing process, is NP-hard. Experimental results on several polyhedra are presented. Keywords: CAD, polyhedra, gap filling, curve matching, geometric hashing, triangulation. 1 Introduction The problem studied in this paper is the detection and repair of "gaps" in the boundary of a polyhedron. This problem usually appears in polyhedral approximations of CAD objects, whose boundaries are described using curved entities of higher leve...
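The detection step described in the abstract (finding edges incident to exactly one face) can be sketched as follows; the function name and mesh representation are illustrative, not taken from the paper:

```python
from collections import Counter

def boundary_edges(faces):
    """Return edges incident to exactly one face -- the gap boundaries
    the detection phase looks for. `faces` is a list of vertex-index
    tuples (this representation is an assumption for the sketch)."""
    count = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            count[(min(a, b), max(a, b))] += 1  # undirected edge key
    return [e for e, c in count.items() if c == 1]
```

A watertight tetrahedron reports no gap edges, while a lone triangle reports all three of its edges.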
I/O-efficient batched union-find and its applications to terrain analysis
 In Proc. 22nd Annual Symposium on Computational Geometry, 2006
Abstract

Cited by 23 (9 self)
Despite extensive study over the last four decades and numerous applications, no I/O-efficient algorithm is known for the union-find problem. In this paper we present an I/O-efficient algorithm for the batched (offline) version of the union-find problem. Given any sequence of N union and find operations, where each union operation joins two distinct sets, our algorithm uses O(SORT(N)) = O((N/B) log_{M/B}(N/B)) I/Os, where M is the memory size and B is the disk block size. This bound is asymptotically optimal in the worst case. If there are union operations that join a set with itself, our algorithm uses O(SORT(N) + MST(N)) I/Os, where MST(N) is the number of I/Os needed to compute the minimum spanning tree of a graph with N edges. We also describe a simple and practical O(SORT(N) log(N/M))-I/O algorithm for this problem, which we have implemented. We are interested in the union-find problem because of its applications in terrain analysis. A terrain can be abstracted as a height function defined over R^2, and many problems that deal with such functions require a union-find data structure. With the emergence of modern mapping technologies, huge amounts of elevation data are being generated that are too large to fit in memory, so I/O-efficient algorithms are needed to process this data efficiently. In this paper, we study two terrain-analysis problems that benefit from a union-find data structure: (i) computing topological persistence and (ii) constructing the contour tree. We give the first O(SORT(N))-I/O algorithms for these two problems, assuming that the input terrain is represented as a triangular mesh with N vertices. Finally, we report some preliminary experimental results, showing that our algorithms give order-of-magnitude improvements over previous methods on large data sets that do not fit in memory.
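For reference, the classic in-memory structure whose batched, offline variant the paper makes I/O-efficient looks like this (a standard textbook sketch, not the paper's external-memory algorithm):

```python
class UnionFind:
    """In-memory union-find with union by rank and path halving."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # attach shorter tree under taller
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```

The difficulty the paper addresses is that this pointer-chasing access pattern has no locality, which is exactly what makes an I/O-efficient version non-trivial.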
Concurrent Computation of Attribute Filters on Shared Memory Parallel Machines
2008
Abstract

Cited by 17 (0 self)
Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings, and thickenings, based on Salembier's Max-trees and Min-trees. The image or volume is first partitioned into multiple slices. We then compute the Max-tree of each slice using any sequential Max-tree algorithm. Subsequently, the Max-trees of the slices can be merged to obtain the Max-tree of the image. A C implementation yielded good speedups on both a 16-processor MIPS 14000 parallel machine and a dual-core Opteron-based machine. It is shown that the speedup of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor due to reduced cache thrashing.
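The partition/compute/merge shape of the algorithm can be illustrated on a much simpler problem: labeling connected runs of a binary 1-D array slice by slice. This is an analogue only; merging Max-trees is considerably more involved than gluing runs:

```python
from concurrent.futures import ThreadPoolExecutor

def label_slice(bits, offset):
    """Connected runs of 1s within one slice, as (start, end) half-open
    intervals in global coordinates."""
    runs, start = [], None
    for i, b in enumerate(bits):
        if b and start is None:
            start = i
        elif not b and start is not None:
            runs.append((offset + start, offset + i))
            start = None
    if start is not None:
        runs.append((offset + start, offset + len(bits)))
    return runs

def parallel_components(bits, slices=4):
    """Label each slice independently, then merge runs that touch a
    slice boundary -- the same partition/compute/merge pattern as the
    parallel Max-tree scheme, on a toy structure."""
    step = -(-len(bits) // slices)  # ceiling division
    chunks = [(bits[i:i + step], i) for i in range(0, len(bits), step)]
    with ThreadPoolExecutor() as ex:
        per_slice = list(ex.map(lambda c: label_slice(*c), chunks))
    merged = []
    for runs in per_slice:
        for s, e in runs:
            if merged and merged[-1][1] == s:  # glue across the boundary
                merged[-1][1] = e
            else:
                merged.append([s, e])
    return [tuple(r) for r in merged]
```

The key property, shared with the Max-tree merge, is that the result is independent of how the input was sliced.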
Visual memes in social media: Tracking real-world news in YouTube videos
 In Proc. ACM Multimedia, 2011
Abstract

Cited by 12 (2 self)
We propose visual memes, or frequently reposted short video segments, for tracking large-scale video remix in social media. Visual memes are extracted by novel and highly scalable detection algorithms that we develop, with over 96% precision and 80% recall. We monitor real-world events on YouTube, and we model interactions using a graph model over memes, with people and content as nodes and meme postings as links. This allows us to define several measures of influence. These abstractions, applied to more than two million video shots from several large-scale event datasets, enable us to quantify and efficiently extract several important observations: over half of the videos contain remixed content, which appears rapidly; video view counts, particularly high ones, are poorly correlated with the virality of content; the influence of traditional news media versus citizen journalists varies from event to event; iconic single images of an event are easily extracted; and content that will have a long lifespan can be predicted within a day after it first appears. Visual memes can be applied to a number of social media scenarios: brand monitoring, social buzz tracking, and ranking content and users, among others.
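One simple degree-style influence measure over such a meme graph can be sketched as follows; the postings and the crediting rule are illustrative assumptions, not the paper's exact definitions:

```python
from collections import defaultdict

# Hypothetical meme postings (author, meme_id, timestamp); the real
# system extracts memes from video shots, which is not modeled here.
postings = [
    ("news_channel", "m1", 0), ("alice", "m2", 1),
    ("alice", "m1", 2), ("bob", "m1", 3), ("bob", "m2", 4),
]

# Credit the first poster of a meme with every later repost of the
# same meme by someone else (one crude influence measure).
first_poster, influence = {}, defaultdict(int)
for author, meme, t in sorted(postings, key=lambda p: p[2]):
    if meme not in first_poster:
        first_poster[meme] = author
    elif author != first_poster[meme]:
        influence[first_poster[meme]] += 1
```

Here `news_channel` originates meme `m1` and is credited with its two reposts; the paper defines several richer graph-based measures in the same spirit.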
A practical approach to word-level model checking of industrial netlists
 In CAV ’08: Proceedings of the 20th International Conference on Computer Aided Verification, 2008
Abstract

Cited by 10 (0 self)
Abstract. In this paper we present a word-level model checking method that attempts to speed up safety property checking of industrial netlists. Our aim is to construct an algorithm that allows us to check both bounded and unbounded properties using standard bit-level model checking methods as back-end decision procedures, while incurring minimal runtime penalties for designs that are unsuited to our analysis. We do this by combining modifications of several previously known techniques into a static abstraction algorithm which is guaranteed to produce bit-level netlists that are as small as or smaller than the original bit-blasted designs. We evaluate our algorithm on several challenging hardware components.
DiscFinder: A Data-Intensive Scalable Cluster Finder for Astrophysics
Abstract

Cited by 9 (6 self)
DiscFinder is a scalable approach for identifying large-scale astronomical structures, such as galaxy clusters, in massive observation and simulation astrophysics datasets. It is designed to operate on datasets with tens of billions of astronomical objects, even when the dataset is much larger than the aggregate memory of the compute cluster used for the processing.
Nardelli: Distributed searching of k-dimensional data with almost constant costs
 ADBIS 2000, Prague, Lecture Notes in Computer Science, 2000
Abstract

Cited by 7 (1 self)
Abstract. In this paper we consider the dictionary problem in the scalable distributed data structure paradigm introduced by Litwin, Neimat and Schneider and analyze costs for inserts and exact searches in an amortized framework. We show that both for the 1-dimensional and the k-dimensional case inserts and exact searches have an amortized almost constant cost, namely O(log_{1+A} n) messages, where n is the total number of servers of the structure, b is the capacity of each server, and A = b/2. Considering that A is a large value in real applications, in the order of thousands, we can assume a constant cost in real distributed structures. Only worst-case analysis has been previously considered, and the almost constant cost for the amortized analysis of the general k-dimensional case appears to be very promising in light of the well-known difficulties in proving optimal worst-case bounds for k dimensions.
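A quick numeric check shows why the amortized cost is "almost constant"; the values are illustrative, and A = b/2 is reconstructed from the garbled text, so treat it as an assumption:

```python
import math

# Illustrative values: server capacity b = 2000 gives A = b/2 = 1000.
b = 2000
A = b // 2
for n in (10**3, 10**6, 10**9):
    # Amortized messages per operation: log base (1 + A) of n.
    print(n, round(math.log(n, 1 + A), 3))
```

Even growing the structure from a thousand to a billion servers only moves the amortized message cost from about 1 to about 3.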
Optimal re-encryption strategy for joins in encrypted databases
 In Data and Applications Security and Privacy, 2013
Abstract

Cited by 6 (4 self)
Abstract. In order to perform a join in a deterministically, adjustably encrypted database one has to re-encrypt at least one column. The problem is to select the column that will result in the minimum number of re-encryptions even under an unknown schedule of joins. Naive strategies may perform too many or even infinitely many re-encryptions. We provide two strategies that allow for much better performance. In particular, the asymptotic behavior is O(n^{3/2}) resp. O(n log n) re-encryptions for n columns. We show that there can be no algorithm better than O(n log n). We further extend our result to element-wise re-encryptions and show experimentally that our algorithm results in the optimal cost in 41% of the cases.
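A natural way to reach an O(n log n) bound is a weighted-union rule: on each join, re-encrypt the columns of the smaller join group to the larger group's key, so no column is re-encrypted more than log2(n) times. This sketch captures that argument only in spirit; it is an assumption, not necessarily the paper's exact algorithm:

```python
# Columns joined under the same key form groups; join group membership
# is tracked with shared sets (a weighted-union structure).
groups = {c: {c} for c in "abcd"}   # column -> its join group
reenc = {c: 0 for c in "abcd"}      # re-encryption count per column

def join(x, y):
    gx, gy = groups[x], groups[y]
    if gx is gy:
        return                       # already under one key
    small, big = (gx, gy) if len(gx) <= len(gy) else (gy, gx)
    for col in small:
        reenc[col] += 1              # bring col under big's key
        big.add(col)
        groups[col] = big

join("a", "b")
join("c", "d")
join("a", "c")
```

Each column's group at least doubles whenever the column is re-encrypted, which is the standard weighted-union argument behind an O(n log n) total.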
A Straightforward Saturation-Based Decision Procedure for Hybrid Logic
Abstract

Cited by 4 (3 self)
In this paper we present a saturation-based decision procedure for basic hybrid logic extended with the universal modality. Termination of the procedure is guaranteed by constraints that are conceptually simpler than the loop-checks commonly used with related tableau-based decision methods, in that they do not rely on the order in which new formulas are introduced. At the same time, our constraints allow us to bound the worst-case asymptotic complexity of the procedure more tightly than seems possible for methods using conventional loop-checks. The procedure is based on Hardt and Smolka's higher-order formulation of hybrid logic [10].
A New Scalable Parallel DBSCAN Algorithm Using the Disjoint-Set Data Structure
Abstract

Cited by 4 (2 self)
Abstract—DBSCAN is a well-known density-based clustering algorithm capable of discovering arbitrarily shaped clusters and eliminating noise data. However, parallelization of DBSCAN is challenging as it exhibits an inherent sequential data access order. Moreover, existing parallel implementations adopt a master-slave strategy which can easily cause an unbalanced workload and hence result in low parallel efficiency. We present a new parallel DBSCAN algorithm (PDSDBSCAN) using graph algorithmic concepts. More specifically, we employ the disjoint-set data structure to break the access sequentiality of DBSCAN. In addition, we use a tree-based bottom-up approach to construct the clusters. This yields a better-balanced workload distribution. We implement the algorithm both for shared and for distributed memory. Using data sets containing up to several hundred million high-dimensional points, we show that PDSDBSCAN significantly outperforms the master-slave approach, achieving speedups of up to 25.97 using 40 cores on a shared-memory architecture, and speedups of up to 5,765 using 8,192 cores on a distributed-memory architecture. Index Terms—Density-based clustering, union-find algorithm, disjoint-set data structure.
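The disjoint-set idea can be sketched on 1-D points. This illustrates only the order-independent cluster merging, not PDSDBSCAN itself; border-point assignment and noise handling are omitted:

```python
def dbscan_components(points, eps, min_pts):
    """Union-find core of DBSCAN on 1-D points: core points are merged
    with neighboring core points. The unions commute, so any processing
    order (including a parallel one) yields the same components."""
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    neighbors = [
        [j for j in range(n) if j != i and abs(points[i] - points[j]) <= eps]
        for i in range(n)
    ]
    core = [len(nb) + 1 >= min_pts for nb in neighbors]
    for i in range(n):
        if core[i]:
            for j in neighbors[i]:
                if core[j]:                    # core-core edge: mergeable
                    parent[find(i)] = find(j)  # safe in any order
    return [find(i) for i in range(n)]
```

Because the merge step is commutative, each worker can process its own partition and the partial disjoint-set forests can be combined afterwards, which is the sequentiality-breaking property the paper exploits.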