Results 1–10 of 26
Filling Gaps in the Boundary of a Polyhedron
 Computer Aided Geometric Design
, 1993
Abstract

Cited by 38 (4 self)
In this paper we present an algorithm for detecting and repairing defects in the boundary of a polyhedron. These defects, usually caused by problems in CAD software, consist of small gaps bounded by edges that are incident to only one polyhedron face. The algorithm uses a partial curve matching technique for matching parts of the defects, and an optimal triangulation of 3D polygons for resolving the unmatched parts. It is also shown that finding a consistent set of partial curve matches with maximum score, a subproblem which is related to our repairing process, is NP-Hard. Experimental results on several polyhedra are presented.
Keywords: CAD, polyhedra, gap filling, curve matching, geometric hashing, triangulation.
1 Introduction
The problem studied in this paper is the detection and repair of "gaps" in the boundary of a polyhedron. This problem usually appears in polyhedral approximations of CAD objects, whose boundaries are described using curved entities of higher leve...
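The detection step the abstract describes — finding edges incident to only one polyhedron face — can be sketched by counting face incidences per undirected edge. A minimal illustration, not the paper's code; the triangle-mesh representation is an assumption:

```python
from collections import Counter

def boundary_edges(faces):
    """Return edges incident to exactly one face of a triangle mesh.

    `faces` is a list of vertex-index triples; each edge is normalized
    to a sorted pair so (a, b) and (b, a) count as the same edge.
    """
    counts = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            counts[(min(u, v), max(u, v))] += 1
    return sorted(e for e, n in counts.items() if n == 1)

# Two triangles sharing edge (1, 2): only the outer edges are reported.
print(boundary_edges([(0, 1, 2), (1, 3, 2)]))  # [(0, 1), (0, 2), (1, 3), (2, 3)]
```

On a watertight mesh this returns an empty list; the chains of boundary edges it reports are the gap contours that the paper's curve-matching and triangulation stages would then repair.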
Concurrent Computation of Attribute Filters on Shared Memory Parallel Machines
, 2008
Abstract

Cited by 11 (0 self)
Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings, and thickenings, based on Salembier's Max-trees and Min-trees. The image or volume is first partitioned into multiple slices. We then compute the Max-tree of each slice using any sequential Max-tree algorithm. Subsequently, the Max-trees of the slices can be merged to obtain the Max-tree of the image. A C implementation yielded good speedups on both a 16-processor MIPS 14000 parallel machine and a dual-core Opteron-based machine. It is shown that the speedup of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor due to reduced cache thrashing.
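The partition–compute–merge shape the abstract describes can be illustrated on a heavily simplified stand-in: connected components of a 1D signal at a single threshold, with slices processed concurrently and components merged where they touch at slice borders. All function names are hypothetical, and a real implementation merges full Max-trees, not single-threshold runs:

```python
from concurrent.futures import ThreadPoolExecutor

def slice_runs(signal, lo, hi, t):
    """Connected runs of values >= t inside signal[lo:hi], as (start, end) pairs."""
    runs, start = [], None
    for i in range(lo, hi):
        if signal[i] >= t and start is None:
            start = i
        elif signal[i] < t and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, hi))
    return runs

def parallel_components(signal, t, nslices=4):
    """Compute per-slice runs concurrently, then merge runs that touch
    across slice borders -- the same partition/compute/merge pattern as
    the abstract's Max-tree algorithm, reduced to one threshold."""
    n = len(signal)
    bounds = [(i * n // nslices, (i + 1) * n // nslices) for i in range(nslices)]
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(lambda b: slice_runs(signal, b[0], b[1], t), bounds))
    merged = []
    for runs in parts:
        for run in runs:
            if merged and merged[-1][1] == run[0]:   # touches previous slice's run
                merged[-1] = (merged[-1][0], run[1])
            else:
                merged.append(run)
    return merged

signal = [0, 7, 7, 7, 7, 7, 0, 0, 7, 7, 0, 0]
print(parallel_components(signal, t=1, nslices=3))  # [(1, 6), (8, 10)]
```

An area opening at this threshold would then simply discard components whose length falls below the area parameter; the merged result is identical to the single-slice (sequential) computation, which is the correctness property the paper's tree-merging step guarantees.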
Nardelli: Distributed searching of k-dimensional data with almost constant costs
 ADBIS 2000, Prague, Lecture Notes in Computer Science
, 2000
Abstract

Cited by 7 (1 self)
Abstract. In this paper we consider the dictionary problem in the scalable distributed data structure paradigm introduced by Litwin, Neimat and Schneider, and analyze costs for insert and exact searches in an amortized framework. We show that for both the 1-dimensional and the k-dimensional case, insert and exact searches have an amortized almost constant cost, namely O(log_{1+A} n) messages, where n is the total number of servers of the structure, b is the capacity of each server, and A = b/2. Considering that A is a large value in real applications, in the order of thousands, we can assume a constant cost in real distributed structures. Only worst-case analysis has been previously considered, and the almost constant cost for the amortized analysis of the general k-dimensional case appears to be very promising in light of the well-known difficulties in proving optimal worst-case bounds for k dimensions.
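To see why the abstract calls a bound of O(log_{1+A} n) messages "almost constant", one can evaluate it for realistic parameters. A small illustration, not from the paper; the concrete numbers are assumptions:

```python
import math

def amortized_messages(n, A):
    """Evaluate the abstract's bound O(log_{1+A} n): log of n in base (1 + A)."""
    return math.log(n) / math.log(1 + A)

# With server capacity b = 2000, so A = b/2 = 1000: even a million servers
# cost only about two messages per operation in the amortized sense.
print(round(amortized_messages(10**6, 1000), 2))  # 2.0
```

Because the base of the logarithm grows with server capacity, the bound stays in the low single digits for any practically reachable number of servers.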
DiscFinder: A Data-Intensive Scalable Cluster Finder for Astrophysics
Abstract

Cited by 6 (4 self)
DiscFinder is a scalable approach for identifying large-scale astronomical structures, such as galaxy clusters, in massive observation and simulation astrophysics datasets. It is designed to operate on datasets with tens of billions of astronomical objects, even when the dataset is much larger than the aggregate memory of the compute cluster used for the processing.
Visual memes in social media: Tracking real-world news in YouTube videos
 in Proc. ACM MULTIMEDIA
, 2011
Abstract

Cited by 3 (0 self)
We propose visual memes, or frequently reposted short video segments, for tracking large-scale video remix in social media. Visual memes are extracted by novel and highly scalable detection algorithms that we develop, with over 96% precision and 80% recall. We monitor real-world events on YouTube, and we model interactions using a graph model over memes, with people and content as nodes and meme postings as links. This allows us to define several measures of influence. These abstractions, using more than two million video shots from several large-scale event datasets, enable us to quantify and efficiently extract several important observations: over half of the videos contain remixed content, which appears rapidly; video view counts, particularly high ones, are poorly correlated with the virality of content; the influence of traditional news media versus citizen journalists varies from event to event; iconic single images of an event are easily extracted; and content that will have a long lifespan can be predicted within a day after it first appears. Visual memes can be applied to a number of social media scenarios: brand monitoring, social buzz tracking, ranking content and users, among others.
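The graph model in the abstract — people and content as nodes, meme postings as links — suggests one simple influence measure: credit an author whenever a meme they posted is reposted later. A minimal sketch; the data layout and scoring rule are illustrative assumptions, not the paper's definitions:

```python
from collections import defaultdict

def influence_scores(postings):
    """Score each author by how many later reposts their meme postings precede.

    `postings` is a list of (author, meme_id, time) triples -- an
    illustrative layout, not the paper's dataset format.
    """
    by_meme = defaultdict(list)
    for author, meme, t in postings:
        by_meme[meme].append((t, author))
    score = defaultdict(int)
    for events in by_meme.values():
        events.sort()                    # chronological order per meme
        earlier = []
        for _, author in events:
            for prev in earlier:
                if prev != author:       # don't credit self-reposts
                    score[prev] += 1
            earlier.append(author)
    return dict(score)

postings = [("news_channel", "m1", 0), ("alice", "m1", 1), ("bob", "m1", 2),
            ("alice", "m2", 0), ("bob", "m2", 1)]
print(influence_scores(postings))  # {'news_channel': 2, 'alice': 2}
```

Comparing the scores of news accounts against those of ordinary users is one way to make the abstract's "traditional news media versus citizen journalists" comparison concrete.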
The slice algorithm for irreducible decomposition of monomial ideals
 Journal of Symbolic Computation
Abstract

Cited by 3 (0 self)
Abstract. Irreducible decomposition of monomial ideals has an increasing number of applications from biology to pure math. This paper presents the Slice Algorithm for computing irreducible decompositions, Alexander duals and socles of monomial ideals. The paper includes experiments showing good performance in practice.
A Straightforward Saturation-Based Decision Procedure for Hybrid Logic
Abstract

Cited by 3 (3 self)
In this paper we present a saturation-based decision procedure for basic hybrid logic extended with the universal modality. Termination of the procedure is guaranteed by constraints that are conceptually simpler than the loop-checks commonly used with related tableau-based decision methods, in that they do not rely on the order in which new formulas are introduced. At the same time, our constraints allow us to limit the worst-case asymptotic complexity of the procedure more tightly than seems to be possible for methods using conventional loop-checks. The procedure is based on Hardt and Smolka's higher-order formulation of hybrid logic [10].
Recipes for Baking Black Forest Databases: Building and Querying Black Hole Merger Trees from Cosmological Simulations
, 2011
Abstract

Cited by 1 (1 self)
an allocation of advanced computing resources supported by the NSF and the TeraGrid Advanced Support Program. The computations were performed on Kraken at the National Institute for Computational Sciences (NICS).
Equivalence and inequivalence of instances of formulas
 TR553, Comptr. Sci. Dept., U. of
, 1977
Abstract

Cited by 1 (1 self)
An algorithm is presented for determining whether or not two instances of formulas are equal, based on previous equality and inequality declarations. The problem is attacked using a formal grammar approach. The equality determination algorithm is shown to be almost linear, on the average, and a completeness proof is given. The maximum space requirement for the equality data base is also discussed.
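A closely related way to maintain such equality and inequality declarations is a union-find structure with path compression, which likewise gives near-linear behaviour. The sketch below uses that substitution rather than the paper's formal-grammar approach; class and method names are mine:

```python
class EqualitySolver:
    """Maintain equality/inequality declarations over terms and answer
    equality queries -- a union-find sketch, not the paper's algorithm."""

    def __init__(self):
        self.parent = {}   # term -> parent term (roots map to themselves)
        self.neq = set()   # pairs of terms declared unequal

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def _clashes(self, ra, rb):
        # An inequality clashes if its two sides now lie in these two classes.
        return any({self.find(u), self.find(v)} == {ra, rb} for u, v in self.neq)

    def declare_equal(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb and self._clashes(ra, rb):
            raise ValueError(f"{a} = {b} contradicts a declared inequality")
        self.parent[ra] = rb

    def declare_unequal(self, a, b):
        if self.find(a) == self.find(b):
            raise ValueError(f"{a} != {b} contradicts a declared equality")
        self.neq.add((a, b))

    def equal(self, a, b):
        return self.find(a) == self.find(b)

solver = EqualitySolver()
solver.declare_equal("f(x)", "f(y)")
solver.declare_unequal("f(y)", "g(x)")
print(solver.equal("f(x)", "f(y)"), solver.equal("f(x)", "g(x)"))  # True False
```

Each query costs nearly constant amortized time, echoing the abstract's "almost linear, on the average" total cost over a sequence of declarations.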
Efficient Simplification of Bisimulation Formulas
 In Proceedings of the Workshop on Tools and Algorithms for the Construction and Analysis of Systems, pages 111–132. LNCS 1019
, 1995
Abstract

Cited by 1 (0 self)
The problem of checking or optimally simplifying bisimulation formulas is likely to be computationally very hard. We take a different view of the problem: we set out to define a very fast algorithm, and then see what we can obtain. Sometimes our algorithm can simplify a formula perfectly, sometimes it cannot. However, the algorithm is extremely fast and can therefore be added to formula-based bisimulation model checkers at practically no cost. When the formula can be simplified by our algorithm, this can have a dramatic positive effect on the better, but also more time-consuming, theorem provers which will finish the job.
1 Introduction
The need for validity checking or optimal simplification of first-order bisimulation formulas has arisen from recent work on symbolic bisimulation checking of value-passing calculi [4, 9, 15]. The NP-completeness of checking satisfiability of propositional formulas [3] implies that validity checking of that class of formulas is coNP-complete. Addit...