Results 1–10 of 39
Pattern codification strategies in structured light systems
2004
Cited by 109 (15 self)
Abstract:
Coded structured light is considered one of the most reliable techniques for recovering the surface of objects. This technique is based on projecting a light pattern and viewing the illuminated scene from one or more points of view. Since the pattern is coded, correspondences between image points and points of the projected pattern can be easily found. The decoded points can be triangulated and 3D information is obtained. We present an overview of the existing techniques, as well as a new and definitive classification of patterns for structured light sensors. We have implemented a set of representative techniques in this field and present some comparative results. The advantages and constraints of the different patterns are also discussed.
Outlier mining in large high-dimensional data sets
IEEE Transactions on Knowledge and Data Engineering, 2005
Cited by 23 (1 self)
Abstract:
In this paper a new definition of distance-based outlier and an algorithm, called HilOut, designed to efficiently detect the top n outliers of a large and high-dimensional data set are proposed. Given an integer k, the weight of a point is defined as the sum of the distances separating it from its k nearest neighbors. Outliers are those points scoring the largest values of weight. The algorithm HilOut makes use of the notion of space-filling curve to linearize the data set, and it consists of two phases. The first phase provides an approximate solution, within a rough factor, after the execution of at most d + 1 sorts and scans of the data set, with temporal cost quadratic in d and linear in N and in k, where d is the number of dimensions of the data set and N is the number of points in the data set. During this phase, the algorithm isolates candidate outlier points and reduces this set at each iteration. If the size of this set becomes n, then the algorithm stops, reporting the exact solution. The second phase calculates the exact solution with a final scan examining further the candidate outliers remaining after the first phase. Experimental results show that the algorithm always stops, reporting the exact solution, during the first phase after many fewer than d + 1 steps. We present both an in-memory and a disk-based implementation of the HilOut algorithm and a thorough scaling analysis for real and synthetic data sets showing that the algorithm scales well in both cases.
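The weight definition above is easy to state directly; the following brute-force sketch illustrates it (it is not the HilOut algorithm itself, which exists precisely to avoid the quadratic pairwise scan via space-filling-curve linearization; `top_n_outliers` is a hypothetical helper name):

```python
import math

def top_n_outliers(points, k, n):
    # Brute-force illustration of the distance-based weight definition:
    # weight(p) = sum of distances from p to its k nearest neighbors;
    # the n points with the largest weights are reported as outliers.
    # This performs O(N^2) distance computations.
    weights = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        weights.append((sum(dists[:k]), i))
    weights.sort(reverse=True)
    return [idx for _, idx in weights[:n]]

# A tight cluster plus one distant point: the distant point accumulates
# by far the largest weight and is reported as the single top outlier.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (10.0, 10.0)]
print(top_n_outliers(points, k=2, n=1))  # [4]
```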
A Common Data Management Infrastructure for Adaptive Algorithms for PDE Solutions
in: Proceedings of the Supercomputing Conference, ACM/IEEE Computer Society, 1997
Cited by 22 (5 self)
Abstract:
This paper presents the design, development and application of a computational infrastructure to support the implementation of parallel adaptive algorithms for the solution of sets of partial differential equations. The infrastructure is separated into multiple layers of abstraction. This paper is primarily concerned with the two lowest layers of this infrastructure: a layer which defines and implements dynamic distributed arrays (DDAs), and a layer in which several dynamic data and programming abstractions are implemented in terms of the DDAs. The currently implemented abstractions are those needed for the formulation of hierarchical adaptive finite difference methods, hp-adaptive finite element methods, and the fast multipole method for the solution of linear systems. Implementations of sample applications based on each of these methods are described, and implementation issues and performance measurements are presented.
A Fast Solution Method For Three-Dimensional Many-Particle Problems Of Linear Elasticity
Int. J. Num. Meth. Engrg, 1998
Cited by 19 (6 self)
Abstract:
A boundary element method for solving three-dimensional linear elasticity problems that involve a large number of particles embedded in a binder is introduced. The proposed method relies on an iterative solution strategy in which matrix-vector multiplication is performed with the fast multipole method. As a result the method is capable of solving problems with N unknowns using only O(N) memory and O(N) operations. Results are given for problems with hundreds of particles in which N = O(10^5). KEY WORDS: boundary element method; fast multipole method; many-particle problem; linear elasticity; iterative solution strategy.
Detecting outliers using transduction and statistical testing
In Proceedings of the 12th Annual SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006
Cited by 11 (1 self)
Abstract:
Outlier detection can uncover malicious behavior in fields like intrusion detection and fraud analysis. Although there has been a significant amount of work in outlier detection, most of the algorithms proposed in the literature are based on a particular definition of outliers (e.g., density-based), and use ad hoc thresholds to detect them. In this paper we present a novel technique to detect outliers with respect to an existing clustering model. However, the test can also be successfully utilized to recognize outliers when the clustering information is not available. Our method is based on Transductive Confidence Machines, which have been previously proposed as a mechanism to provide individual confidence measures on classification decisions. The test uses hypothesis testing to prove or disprove whether a point fits in each of the clusters of the model. We experimentally demonstrate that the test is highly robust, and produces very few misdiagnosed points, even when no clustering information is available. Furthermore, our experiments demonstrate the robustness of our method when the data is contaminated by outliers. We finally show that our technique can be successfully applied to identify outliers in a noisy data set for which no information (e.g., ground truth, clustering structure) is available. As such, our proposed methodology is capable of bootstrapping from a noisy data set a clean one that can be used to identify future outliers.
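The hypothesis-testing idea can be sketched with a conformal-style p-value. This is only an illustration of the general transductive mechanism under assumed details (strangeness taken as distance to the cluster mean, hypothetical helper name `conformal_pvalue`), not the paper's exact Transductive Confidence Machine:

```python
import math

def conformal_pvalue(point, cluster):
    # Strangeness of a point = Euclidean distance to the cluster mean.
    # The p-value is the fraction of examples (cluster members plus the
    # tested point itself) that are at least as strange as the tested
    # point. A tiny p-value in every cluster of the model flags an outlier.
    dim = len(point)
    mean = [sum(c[d] for c in cluster) / len(cluster) for d in range(dim)]
    strangeness = lambda q: math.dist(q, mean)
    a_new = strangeness(point)
    scores = [strangeness(c) for c in cluster] + [a_new]
    return sum(1 for a in scores if a >= a_new) / len(scores)

cluster = [(0.0, 0.0), (0.1, 0.1), (-0.1, 0.0), (0.0, 0.1), (0.05, -0.05)]
print(conformal_pvalue((5.0, 5.0), cluster))   # 1/6: the far point is maximally strange
print(conformal_pvalue((0.0, 0.05), cluster))  # 1.0: consistent with the cluster
```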
Dynamic Load Partitioning Strategies for Managing Data of Space and Time Heterogeneity in Parallel SAMR Applications
In The 9th International Euro-Par Conference (Euro-Par 2003), 2003
Cited by 10 (5 self)
Abstract:
This paper presents the design and experimental evaluation of two dynamic load partitioning and balancing strategies for parallel Structured Adaptive Mesh Refinement (SAMR) applications: the Level-based Partitioning Algorithm (LPA) and the Hierarchical Partitioning Algorithm (HPA). These techniques specifically address the computational and communication heterogeneity across refinement levels of the adaptive grid hierarchy underlying these methods.
Hierarchical Partitioning Techniques for Structured Adaptive Mesh Refinement Applications
The Journal of Supercomputing, 2003
Cited by 8 (4 self)
Abstract:
This paper presents the design and preliminary evaluation of hierarchical partitioning and load-balancing techniques for distributed Structured Adaptive Mesh Refinement (SAMR) applications. The overall goal of these techniques is to enable the load distribution to reflect the state of the adaptive grid hierarchy and exploit it to reduce synchronization requirements, improve load balance, and enable concurrent communications and incremental redistribution. The hierarchical partitioning algorithm (HPA) partitions the computational domain into subdomains and assigns them to hierarchical processor groups. Two variants of HPA are presented in this paper. The Static Hierarchical Partitioning Algorithm (SHPA) assigns portions of the overall load to processor groups. In SHPA, the group size and the number of processors in each group are set up during initialization and remain unchanged during application execution. It is experimentally shown that SHPA reduces communication costs as compared to the non-HPA scheme, and reduces overall application execution time by up to 59%. The Adaptive Hierarchical Partitioning Algorithm (AHPA) dynamically partitions the processor pool into hierarchical groups that match the structure of the adaptive grid hierarchy. Initial evaluations of AHPA show that it can reduce communication costs by up to 70%.
On the Quality of Partitions based on Space-Filling Curves
2002
Cited by 6 (1 self)
Abstract:
This paper presents bounds on the quality of partitions induced by space-filling curves. We compare the surface that surrounds an arbitrary index range with the optimal partition in the grid, i.e., the square. It is shown that partitions induced by Lebesgue and Hilbert curves behave about 1.85 times worse with respect to the length of the surface. The Lebesgue indexing gives better results than the Hilbert indexing in the worst-case analysis. Furthermore, the surfaces of partitions based on the Lebesgue indexing are at most 3 times larger than the optimum in the average case.
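The Lebesgue indexing discussed here is the Z-order (Morton) curve: the index of a grid cell is obtained by interleaving the bits of its coordinates, and a partition is a contiguous range of indices. A minimal 2-D sketch:

```python
def lebesgue_index(x, y, bits=16):
    # Z-order / Morton index: bit i of x goes to bit 2i of the index,
    # bit i of y goes to bit 2i+1. Cells whose indices form a
    # contiguous range make up one partition of the grid.
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

# The first four indices trace the characteristic "Z" through a 2x2 block.
print([lebesgue_index(x, y) for y in (0, 1) for x in (0, 1)])  # [0, 1, 2, 3]
```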
Graph Partitioning in Scientific Simulations: Multilevel Schemes versus Space-Filling Curves
Cited by 6 (3 self)
Abstract:
Using space-filling curves to partition unstructured finite element meshes is a widely applied strategy when it comes to distributing load among several computation nodes. Compared to more elaborate graph-partitioning packages, this geometric approach is relatively easy to implement and very fast. However, results are not expected to be as good as those of the latter, yet no detailed comparison has ever been published. In this paper we will...
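The geometric strategy being compared can be sketched in a few lines, assuming cells are given as integer grid coordinates (a simplification; a real mesh would first map element centroids to such coordinates): order cells along the Z-order curve, then cut the sequence into contiguous, near-equal chunks, one per computation node.

```python
def zorder_partition(cells, nparts):
    # Order cells along the Z-order (Lebesgue) curve, then split the
    # sequence into nparts contiguous chunks of near-equal size.
    def index(cell, bits=16):
        x, y = cell
        z = 0
        for i in range(bits):
            z |= ((x >> i) & 1) << (2 * i)
            z |= ((y >> i) & 1) << (2 * i + 1)
        return z
    ordered = sorted(cells, key=index)
    size = -(-len(ordered) // nparts)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

# Partitioning a 4x4 grid into 4 parts yields four 2x2 blocks, since
# the curve visits each quadrant completely before moving on.
parts = zorder_partition([(x, y) for x in range(4) for y in range(4)], 4)
print(sorted(parts[0]))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The chunks have balanced sizes by construction; the papers above quantify how much their boundaries (and hence communication volume) can exceed those of an optimal or multilevel-graph partition.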
Materialized community ground models for largescale earthquake simulation
Cited by 5 (0 self)
Abstract:
Large-scale earthquake simulation requires source datasets which describe the highly heterogeneous physical characteristics of the earth in the region under simulation. Physical characteristic datasets are the first stage in a simulation pipeline which includes mesh generation, partitioning, solving, and visualization. In practice, the data is produced in an ad-hoc fashion for each set of experiments, which has several significant shortcomings, including lower performance, decreased repeatability and comparability, and a longer time to science, an increasingly important metric. As a solution to these problems, we propose a new approach for providing scientific data to ground motion simulations, in which ground model datasets are fully materialized into octrees stored on disk, which can be queried more efficiently (by up to two orders of magnitude) than the underlying community velocity model programs. While octrees have long been used to store spatial datasets, they have not yet been used at the scale we propose. We further propose that these datasets can be provided as a service, either over the Internet or, more likely, in a datacenter or supercomputing center in which the simulations take place. Since constructing these octrees is itself a challenge, we present three data-parallel techniques for efficiently building them, which can significantly decrease the build time from days or weeks to hours using commodity clusters. This approach typifies a broader shift toward science-as-a-service techniques in which scientific computation and storage services become more tightly intertwined.