Results 1–10 of 111
Multidimensional Access Methods, 1998
Abstract

Cited by 561 (3 self)
Search operations in databases require special support at the physical level. This is true for conventional databases as well as spatial databases, where typical search operations include the point query (find all objects that contain a given search point) and the region query (find all objects that overlap a given search region).
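As a hedged illustration of the two query types named in this abstract, the following sketch runs a point query and a region query over axis-aligned rectangles with a naive linear scan; a real system would answer both through a spatial index such as the access methods this survey covers. All class and function names here are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, x, y):
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

    def overlaps(self, other):
        return (self.xmin <= other.xmax and other.xmin <= self.xmax and
                self.ymin <= other.ymax and other.ymin <= self.ymax)

def point_query(objects, x, y):
    """Find all objects that contain the given search point."""
    return [r for r in objects if r.contains(x, y)]

def region_query(objects, region):
    """Find all objects that overlap the given search region."""
    return [r for r in objects if r.overlaps(region)]
```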
The quadtree and related hierarchical data structures
ACM Computing Surveys, 1984
Abstract

Cited by 421 (11 self)
A tutorial survey is presented of the quadtree and related hierarchical data structures. They are based on the principle of recursive decomposition. The emphasis is on the representation of data used in applications in image processing, computer graphics, geographic information systems, and robotics. There is a greater emphasis on region data (i.e., two-dimensional shapes) and to a lesser extent on point, curvilinear, and three-dimensional data. A number of operations in which such data structures find use are examined in greater detail.
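The recursive decomposition underlying the quadtree can be sketched briefly. Assuming a square binary image whose side is a power of two, each block is stored as a leaf if uniform and otherwise split into four quadrant subtrees (names and layout here are illustrative, not taken from the survey):

```python
def build_quadtree(img, x=0, y=0, size=None):
    """Region quadtree over a 2^n x 2^n binary image: a uniform block
    becomes a leaf (its value), otherwise the block is split into the
    four quadrants NW, NE, SW, SE, recursively."""
    if size is None:
        size = len(img)
    first = img[y][x]
    if all(img[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return first                                  # uniform block -> leaf
    h = size // 2
    return [build_quadtree(img, x,     y,     h),     # NW
            build_quadtree(img, x + h, y,     h),     # NE
            build_quadtree(img, x,     y + h, h),     # SW
            build_quadtree(img, x + h, y + h, h)]     # SE

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0]]
tree = build_quadtree(img)
# NW is uniformly 1, NE and SW uniformly 0, SE splits one level further.
```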
Spatial Data Structures, 1995
Abstract

Cited by 287 (13 self)
An overview is presented of the use of spatial data structures in spatial databases. The focus is on hierarchical data structures, including a number of variants of quadtrees, which sort the data with respect to the space occupied by it. Such techniques are known as spatial indexing methods. Hierarchical data structures are based on the principle of recursive decomposition. They are attractive because they are compact and, depending on the nature of the data, they save space as well as time and also facilitate operations such as search. Examples are given of the use of these data structures in the representation of different data types such as regions, points, rectangles, lines, and volumes.
Designing pixel-oriented visualization techniques: Theory and applications
IEEE Transactions on Visualization and Computer Graphics, 2000
Abstract

Cited by 86 (8 self)
Visualization techniques are of increasing importance in exploring and analyzing large amounts of multidimensional information. One important class of visualization techniques, which is particularly interesting for visualizing very large multidimensional data sets, is the class of pixel-oriented techniques. The basic idea of pixel-oriented visualization techniques is to represent as many data objects as possible on the screen at the same time by mapping each data value to a pixel of the screen and arranging the pixels adequately. A number of different pixel-oriented visualization techniques have been proposed in recent years, and it has been shown that the techniques are useful for visual data exploration in a number of different application contexts. In this paper, we discuss a number of issues which are of high importance in developing pixel-oriented visualization techniques. The major goal of this article is to provide a formal basis for pixel-oriented visualization techniques and to show that the design decisions in developing them can be seen as solutions of well-defined optimization problems. This is true for the mapping of the data values to colors, the arrangement of pixels inside the subwindows, the shape of the subwindows, and the ordering of the dimension subwindows. The paper also discusses the design issues of special variants of pixel-oriented techniques for visualizing large spatial data sets. The optimization functions for the mentioned design decisions are important for the effectiveness of the resulting visualizations. We show this by evaluating the optimization functions and comparing the results to the visualizations obtained in a number of different applications. Index Terms: Information visualization, visualizing large data sets, visualizing multidimensional and multivariate data, visual data exploration, visual data mining.
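The core idea (one pixel per data value, with an arrangement that preserves locality in the data order) can be illustrated with a toy sketch; a back-and-forth "snake" layout stands in here for the optimized space-filling arrangements the paper discusses, and all names are illustrative:

```python
def to_gray(values):
    """Map each data value linearly to a grayscale level 0..255."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1
    return [round(255 * (v - lo) / span) for v in values]

def snake_layout(levels, width):
    """Place one pixel per value, reversing direction on every other row
    so that consecutive data items stay adjacent on screen."""
    rows = [levels[i:i + width] for i in range(0, len(levels), width)]
    return [row if r % 2 == 0 else row[::-1] for r, row in enumerate(rows)]

grid = snake_layout(to_gray([0, 1, 2, 3, 4, 5, 6, 7, 8]), 3)
```

Real pixel-oriented techniques replace the snake path with space-filling curves and per-dimension subwindows, which is exactly the arrangement problem the paper formalizes as optimization.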
Visualization Techniques for Mining Large Databases: A Comparison
IEEE Transactions on Knowledge and Data Engineering, 1996
Abstract

Cited by 75 (1 self)
Visual data mining techniques have proven to be of high value in exploratory data analysis, and they also have a high potential for mining large databases. In this article, we describe and evaluate a new visualization-based approach to mining large databases. The basic idea of our visual data mining techniques is to represent as many data items as possible on the screen at the same time by mapping each data value to a pixel of the screen and arranging the pixels adequately. The major goal of this article is to evaluate our visual data mining techniques and to compare them to other well-known visualization techniques for multidimensional data: the parallel coordinate and stick figure visualization techniques. For the evaluation of visual data mining techniques, the perception of properties of the data counts in the first place, and the CPU time and the number of secondary storage accesses are important only in the second place. In addition to testing the visualization techniques using re...
Scalable Network Distance Browsing in Spatial Databases, 2008
Abstract

Cited by 46 (8 self)
An algorithm is presented for finding the k nearest neighbors in a spatial network in a best-first manner using network distance. The algorithm is based on precomputing the shortest paths between all possible vertices in the network and then making use of an encoding that takes advantage of the fact that the shortest paths from vertex u to all of the remaining vertices can be decomposed into subsets based on the first edges on the shortest paths to them from u. Thus, in the worst case, the amount of work depends on the number of objects that are examined and the number of links on the shortest paths to them from q, rather than depending on the number of vertices in the network. The amount of storage required to keep track of the subsets is reduced by taking advantage of their spatial coherence, which is captured by the aid of a shortest path quadtree. In particular, experiments on a number of large road networks as ...
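A simplified sketch of the first-edge decomposition described above: run Dijkstra from a source u and record, for every reachable vertex, the first edge of its shortest path from u; grouping vertices by that first edge yields the subsets that the shortest path quadtree then encodes compactly. The quadtree itself is omitted here, and the graph representation and names are illustrative:

```python
import heapq
from collections import defaultdict

def first_edge_subsets(graph, u):
    """graph: {v: [(w, weight), ...]} adjacency lists.
    Returns {first_neighbor_of_u: {vertices whose shortest path from u
    starts with the edge u -> first_neighbor_of_u}}."""
    dist = {u: 0.0}
    first = {}                       # vertex -> first hop out of u
    pq = [(0.0, u, None)]            # (distance, vertex, first hop)
    while pq:
        d, v, hop = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue                 # stale queue entry
        if hop is not None:
            first[v] = hop
        for w, wt in graph.get(v, []):
            nd = d + wt
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                # propagate the first hop along the improving path
                heapq.heappush(pq, (nd, w, hop if hop is not None else w))
    subsets = defaultdict(set)
    for v, hop in first.items():
        subsets[hop].add(v)
    return dict(subsets)
```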
High Resolution Forward and Inverse Earthquake Modeling on Terascale Computers
In SC2003, 2003
Abstract

Cited by 39 (17 self)
For earthquake simulations to play an important role in the reduction of seismic risk, they must be capable of high resolution and high fidelity. We have developed algorithms and tools for earthquake simulation based on multiresolution hexahedral meshes. We have used this capability to carry out 1 Hz simulations of the 1994 Northridge earthquake in the LA Basin using 100 million grid points. Our wave propagation solver sustains 1.21 teraflop/s for 4 hours on 3000 AlphaServer processors at 80% parallel efficiency. Because of uncertainties in characterizing earthquake source and basin material properties, a critical remaining challenge is to invert for source and material parameter fields for complex 3D basins from records of past earthquakes. Towards this end, we present results for material and source inversion of high-resolution models of basins undergoing antiplane motion using parallel scalable inversion algorithms that overcome many of the difficulties particular to inverse heterogeneous wave propagation problems.
Balancing Processor Loads and Exploiting Data Locality in N-Body Simulations
In Proceedings of Supercomputing’95 (CD-ROM), 1995
Abstract

Cited by 28 (11 self)
Although N-body simulation algorithms are amenable to parallelization, performance gains from execution on parallel machines are difficult to obtain due to load imbalances caused by irregular distributions of bodies. In general, there is a tension between balancing processor loads and maintaining locality, as the dynamic reassignment of work necessitates access to remote data. Fractiling is a dynamic scheduling scheme that simultaneously balances processor loads and maintains locality by exploiting the self-similarity properties of fractals. Fractiling is based on a probabilistic analysis and thus accommodates load imbalances caused by predictable phenomena, such as irregular data, and unpredictable phenomena, such as data-access latencies. In experiments on a KSR1, performance of N-body simulation codes was improved by as much as 53% by fractiling. Performance improvements were obtained on uniform and nonuniform distributions of bodies, underscoring the need for a scheduling schem...
Navigating through Triangle Meshes Implemented as Linear Quadtrees
ACM Transactions on Graphics, 1998
Abstract

Cited by 28 (1 self)
Techniques are presented for navigating between adjacent triangles of greater or equal size in a hierarchical triangle mesh where the triangles are obtained by a recursive quadtree-like subdivision of the underlying space into four equilateral triangles. These techniques are useful in a number of applications including finite element analysis, ray tracing, and the modeling of spherical data. The operations are implemented in a manner analogous to that used in a quadtree representation of data on the two-dimensional plane where the underlying space is tessellated into a square mesh. A new technique is described for labeling the triangles which is useful in implementing the quadtree triangle mesh as a linear quadtree (i.e., a pointerless quadtree); the navigation can then take place in this linear quadtree. When the neighbors are of equal size, the algorithms take constant time. The algorithms are very efficient, as they make use of just a few bit manipulation operations and can be impl...
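The paper's labeling makes neighbor finding a matter of a few bit operations on a pointerless code. As a hedged analogue (for the square quadtree, not the triangle-mesh labeling described above), equal-size neighbor finding on Morton, i.e. bit-interleaved, location codes uses a masked-carry trick; all names here are illustrative:

```python
X_MASK = 0x5555555555555555   # bits of x live at even positions
Y_MASK = 0xAAAAAAAAAAAAAAAA   # bits of y live at odd positions

def interleave(x, y):
    """Build a Morton location code by interleaving the bits of x and y."""
    code = 0
    for i in range(32):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def east_neighbor(code):
    """Equal-size east neighbor: increment only the interleaved x bits.
    Filling the y positions with 1s makes the +1 carry skip over them."""
    return (((code | Y_MASK) + 1) & X_MASK) | (code & Y_MASK)

# east_neighbor(interleave(2, 1)) is the code of cell (3, 1).
```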