Results 1 – 8 of 8
TerraStream: From elevation data to watershed hierarchies
In Proc. ACM Symposium on Advances in Geographic Information Systems
Abstract
Cited by 11 (7 self)
We consider the problem of extracting a river network and a watershed hierarchy from a terrain given as a set of irregularly spaced points. We describe TerraStream, a “pipelined” solution that consists of four main stages: construction of a digital elevation model (DEM), hydrological conditioning, extraction of river networks, and construction of a watershed hierarchy. Our approach has several advantages over existing methods. First, we design and implement the pipeline so each stage is scalable to massive data sets; a single non-scalable stage would create a bottleneck and limit overall scalability. Second, we develop the algorithms in a general framework so that they work for both TIN and grid DEMs. TerraStream is flexible and allows users to choose from various models and parameters, yet our pipeline is designed to reduce (or eliminate) the need for manual intervention between stages. We have implemented TerraStream and present experimental results on real elevation point sets that show that our approach handles massive multi-gigabyte terrain data sets. For example, we can process a data set containing over 300 million points (over 20 GB of raw data) in under 26 hours, where most of the time (76%) is spent in the initial CPU-intensive DEM construction stage.
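The four-stage design described above can be illustrated as function composition, where each stage consumes the previous stage's output. This is only a structural sketch: the stage names come from the abstract, but every function body below is a hypothetical placeholder, not TerraStream's implementation.

```python
# Structural sketch of a four-stage terrain pipeline (stage names from
# the abstract; all bodies are illustrative placeholders).

def build_dem(points):
    """Stage 1: construct a DEM (TIN or grid) from irregular points."""
    return {"type": "grid", "points": points}

def condition(dem):
    """Stage 2: hydrological conditioning (e.g. removing spurious sinks)."""
    dem["conditioned"] = True
    return dem

def extract_rivers(dem):
    """Stage 3: extract the river network from the conditioned DEM."""
    return {"dem": dem, "rivers": []}

def build_hierarchy(network):
    """Stage 4: build the watershed hierarchy over the river network."""
    return {"network": network, "hierarchy": []}

def pipeline(points):
    # No manual intervention between stages: each stage's output is
    # passed directly to the next.
    return build_hierarchy(extract_rivers(condition(build_dem(points))))

result = pipeline([(0.0, 0.0, 1.0)])
```

The point of the structure is that any single non-scalable stage would bottleneck the whole chain, which is why the paper makes every stage I/O-efficient.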
Generating raster DEM from mass points via TIN streaming
 In Proc. 4th International Conference on Geographic Information Science
, 2006
Abstract
Cited by 6 (0 self)
It is difficult to generate raster Digital Elevation Models (DEMs) from terrain mass point data sets too large to fit into memory, such as those obtained by LIDAR. We describe prototype tools for streaming DEM generation that use memory and disk I/O very efficiently. From 500 million bare-earth LIDAR double-precision points (11.2 GB) our tool can, in just over an hour on a standard laptop with two hard drives, produce a 50,394 × 30,500 raster DEM with 20-foot post spacing in 16-bit binary BIL format (3 GB), using less than 100 MB of main memory and less than 300 MB of temporary disk space.
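The core gridding operation behind raster DEM generation can be sketched in a few lines. This is not the authors' streaming tool; it is a minimal in-memory version that makes one pass over the points and averages the elevations falling into each raster cell (cell origin, spacing, and extent below are illustrative).

```python
# Minimal sketch of rasterizing mass points onto a regular grid by
# averaging per-cell elevations (not the paper's streaming tool).

def rasterize(points, x0, y0, spacing, ncols, nrows):
    """points: iterable of (x, y, z) tuples.
    Returns a row-major grid of mean elevations, with None for
    cells that received no points."""
    total = [0.0] * (ncols * nrows)
    count = [0] * (ncols * nrows)
    for x, y, z in points:          # a single streaming pass over the input
        col = int((x - x0) / spacing)
        row = int((y - y0) / spacing)
        if 0 <= col < ncols and 0 <= row < nrows:
            idx = row * ncols + col
            total[idx] += z
            count[idx] += 1
    return [total[i] / count[i] if count[i] else None
            for i in range(ncols * nrows)]

# Two points land in cell (0, 0); one lands in cell (2, 2).
grid = rasterize([(1.0, 1.0, 5.0), (1.5, 1.2, 7.0), (25.0, 25.0, 9.0)],
                 x0=0.0, y0=0.0, spacing=10.0, ncols=3, nrows=3)
```

The streaming versions in the paper achieve their small memory footprint by processing the points in a spatially coherent order rather than holding the whole accumulator grid in RAM; the sketch above only shows the per-cell aggregation itself.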
Natural Neighbor Interpolation Based Grid DEM Construction Using a GPU
 In ACM GIS ’10: Proceedings of the 18th ACM SIGSPATIAL International Symposium on Advances in Geographic Information Systems
, 2010
Abstract
Cited by 2 (1 self)
With modern LiDAR technology the amount of topographic data, in the form of massive point clouds, has increased dramatically. One of the most fundamental GIS tasks is to construct a grid digital elevation model (DEM) from these 3D point clouds. In this paper we present a simple yet very fast algorithm for constructing a grid DEM from massive point clouds using natural neighbor interpolation (NNI). We use a graphics processing unit (GPU) to significantly speed up the computation. To handle the large data sets and to deal with graphics hardware limitations, clever blocking schemes are used to partition the point cloud. For example, using standard desktop computers and graphics hardware, we construct a high-resolution grid with 150 million cells from two billion points in less than thirty-seven minutes. This is about one-tenth of the time required for the same computer to perform a standard linear interpolation, which produces a much less smooth surface.
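The blocking idea mentioned above, partitioning a point cloud so each piece fits within hardware limits, can be sketched independently of the GPU details. The tile/halo scheme below is an assumption for illustration, not the paper's scheme: each point is assigned to every tile whose halo-expanded square contains it, so interpolation near a tile border still sees its neighbors in adjacent tiles.

```python
# Illustrative tiling of a point cloud with overlap (halo), so each
# block can be processed independently. Details are assumptions, not
# taken from the paper.

def make_blocks(points, tile, halo, extent):
    """points: list of (x, y, z); tile: tile side length; halo: border
    overlap (assumed < tile); extent: (xmin, ymin, xmax, ymax).
    Returns {(i, j): [points]} with halo points duplicated."""
    xmin, ymin, xmax, ymax = extent
    nx = int((xmax - xmin) // tile) + 1
    ny = int((ymax - ymin) // tile) + 1
    blocks = {}
    for x, y, z in points:
        # With halo < tile, a point falls in at most two tiles per axis.
        i_lo = max(0, int((x - xmin - halo) // tile))
        i_hi = min(nx - 1, int((x - xmin + halo) // tile))
        j_lo = max(0, int((y - ymin - halo) // tile))
        j_hi = min(ny - 1, int((y - ymin + halo) // tile))
        for i in range(i_lo, i_hi + 1):
            for j in range(j_lo, j_hi + 1):
                blocks.setdefault((i, j), []).append((x, y, z))
    return blocks

# The second point sits within 1 unit of the tile border, so it is
# duplicated into both tile (0, 0) and tile (1, 0).
blocks = make_blocks([(5.0, 5.0, 1.0), (9.5, 5.0, 2.0)],
                     tile=10.0, halo=1.0, extent=(0.0, 0.0, 20.0, 20.0))
```

The halo width would in practice be chosen large enough that the natural neighbors of every cell in a tile lie inside that tile's expanded square.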
Scalable Algorithms for Large High-Resolution Terrain Data
Abstract
Cited by 1 (0 self)
In this paper we demonstrate that the technology required to perform typical GIS computations on very large high-resolution terrain models has matured enough to be ready for use by practitioners. We also demonstrate the impact that high-resolution data has on common problems. To our knowledge, some of the computations we present have never before been carried out by standard desktop computers on data sets of comparable size.
I/O-Efficient Batched Union-Find and Its . . .
Abstract
Despite extensive study over the last four decades and numerous applications, no I/O-efficient algorithm is known for the union-find problem. In this paper we present an I/O-efficient algorithm for the batched (off-line) version of the union-find problem. Given any sequence of N mixed union and find operations, where each union operation joins two distinct sets, our algorithm uses O(SORT(N)) = O((N/B) log_{M/B}(N/B)) I/Os, where M is the memory size and B is the disk block size. This bound is asymptotically optimal in the worst case. If there are union operations that join a set with itself, our algorithm uses O(SORT(N) + MST(N)) I/Os, where MST(N) is the number of I/Os needed to compute the minimum spanning tree of a graph with N edges. We also describe a simple and practical O(SORT(N) log(N/M))-I/O algorithm, which we have implemented.

The main motivation for our study of the union-find problem arises from problems in terrain analysis. A terrain can be abstracted as a height function defined over R², and many problems that deal with such functions require a union-find data structure. With the emergence of modern mapping technologies, huge amounts of data are being generated that are too large to fit in memory, so I/O-efficient algorithms are needed to process this data efficiently. In this paper, we study two terrain analysis problems that benefit from a union-find data structure: (i) computing topological persistence and (ii) constructing the contour tree. We give the first O(SORT(N))-I/O algorithms for these two problems, assuming that the input terrain is represented as a triangular mesh with N vertices.

Finally, we report some preliminary experimental results, showing that our algorithms give order-of-magnitude improvements over previous methods on large data sets that do not fit in memory.
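For context, this is the classic internal-memory union-find structure (union by rank with path halving) whose pointer-chasing access pattern is what makes an I/O-efficient version hard; the paper's batched algorithm replaces it for inputs that do not fit in memory.

```python
# Classic in-memory union-find: union by rank, path halving.
# Shown for reference; this is the structure the paper's batched,
# I/O-efficient algorithm substitutes for out-of-core inputs.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: shorten the path toward the root as we walk it.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # attach the shallower tree
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)   # {0, 1, 2} are now one set; 3 and 4 stay singletons
```

The difficulty the paper addresses is that `find` follows parent pointers scattered across the structure, which costs one random I/O per hop on disk; the batched formulation lets all operations be known up front so they can be reordered into sequential, sorting-like passes.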
Cleaning Massive Sonar Point Clouds
Abstract
We consider the problem of automatically cleaning massive sonar data point clouds, that is, the problem of automatically removing noisy points that for example appear as a result of scans of (shoals of) fish, multiple reflections, scanner self-reflections, refraction in gas bubbles, and so on. We describe a new algorithm that avoids the problems of previous local-neighbourhood based algorithms. Our algorithm is theoretically I/O-efficient, that is, it is capable of efficiently processing massive sonar point clouds that do not fit in internal memory but must reside on disk. The algorithm is also relatively simple and thus practically efficient, partly due to the development of a new simple algorithm for computing the connected components of a graph embedded in the plane. A version of our cleaning algorithm has already been incorporated in a commercial product.
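To make precise what the connected-components step computes, here is a generic BFS connected-components routine. This is not the paper's plane-embedded, I/O-efficient algorithm; it only illustrates the abstract operation, in which small components correspond to noise clusters and the seabed forms one large component.

```python
# Generic BFS connected components (not the paper's plane-embedded,
# I/O-efficient algorithm): label each vertex with its component id.

from collections import deque

def connected_components(n, edges):
    """n vertices 0..n-1, undirected edge list; returns one label per
    vertex, with labels assigned in order of first discovery."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label = [-1] * n
    comp = 0
    for start in range(n):
        if label[start] != -1:
            continue                      # already reached from an earlier seed
        queue = deque([start])
        label[start] = comp
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if label[v] == -1:
                    label[v] = comp
                    queue.append(v)
        comp += 1
    return label

# Vertices {0, 1, 2} form one component, {3, 4} another.
labels = connected_components(5, [(0, 1), (1, 2), (3, 4)])
```

BFS needs the whole adjacency structure in memory and performs random accesses, which is exactly what an I/O-efficient variant must avoid; the paper exploits the planar embedding to get sequential behavior.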
Date: Approved:
Abstract
With modern LiDAR technology the amount of topographic data, in the form of massive point clouds, has increased dramatically. One of the most fundamental GIS tasks is to construct a grid digital elevation model (DEM) from these point clouds. We present a simple yet very fast natural neighbor interpolation algorithm for constructing a grid DEM from massive point clouds. We use the graphics processing unit (GPU) to significantly speed up the computation. To handle the large data sets and to deal with graphics hardware limitations, clever blocking schemes are used to partition the point cloud. This algorithm is about an order of magnitude faster than the much simpler linear interpolation, which produces a much less smooth surface. We also show how to extend our algorithm to higher dimensions, which is useful for constructing 3D grids, such as from spatio-temporal topographic data. We describe different algorithms to attain speed and memory trade-offs.
Fast Segment Insertion and Incremental Construction of Constrained Delaunay Triangulations
Abstract
The most commonly implemented method of constructing a constrained Delaunay triangulation (CDT) in the plane is to first construct a Delaunay triangulation, then incrementally insert the input segments one by one. For typical implementations of segment insertion, this method has a Θ(kn²) worst-case running time, where n is the number of input vertices and k is the number of input segments. We give a randomized algorithm for inserting a segment into a CDT in expected time linear in the number of edges the segment crosses, and demonstrate with a performance comparison that it is faster than gift-wrapping for segments that cross many edges. A result of Agarwal, Arge, and Yi implies that randomized incremental construction of CDTs by our segment insertion algorithm takes expected O(n log n + n log² k) time. We show that this bound is tight by deriving a matching lower bound. Although there are CDT construction algorithms guaranteed to run in O(n log n) time, incremental CDT construction is easier to program and competitive in practice. Moreover, the ability to incrementally update a CDT by inserting a segment is useful in itself.