Results 1–10 of 30
Terrain Simplification Simplified: A General Framework for View-Dependent Out-of-Core Visualization
, 2002
Abstract

Cited by 81 (2 self)
This paper describes a general framework for out-of-core rendering and management of massive terrain surfaces. The two key components of this framework are: view-dependent refinement of the terrain mesh; and a simple scheme for organizing the terrain data to improve coherence and reduce the number of paging events from external storage to main memory. Similar to several previously proposed methods for view-dependent refinement, we recursively subdivide a triangle mesh defined over regularly gridded data using longest-edge bisection. As part of this single, per-frame refinement pass, we perform triangle stripping, view frustum culling, and smooth blending of geometry using geomorphing. Meanwhile, our refinement framework supports a large class of error metrics, is highly competitive in terms of rendering performance, and is surprisingly simple to implement. Independent ...
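The recursive longest-edge bisection this abstract describes can be sketched in a few lines. This is not the paper's implementation; it is a minimal Python illustration in which `view_dependent_split`, the eye position, and the tolerance are all hypothetical stand-ins for the paper's error metrics.

```python
import math

def bisect(apex, left, right, split, out, level=0, max_level=12):
    """Recursively bisect the hypotenuse (the left-right edge) of a
    right-isosceles triangle; append leaf triangles to `out`."""
    mid = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
    if level >= max_level or not split(apex, left, right, mid):
        out.append((apex, left, right))
        return
    # Each child reuses the edge midpoint as its apex, so the shared
    # hypotenuse is split exactly once and the mesh stays crack-free.
    bisect(mid, left, apex, split, out, level + 1, max_level)
    bisect(mid, apex, right, split, out, level + 1, max_level)

def view_dependent_split(eye, tol):
    """Hypothetical error test: refine while the edge looks large from
    `eye` (edge length divided by viewing distance)."""
    def split(apex, left, right, mid):
        edge = math.dist(left, right)
        return edge / max(math.dist(eye, mid), 1e-9) > tol
    return split

# Uniformly refining one triangle to depth 3 yields 2**3 leaf triangles.
leaves = []
bisect((0.0, 0.0), (1.0, 0.0), (0.0, 1.0),
       lambda *args: True, leaves, max_level=3)
```

Swapping the always-split predicate for `view_dependent_split(eye, tol)` concentrates triangles where the viewer is close, which is the essence of view-dependent refinement.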
Global Static Indexing for Real-time Exploration of Very Large Regular Grids
, 2001
Abstract

Cited by 44 (7 self)
In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real-time interaction with a 2048³ grid (8 Giganodes) using only 20MB of memory. On an SGI Onyx we slice interactively an 8192³ grid ( teranodes) using only 60MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
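The reordering idea behind such indexing schemes can be illustrated with plain bit interleaving, which maps each grid node to a Lebesgue (Z-order) index so that spatially close nodes get nearby indices. The paper's actual layout is a hierarchical indexing; this flat version and its name are only a simplified sketch.

```python
def morton3(x, y, z, bits=11):
    """Interleave the bits of (x, y, z) into one Lebesgue (Z-order) key,
    so nodes in the same power-of-two block share an index prefix."""
    d = 0
    for i in range(bits):
        d |= ((x >> i) & 1) << (3 * i)
        d |= ((y >> i) & 1) << (3 * i + 1)
        d |= ((z >> i) & 1) << (3 * i + 2)
    return d
```

Storing grid nodes sorted by this key means a slice query touches long contiguous runs of the file instead of one scattered word per grid row, which is what keeps the out-of-core I/O low.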
Mesh layouts for block-based caches
 IEEE Transactions on Visualization and Computer Graphics
, 2006
Abstract

Cited by 14 (5 self)
Abstract—Current computer architectures employ caching to improve the performance of a wide variety of applications. One of the main characteristics of such cache schemes is the use of block fetching whenever an uncached data element is accessed. To maximize the benefit of the block fetching mechanism, we present novel cache-aware and cache-oblivious layouts of surface and volume meshes that improve the performance of interactive visualization and geometric processing algorithms. Based on a general I/O model, we derive new cache-aware and cache-oblivious metrics that have high correlations with the number of cache misses when accessing a mesh. In addition to guiding the layout process, our metrics can be used to quantify the quality of a layout, e.g. for comparing different layouts of the same mesh and for determining whether a given layout is amenable to significant improvement. We show that layouts of unstructured meshes optimized for our metrics result in improvements over conventional layouts in the performance of visualization applications such as isosurface extraction and view-dependent rendering. Moreover, we improve upon recent cache-oblivious mesh layouts in terms of performance, applicability, and accuracy. Index Terms—Mesh and graph layouts, cache-aware and cache-oblivious layouts, metrics for cache coherence, data locality.
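The paper's metrics come from a general I/O model; the toy measure below captures only the basic intuition. For a vertex ordering and a cache-block size B it counts mesh edges whose endpoints fall in different blocks, a rough proxy for cache misses during edge traversal. The names and the metric itself are illustrative, not the paper's.

```python
def block_crossings(layout, edges, B):
    """Count edges whose endpoints land in different size-B blocks of the
    given vertex ordering (a crude stand-in for a cache-miss metric)."""
    pos = {v: i for i, v in enumerate(layout)}
    return sum(1 for u, v in edges if pos[u] // B != pos[v] // B)

# A path graph 0-1-2-3: laying vertices out in path order keeps most
# edges inside one block; a shuffled layout does not.
path_edges = [(0, 1), (1, 2), (2, 3)]
good = block_crossings([0, 1, 2, 3], path_edges, B=2)
bad = block_crossings([0, 2, 1, 3], path_edges, B=2)
```

Minimizing such a count over permutations of the layout is the (hard) optimization problem that cache-aware layout methods approximate.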
On Multi-Dimensional Hilbert Indexings
 Theory of Computing Systems
, 1998
Abstract

Cited by 13 (1 self)
Indexing schemes for grids based on space-filling curves (e.g., Hilbert indexings) find applications in numerous fields, ranging from parallel processing over data structures to image processing. Because of an increasing interest in discrete multidimensional spaces, indexing schemes for them have won considerable interest. Hilbert curves are the simplest and most popular space-filling indexing scheme. We extend the concept of curves with the Hilbert property to arbitrary dimensions and present first results concerning their structural analysis that also simplify their applicability. We define and analyze in a precise mathematical way r-dimensional Hilbert indexings for arbitrary r ≥ 2. Moreover, we generalize and simplify previous work and clarify the concept of Hilbert curves for multidimensional grids. As we show, Hilbert indexings can be completely described and analyzed by "generating elements of order 1", thus, in comparison with previous work, reducing their structural comp...
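The paper works in arbitrary dimension r; for intuition, the standard 2-D conversion from grid coordinates to a Hilbert index (well known from the literature, shown here in Python) already exhibits the rotate-and-reflect structure that such generating elements formalize. `n` must be a power of two.

```python
def rot(n, x, y, rx, ry):
    """Rotate/reflect a quadrant so its sub-curve has canonical orientation."""
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Map cell (x, y) of an n-by-n grid (n a power of two) to its
    distance d along the 2-D Hilbert curve."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)  # which quadrant, in curve order
        x, y = rot(n, x, y, rx, ry)
        s //= 2
    return d
```

Consecutive Hilbert indices are always grid neighbors, which is precisely the locality property that makes these indexings attractive for parallel processing and data layout.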
On Multidimensional Curves with Hilbert Property
, 2000
Abstract

Cited by 9 (0 self)
Indexing schemes for grids based on space-filling curves (e.g., Hilbert curves) find applications in numerous fields, ranging from parallel processing over data structures to image processing. Because of an increasing interest in discrete multidimensional spaces, indexing schemes for them have won considerable interest. Hilbert curves are the simplest and most popular space-filling indexing schemes. We extend the concept of curves with the Hilbert property to arbitrary dimensions and present first results concerning their structural analysis that also simplify their applicability.
Parallel Adaptive Subspace Correction Schemes with Applications to Elasticity
 Comput. Methods Appl. Mech. Engrg
, 1999
Abstract

Cited by 8 (4 self)
In this paper, we give a survey on the three main aspects of the efficient treatment of PDEs, i.e. adaptive discretization, multilevel solution and parallelization. We emphasize the abstract approach of subspace correction schemes and summarize its convergence theory. Then, we give the main features of each of the three distinct topics and treat the historical background and modern developments. Furthermore, we demonstrate how all three ingredients can be put together to give an adaptive and parallel multilevel approach for the solution of elliptic PDEs and especially of linear elasticity problems. We report on numerical experiments for the adaptive parallel multilevel solution of some test problems, namely the Poisson equation and Lamé's equation. Here, we emphasize the parallel efficiency of the adaptive code even for simple test problems with little work to distribute, which is achieved through hash storage techniques and space-filling curves. Keywords: subspace correction, iter...
Hash-Based Adaptive Parallel Multilevel Methods with Space-Filling Curves
 NIC Series
, 2002
Abstract

Cited by 8 (0 self)
In this paper a parallelisable and cheap method based on space-filling curves is proposed. The partitioning is embedded into the parallel solution algorithm using multilevel iterative solvers and adaptive grid refinement. Numerical experiments on two massively parallel computers prove the efficiency of this approach.
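The partitioning idea shared by these multilevel papers can be sketched as: give each grid cell a space-filling-curve key, sort, and cut the sorted sequence into equal contiguous chunks, one per processor. The sketch below uses a Lebesgue (Z-order) key for brevity where Hilbert-type curves may also be used; the function names are mine.

```python
def interleave2(x, y, bits=10):
    """Lebesgue (Z-order) key for a 2-D cell: interleave coordinate bits."""
    d = 0
    for i in range(bits):
        d |= ((x >> i) & 1) << (2 * i)
        d |= ((y >> i) & 1) << (2 * i + 1)
    return d

def sfc_partition(cells, nparts):
    """Order cells along the curve, then cut into nparts contiguous chunks
    of (near-)equal size -- a cheap, parallelisable load balancer."""
    ordered = sorted(cells, key=lambda c: interleave2(*c))
    k, r = divmod(len(ordered), nparts)
    parts, start = [], 0
    for i in range(nparts):
        size = k + (1 if i < r else 0)
        parts.append(ordered[start:start + size])
        start += size
    return parts

grid = [(x, y) for x in range(4) for y in range(4)]
parts = sfc_partition(grid, 4)
```

Because the curve preserves locality, each chunk tends to be a compact patch of the grid, which keeps the communication surface between processors small.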
Scanning and sequential decision making for multidimensional data – Part I: the noiseless case
 IEEE Trans. on Inform. Theory
Abstract

Cited by 6 (2 self)
We consider the problem of sequential decision making on random fields corrupted by noise. In this scenario, the decision maker observes a noisy version of the data, yet is judged with respect to the clean data. In particular, we first consider the problem of sequentially scanning and filtering noisy random fields. In this case, the sequential filter is given the freedom to choose the path over which it traverses the random field (e.g., a noisy image or video sequence), thus it is natural to ask what is the best achievable performance and how sensitive this performance is to the choice of the scan. We formally define the problem of scanning and filtering, derive a bound on the best achievable performance and quantify the excess loss occurring when non-optimal scanners are used, compared to optimal scanning and filtering. We then discuss the problem of sequential scanning and prediction of noisy random fields. This setting is a natural model for applications such as restoration and coding of noisy images. We formally define the problem of scanning and prediction of a noisy multidimensional array and relate the optimal performance to the clean scandictability defined by Merhav and Weissman. Moreover, bounds on the excess loss due to suboptimal scans are derived, and a universal prediction algorithm is suggested.
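A toy experiment makes the scan-sensitivity concrete: predict each sample by the previously scanned one and accumulate squared error. On a smooth gradient field, a boustrophedon ("snake") scan avoids the large row-to-row jumps a raster scan pays for. The copy-previous predictor and the field are illustrative only; the paper's setting (noisy fields, optimal scandictors) is far more general.

```python
def scan_loss(field, path):
    """Cumulative squared error when each sample is predicted by the
    previously scanned one (a zeroth-order 'copy previous' predictor)."""
    loss, prev = 0, None
    for x, y in path:
        v = field[y][x]
        if prev is not None:
            loss += (v - prev) ** 2
        prev = v
    return loss

def raster(w, h):
    """Row-by-row scan, always left to right."""
    return [(x, y) for y in range(h) for x in range(w)]

def snake(w, h):
    """Boustrophedon scan: alternate row directions, no jumps."""
    return [(x if y % 2 == 0 else w - 1 - x, y) for y in range(h) for x in range(w)]

gradient = [[x + y for x in range(4)] for y in range(4)]
```

On this 4x4 gradient the snake scan incurs strictly less cumulative loss than the raster scan, a small-scale instance of the excess-loss phenomenon the paper bounds.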
On the Quality of Partitions Based on Space-Filling Curves
, 2002
Abstract

Cited by 6 (1 self)
This paper presents bounds on the quality of partitions induced by space-filling curves. We compare the surface that surrounds an arbitrary index range with the optimal partition in the grid, i.e., the square. It is shown that partitions induced by Lebesgue and Hilbert curves behave about 1.85 times worse with respect to the length of the surface. The Lebesgue indexing gives better results than the Hilbert indexing in the worst-case analysis. Furthermore, the surfaces of partitions based on the Lebesgue indexing are at most 3 times larger than the optimum in the average case.
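The surface (in 2-D, the perimeter) of the cells selected by a contiguous index range is easy to measure on small grids, which makes the quantities in these bounds tangible. The sketch below does this for the Lebesgue (Z-order) indexing; the names are mine, and the 1.85 and 3x bounds in the abstract are of course proved analytically, not by such enumeration.

```python
def perimeter(cells):
    """Boundary length of a set of unit grid cells: count cell sides
    whose neighbor across that side is outside the set."""
    s = set(cells)
    return sum(1 for (x, y) in s
                 for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (x + dx, y + dy) not in s)

def morton_range_cells(n, lo, hi):
    """Cells of an n-by-n grid whose Lebesgue (Z-order) index is in [lo, hi)."""
    def key(x, y):
        d, i = 0, 0
        while (1 << i) < n:
            d |= ((x >> i) & 1) << (2 * i)
            d |= ((y >> i) & 1) << (2 * i + 1)
            i += 1
        return d
    return [(x, y) for x in range(n) for y in range(n) if lo <= key(x, y) < hi]
```

The range [0, 4) yields the optimal 2x2 square (perimeter 8), while a range straddling a quadrant boundary, such as [2, 6), splits into two separated dominoes with a larger surface.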
Scalable Load-Distance Balancing
 DISC 2007, LNCS 4731
, 2007
Abstract

Cited by 4 (3 self)
Abstract. We introduce the problem of load-distance balancing in assigning users of a delay-sensitive networked application to servers. We model the service delay experienced by a user as a sum of a network-incurred delay, which depends on its network distance from the server, and a server-incurred delay, stemming from the load on the server. The problem is to minimize the maximum service delay among all users. We address the challenge of finding a near-optimal assignment in a scalable distributed manner. The key to achieving scalability is using local solutions, whereby each server only communicates with a few close servers. Note, however, that the attainable locality of a solution depends on the workload – when some area in the network is congested, obtaining a near-optimal cost may require offloading users to remote servers, whereas when the network load is uniform, a purely local assignment may suffice. We present algorithms that exploit the opportunity to provide a local solution when possible, and thus have communication costs and stabilization times that vary according to the network congestion. We evaluate our algorithms with a detailed simulation case study of their application in assigning hosts to Internet gateways in an urban wireless mesh network (WMN).
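A brute-force version of the min-max objective fits in a few lines and is handy for checking heuristics on tiny instances. The additive delay model follows the abstract (a network part plus a load-dependent server part); the linear load term `alpha * load` and all names are my simplification, not the paper's algorithm.

```python
from itertools import product

def max_delay(assign, dist, alpha):
    """Worst service delay under `assign` (assign[u] = server of user u):
    network delay dist[u][s] plus a delay proportional to server load."""
    load = {}
    for s in assign:
        load[s] = load.get(s, 0) + 1
    return max(dist[u][s] + alpha * load[s] for u, s in enumerate(assign))

def best_assignment(dist, nservers, alpha=1.0):
    """Exhaustive min-max search; only viable for tiny instances."""
    return min(product(range(nservers), repeat=len(dist)),
               key=lambda a: max_delay(a, dist, alpha))

# Three users, two servers: with a heavy load penalty, congestion on the
# near server pushes user 2 to the farther one.
dist = [[1, 5], [1, 5], [1, 2]]
best = best_assignment(dist, 2, alpha=2.0)
```

The scalable algorithms in the paper approximate this optimum with local computation; the exhaustive search above merely defines the target they are measured against.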