The Node Distribution of the Random Waypoint Mobility Model for Wireless Ad Hoc Networks
, 2003
Abstract

Cited by 253 (7 self)
The random waypoint model is a commonly used mobility model in the simulation of ad hoc networks. It is known that the spatial distribution of network nodes moving according to this model is, in general, non-uniform. However, a closed-form expression of this distribution and an in-depth investigation are still missing. This fact impairs the accuracy of the current simulation methodology of ad hoc networks and makes it impossible to relate simulation-based performance results to corresponding analytical results. To overcome these problems, we present a detailed analytical study of the spatial node distribution generated by random waypoint mobility. More specifically, we consider a generalization of the model in which the pause time of the mobile nodes is chosen arbitrarily in each waypoint and a fraction of nodes may remain static for the entire simulation time. We show that the structure of the resulting distribution is the weighted sum of three independent components: the static, pause, and mobility component. This division enables us to understand how the model's parameters influence the distribution. We derive an exact equation of the asymptotically stationary distribution for movement on a line segment and an accurate approximation for a square area. The good quality of this approximation is validated through simulations using various settings of the mobility parameters. In summary, this article gives a fundamental understanding of the behavior of the random waypoint model.
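To make the non-uniformity concrete, here is a minimal simulation sketch of the basic model (zero pause time, unit square; all names and parameters are my own, not from the article): each node repeatedly picks a uniform random waypoint and travels to it at constant speed. The fraction of position samples falling in the central quarter of the square ends up well above the 25% a uniform distribution would give.

```python
import random

random.seed(0)

def random_waypoint_samples(n_nodes=100, n_steps=3000, speed=0.01):
    """Simulate nodes on the unit square under a basic random waypoint
    model (zero pause time) and collect position samples over time."""
    pos = [[random.random(), random.random()] for _ in range(n_nodes)]
    target = [[random.random(), random.random()] for _ in range(n_nodes)]
    samples = []
    for step in range(n_steps):
        for i in range(n_nodes):
            dx = target[i][0] - pos[i][0]
            dy = target[i][1] - pos[i][1]
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < speed:
                # Waypoint reached: snap onto it and draw a fresh waypoint.
                pos[i] = target[i]
                target[i] = [random.random(), random.random()]
            else:
                # Move at constant speed straight toward the waypoint.
                pos[i][0] += speed * dx / dist
                pos[i][1] += speed * dy / dist
        if step >= 500:  # discard a warm-up phase before sampling
            samples.extend((p[0], p[1]) for p in pos)
    return samples

samples = random_waypoint_samples()
center = sum(1 for x, y in samples if 0.25 < x < 0.75 and 0.25 < y < 0.75)
frac = center / len(samples)
# Under a uniform distribution the central quarter of the square would hold
# 25% of the samples; random waypoint concentrates noticeably more there.
```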
Computing geodesics and minimal surfaces via graph cuts
 in International Conference on Computer Vision
, 2003
Abstract

Cited by 179 (22 self)
Geodesic active contours and graph cuts are two standard image segmentation techniques. We introduce a new segmentation method combining some of their benefits. Our main intuition is that any cut on a graph embedded in some continuous space can be interpreted as a contour (in 2D) or a surface (in 3D). We show how to build a grid graph and set its edge weights so that the cost of cuts is arbitrarily close to the length (area) of the corresponding contours (surfaces) for any anisotropic Riemannian metric. There are two interesting consequences of this technical result. First, graph cut algorithms can be used to find globally minimum geodesic contours (minimal surfaces in 3D) under an arbitrary Riemannian metric for a given set of boundary conditions. Second, we show how to minimize metrication artifacts in existing graph-cut based methods in vision. Theoretically speaking, our work provides an interesting link between several branches of mathematics: differential geometry, integral geometry, and combinatorial optimization. The main technical problem is solved using the Cauchy-Crofton formula from integral geometry.
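For reference, the classical Euclidean Cauchy-Crofton formula the abstract alludes to can be sketched as follows (2D case; the notation is standard, not taken from the paper):

```latex
% Euclidean Cauchy--Crofton formula: the length of a curve C equals half the
% measure of all lines intersecting it, counted with multiplicity. A line is
% parametrized by its direction angle \varphi and signed distance \rho from
% the origin; n_C(\rho,\varphi) counts its intersections with C.
|C| \;=\; \frac{1}{2}\int_{0}^{\pi}\!\int_{-\infty}^{\infty}
          n_C(\rho,\varphi)\,d\rho\,d\varphi
```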
Intrinsic Parameterizations of Surface Meshes
, 2002
Abstract

Cited by 164 (12 self)
Parameterization of discrete surfaces is a fundamental and widely used operation in graphics, required, for instance, for texture mapping or remeshing. As 3D data becomes more and more detailed, there is an increased need for fast and robust techniques to automatically compute least-distorted parameterizations of large meshes. In this paper, we present new theoretical and practical results on the parameterization of triangulated surface patches.
Approximating the Bandwidth Via Volume Respecting Embeddings
, 1999
Abstract

Cited by 92 (3 self)
A linear arrangement of an n-vertex graph is a one-to-one mapping of its vertices to the integers {1, …, n}. The bandwidth of a linear arrangement is the maximum difference between mapped values of adjacent vertices. The problem of finding a linear arrangement with smallest possible bandwidth is NP-hard. We present a randomized algorithm that runs in nearly linear time and outputs a linear arrangement whose bandwidth is within a polylogarithmic multiplicative factor of optimal. Our algorithm is based on a new notion, called volume respecting embeddings, which is a natural extension of the small distortion embeddings of Bourgain and of Linial, London and Rabinovich.

1 Introduction. We consider the problem of minimizing the bandwidth of an undirected connected graph G(V, E), where n = |V| and m = |E|. One needs to find a linear arrangement of the vertices, namely a one-to-one mapping f : V → {1, 2, …, n}, for which the bandwidth, i.e. max_{(i,j)∈E} |f(i) − f(j)|, …
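The bandwidth objective itself is easy to state in code; a minimal sketch (the example graph and names are mine, not from the paper):

```python
def bandwidth(edges, f):
    """Bandwidth of a linear arrangement f: max |f(u) - f(v)| over edges."""
    return max(abs(f[u] - f[v]) for u, v in edges)

# Path graph 0-1-2-3: the natural order achieves the optimum bandwidth of 1,
# while a scrambled order pays more.
path = [(0, 1), (1, 2), (2, 3)]
natural = {0: 1, 1: 2, 2: 3, 3: 4}
scrambled = {0: 1, 1: 3, 2: 2, 3: 4}
print(bandwidth(path, natural))    # → 1
print(bandwidth(path, scrambled))  # → 2
```

Finding the arrangement that minimizes this quantity is the NP-hard part; the paper's contribution is approximating it within a polylogarithmic factor.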
Ray Tracing Deformable Scenes using Dynamic Bounding Volume Hierarchies
 ACM Transactions on Graphics
, 2006
Abstract

Cited by 84 (18 self)
The most significant deficiency of most of today’s interactive ray tracers is that they are restricted to static walkthroughs. This restriction is due to the static nature of the acceleration structures used. While the best reported frame rates for static geometric models have been achieved using carefully constructed kd-trees, this article shows that bounding volume hierarchies (BVHs) can be used to efficiently ray trace large static models. More importantly, the BVH can be used to ray trace deformable models (sets of triangles whose positions change over time) with little loss of performance. A variety of efficiency techniques are used to achieve this performance, but three algorithmic changes to the typical BVH algorithm are mainly responsible. First, the BVH is built using a variant of the surface area heuristic conventionally used to build kd-trees. Second, the topology of the BVH is not changed over time, so that only the bounding volumes need to be refit from frame to frame. Third, and most importantly, packets of rays are traced together through the BVH using a novel integrated packet-frustum traversal scheme. This traversal scheme elegantly combines the advantages of both packet traversal and frustum traversal and allows for rapid hierarchy descent for packets that hit bounding volumes as well as rapid exits for packets that miss. A BVH-based ray tracing system using these techniques is shown to achieve performance for deformable models comparable to that previously available only for static models.
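The refitting idea (keep the topology, recompute the boxes bottom-up after the triangles deform) can be sketched as follows. This is a toy dictionary-based BVH of my own devising, not the authors' implementation:

```python
def refit(node, triangles):
    """Recompute axis-aligned bounding boxes bottom-up after vertices moved;
    the tree topology is left untouched."""
    if "tris" in node:  # leaf: bound the referenced triangles directly
        pts = [p for t in node["tris"] for p in triangles[t]]
    else:               # inner node: bound the two refitted child boxes
        refit(node["left"], triangles)
        refit(node["right"], triangles)
        pts = [node["left"]["lo"], node["left"]["hi"],
               node["right"]["lo"], node["right"]["hi"]]
    node["lo"] = tuple(min(p[k] for p in pts) for k in range(3))
    node["hi"] = tuple(max(p[k] for p in pts) for k in range(3))

# Two one-triangle leaves under a root; deform a vertex and refit.
tris = [[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
        [(2, 0, 0), (3, 0, 0), (2, 1, 0)]]
root = {"left": {"tris": [0]}, "right": {"tris": [1]}}
refit(root, tris)
assert root["hi"] == (3, 1, 0)
tris[1][1] = (5, 0, 0)   # move a vertex of the second triangle
refit(root, tris)
assert root["hi"] == (5, 1, 0)
```

Refitting visits each node once, so it is linear in the tree size and avoids a full rebuild every frame, at the cost of boxes that may grow loose under large deformations.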
A Comparison of Sequential Delaunay Triangulation Algorithms
, 1996
Abstract

Cited by 54 (0 self)
This paper presents an experimental comparison of a number of different algorithms for computing the Delaunay triangulation. The algorithms examined are: Dwyer’s divide and conquer algorithm, Fortune’s sweepline algorithm, several versions of the incremental algorithm (including one by Ohya, Iri, and Murota, a new bucketing-based algorithm described in this paper, and Devillers’s version of a Delaunay-tree based algorithm that appears in LEDA), an algorithm that incrementally adds a correct Delaunay triangle adjacent to a current triangle in a manner similar to gift wrapping algorithms for convex hulls, and Barber’s convex hull based algorithm. Most of the algorithms examined are designed for good performance on uniformly distributed sites. However, we also test implementations of these algorithms on a number of non-uniform distributions. The experiments go beyond measuring total running time, which tends to be machine-dependent. We also analyze the major high-level primitives that algorithms use and do an experimental analysis of how often implementations of these algorithms perform each operation.
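A key high-level primitive shared by the incremental algorithms is the incircle predicate; a minimal sketch using the standard 3×3 determinant form (an illustration only, not code from the paper):

```python
def incircle(a, b, c, d):
    """Sign of the incircle determinant: positive iff d lies strictly inside
    the circle through a, b, c (a, b, c assumed counterclockwise)."""
    m = []
    for px, py in (a, b, c):
        dx, dy = px - d[0], py - d[1]
        m.append((dx, dy, dx * dx + dy * dy))
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Unit circle through (1,0), (0,1), (-1,0): the origin is inside, (2,0) is not.
print(incircle((1, 0), (0, 1), (-1, 0), (0, 0)) > 0)  # True
print(incircle((1, 0), (0, 1), (-1, 0), (2, 0)) > 0)  # False
```

In floating point this naive evaluation can misclassify near-degenerate cases, which is one reason the paper's operation counts for such primitives matter in practice.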
Testing for a signal with unknown location and scale in a stationary Gaussian random field
, 1995
Abstract

Cited by 52 (18 self)
this paper are concerned with approximate evaluation of the significance level of the test defined by (1.5), i.e., the probability, when the signal is absent, that X_max exceeds a constant threshold, say b. First-order approximations for this can easily be derived from results going back to Belyaev and Piterbarg (1972) (see Adler, 1981, Theorem 6.9.1, p. 160), who give the following. Suppose Y(r) is a zero-mean, unit-variance, stationary random field defined on an interval S ⊂ ℝ
On the Curvature of Piecewise Flat Spaces
 COMMUNICATIONS IN MATHEMATICAL PHYSICS
, 1984
Abstract

Cited by 47 (2 self)
We consider analogs of the Lipschitz-Killing curvatures of smooth Riemannian manifolds for piecewise flat spaces. In the special case of scalar curvature, the definition is due to T. Regge; considerations in this spirit date back to J. Steiner. We show that if a piecewise flat space approximates a smooth space in a suitable sense, then the corresponding curvatures are close in the sense of measures.
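The scalar-curvature case the abstract mentions is usually phrased via angle defects; a sketch of the standard definition (notation mine):

```latex
% In a piecewise flat space, curvature concentrates on the codimension-two
% simplices ("hinges"). At a hinge \sigma, the dihedral angles \theta_i of
% the flat simplices meeting there may fail to sum to 2\pi; the angle defect
%     R(\sigma) = 2\pi - \sum_i \theta_i(\sigma)
% plays the role of scalar curvature, vanishing exactly where the space is flat.
R(\sigma) \;=\; 2\pi \;-\; \sum_{i} \theta_i(\sigma)
```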
On building fast kdTrees for Ray Tracing, and on doing that in O(N log N)
 IN PROCEEDINGS OF THE 2006 IEEE SYMPOSIUM ON INTERACTIVE RAY TRACING
, 2006
Abstract

Cited by 45 (10 self)
Though a large variety of efficiency structures for ray tracing exist, kd-trees today seem to be slowly becoming the method of choice. In particular, kd-trees built with cost estimation functions such as a surface area heuristic (SAH) seem to be important for reaching high performance. Unfortunately, most algorithms for building such trees have a time complexity of O(N log² N), or even O(N²). In this paper, we analyze the state of the art in building good kd-trees for ray tracing, and eventually propose an algorithm that builds SAH kd-trees in O(N log N), the theoretical lower bound.
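The SAH cost the abstract refers to can be sketched as follows (a standard formulation with hypothetical constants and names, not the authors' exact code): the expected cost of a split weighs each child's primitive count by the probability a random ray entering the parent hits that child, estimated by the ratio of surface areas.

```python
def surface_area(lo, hi):
    """Surface area of an axis-aligned box given min/max corners."""
    dx, dy, dz = (hi[k] - lo[k] for k in range(3))
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(parent, left, right, n_left, n_right, c_trav=1.0, c_isect=1.0):
    """Expected cost of splitting `parent` into `left`/`right` boxes:
    C = c_trav + (SA_L/SA_P) * N_L * c_isect + (SA_R/SA_P) * N_R * c_isect."""
    sa_p = surface_area(*parent)
    return (c_trav
            + surface_area(*left) / sa_p * n_left * c_isect
            + surface_area(*right) / sa_p * n_right * c_isect)

# Unit cube split in half at x = 0.5, four primitives per side:
cost = sah_cost(((0, 0, 0), (1, 1, 1)),
                ((0, 0, 0), (0.5, 1, 1)),
                ((0.5, 0, 0), (1, 1, 1)), 4, 4)
# cost = 1 + (4/6)*4 + (4/6)*4 ≈ 6.33
```

A builder evaluates this cost over candidate split planes and keeps the cheapest; doing that naively at every node is what drives the O(N log² N) and O(N²) complexities the paper improves on.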