Results 1-10 of 70
Randomized Kinodynamic Motion Planning with Moving Obstacles
, 2000
"... We present a randomized motion planner for robots that must avoid moving obstacles and achieve a specified goal under kinematic and dynamic constraints. The planner samples the robot's statetime space by picking control inputs at random and integrating the equations of motion. The result is ..."
Abstract

Cited by 197 (12 self)
 Add to MetaCart
We present a randomized motion planner for robots that must avoid moving obstacles and achieve a specified goal under kinematic and dynamic constraints. The planner samples the robot's state×time space by picking control inputs at random and integrating the equations of motion. The result is a roadmap of sampled state×time points, called milestones, connected by short admissible trajectories. The planner does not precompute the roadmap as traditional probabilistic roadmap planners do; instead, for each planning query, it generates a new roadmap to find a trajectory between an initial and a goal state×time point. We prove in this paper that the probability that the planner fails to find such a trajectory when one exists quickly goes to 0 as the number of milestones grows. The planner has been implemented and tested successfully in both simulated and real environments. In the latter case, a vision module estimates obstacle motions just before planning starts; the planner is then allocated a small, fixed amount of time to compute a trajectory. If a change in the obstacle motion is detected while the robot executes the planned trajectory, the planner recomputes a trajectory on the fly.
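The sampling loop this abstract describes (pick a control input at random, integrate the equations of motion, keep the resulting state-time point as a milestone if the short trajectory is admissible) can be sketched for a toy 1D double integrator. Everything below, including the bounds and the `admissible` test, is illustrative, not the authors' implementation:

```python
import random

def integrate(state, u, dt, steps=10):
    """Euler-integrate x'' = u from the state-time point (x, v, t)."""
    x, v, t = state
    h = dt / steps
    for _ in range(steps):
        x += v * h
        v += u * h
        t += h
    return (x, v, t)

def admissible(state):
    # Placeholder constraint test; a real planner would check the
    # trajectory against moving obstacles in state-time space here.
    x, v, _ = state
    return -10.0 <= x <= 10.0 and -2.0 <= v <= 2.0

def expand(milestones, u_max=1.0, dt=0.5):
    """One expansion step: random existing milestone + random control."""
    parent = random.choice(milestones)
    u = random.uniform(-u_max, u_max)
    child = integrate(parent, u, dt)
    if admissible(child):
        milestones.append(child)
    return milestones

random.seed(0)
tree = [(0.0, 0.0, 0.0)]          # initial state-time milestone
for _ in range(200):
    expand(tree)
print(len(tree), "milestones")
```

The paper's completeness guarantee is about exactly this kind of loop: as the number of milestones grows, the probability of missing an existing trajectory vanishes.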
New data structures for orthogonal range searching
 In Proc. 41st IEEE Symposium on Foundations of Computer Science
, 2000
"... ..."
On Delaying Collision Checking in PRM Planning: Application to Multi-Robot Coordination
 INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
, 2002
"... This paper describes the foundations and algorithms of a new probabilistic roadmap (PRM) planner that is: singlequery  instead of precomputing a roadmap covering the entire free space, it uses the two input query configurations to explore as little space as possible; bidirectional  it explo ..."
Abstract

Cited by 67 (16 self)
 Add to MetaCart
This paper describes the foundations and algorithms of a new probabilistic roadmap (PRM) planner that is: single-query (instead of precomputing a roadmap covering the entire free space, it uses the two input query configurations to explore as little space as possible); bidirectional (it explores the robot's free space by building a roadmap made of two trees rooted at the query configurations); and lazy in checking collisions (it delays collision tests along the edges of the roadmap until they are absolutely needed). Several observations motivated this strategy: (1) PRM planners spend a large fraction of their time testing connections for collision; (2) most connections in a roadmap are not on the final path; (3) the collision test for a connection is most expensive when there is no collision; and (4) any short connection between two collision-free configurations has a high prior probability of being collision-free. The strengths of single-query and bidirectional sampling techniques, and those of delayed collision checking, reinforce each other. Experimental results ...
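The lazy-checking idea can be shown on a tiny graph: search first, test only the edges that actually appear on a candidate path, and replan after removing any edge that fails. This is a hypothetical sketch of the principle, not the paper's planner:

```python
from collections import deque

def bfs_path(edges, start, goal):
    """Shortest hop-count path on an undirected edge set, or None."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev, queue = {start: None}, deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            path = [u]
            while prev[u] is not None:
                u = prev[u]
                path.append(u)
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def lazy_query(edges, start, goal, edge_free):
    """Delay collision tests until an edge appears on a candidate path."""
    edges = set(edges)
    checked = {}                            # cache of collision-test results
    while True:
        path = bfs_path(edges, start, goal)
        if path is None:
            return None
        bad = None
        for e in zip(path, path[1:]):
            key = frozenset(e)
            if key not in checked:
                checked[key] = edge_free(e)  # expensive test, done lazily
            if not checked[key]:
                bad = key
                break
        if bad is None:
            return path
        edges = {e for e in edges if frozenset(e) != bad}

# Edge ('a', 'c') is in collision, so after one lazy test the planner
# falls back to the longer a-b-c route; edge ('b', 'c') is tested,
# but edges never on a candidate path would not be tested at all.
roadmap = [('a', 'b'), ('b', 'c'), ('a', 'c')]
free = lambda e: frozenset(e) != frozenset(('a', 'c'))
print(lazy_query(roadmap, 'a', 'c', free))   # ['a', 'b', 'c']
```

Observation (2) from the abstract is what makes this pay off: most roadmap edges never land on a candidate path, so their (expensive) collision tests are never run.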
Marked Ancestor Problems
, 1998
"... Consider a rooted tree whose nodes can be marked or unmarked. Given a node, we want to find its nearest marked ancestor. This generalises the wellknown predecessor problem, where the tree is a path. ..."
Abstract

Cited by 49 (5 self)
 Add to MetaCart
Consider a rooted tree whose nodes can be marked or unmarked. Given a node, we want to find its nearest marked ancestor. This generalises the well-known predecessor problem, where the tree is a path.
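The problem statement is easy to pin down with the naive solution: walk up parent pointers until a marked node appears. The paper's interest is in data structures that beat this O(depth) walk; the sketch below only fixes the semantics, and the names are illustrative:

```python
def nearest_marked_ancestor(parent, marked, v):
    """Walk toward the root; return the first marked node, or None.

    parent maps each non-root node to its parent; a node is its own
    ancestor, matching the query's usual convention.
    """
    while v is not None:
        if v in marked:
            return v
        v = parent.get(v)
    return None

# When the tree is a path, the query is exactly predecessor search:
# the marked ancestors of 4 on the path 0-1-2-3-4 are {0, 2}, and the
# nearest one is 2.
parent = {1: 0, 2: 1, 3: 2, 4: 3}
print(nearest_marked_ancestor(parent, {0, 2}, 4))   # 2
```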
Fractionally cascaded information in a sensor network
, 2004
"... We address the problem of distributed information aggregation and storage in a sensor network, where queries can be injected anywhere in the network. The principle we propose is that a sensor should know a “fraction ” of the information from distant parts of the network, in an exponentially decaying ..."
Abstract

Cited by 42 (9 self)
 Add to MetaCart
We address the problem of distributed information aggregation and storage in a sensor network, where queries can be injected anywhere in the network. The principle we propose is that a sensor should know a “fraction” of the information from distant parts of the network, in an exponentially decaying fashion by distance. We show how a sampled scalar field can be stored in this distributed fashion, with only a modest amount of additional storage and network traffic. Our storage scheme makes neighboring sensors have highly correlated world views; this allows smooth information gradients and enables local search algorithms to work well. We study in particular how this principle of fractionally cascaded information can be exploited to answer range queries about the sampled field efficiently. Using local decisions only, we are able to route the query to exactly the portions of the field where the sought information is stored. We provide a rigorous theoretical analysis showing that our scheme is close to optimal.
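One simple way to realize "exponentially decaying by distance" storage in 1D (a hypothetical sketch, not the paper's scheme) is for each sensor to keep one average per dyadic sibling block on the path from its own cell to the root, so per-sensor storage is O(log n) while nearby regions are known at fine resolution and distant ones only coarsely:

```python
def dyadic_summaries(i, field):
    """Summaries held by sensor i: the average of each dyadic sibling
    block on the path from cell i to the root. Block size doubles with
    distance, so there are O(log n) summaries in total."""
    n = len(field)                  # assume n is a power of two
    out, lo, size = [], i, 1
    while size < n:
        lo -= lo % (2 * size)       # start of the enclosing parent block
        if i < lo + size:           # i in the left half: summarize the right
            sib = (lo + size, lo + 2 * size)
        else:                       # i in the right half: summarize the left
            sib = (lo, lo + size)
        block = field[sib[0]:sib[1]]
        out.append((sib, sum(block) / size))
        size *= 2
    return out

# Sensor 3 of 8 stores 3 summaries: its sibling cell [2,3) exactly,
# the near block [0,2) at half resolution, and the far half [4,8)
# as a single average.
field = [float(x) for x in range(8)]
print(dyadic_summaries(3, field))
```

Neighboring sensors share most of their coarse blocks, which is the "highly correlated world views" property the abstract relies on for smooth information gradients.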
Maintaining Sliding Window Skylines on Data Streams
 IEEE Transactions on Knowledge and Data Engineering
, 2006
"... Abstract—The skyline of a multidimensional data set contains the “best ” tuples according to any preference function that is monotonic on each dimension. Although skyline computation has received considerable attention in conventional databases, the existing algorithms are inapplicable to stream app ..."
Abstract

Cited by 42 (6 self)
 Add to MetaCart
The skyline of a multidimensional data set contains the “best” tuples according to any preference function that is monotonic on each dimension. Although skyline computation has received considerable attention in conventional databases, the existing algorithms are inapplicable to stream applications because 1) they assume static data that are stored on disk (rather than continuously arriving/expiring), 2) they focus on “one-time” execution that returns a single skyline (in contrast to constantly tracking skyline changes), and 3) they aim at reducing the I/O overhead (as opposed to minimizing the CPU cost and main-memory consumption). This paper studies skyline computation in stream environments, where query processing takes into account only a “sliding window” covering the most recent tuples. We propose algorithms that continuously monitor the incoming data and maintain the skyline incrementally. Our techniques utilize several interesting properties of stream skylines to improve space/time efficiency by expunging data from the system as early as possible (i.e., before their expiration). Furthermore, we analyze the asymptotic performance of the proposed solutions and evaluate their efficiency with extensive experiments.
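A brute-force baseline makes the problem concrete (this is the naive recomputation the paper improves on, not its incremental algorithm): keep the last W tuples and report those not dominated by any other tuple still in the window, with smaller taken as better in every dimension:

```python
from collections import deque

def dominates(p, q):
    """p dominates q if p is at least as good everywhere and p != q."""
    return p[0] <= q[0] and p[1] <= q[1] and p != q

class WindowSkyline:
    def __init__(self, width):
        self.window = deque(maxlen=width)   # count-based expiry

    def insert(self, p):
        self.window.append(p)               # oldest tuple expires if full

    def skyline(self):
        return [p for p in self.window
                if not any(dominates(q, p) for q in self.window)]

ws = WindowSkyline(3)
for p in [(5, 1), (1, 5), (3, 3), (2, 2)]:
    ws.insert(p)
# (5, 1) has expired from the window; (2, 2) dominates (3, 3).
print(sorted(ws.skyline()))   # [(1, 5), (2, 2)]
```

The expiry of (5, 1) shows why stream skylines differ from the one-time case: a dominated tuple such as (3, 3) may re-enter the skyline later if its dominator expires first, which is exactly why the paper cannot simply discard all dominated data.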
Range counting over multidimensional data streams
 Discrete & Computational Geometry
, 2004
"... \Lambda \Lambda Abstract We consider the problem of approximate range counting over streams of ddimensional points. In the data stream model, the algorithm makes a single scan of the data, which is presented in an arbitrary order, and computes a compact summary (called a sketch). The sketch, whose ..."
Abstract

Cited by 27 (0 self)
 Add to MetaCart
We consider the problem of approximate range counting over streams of d-dimensional points. In the data stream model, the algorithm makes a single scan of the data, which is presented in an arbitrary order, and computes a compact summary (called a sketch). The sketch, whose size depends on the approximation parameter ε, can be used to count the number of points inside a query range within additive error εn, where n is the size of the stream. We present several results, deterministic and randomized, for both rectangle and half-plane ranges. Data streams have emerged as an important paradigm for processing data that arrives and needs to be processed continuously. For instance, telecom service providers routinely monitor packet flows through their networks to infer usage patterns and signs of attack, or to optimize their routing tables. Financial markets, banks, web servers, and news organizations also generate rapid and continuous data streams.
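The additive-error guarantee can be illustrated with the standard randomized baseline, uniform sampling, rather than the paper's own constructions: keep each streamed point with probability p in a single pass, then estimate a 1D range count by rescaling the sample count. All parameters below are illustrative:

```python
import random

def build_sketch(stream, p, seed=0):
    """One-pass Bernoulli sample: keep each point with probability p."""
    rng = random.Random(seed)
    return [x for x in stream if rng.random() < p]

def estimate(sketch, p, lo, hi):
    """Estimated number of stream points with lo <= x < hi."""
    return sum(lo <= x < hi for x in sketch) / p

random.seed(1)
stream = [random.random() for _ in range(10000)]
sk = build_sketch(stream, p=0.1)

true = sum(0.2 <= x < 0.5 for x in stream)
est = estimate(sk, 0.1, 0.2, 0.5)
# With high probability the error is a small additive fraction of n,
# the stream length, mirroring the εn guarantee in the abstract.
print(abs(est - true) / len(stream))
```

The sketch here stores a constant fraction of the stream; the point of the paper's results is summaries whose size depends on ε but not on n.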
Exploiting Result Equivalence in Caching Dynamic Web Content
 IN USENIX SYMPOSIUM ON INTERNET TECHNOLOGIES AND SYSTEMS
, 1999
"... Caching is currently the primary mechanism for reducing the latency as well as bandwidth requirements for delivering Web content. Numerous techniques and tools have been proposed, evaluated and successfully used for caching static content. Recent studies show that requests for dynamic web content al ..."
Abstract

Cited by 26 (2 self)
 Add to MetaCart
Caching is currently the primary mechanism for reducing the latency as well as bandwidth requirements for delivering Web content. Numerous techniques and tools have been proposed, evaluated and successfully used for caching static content. Recent studies show that requests for dynamic web content also contain substantial locality for identical requests. In this paper, we classify locality in dynamic web content into three kinds: identical requests, equivalent requests, and partially equivalent requests. Equivalent requests are not identical to previous requests but result in generation of identical dynamic content. The documents generated for partially equivalent requests are not identical but can be used as temporary place holders for each other while the real document is being generated. We present a new protocol, which we refer to as Dynamic Content Caching Protocol (DCCP), to allow individual content generating applications to exploit query semantics and specify how their results s...
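The equivalent-request idea is that the application supplies the semantics under which distinct query strings yield identical content. The sketch below is hypothetical and much simpler than DCCP: a canonicalization function (here, sorting parameters and dropping a made-up `session` field) maps equivalent requests to one cache key:

```python
from urllib.parse import parse_qsl, urlencode

def canonical(query):
    """Illustrative normalizer: drop a hypothetical 'session' parameter
    and sort the rest, so equivalent requests share one cache key."""
    params = [(k, v) for k, v in parse_qsl(query) if k != "session"]
    return urlencode(sorted(params))

cache = {}

def serve(query, generate):
    key = canonical(query)
    if key not in cache:            # generate only for truly new results
        cache[key] = generate(query)
    return cache[key]

generated = []
serve("b=2&a=1&session=x", lambda q: generated.append(q) or "doc")
serve("a=1&b=2&session=y", lambda q: generated.append(q) or "doc")
# The second request is not identical to the first, but it is
# equivalent under the application's semantics, so content is
# generated only once.
print(len(generated))   # 1
```

In DCCP itself the equivalence rules come from the content-generating application, not from a fixed normalizer like this one.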
Constrained Higher Order Delaunay Triangulations
 Comput. Geom. Theory Appl
, 2004
"... We extend the notion of higherorder Delaunay triangulations to constrained higherorder Delaunay triangulations and provide various results. We can determine the order k of a given triangulation in O(min(nk log n log k, n n)) time. We show that the completion of a set of useful orderk Delau ..."
Abstract

Cited by 23 (8 self)
 Add to MetaCart
We extend the notion of higher-order Delaunay triangulations to constrained higher-order Delaunay triangulations and provide various results. We can determine the order k of a given triangulation in O(min(nk log n log k, n n)) time. We show that the completion of a set of useful order-k Delaunay edges may have order 2k - 2, which is worst-case optimal. We give an algorithm for the lowest-order completion of a set of useful order-k Delaunay edges when k ≤ 3. For higher orders the problem is open.
Low Latency Photon Mapping Using Block Hashing
 IN PROCEEDINGS OF THE CONFERENCE ON GRAPHICS HARDWARE 2002
, 2002
"... Photon mapping is useful in the acceleration of global illumination and caustic effects computed by path tracing. For hardware accelerated rendering, photon maps would be especially useful for simulating caustic lighting effects on nonLambertian surfaces. For this to be possible, an efficient hardw ..."
Abstract

Cited by 22 (1 self)
 Add to MetaCart
Photon mapping is useful in the acceleration of global illumination and caustic effects computed by path tracing. For hardware-accelerated rendering, photon maps would be especially useful for simulating caustic lighting effects on non-Lambertian surfaces. For this to be possible, an efficient hardware algorithm for the computation of the k nearest neighbours to a sample point is required. Existing ...
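The primitive the paper needs is exactly k-nearest-neighbour search over stored photons. A brute-force sketch fixes the specification (the paper's contribution, block hashing, replaces this scan with a hashing-friendly approximate scheme suited to hardware):

```python
import heapq

def k_nearest(photons, q, k):
    """The k photons closest to sample point q by Euclidean distance."""
    def d2(p):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return heapq.nsmallest(k, photons, key=d2)   # O(n log k) scan

photons = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (5, 5, 5)]
print(k_nearest(photons, (0, 0, 0), 2))   # [(0, 0, 0), (1, 0, 0)]
```

Radiance estimation runs this query at every shaded sample, which is why a low-latency, hardware-friendly replacement matters.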