Results 1 - 10 of 366
Multidimensional Access Methods
, 1998
Abstract - Cited by 686 (3 self)
Search operations in databases require special support at the physical level. This is true for conventional databases as well as spatial databases, where typical search operations include the point query (find all objects that contain a given search point) and the region query (find all objects that overlap a given search region).
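The two query types named in this abstract can be made concrete with a minimal sketch over axis-aligned rectangles. The brute-force linear scan and all names here are illustrative assumptions; a real access method of the kind the survey covers would answer these queries through an index structure such as an R-tree rather than a scan.

```python
def point_query(rects, p):
    """Return all rectangles (x1, y1, x2, y2) containing point p."""
    px, py = p
    return [r for r in rects if r[0] <= px <= r[2] and r[1] <= py <= r[3]]

def region_query(rects, q):
    """Return all rectangles overlapping query region q = (x1, y1, x2, y2)."""
    return [r for r in rects
            if r[0] <= q[2] and q[0] <= r[2]     # intervals overlap on x
            and r[1] <= q[3] and q[1] <= r[3]]   # intervals overlap on y

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (5, 5, 6, 6)]
print(point_query(rects, (1.5, 1.5)))           # first two rectangles
print(region_query(rects, (2.5, 2.5, 7, 7)))    # last two rectangles
```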
Joint mobility and routing for lifetime elongation in wireless sensor networks
- In Proceedings of IEEE INFOCOM
Abstract - Cited by 175 (9 self)
Abstract — Although many energy efficient/conserving routing protocols have been proposed for wireless sensor networks, the concentration of data traffic towards a small number of base stations remains a major threat to the network lifetime. The main reason is that the sensor nodes located near a base station have to relay data for a large part of the network and thus deplete their batteries very quickly. The solution we propose in this paper suggests that the base station be mobile; in this way, the nodes located close to it change over time. Data collection protocols can then be optimized by taking both base station mobility and multi-hop routing into account. We first study the former, and conclude that the best mobility strategy consists in following the periphery of the network (we assume that the sensors are deployed within a circle). We then jointly consider mobility and routing in this setting, and show that a better routing strategy uses a combination of round routes and short paths. We provide a detailed analytical model for each of our statements, and corroborate it with simulation results. We show that the obtained improvement in network lifetime is on the order of 500%.
Divide-and-Conquer Approximation Algorithms via Spreading Metrics
, 1996
Abstract - Cited by 110 (9 self)
We present a novel divide-and-conquer paradigm for approximating NP-hard graph optimization problems. The paradigm models graph optimization problems that satisfy two properties: First, a divide-and-conquer approach is applicable. Second, a fractional spreading metric is computable in polynomial time. The spreading metric assigns rational lengths to either edges or vertices of the input graph, such that all subgraphs on which the optimization problem is non-trivial have large diameters. In addition, the spreading metric provides a lower bound, τ, on the cost of solving the optimization problem. We present a polynomial time approximation algorithm for problems modeled by our paradigm whose approximation factor is O(min{log τ log log τ, log k log log k}), where k denotes the number of "interesting" vertices in the problem instance, and is at most the number of vertices. We present seven problems that can be formulated to fit the paradigm. For all these problems our algorithm improves ...
Terrain simplification simplified: a general framework for view-dependent out-of-core visualization.
- IEEE Transactions on Visualization and Computer Graphics,
, 2002
Improving Memory Hierarchy Performance for Irregular Applications Using Data and Computation Reorderings
- International Journal of Parallel Programming
, 2001
Abstract - Cited by 104 (2 self)
The performance of irregular applications on modern computer systems is hurt by the wide gap between CPU and memory speeds because these applications typically underutilize multi-level memory hierarchies, which help hide this gap. This paper investigates using data and computation reorderings to improve memory hierarchy utilization for irregular applications. We evaluate the impact of reordering on data reuse at different levels in the memory hierarchy. We focus on coordinated data and computation reordering based on space-filling curves and we introduce a new architecture-independent multi-level blocking strategy for irregular applications. For two particle codes we studied, the most effective reorderings reduced overall execution time by a factor of two and four, respectively. Preliminary experience with a scatter benchmark derived from a large unstructured mesh application showed that careful data and computation ordering reduced primary cache misses by a factor of two compared to a random ordering.
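The core idea of data reordering along a space-filling curve can be sketched briefly. This is an illustrative sketch, not the paper's implementation: it uses the Z-order (Morton) curve, one family of space-filling curves used for this purpose, and all function names are assumptions. Sorting particles by their Morton key places particles that are close in space close in memory, which is what improves cache reuse.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of non-negative integer coordinates x and y."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # even bit positions from x
        key |= ((y >> i) & 1) << (2 * i + 1)    # odd bit positions from y
    return key

def reorder(particles):
    """Sort (x, y) particles along the Z-order space-filling curve."""
    return sorted(particles, key=lambda p: morton_key(p[0], p[1]))

particles = [(3, 1), (0, 0), (1, 1), (2, 2)]
print(reorder(particles))  # → [(0, 0), (1, 1), (3, 1), (2, 2)]
```

The computation loop would then visit particles in this new order (and, for the coordinated reorderings the paper studies, the interaction lists would be permuted the same way).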
Flexible Information Discovery in Decentralized Distributed Systems
- in Proceedings of the 12th High Performance Distributed Computing (HPDC)
, 2003
Abstract - Cited by 87 (12 self)
The ability to efficiently discover information using partial knowledge (for example keywords, attributes or ranges) is important in large, decentralized, resource sharing distributed environments such as computational Grids and Peer-to-Peer (P2P) storage and retrieval systems. This paper presents a P2P information discovery system that supports flexible queries using partial keywords and wildcards, and range queries. It guarantees that all existing data elements that match a query are found with bounded costs in terms of number of messages and number of peers involved. The key innovation is a dimension reducing indexing scheme that effectively maps the multidimensional information space to physical peers. The design, implementation and experimental evaluation of the system are presented.
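The dimension-reducing indexing scheme can be sketched as a two-step mapping: d-dimensional keys are mapped to a one-dimensional curve index, and that index is range-partitioned across peers. This is a hedged sketch under simplifying assumptions: the system described in the paper uses a locality-preserving space-filling curve, while the simpler Z-order (bit-interleaving) curve stands in here, and `owner` and all other names are illustrative.

```python
def z_index(coords, bits=8):
    """Interleave the bits of each coordinate into one curve index."""
    d = len(coords)
    key = 0
    for i in range(bits):
        for j, c in enumerate(coords):
            key |= ((c >> i) & 1) << (i * d + j)
    return key

def owner(coords, n_peers, bits=8):
    """Assign a multidimensional key to a peer by range-partitioning the curve."""
    span = 1 << (bits * len(coords))    # total length of the 1D curve
    return z_index(coords, bits) * n_peers // span

# Keys that are close in the keyword space tend to land on the same peer,
# which is what lets range and partial-keyword queries stay local.
print(owner((10, 12), n_peers=4), owner((11, 12), n_peers=4))
```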
On the construction of some capacity-approaching coding schemes
, 2000
Abstract - Cited by 84 (2 self)
This thesis proposes two constructive methods of approaching the Shannon limit very closely. Interestingly, these two methods operate in opposite regions, one has a block length of one and the other has a block length approaching infinity. The first approach is based on novel memoryless joint source-channel coding schemes. We first show some examples of sources and channels where no coding is optimal for all values of the signal-to-noise ratio (SNR). When the source bandwidth is greater than the channel bandwidth, joint coding schemes based on space-filling curves and other families of curves are proposed. For uniform sources and modulo channels, our coding scheme based on space-filling curves operates within 1.1 dB of Shannon’s rate-distortion bound. For Gaussian sources and additive white Gaussian noise (AWGN) channels, we can achieve within 0.9 dB of the rate-distortion bound. The second scheme is based on low-density parity-check (LDPC) codes. We first demonstrate that we can translate threshold values of an LDPC code between channels accurately using a simple mapping. We develop some models for density evolution
Colyseus: A Distributed Architecture for Online Multiplayer Games
- In Proc. Symposium on Networked Systems Design and Implementation (NSDI)
, 2006
Abstract - Cited by 83 (2 self)
This paper presents the design, implementation, and evaluation of Colyseus, a distributed architecture for interactive multiplayer games. Colyseus takes advantage of a game’s tolerance for weakly consistent state and predictable workload to meet the tight latency constraints of game-play and maintain scalable communication costs. In addition, it provides a rich distributed query interface and effective pre-fetching subsystem to help locate and replicate objects before they are accessed at a node. We have implemented Colyseus and modified Quake II, a popular first person shooter game, to use it. Our measurements of Quake II and our own Colyseus-based game with hundreds of players show that Colyseus effectively distributes game traffic across the participating nodes, allowing Colyseus to support low-latency game-play for an order of magnitude more players than existing single server designs, with similar per-node bandwidth costs.
Recursive Blocked Algorithms and Hybrid Data Structures for Dense Matrix Library Software
- SIAM Review, Vol. 46, No. 1, pp. 3–45
, 2004
Abstract - Cited by 81 (6 self)
Matrix computations are both fundamental and ubiquitous in computational science and its vast application areas. Along with the development of more advanced computer systems with complex memory hierarchies, there is a continuing demand for new algorithms and library software that efficiently utilize and adapt to new architecture features. This article reviews and details some of the recent advances made by applying the paradigm of recursion to dense matrix computations on today’s memory-tiered computer systems. Recursion allows for efficient utilization of a memory hierarchy and generalizes existing fixed blocking by introducing automatic variable blocking that has the potential of matching every level of a deep memory hierarchy. Novel recursive blocked algorithms offer new ways to compute factorizations such as Cholesky and QR and to solve matrix equations. In fact, the whole gamut of existing dense linear algebra factorization is beginning to be reexamined in view of the recursive paradigm. Use of recursion has led to using new hybrid data structures and optimized superscalar kernels. The results we survey include new algorithms and library software implementations for level 3 kernels, matrix factorizations, and the solution of general systems of linear equations and several common matrix equations. The software implementations we survey are robust and show impressive performance on today’s high performance computing systems.
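The recursive blocking idea the survey describes can be illustrated with a short sketch (not the reviewed library code, which operates on packed hybrid data structures with tuned superscalar kernels). Each recursion level splits C = A·B into 2×2 quadrant subproblems, so successive recursion depths naturally match successive levels of the memory hierarchy; the matrix order is assumed to be a power of two for simplicity.

```python
def split(M):
    """Split a square matrix into its four quadrants."""
    n = len(M) // 2
    return ([r[:n] for r in M[:n]], [r[n:] for r in M[:n]],
            [r[:n] for r in M[n:]], [r[n:] for r in M[n:]])

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def rmm(A, B, threshold=2):
    """Recursive blocked matrix multiply: C = A @ B."""
    n = len(A)
    if n <= threshold:  # base case: block small enough to multiply directly
        return [[sum(A[i][k] * B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    C11 = add(rmm(A11, B11), rmm(A12, B21))
    C12 = add(rmm(A11, B12), rmm(A12, B22))
    C21 = add(rmm(A21, B11), rmm(A22, B21))
    C22 = add(rmm(A21, B12), rmm(A22, B22))
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(rmm(A, I) == A)  # multiplying by the identity returns A
```

The `threshold` parameter plays the role of the leaf block size: in a real library it would be chosen so that a leaf multiplication fits in the innermost cache level, which is the "automatic variable blocking" the abstract refers to.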
A Peer-to-Peer Approach to Web Service Discovery
- World Wide Web Journal
, 2003
Abstract - Cited by 80 (3 self)
Web Services has emerged as a dominant paradigm for constructing and composing distributed business applications and enabling enterprise-wide interoperability. A critical factor in the overall utility of Web Services is a scalable, flexible and robust discovery mechanism. This paper presents a peer-to-peer (P2P) indexing system and associated P2P storage that supports large-scale, decentralized, real-time search capabilities. The presented system supports complex queries containing partial keywords and wildcards. Furthermore, it guarantees that all existing data elements matching a query will be found with bounded costs in terms of number of messages and number of nodes involved. The key innovation is a dimension reducing indexing scheme that effectively maps the multidimensional information space to physical peers. The design and an experimental evaluation of the system are presented.