Results 11–20 of 249
Adaptive Server Selection for Large Scale Interactive Online Games
 ACM Int’l Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV)
, 2004
Abstract

Cited by 17 (0 self)
In this paper, we present a novel distributed algorithm that dynamically selects game servers for a group of game clients participating in large-scale interactive online games. The goal of server selection is to minimize server resource usage while satisfying the real-time delay constraint. We develop a synchronization delay model for interactive games, formulate the server selection problem, and prove that it is NP-hard. The proposed algorithm, called zoom-in-zoom-out, is adaptive to session dynamics (e.g., clients joining and leaving) and lets the clients select appropriate servers in a distributed manner such that the number of servers used by the game session is minimized. Using simulation, we evaluate the proposed algorithm and show that it is simple yet effective in achieving its design goal. In particular, we show that its performance is comparable to that of a greedy selection algorithm, which requires global information and excessive computation.
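The greedy baseline that this abstract compares against can be pictured as set-cover selection over a client-server delay matrix. The sketch below is an illustrative assumption, not the paper's algorithm: the server/client names and the delay model are invented for the example.

```python
# Hedged sketch of a greedy, global-information server-selection baseline:
# repeatedly pick the server that covers the most still-uncovered clients
# within the delay bound (classic set-cover greedy). All identifiers and
# the delay model are illustrative, not taken from the paper.

def greedy_select(delays, bound):
    """delays[s][c] = latency from server s to client c (same units as bound)."""
    uncovered = {c for row in delays.values() for c in row}
    chosen = []
    while uncovered:
        best = max(delays, key=lambda s: sum(
            1 for c in uncovered if delays[s].get(c, float("inf")) <= bound))
        covered = {c for c in uncovered
                   if delays[best].get(c, float("inf")) <= bound}
        if not covered:
            raise ValueError("some clients cannot meet the delay bound")
        chosen.append(best)
        uncovered -= covered
    return chosen

# Example: two servers, three clients, 30 ms delay bound.
servers = greedy_select({"s1": {"a": 10, "b": 20, "c": 90},
                         "s2": {"c": 15}}, bound=30)
# → ["s1", "s2"]: s1 covers a and b, then s2 is needed for c
```

The greedy choice needs the full delay matrix up front, which is exactly the global information the paper's distributed scheme avoids.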
Topology Control in Ad hoc Wireless Networks with Hitchhiking
, 2004
Abstract

Cited by 17 (1 self)
In this paper, we address the Topology Control with Hitchhiking (TCH) problem. Hitchhiking [1] is a recently introduced model that allows combining partial messages to decode a complete message. By making effective use of partial signals, a given topology can be obtained with less transmission power. The objective of the TCH problem is to obtain a strongly connected topology with minimum total energy consumption. We prove that the TCH problem is NP-complete and design a distributed, localized algorithm (DTCH) that can be applied on top of any symmetric, strongly connected topology to reduce total power consumption. We analyze the performance of our approach through simulation.
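The hitchhiking decoding rule this abstract builds on is often stated as: a node decodes a message once the partial coverages it has accumulated sum to one. A minimal sketch under that simplified reading of the model (the coverage values are illustrative, not from the paper):

```python
# Hedged sketch of the hitchhiking decoding condition: each overheard
# transmission contributes a fractional coverage of the message, and a node
# can decode once the accumulated coverage reaches 1. This is a simplified
# reading of the model, not the paper's exact formulation.

def can_decode(coverages):
    """coverages: fraction of the message delivered by each partial signal."""
    return sum(min(1.0, c) for c in coverages) >= 1.0

can_decode([0.6, 0.5])   # → True: partial signals combine to a full message
can_decode([0.3, 0.4])   # → False: total coverage 0.7 is insufficient
```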
On the Optimization of Storage Capacity Allocation for Content Distribution
 Computer Networks
, 2003
Abstract

Cited by 16 (1 self)
The addition of storage capacity in network nodes for the caching or replication of popular data objects results in reduced end-user delay, reduced network traffic, and improved scalability.
Maximizing network lifetime of broadcasting over wireless stationary ad hoc networks
 MOBILE NETWORKS AND APPLICATIONS
, 2005
Abstract

Cited by 16 (0 self)
We investigate the problem of extending the network lifetime of a single broadcast session over stationary wireless ad hoc networks, where the hosts are not mobile. We define the network lifetime as the time from network initialization to the first node failure due to battery depletion. Using graph-theoretic approaches, we provide a polynomial-time, globally optimal solution, a variant of the minimum spanning tree (MST), to the problem of maximizing the static network lifetime. We use this solution to develop a periodic tree-update strategy for effective load balancing and show that a significant gain in network lifetime over the optimal static network lifetime can be achieved. We provide extensive comparative simulation studies on parameters such as update interval and control overhead and investigate their impact on the network lifetime. The simulation results are also compared with an upper bound on the network lifetime.
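Since any minimum spanning tree is also a minimum-bottleneck tree (its largest edge weight is minimized over all spanning trees), a Kruskal-style construction conveys the flavor of the MST variant mentioned above: under a simplified model where an edge's weight is its required transmission energy, bounding the largest tree edge bounds the fastest battery drain. This is a generic sketch with invented names, not the paper's algorithm.

```python
# Hedged sketch: Kruskal's MST with a union-find over node indices. Any MST
# minimizes the maximum edge weight among spanning trees, the bottleneck
# property relevant to first-node-failure lifetime in a simplified
# energy-per-broadcast model. Illustrative only.

def mst_edges(n, edges):
    """edges: iterable of (cost, u, v) over nodes 0..n-1; returns tree edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # keep only cycle-free edges
            parent[ru] = rv
            tree.append((cost, u, v))
    return tree

mst_edges(4, [(1, 0, 1), (2, 1, 2), (5, 0, 2), (3, 2, 3)])
# → [(1, 0, 1), (2, 1, 2), (3, 2, 3)]: the cost-5 edge is avoided,
#   so the bottleneck (largest) tree edge has cost 3
```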
Abstraction of 2D Shapes in Terms of Parts
Abstract

Cited by 16 (2 self)
Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.
Syntactic analysis by local grammars and automata: an efficient algorithm
 In Proceedings of the International Conference on Computational Lexicography (COMPLEX 94)
, 1994
SpaceEfficient Algorithms for Computing the Convex Hull of a Simple Polygonal Line in Linear Time
Abstract

Cited by 15 (2 self)
We present space-efficient algorithms for computing the convex hull of a simple polygonal line in-place, in linear time. It turns out that the problem is as hard as stable partition, i.e., if there were a truly simple solution then stable partition would also have a truly simple solution, and vice versa. Nevertheless, we present a simple self-contained solution that uses O(log n) space, and indicate how to improve it to O(1) space with the same techniques used for stable partition. If the points inside the convex hull can be discarded, then there is a truly simple solution that uses a single call to stable partition, and even that call can be spared if only the extreme points are desired (and not their order). If the polygonal line is closed, then the problem admits a very simple solution which does not call for stable partitioning at all.
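For context, the classical linear-time algorithm for this setting is Melkman's deque-based method; the paper's contribution is achieving the same bound space-efficiently. The sketch below is the deque version (not in-place, and it assumes the first three vertices are not collinear), so it illustrates the linear-time part only.

```python
# Hedged sketch: Melkman's algorithm for the convex hull of a simple
# (non-self-intersecting) polyline in linear time, using an explicit deque.
# Not the paper's in-place method; assumes len(pts) >= 3 and that the first
# three vertices are not collinear.
from collections import deque

def cross(o, a, b):
    """> 0 if o->a->b turns left (counter-clockwise)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def melkman(pts):
    """Returns the hull counter-clockwise, with the first point repeated last."""
    p0, p1, p2 = pts[0], pts[1], pts[2]
    d = deque([p2, p0, p1, p2] if cross(p0, p1, p2) > 0 else [p2, p1, p0, p2])
    for p in pts[3:]:
        # Points strictly inside the current hull are skipped immediately.
        if cross(d[0], d[1], p) > 0 and cross(d[-2], d[-1], p) > 0:
            continue
        # Restore convexity at the top, then at the bottom of the deque.
        while cross(d[-2], d[-1], p) <= 0:
            d.pop()
        d.append(p)
        while cross(d[0], d[1], p) <= 0:
            d.popleft()
        d.appendleft(p)
    return list(d)

melkman([(0, 0), (2, 0), (2, 2), (1, 1), (0, 2)])
# → [(0, 2), (0, 0), (2, 0), (2, 2), (0, 2)]
```

Each vertex is pushed and popped a constant number of times, which is where the linear running time comes from; simplicity of the polyline is what makes the constant-time inside test sound.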
Multithreaded Asynchronous Graph Traversal for In-Memory and Semi-External Memory
Abstract

Cited by 13 (0 self)
Processing large graphs is becoming increasingly important for many computational domains. Unfortunately, many algorithms and implementations do not scale with the demand for increasing graph sizes. As a result, researchers have attempted to meet the growing data demands using parallel and external-memory techniques. Our work, targeted at chip multiprocessors, takes a highly parallel asynchronous approach to hide the high data latency due to both poor locality and delays in the underlying graph data storage. We present a novel asynchronous approach to compute Breadth-First Search (BFS), Single-Source Shortest Path (SSSP), and Connected Components (CC) for large graphs in shared memory. We present an experimental study applying our technique to both In-Memory (IM) and Semi-External Memory (SEM) graphs utilizing multicore processors and solid-state memory devices. Our experiments using both synthetic and real-world datasets show that our asynchronous approach is able to overcome data latencies and provide significant speedup over alternative approaches.
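The asynchronous style described above relaxes the strict level-by-level order of textbook BFS: vertices may be processed out of order and corrected later. A sequential label-correcting sketch shows the order-relaxed idea that parallel implementations exploit; the graph encoding is illustrative and this is not the paper's implementation.

```python
# Hedged sketch: a label-correcting shortest-path relaxation. Unlike
# level-synchronous BFS, vertices can be re-enqueued and re-processed when a
# shorter path arrives later -- the order tolerance that asynchronous,
# multithreaded traversals exploit to hide memory latency. Sequential and
# illustrative only.
from collections import deque

def label_correcting_sssp(adj, src):
    """adj[u] = list of (v, weight); returns shortest distances from src."""
    dist = {src: 0}
    work = deque([src])
    while work:
        u = work.popleft()
        for v, w in adj.get(u, ()):
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd      # correct the label and revisit v
                work.append(v)
    return dist

label_correcting_sssp({0: [(1, 1), (2, 4)], 1: [(2, 1)]}, 0)
# → {0: 0, 1: 1, 2: 2}; vertex 2 is first labeled 4, then corrected to 2
```

With unit weights the same loop computes BFS levels, which is why a single asynchronous engine can serve BFS, SSSP, and, with a different label rule, connected components.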
Adaptive thinning for terrain modelling and image compression
 in Advances in Multiresolution for Geometric Modelling
, 2004
Abstract

Cited by 13 (7 self)
Adaptive thinning algorithms are greedy point-removal schemes for bivariate scattered data sets with corresponding function values, where the points are recursively removed according to some data-dependent criterion. Each subset of points, together with its function values, defines a linear spline over its Delaunay triangulation. The basic criterion for the removal of the next point is to minimize the error between the resulting linear spline at the bivariate data points and the original function values. This leads to a hierarchy of linear splines of coarser and coarser resolutions. This paper surveys the various removal strategies developed in our earlier papers, and the application of adaptive thinning to terrain modelling and to image compression. In our image test examples, we found that our thinning scheme, adapted to diminish the least-squares error, combined with a post-processing least-squares optimization and a customized coding scheme, often gives results better than or comparable to those of the wavelet-based scheme SPIHT.
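The greedy removal loop is easiest to see in a one-dimensional toy analog: repeatedly delete the interior sample whose removal least perturbs the piecewise-linear interpolant. The bivariate method plays the same game over Delaunay triangulations; everything below is an illustrative simplification, not the paper's scheme.

```python
# Hedged 1D analog of adaptive thinning: greedily remove the interior point
# whose removal least increases the error of linear interpolation through
# its neighbors. Illustrative only; the paper works with bivariate data and
# Delaunay triangulations.

def thin(xs, ys, keep):
    """Greedily thin samples (xs, ys) until only `keep` indices remain."""
    pts = list(range(len(xs)))          # indices of surviving samples
    def removal_error(i):
        a, b, c = pts[i - 1], pts[i], pts[i + 1]
        t = (xs[b] - xs[a]) / (xs[c] - xs[a])
        # error at the removed sample if its neighbors interpolate linearly
        return abs((1 - t) * ys[a] + t * ys[c] - ys[b])
    while len(pts) > keep:
        i = min(range(1, len(pts) - 1), key=removal_error)
        del pts[i]
    return pts

thin([0, 1, 2, 3], [0, 1, 2, 10], keep=3)
# → [0, 2, 3]: the collinear sample at x=1 is removed first (zero error)
```

Replaying the removals in reverse reconstructs the coarse-to-fine hierarchy of approximations the abstract describes.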
The P versus NP problem
 Clay Mathematics Institute; The Millennium Prize Problems
, 2000
Abstract

Cited by 13 (0 self)
The P versus NP problem is to determine whether every language accepted by some nondeterministic algorithm in polynomial time is also accepted by some (deterministic) algorithm in polynomial time. To define the problem precisely, it is necessary to give a formal model of a computer. The standard computer model in computability theory is the Turing machine, introduced by Alan Turing in 1936 [37]. Although the model was introduced before physical computers were built, it nevertheless continues to be accepted as the proper computer model for the purpose of defining the notion of computable function. Informally, the class P is the class of decision problems solvable by some algorithm within a number of steps bounded by some fixed polynomial in the length of the input. Turing was not concerned with the efficiency of his machines; rather, his concern was whether they can simulate arbitrary algorithms given sufficient time. It turns out, however, that Turing machines can generally simulate more efficient computer models (for example, machines equipped with many tapes or an unbounded random-access memory) by at most squaring or cubing the computation time. Thus P is a …