Results 11–20 of 251
Competitive collaborative learning
In Proceedings of the 18th Annual Conference on Learning Theory (COLT), 2005
Cited by 17 (3 self)
Abstract. We develop algorithms for a community of users to make decisions about selecting products or resources, in a model characterized by two key features: (i) the quality of the products or resources may vary over time; (ii) some of the users in the system may be dishonest, manipulating their actions in a Byzantine manner to achieve other goals. We formulate such learning tasks as an algorithmic problem based on the multi-armed bandit problem, but with a set of users (as opposed to a single user), of whom a constant fraction are honest and are partitioned into coalitions such that the users in a coalition perceive the same expected quality if they sample the same resource at the same time. Our main result exhibits an algorithm for this problem which converges in polylogarithmic time to a state in which the average regret (per honest user) is an arbitrarily small constant.
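The abstract builds on the classical multi-armed bandit problem. As background only, and not the paper's Byzantine-tolerant multi-user algorithm, a minimal epsilon-greedy bandit sketch; the arm means, epsilon, and step count below are illustrative:

```python
import random

def epsilon_greedy_bandit(arm_means, steps=10000, epsilon=0.1, seed=0):
    """Pull Bernoulli arms, tracking empirical means per arm."""
    rng = random.Random(seed)
    counts = [0] * len(arm_means)
    values = [0.0] * len(arm_means)
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(arm_means))  # explore a random arm
        else:
            arm = max(range(len(arm_means)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        total_reward += reward
    return total_reward / steps

# Average reward approaches the best arm's mean (0.8), less exploration cost.
avg = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

The paper replaces the single learner above with a population of users, a constant fraction of whom may report adversarially.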
Adaptive Server Selection for Large Scale Interactive Online Games
In ACM Int’l Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV), 2004
Cited by 17 (0 self)
In this paper, we present a novel distributed algorithm that dynamically selects game servers for a group of game clients participating in large-scale interactive online games. The goal of server selection is to minimize server resource usage while satisfying the real-time delay constraint. We develop a synchronization delay model for interactive games, formulate the server selection problem, and prove that the considered problem is NP-hard. The proposed algorithm, called zoom-in-zoom-out, is adaptive to session dynamics (e.g., clients joining and leaving) and lets the clients select appropriate servers in a distributed manner such that the number of servers used by the game session is minimized. Using simulation, we present the performance of the proposed algorithm and show that it is simple yet effective in achieving its design goal. In particular, we show that the performance of our algorithm is comparable to that of a greedy selection algorithm, which requires global information and excessive computation.
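The centralized greedy baseline the abstract compares against can be sketched as a set-cover-style selection: repeatedly pick the server that covers the most still-unserved clients within the delay bound. The function name and the delay matrix layout are ours, for illustration only:

```python
def greedy_server_selection(delays, delay_bound):
    """delays[c][s] = delay from client c to server s.
    Greedily choose servers until every client has one within delay_bound."""
    servers = range(len(delays[0]))
    chosen = []
    uncovered = set(range(len(delays)))
    while uncovered:
        # Pick the server covering the most uncovered clients.
        best = max(servers,
                   key=lambda s: sum(1 for c in uncovered
                                     if delays[c][s] <= delay_bound))
        covered = {c for c in uncovered if delays[c][best] <= delay_bound}
        if not covered:
            raise ValueError("some clients cannot meet the delay bound")
        chosen.append(best)
        uncovered -= covered
    return chosen
```

Unlike this global-knowledge greedy, the paper's zoom-in-zoom-out scheme reaches a comparable server count in a distributed manner.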
Topology Control in Ad hoc Wireless Networks with Hitchhiking
2004
Cited by 17 (1 self)
In this paper, we address the Topology Control with Hitchhiking (TCH) problem. Hitchhiking [1] is a novel model introduced recently that allows combining partial messages to decode a complete message. By effective use of partial signals, a specific topology can be obtained with less transmission power. The objective of the TCH problem is to obtain a strongly-connected topology with minimum total energy consumption. We prove the TCH problem to be NP-complete and design a distributed and localized algorithm (DTCH) that can be applied on top of any symmetric, strongly-connected topology to reduce total power consumption. We analyze the performance of our approach through simulation.
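A tiny sketch of the hitchhiking decoding rule the abstract describes; this illustrates only the model (a node decodes once its partial receptions combine to a full message), not the DTCH algorithm, and the coverage fractions are illustrative:

```python
def decoded_nodes(coverage):
    """coverage[v] = list of partial coverage fractions node v receives
    from different transmissions.  Under the hitchhiking model, a node
    decodes the message once its fractions sum to at least 1."""
    return [v for v, fracs in enumerate(coverage) if sum(fracs) >= 1.0]
```

For example, a node receiving 0.6 of the signal from one transmission and 0.5 from another decodes, while 0.3 + 0.3 does not.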
On the Optimization of Storage Capacity Allocation for Content Distribution
Computer Networks, 2003
Cited by 16 (1 self)
The addition of storage capacity in network nodes for the caching or replication of popular data objects results in reduced end-user delay, reduced network traffic, and improved scalability.
Maximizing network lifetime of broadcasting over wireless stationary ad hoc networks
Mobile Networks and Applications, 2005
Cited by 16 (0 self)
We investigate the problem of extending the network lifetime of a single broadcast session over stationary wireless ad hoc networks, in which the hosts are not mobile. We define the network lifetime as the time from network initialization to the first node failure due to battery depletion. Through graph-theoretic approaches, we provide a polynomial-time globally optimal solution, a variant of the minimum spanning tree (MST), to the problem of maximizing the static network lifetime. We make use of this solution to develop a periodic tree-update strategy for effective load balancing and show that a significant gain in network lifetime over the optimal static network lifetime can be achieved. We provide extensive comparative simulation studies on parameters such as update interval and control overhead and investigate their impact on the network lifetime. The simulation results are also compared with an upper bound on the network lifetime.
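The paper's optimal static tree is a variant of the MST tailored to battery energy. As a sketch of the MST building block only, a plain Kruskal implementation; the edge costs below are illustrative stand-ins for per-link transmission energies, not the paper's battery-aware metric:

```python
def kruskal_mst(n, edges):
    """edges: list of (cost, u, v) tuples over nodes 0..n-1.
    Returns the MST edge list for a connected graph (Kruskal + union-find)."""
    parent = list(range(n))

    def find(x):
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            mst.append((cost, u, v))
    return mst
```

The MST also minimizes the maximum edge cost over all spanning trees, which is why a lifetime-oriented variant (minimizing the worst per-node energy drain) is a natural fit here.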
Abstraction of 2D Shapes in Terms of Parts
Cited by 16 (2 self)
Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.
Syntactic analysis by local grammars and automata: an efficient algorithm
In Proceedings of the International Conference on Computational Lexicography (COMPLEX 94), 1994
Space-Efficient Algorithms for Computing the Convex Hull of a Simple Polygonal Line in Linear Time
Cited by 15 (2 self)
We present space-efficient algorithms for computing the convex hull of a simple polygonal line in-place, in linear time. It turns out that the problem is as hard as stable partition, i.e., if there were a truly simple solution then stable partition would also have a truly simple solution, and vice versa. Nevertheless, we present a simple self-contained solution that uses O(log n) space, and indicate how to improve it to O(1) space with the same techniques used for stable partition. If the points inside the convex hull can be discarded, then there is a truly simple solution that uses a single call to stable partition, and even that call can be spared if only extreme points are desired (and not their order). If the polygonal line is closed, then the problem admits a very simple solution which does not call for stable partitioning at all.
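For contrast with the paper's in-place, O(1)-extra-space algorithms, the textbook monotone-chain convex hull below uses O(n) extra space and O(n log n) time on arbitrary point sets; it is the standard baseline, not the paper's method:

```python
def convex_hull(points):
    """Andrew's monotone chain: counter-clockwise hull of a 2D point set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Endpoints are shared, so drop the last point of each chain.
    return lower[:-1] + upper[:-1]
```

For a simple polygonal line the input is already ordered along the curve, which is precisely the structure the paper exploits to avoid the sort and the extra stacks.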
Adaptive thinning for terrain modelling and image compression
In Advances in Multiresolution for Geometric Modelling, 2004
Cited by 13 (7 self)
Adaptive thinning algorithms are greedy point-removal schemes for bivariate scattered data sets with corresponding function values, where the points are recursively removed according to some data-dependent criterion. Each subset of points, together with its function values, defines a linear spline over its Delaunay triangulation. The basic criterion for the removal of the next point is to minimize the error between the resulting linear spline at the bivariate data points and the original function values. This leads to a hierarchy of linear splines of coarser and coarser resolutions. This paper surveys the various removal strategies developed in our earlier papers, and the application of adaptive thinning to terrain modelling and to image compression. In our image test examples, we found that our thinning scheme, adapted to diminish the least-squares error, combined with a post-processing least-squares optimization and a customized coding scheme, often gives results better than or comparable to those of the wavelet-based scheme SPIHT.
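A simplified one-dimensional analogue of the greedy removal criterion may make the idea concrete. The paper works with bivariate data over Delaunay triangulations; this sketch instead thins samples of a univariate function, repeatedly dropping the interior point whose linear-interpolation error against its current neighbours is smallest:

```python
def greedy_thinning(xs, ys, keep):
    """Greedily remove interior sample points of (xs, ys) until only
    `keep` indices remain; at each step, drop the point that the line
    through its neighbours reproduces best (1D adaptive-thinning analogue)."""
    pts = list(range(len(xs)))          # indices of surviving points
    while len(pts) > keep:
        best_j, best_err = None, float("inf")
        for j in range(1, len(pts) - 1):
            a, b, c = pts[j - 1], pts[j], pts[j + 1]
            # Error at b if b is removed and replaced by the chord a--c.
            t = (xs[b] - xs[a]) / (xs[c] - xs[a])
            err = abs(ys[b] - ((1 - t) * ys[a] + t * ys[c]))
            if err < best_err:
                best_j, best_err = j, err
        del pts[best_j]
    return pts
```

Recording the removal order yields the hierarchy of coarser and coarser approximations the abstract mentions, since re-inserting points in reverse order refines the spline.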
Semi-matchings for bipartite graphs and load balancing
In Proc. 8th WADS, 2003
Cited by 13 (0 self)
We consider the problem of fairly matching the left-hand vertices of a bipartite graph to the right-hand vertices. We refer to this problem as the optimal semi-matching problem; it is a relaxation of the known bipartite matching problem. We present a way to evaluate the quality of a given semi-matching and show that, under this measure, an optimal semi-matching balances the load on the right-hand vertices with respect to any Lp-norm. In particular, when modeling a job assignment system, an optimal semi-matching achieves the minimal makespan and the minimal flow time for the system. The problem of finding optimal semi-matchings is a special case of certain scheduling problems for which known solutions exist. However, these known solutions are based on general network-optimization algorithms, and are not the most efficient way to solve the optimal semi-matching problem. To compute optimal semi-matchings efficiently, we present and analyze two new algorithms. The first algorithm generalizes the Hungarian method for computing maximum bipartite matchings, while the second, more efficient algorithm is based on a new notion of cost-reducing paths. Our experimental results demonstrate that the second algorithm is vastly superior to using known network-optimization algorithms to solve the optimal semi-matching problem. Furthermore, this same algorithm can also be used to find maximum bipartite matchings and is shown to be roughly as efficient as the best known algorithms for this goal. Key words: bipartite graphs, load balancing, matching algorithms, optimal algorithms, semi-matching
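As a sketch of the semi-matching setting (a greedy heuristic, not the paper's optimal Hungarian-style or cost-reducing-path algorithms): every left-hand vertex must be matched, so assign each one to its currently least-loaded neighbour. The adjacency-list representation and function name below are ours:

```python
def greedy_semi_matching(adj, n_right):
    """adj[u] = list of right-hand vertices adjacent to left-hand vertex u.
    Every left vertex is assigned (semi-matching relaxes one-to-one matching);
    greedily pick the least-loaded right neighbour each time."""
    load = [0] * n_right
    match = []
    for nbrs in adj:
        v = min(nbrs, key=lambda r: load[r])  # least-loaded adjacent machine
        load[v] += 1
        match.append(v)
    return match, load
```

In the job-assignment reading, left vertices are jobs, right vertices are machines, and `load` is each machine's job count; the paper's algorithms then improve such an assignment via cost-reducing paths until the load vector is Lp-optimal.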