Results 1–10 of 29
Combinatorial Auctions with Decreasing Marginal Utilities
, 2001
Abstract

Cited by 138 (21 self)
This paper considers combinatorial auctions among such submodular buyers. The valuations of such buyers are placed within a hierarchy of valuations that exhibit no complementarities, a hierarchy that also includes OR and XOR combinations of singleton valuations, and valuations satisfying the gross substitutes property. Those last valuations are shown to form a zero-measure subset of the submodular valuations, which have positive measure. While we show that the allocation problem among submodular valuations is NP-hard, we present an efficient greedy 2-approximation algorithm for this case and generalize it to the case of limited complementarities. No such approximation algorithm exists in a setting allowing for arbitrary complementarities. Some results about strategic aspects of combinatorial auctions among players with decreasing marginal utilities are also presented.
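The greedy 2-approximation mentioned above can be sketched as follows: assign each item, one at a time, to the bidder whose valuation gains the most from it. This is a minimal sketch assuming value-oracle access to each bidder's submodular valuation; the bidder names and example valuations are hypothetical, not from the paper.

```python
def greedy_allocation(items, valuations):
    """Assign each item to the bidder with the largest marginal value.

    `valuations` maps bidder -> function(frozenset of items) -> value.
    For submodular valuations this greedy rule gives a 2-approximation
    to the optimal welfare.
    """
    bundles = {b: frozenset() for b in valuations}
    for item in items:
        def marginal(b):
            v = valuations[b]
            return v(bundles[b] | {item}) - v(bundles[b])
        best = max(valuations, key=marginal)
        bundles[best] = bundles[best] | {item}
    return bundles

# Hypothetical example: two bidders with simple submodular valuations.
vals = {
    "alice": lambda s: len(s),             # additive: value 1 per item
    "bob":   lambda s: 2.0 if s else 0.0,  # wants any single item
}
result = greedy_allocation(["x", "y"], vals)
```

Here "bob" wins the first item (marginal value 2 versus 1) and "alice" the second (bob's marginal value for a second item is 0), illustrating how the rule adapts to decreasing marginal utilities.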
Multiresolution Modeling: Survey & Future Opportunities
, 1999
Abstract

Cited by 118 (7 self)
For twenty years, it has been clear that many datasets are excessively complex for applications such as real-time display, and that techniques for controlling the level of detail of models are crucial. More recently, there has been considerable interest in techniques for the automatic simplification of highly detailed polygonal models into faithful approximations using fewer polygons. Several effective techniques for the automatic simplification of polygonal models have been developed in recent years. This report begins with a survey of the most notable available algorithms. Iterative edge contraction algorithms are of particular interest because they induce a certain hierarchical structure on the surface. An overview of this hierarchical structure is presented, including a formulation relating it to minimum spanning tree construction algorithms. Finally, we will consider the most significant directions in which existing simplification methods can be improved, and a summary of o...
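The iterative edge contraction loop described above can be sketched as a toy version that contracts the cheapest admissible edge until a target vertex count is reached. This assumes contraction costs are supplied with the edges (real simplifiers derive them from geometry, e.g. quadric error metrics); the vertex names are hypothetical. Note that always popping the cheapest edge and skipping edges whose endpoints are already merged is exactly Kruskal's procedure, which is the connection to minimum spanning tree construction that the abstract mentions.

```python
import heapq

def contract_edges(vertices, weighted_edges, target):
    """Repeatedly contract the cheapest edge until `target` vertices remain.

    `weighted_edges` is a list of (u, v, cost) triples. Returns the
    contraction order, which induces the hierarchy on the surface.
    """
    parent = {v: v for v in vertices}

    def find(v):                      # union-find representative
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    heap = [(cost, u, v) for u, v, cost in weighted_edges]
    heapq.heapify(heap)
    count = len(vertices)
    order = []
    while heap and count > target:
        cost, u, v = heapq.heappop(heap)
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                  # edge already collapsed away
        parent[rv] = ru               # contract v into u
        order.append((u, v))
        count -= 1
    return order

# Hypothetical toy mesh edges with precomputed contraction costs.
verts = ["a", "b", "c", "d"]
edges = [("a", "b", 1.0), ("b", "c", 2.0), ("c", "d", 0.5)]
order = contract_edges(verts, edges, target=1)
```

Contracting all the way to one vertex makes `order` list the edges in increasing cost, i.e. an MST edge ordering of this toy graph.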
Approximation algorithms for combinatorial auctions with complement-free bidders
 In Proceedings of the 37th Annual ACM Symposium on Theory of Computing (STOC
, 2005
Abstract

Cited by 94 (22 self)
We exhibit three approximation algorithms for the allocation problem in combinatorial auctions with complement-free bidders. The running time of these algorithms is polynomial in the number of items m and in the number of bidders n, even though the “input size” is exponential in m. The first algorithm provides an O(log m) approximation. The second algorithm provides an O(√m) approximation in the weaker model of value oracles. This algorithm is also incentive compatible. The third algorithm provides an improved 2-approximation for the more restricted case of “XOS bidders”, a class which strictly contains submodular bidders. We also prove lower bounds on the possible approximations achievable for these classes of bidders. These bounds are not tight and we leave the gaps as open problems.
Balanced Scheduling: Instruction scheduling when memory latency is uncertain
, 1992
Abstract

Cited by 53 (3 self)
Traditional list schedulers order instructions based on an optimistic estimate of the load delay imposed by the implementation. Therefore they cannot respond to variations in load latencies (due to cache hits or misses, congestion in the memory interconnect, etc.) and cannot easily be applied across different implementations. We have developed an alternative algorithm, known as balanced scheduling, that schedules instructions based on an estimate of the amount of instruction-level parallelism in the program. Since scheduling decisions are program- rather than machine-based, balanced scheduling is unaffected by implementation changes. Since it is based on the amount of instruction-level parallelism that a program can support, it can respond better to variations in load latencies. Performance improvements over a traditional list scheduler on a Fortran workload and simulating several different machine types (cache-based workstations, large parallel machines with a multipath interconnect an...
A Bayesian Approach to Relevance in Game Playing
 ARTIFICIAL INTELLIGENCE
, 1997
Abstract

Cited by 35 (0 self)
The point of game tree search is to insulate oneself from errors in the evaluation function. The standard approach is to grow a full-width tree as deep as time allows, and then value the tree as if the leaf evaluations were exact. This has been effective in many games because of the computational efficiency of the alpha-beta algorithm. Our approach is to form a Bayesian model of our uncertainty. We adopt an evaluation function that returns a probability distribution estimating the probability of various errors in valuing each position. These estimates are obtained by training from data. We thus use additional information at each leaf not available to the standard approach. We utilize this information in three ways: to evaluate which move is best after we are done expanding, to allocate additional thinking time to moves where additional time is most relevant to game outcome, and, perhaps most importantly, to expand the tree along the most relevant lines. Our measure of the relevan...
Optimal Web Cache Sizing: Scalable Methods for Exact Solutions
 Computer Communications
, 2000
Abstract

Cited by 24 (4 self)
This paper describes two approaches to the problem of determining exact optimal storage capacity for Web caches based on expected workload and the monetary costs of memory and bandwidth. The first approach considers memory/bandwidth tradeoffs in an idealized model. It assumes that workload consists of independent references drawn from a known distribution (e.g., Zipf) and caches employ a "Perfect LFU" removal policy. We derive conditions under which a shared higher-level "parent" cache serving several lower-level "child" caches is economically viable. We also characterize circumstances under which globally optimal storage capacities in such a hierarchy can be determined through a decentralized computation in which caches individually minimize local monetary expenditures. The second approach is applicable if the workload at a single cache is represented by an explicit request sequence and the cache employs any one of a large family of removal policies that includes LRU. The mis...
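The idealized first model above can be sketched as a small cost minimization: under Zipf popularities, a Perfect LFU cache of size k holds the k most popular objects, and the optimal k balances the monetary cost of memory against the bandwidth cost of misses. This is a toy sketch under those stated assumptions; the prices, request rate, and Zipf exponent below are hypothetical, not figures from the paper.

```python
def optimal_cache_size(n_objects, alpha, mem_cost, miss_cost, rate):
    """Pick the cache size k minimizing memory cost plus expected miss cost.

    Assumes Zipf(alpha) popularities and a Perfect LFU cache that stores
    the k most popular of `n_objects` objects; `rate` is requests per
    pricing period, `mem_cost` the per-object storage price, and
    `miss_cost` the per-miss bandwidth price.
    """
    weights = [1.0 / (i + 1) ** alpha for i in range(n_objects)]
    total = sum(weights)
    probs = [w / total for w in weights]          # Zipf popularity, most popular first
    best_k, best_cost = 0, float("inf")
    hit = 0.0                                     # hit probability with k objects cached
    for k in range(n_objects + 1):
        miss_rate = 1.0 - hit
        cost = mem_cost * k + miss_cost * rate * miss_rate
        if cost < best_cost:
            best_k, best_cost = k, cost
        if k < n_objects:
            hit += probs[k]                       # cache the next most popular object
    return best_k

# Hypothetical prices: storage at 1.0 per object, misses at 100.0 each.
k = optimal_cache_size(50, 1.0, 1.0, 100.0, 1.0)
```

As the model predicts, making bandwidth relatively more expensive pushes the optimal capacity up, and free bandwidth makes caching pointless.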
Best Play for Imperfect Players and Game Tree Search; part I  theory
, 1995
Abstract

Cited by 14 (2 self)
The point of game tree search is to insulate oneself from errors in the evaluation function. The standard approach is to grow a full-width tree as deep as time allows, and then value the tree as if the leaf evaluations were exact. This has been effective in many games because of the computational efficiency of the alpha-beta algorithm. But as Bayesians, we want to know the best way to use the inexact statistical information provided by the leaf evaluator to choose our next move. We add a model of uncertainty to the standard evaluation function. Within such a formal model, there is an optimal tree growth procedure and an optimal method of valuing the tree. We describe how to optimally value the tree within our model, and how to efficiently approximate the optimal tree to search. Our tree growth procedure provably approximates the contribution of each leaf to the utility in the limit where we grow a large tree, taking explicit account of the interactions between expanding different ...
Using Tarjan's Red Rule for Fast Dependency Tree Construction
 Advances in Neural Information Processing Systems 15
, 2002
Abstract

Cited by 14 (6 self)
We focus on the problem of efficient learning of dependency trees. It is well-known that given the pairwise mutual information coefficients, a minimum-weight spanning tree algorithm solves this problem exactly and in polynomial time. However, for large datasets it is the construction of the correlation matrix that dominates the running time. We have developed a new spanning-tree algorithm which is capable of exploiting partial knowledge about edge weights. The partial knowledge we maintain is a probabilistic confidence interval on the coefficients, which we derive by examining just a small sample of the data. The algorithm is able to flag the need to shrink an interval, which translates to inspection of more data for the particular attribute pair. Experimental results show running time that is near-constant in the number of records, without significant loss in accuracy of the generated trees. Interestingly, our spanning-tree algorithm is based solely on Tarjan's red-edge rule, which is generally considered a guaranteed recipe for bad performance.
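The exact baseline described above (which the paper then improves on) can be sketched directly: given all pairwise mutual information values, the optimal dependency tree is a maximum-weight spanning tree over the attributes. This sketch uses plain Kruskal on fully known weights; the paper's contribution, by contrast, avoids computing every coefficient up front by maintaining confidence intervals and applying only the red rule. The attribute names and MI values below are hypothetical.

```python
def dependency_tree(attrs, mutual_info):
    """Maximum-weight spanning tree over attributes.

    `mutual_info` maps frozenset({a, b}) -> estimated mutual information.
    Returns the chosen edges (pairs) of the dependency tree.
    """
    parent = {a: a for a in attrs}

    def find(a):                      # union-find representative
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # Sort edges by decreasing mutual information (maximum spanning tree).
    edges = sorted(mutual_info.items(), key=lambda kv: -kv[1])
    tree = []
    for pair, _mi in edges:
        a, b = tuple(pair)
        ra, rb = find(a), find(b)
        if ra != rb:                  # keep edge unless it would close a cycle
            parent[rb] = ra
            tree.append(pair)
    return tree

# Hypothetical pairwise mutual information estimates for three attributes.
mi = {
    frozenset({"x", "y"}): 0.9,
    frozenset({"y", "z"}): 0.5,
    frozenset({"x", "z"}): 0.1,
}
tree = dependency_tree(["x", "y", "z"], mi)
```

Note the cycle check: rejecting the heaviest edge that would close a cycle is the red rule in this maximization setting, which is the rule the paper builds its partial-knowledge algorithm around.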
Context-Sensitive Pointer Analysis Using Binary Decision Diagrams
, 2007