Results 1–10 of 47
eMediator: A Next Generation Electronic Commerce Server
Computational Intelligence, 2002
Cited by 106 (31 self)
Abstract:
This paper presents eMediator, an electronic commerce server prototype that demonstrates ways in which algorithmic support and game-theoretic incentive engineering can jointly improve the efficiency of e-commerce. eAuctionHouse, the configurable auction server, includes a variety of generalized combinatorial auctions and exchanges, pricing schemes, bidding languages, mobile agents, and user support for choosing an auction type. We introduce two new logical bidding languages for combinatorial markets: the XOR bidding language and the OR-of-XORs bidding language. Unlike the traditional OR bidding language, these are fully expressive. They therefore enable the use of the Clarke-Groves pricing mechanism for motivating the bidders to bid truthfully. eAuctionHouse also supports supply/demand curve bidding. eCommitter, the leveled commitment contract optimizer, determines the optimal contract price and decommitting penalties for a variety of leveled commitment contracting mechanisms, taking into account that rational agents will decommit strategically in Nash equilibrium. It also determines the optimal decommitting strategies for any given leveled commitment contract. eExchangeHouse, the safe exchange planner, enables unenforced anonymous exchanges by dividing the exchange into chunks and sequencing those chunks to be delivered safely in alternation between the buyer and the seller.
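The XOR bidding language described above is fully expressive because at most one atomic bid can be accepted, so substitutability can be stated directly. A minimal sketch of how such a bid induces a valuation over bundles (illustrative only; the function name and data layout are assumptions, not eAuctionHouse's actual interface):

```python
def xor_value(xor_bid, items):
    """Implied valuation of an item set under an XOR bid.

    xor_bid: list of (bundle, price) pairs, at most one of which
    may be accepted; the value of `items` is therefore the best
    single atomic bid whose bundle it covers.
    """
    items = frozenset(items)
    return max((price for bundle, price in xor_bid
                if frozenset(bundle) <= items), default=0.0)

# {A}:2 XOR {B}:3 XOR {A,B}:4 -- a subadditive valuation that a
# plain OR bid cannot express (OR would value {A,B} at 2+3=5)
bid = [({"A"}, 2.0), ({"B"}, 3.0), ({"A", "B"}, 4.0)]
```

With this bid, receiving both items is worth 4, not the additive 5, which is exactly the expressiveness the OR language lacks.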
Preference Elicitation in Combinatorial Auctions (Extended Abstract)
In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), 2001
Cited by 100 (28 self)
Abstract:
Combinatorial auctions (CAs) where bidders can bid on bundles of items can be very desirable market mechanisms when the items sold exhibit complementarity and/or substitutability, so the bidder's valuations for bundles are not additive. However, in a basic CA, the bidders may need to bid on exponentially many bundles, leading to difficulties in determining those valuations, undesirable information revelation, and unnecessary communication. In this paper we present a design of an auctioneer agent that uses topological structure inherent in the problem to reduce the amount of information that it needs from the bidders. An analysis tool is presented as well as data structures for storing and optimally assimilating the information received from the bidders. Using this information, the agent then narrows down the set of desirable (welfare-maximizing or Pareto-efficient) allocations, and decides which questions to ask next. Several algorithms are presented that ask the bidders for value, order, and rank information. A method is presented for making the elicitor incentive compatible.
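As a toy illustration of the elicitation bookkeeping described above (the class and search policy here are hypothetical and far simpler than the paper's data structures), an auctioneer can cache value queries so that each bidder is asked about a bundle at most once while the welfare-maximizing allocation is searched:

```python
from itertools import combinations

def powerset(items):
    """All subsets of `items`, as frozensets."""
    s = list(items)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

class Elicitor:
    """Caches answers to value queries so each (bidder, bundle)
    pair is asked at most once, and counts the queries made."""
    def __init__(self, valuations):
        self.valuations = valuations      # bidder -> {bundle: value}
        self.cache = {}
        self.queries = 0

    def value(self, bidder, bundle):
        key = (bidder, bundle)
        if key not in self.cache:
            self.queries += 1             # one value query to the bidder
            self.cache[key] = self.valuations[bidder].get(bundle, 0.0)
        return self.cache[key]

def best_allocation(elicitor, bidders, items):
    """Exhaustive search over two-bidder allocations, eliciting
    values only for the bundles it actually examines."""
    best, best_welfare = None, float("-inf")
    for bundle in powerset(items):
        rest = frozenset(items) - bundle
        w = (elicitor.value(bidders[0], bundle)
             + elicitor.value(bidders[1], rest))
        if w > best_welfare:
            best, best_welfare = (bundle, rest), w
    return best, best_welfare

# hypothetical valuations: A sees the items as complements,
# B sees them as substitutes
valuations = {
    "A": {frozenset({"x"}): 1.0, frozenset({"y"}): 1.0,
          frozenset({"x", "y"}): 5.0},
    "B": {frozenset({"x"}): 2.0, frozenset({"y"}): 2.0,
          frozenset({"x", "y"}): 3.0},
}
```

The paper's contribution is in choosing which queries to skip; this sketch only shows the accounting against which such savings would be measured.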
Costly valuation computation in auctions
In Proceedings of the Eighth Conference on Theoretical Aspects of Rationality and Knowledge (TARK VIII), Siena, 2001
Cited by 52 (26 self)
Abstract:
We investigate deliberation and bidding strategies of agents with unlimited but costly computation who are participating in auctions. The agents do not a priori know their valuations for the items being auctioned. Instead they devote computational resources to compute their valuations. We present a normative model of bounded rationality where deliberation actions of agents are incorporated into strategies and equilibria are analyzed for standard auction protocols. We show that even in settings such as English auctions where information about other agents' valuations is revealed for free by the bidding process, agents may still compute on opponents' valuation problems, incurring a cost, in order to determine how to bid. We compare the costly computation model of bounded rationality with a different model where computation is free but limited. For some auction mechanisms the equilibrium strategies are substantially different. It can be concluded that the model of bounded rationality impacts the agents' equilibrium strategies and must be considered when designing mechanisms for computationally limited agents.
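One simple way to picture costly deliberation (a sketch under the assumption that the agent knows its own performance profile; the paper's normative model is game-theoretic and much richer) is an agent choosing how many computation steps to spend refining its valuation before bidding:

```python
def optimal_deliberation(profile, cost):
    """Pick the number of deliberation steps maximizing
    net value = profile[n] - cost * n.

    profile[n]: value of the best bid the agent can construct
    after n computation steps (assumed known and nondecreasing).
    cost: price of one computation step.
    Returns (steps, net value).
    """
    best_n = max(range(len(profile)),
                 key=lambda n: profile[n] - cost * n)
    return best_n, profile[best_n] - cost * best_n

# hypothetical profile with diminishing returns per step
profile = [0.0, 6.0, 9.0, 10.5, 11.0]
```

As the per-step cost rises, the optimal amount of deliberation shrinks, which is the basic trade-off the costly-computation model formalizes.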
Partial-revelation VCG mechanism for combinatorial auctions
In Proceedings of the National Conference on Artificial Intelligence (AAAI)
Cited by 48 (20 self)
Abstract:
Winner determination in combinatorial auctions has received significant interest in the AI community in the last 3 years. Another difficult problem in combinatorial auctions is that of eliciting the bidders' preferences. We introduce a progressive, partial-revelation mechanism that determines an efficient allocation and the Vickrey payments. The mechanism is based on a family of algorithms that explore the natural lattice structure of the bidders' combined preferences. The mechanism elicits utilities in a natural sequence, and aims at keeping the amount of elicited information and the effort to compute the information minimal. We present analytical results on the amount of elicitation. We show that no value-querying algorithm that is constrained to querying feasible bundles can save more elicitation than one of our algorithms. We also show that one of our algorithms can determine the Vickrey payments as a costless byproduct of determining an optimal allocation.
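The Vickrey payments mentioned above are easy to illustrate by brute force once full valuations are known (the paper's point is to elicit only part of this information; this sketch assumes complete revelation, tiny instances, and no free disposal, and all names are illustrative):

```python
from itertools import product

def welfare_max(valuations, items, exclude=None):
    """Brute-force winner determination: try every assignment of
    items to bidders, return (best welfare, allocation)."""
    bidders = [b for b in valuations if b != exclude]
    best_w, best_alloc = float("-inf"), None
    for assignment in product(bidders, repeat=len(items)):
        alloc = {b: frozenset(i for i, owner in zip(items, assignment)
                              if owner == b) for b in bidders}
        w = sum(valuations[b].get(alloc[b], 0.0) for b in bidders)
        if w > best_w:
            best_w, best_alloc = w, alloc
    return best_w, best_alloc

def vickrey_payments(valuations, items):
    """Clarke payment for bidder i: the welfare the others would
    get without i, minus the welfare the others get with i."""
    w_star, alloc = welfare_max(valuations, items)
    payments = {}
    for b in valuations:
        w_minus_i, _ = welfare_max(valuations, items, exclude=b)
        others = w_star - valuations[b].get(alloc[b], 0.0)
        payments[b] = w_minus_i - others
    return alloc, payments

# hypothetical instance: each bidder values only the full bundle
valuations = {
    "A": {frozenset({"x", "y"}): 5.0},
    "B": {frozenset({"x", "y"}): 3.0},
}
```

Here A wins both items and pays 3, the welfare B loses by A's presence; a partial-revelation mechanism would reach the same outcome with fewer queries.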
CABOB: A Fast Optimal Algorithm for Winner Determination in Combinatorial Auctions
2005
Cited by 48 (8 self)
Abstract:
Combinatorial auctions where bidders can bid on bundles of items can lead to more economically efficient allocations, but determining the winners is NP-complete and inapproximable. We present CABOB, a sophisticated optimal search algorithm for the problem. It uses decomposition techniques, upper and lower bounding (also across components), elaborate and dynamically chosen bid-ordering heuristics, and a host of structural observations. CABOB attempts to capture structure in any instance without making assumptions about the instance distribution. Experiments against the fastest prior algorithm, CPLEX 8.0, show that CABOB is often faster, seldom drastically slower, and in many cases drastically faster, especially in cases with structure. CABOB's search runs in linear space and has significantly better anytime performance than CPLEX. We also uncover interesting aspects of the problem itself. First, problems with short bids, which were hard for the first generation of specialized algorithms, are easy. Second, almost all of the CATS distributions are easy, and the run time is virtually unaffected by the number of goods. Third, we test several random restart strategies, showing that they do not help on this problem: the runtime distribution does not have a heavy tail.
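The core of such a search can be pictured as a depth-first branch-and-bound over bids with a crude upper bound (CABOB's actual bounds, decomposition, and dynamic bid ordering are far stronger; this skeleton only shows the include/skip branching and pruning idea):

```python
def winner_determination(bids):
    """Optimal revenue for non-conflicting bids (bundle, price).

    Depth-first branch-and-bound: branch on including or skipping
    each bid, prune when revenue so far plus the sum of all
    remaining bid prices (a loose upper bound) cannot beat the
    incumbent."""
    bids = sorted(bids, key=lambda b: -b[1])      # simple bid ordering
    suffix = [0.0] * (len(bids) + 1)
    for i in range(len(bids) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + bids[i][1]    # upper-bound table
    best = 0.0

    def dfs(i, taken, revenue):
        nonlocal best
        best = max(best, revenue)
        if i == len(bids) or revenue + suffix[i] <= best:
            return                                # leaf, or pruned
        bundle, price = bids[i]
        if not (bundle & taken):                  # bid fits: include it
            dfs(i + 1, taken | bundle, revenue + price)
        dfs(i + 1, taken, revenue)                # skip it
    dfs(0, frozenset(), 0.0)
    return best
```

The search runs in linear space, as in the abstract's claim for CABOB, because only the current path is stored.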
Expressive commerce and its application to sourcing: How we conducted $35 billion of generalized combinatorial auctions
Cited by 47 (8 self)
Abstract:
Sourcing professionals buy several trillion dollars worth of goods and services yearly. We introduced a new paradigm called expressive commerce and applied it to sourcing. It combines the advantages of highly expressive human negotiation with the advantages of electronic reverse auctions. The idea is that supply and demand are expressed in drastically greater detail than in traditional electronic auctions, and are algorithmically cleared. This creates a Pareto efficiency improvement in the allocation (a win-win between the buyer and the sellers) but the market clearing problem is a highly complex combinatorial optimization problem. We developed the world's fastest tree search algorithms for solving it. We have hosted $35 billion of sourcing using the technology, and created $4.4 billion of hard-dollar savings plus numerous harder-to-quantify benefits. The suppliers also benefited by being able to express production efficiencies and creativity, and through exposure problem removal. Supply networks were redesigned, with quantitative understanding of the tradeoffs, and implemented in weeks instead of months.
Settling the Complexity of Computing Two-Player Nash Equilibria
Cited by 46 (3 self)
Abstract:
We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis, Goldberg, and Papadimitriou on the complexity of four-player Nash equilibria [21], settles a long-standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems:
• Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time.
• The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time.
Our results also have a complexity implication in mathematical economics:
• Arrow-Debreu market equilibria are PPAD-hard to compute.
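For intuition about the Bimatrix problem, the fully mixed equilibrium of a 2x2 game can be computed directly from indifference conditions (a special case only; general games need support enumeration or the Lemke-Howson algorithm, whose smoothed complexity the abstract discusses):

```python
def mixed_equilibrium_2x2(A, B):
    """Fully mixed equilibrium of a 2x2 bimatrix game.

    A[i][j], B[i][j]: row and column payoffs when row plays i and
    column plays j. Assumes a fully mixed equilibrium exists
    (e.g. Matching Pennies). Returns (p, q): the probability the
    row player puts on row 0 and the column player on column 0.
    """
    # row's p must make the column player indifferent between columns
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # column's q must make the row player indifferent between rows
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching Pennies: row wants to match, column wants to mismatch
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
```

Each player randomizes so the opponent is indifferent; the PPAD-hardness result says no such closed-form shortcut scales to general two-player games unless PPAD collapses.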
Computational Criticisms of the Revelation Principle
2003
Cited by 38 (10 self)
Abstract:
The revelation principle is a cornerstone tool in mechanism design. It states that one can restrict attention, without loss in the designer's objective, to mechanisms in which A) the agents report their types completely in a single step up front, and B) the agents are motivated to be truthful. We show that reasonable constraints on computation and communication can invalidate the revelation principle. Regarding A, we show that by moving to multi-step mechanisms, one can reduce exponential communication and computation to linear, thereby answering a recognized important open question in mechanism design. Regarding B, we criticize the focus on truthful mechanisms, a dogma that has, to our knowledge, never been criticized before. First, we study settings where the optimal truthful mechanism is computationally hard to execute for the center. In that setting we show that by moving to insincere mechanisms, one can shift the burden of having to solve the hard problem from the center to one of the agents. Second, we study a new oracle model that captures the setting where utility values can be hard to compute even when all the pertinent information is available, a situation that occurs in many practical applications. In this model we show that by moving to insincere mechanisms, one can shift the burden of having to ask the oracle an exponential number of costly queries from the center to one of the agents. In both cases the insincere mechanism is equally good as the optimal truthful mechanism in the presence of unlimited computation. More interestingly, whereas being unable to carry out either difficult task would have hurt the center in achieving his objective in the truthful setting, if the agent is unable to carry out either difficult task, the value of the center's objec...
Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions
2004
Cited by 29 (13 self)
Abstract:
Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design studies how to structure this game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently proposed approach, called automated mechanism design, a mechanism is computed for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested. In this case, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. We then show how allowing for randomization in the mechanism makes problems in this setting computationally easy. Finally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have "best-only" preferences. We show that here, too, designing an optimal deterministic auction is NP-complete, but designin...
Effectiveness of Preference Elicitation in Combinatorial Auctions
In International Conference on Autonomous Agents and Multiagent Systems, 2002
Cited by 29 (11 self)
Abstract:
Combinatorial auctions where agents can bid on bundles of items are desirable because they allow the agents to express complementarity and substitutability between the items. However, expressing one's preferences can require bidding on all bundles. Selective incremental preference elicitation by the auctioneer was recently proposed to address this problem [4], but the idea was not evaluated. In this paper we show, experimentally and theoretically, that automated elicitation provides a drastic benefit. In all of the elicitation schemes under study, as the number of items for sale increases, the amount of information elicited is a vanishing fraction of the information collected in traditional "direct revelation mechanisms" where bidders reveal all their valuation information. Most of the elicitation schemes also maintain the benefit as the number of agents increases. We develop more effective elicitation policies for existing query types. We also present a new query type that takes the incremental nature of elicitation to a new level by allowing agents to give approximate answers that are refined only on an as-needed basis. In the process, we present methods for evaluating different types of elicitation policies.