Results 1–10 of 94
Clearing algorithms for barter exchange markets: Enabling nationwide kidney exchanges
 In Proceedings of the 8th ACM Conference on Electronic Commerce (EC)
, 2007
Abstract

Cited by 74 (8 self)
In barter-exchange markets, agents seek to swap their items with one another. These swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle. We focus mainly on the upcoming national kidney-exchange market, where patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors. With almost 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease. The clearing problem involves finding a social welfare maximizing exchange when the maximum length of a cycle is fixed. Long cycles are forbidden since, for incentive reasons, all transplants in a cycle must be performed simultaneously. Also, in barter exchanges generally, more agents are affected if one drops out of a longer cycle. We prove that the clearing problem is NP-hard. Solving it exactly is one of the main challenges in establishing a national kidney exchange. We present the first algorithm capable of clearing these markets on a nationwide scale. The key is incremental problem formulation. We adapt two paradigms for the task: constraint generation and column generation. For each, we develop techniques that dramatically improve both runtime and memory usage. We conclude that column generation scales drastically better than constraint generation.
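The clearing problem the abstract describes — choosing vertex-disjoint cycles of bounded length in the compatibility digraph to cover as many agents as possible — can be illustrated with a brute-force sketch. This is a toy with hypothetical helper names (`cycles_up_to`, `clear_market`); the paper's actual solver uses an incremental ILP formulation with column generation, which this sketch does not attempt.

```python
from itertools import combinations

def cycles_up_to(graph, max_len):
    """Enumerate simple cycles of length <= max_len in a digraph
    given as {node: set(successors)}. Each cycle is returned as a
    canonical tuple (lexicographically least rotation)."""
    cycles = set()
    def dfs(start, node, path):
        for nxt in graph.get(node, ()):
            if nxt == start and len(path) >= 2:
                cycles.add(min(tuple(path[i:] + path[:i])
                               for i in range(len(path))))
            elif nxt not in path and len(path) < max_len:
                dfs(start, nxt, path + [nxt])
    for v in graph:
        dfs(v, v, [v])
    return cycles

def clear_market(graph, max_len=3):
    """Pick a vertex-disjoint set of short cycles covering the most
    agents. Exponential-time enumeration; illustration only."""
    cs = list(cycles_up_to(graph, max_len))
    best, best_cover = [], 0
    for r in range(1, len(cs) + 1):
        for combo in combinations(cs, r):
            nodes = [v for c in combo for v in c]
            if len(nodes) == len(set(nodes)) and len(nodes) > best_cover:
                best, best_cover = list(combo), len(nodes)
    return best
```

With a cycle cap of 3, a 2-cycle and a 3-cycle can both clear; capping at 2 forbids the 3-cycle, illustrating why the bounded-cycle constraint makes the problem hard.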
Mathematical statistics
 Test
, 1977
Abstract

Cited by 56 (0 self)
This paper is a selective review of the regularization methods scattered in the statistics literature. We introduce a general conceptual approach to regularization and fit most existing methods into it. We have tried to focus on the importance of regularization when dealing with today’s high-dimensional objects: data and models. A wide range of examples are discussed, including nonparametric regression, boosting, covariance matrix estimation, principal component estimation, and subsampling.
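As a minimal concrete instance of the review's theme, ridge shrinkage for a single no-intercept predictor has a closed form: the penalty inflates the denominator and shrinks the estimate toward zero. The function name `ridge_1d` is hypothetical and not from the paper.

```python
def ridge_1d(x, y, lam):
    """Ridge estimate for one no-intercept predictor:
    beta minimizes sum((y_i - beta*x_i)^2) + lam*beta^2,
    giving the closed form beta = <x,y> / (<x,x> + lam)."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)
```

At `lam = 0` this reduces to ordinary least squares; increasing `lam` shrinks the coefficient, trading bias for variance — the basic regularization mechanism the review organizes around.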
What Have We Learned From Market Design?
 ECONOMIC JOURNAL
, 2008
Abstract

Cited by 43 (12 self)
This essay discusses some things we have learned about markets, in the process of designing marketplaces to fix market failures. To work well, marketplaces have to provide thickness, i.e. they need to attract a large enough proportion of the potential participants in the market; they have to overcome the congestion that thickness can bring, by making it possible to consider enough alternative transactions to arrive at good ones; and they need to make it safe and sufficiently simple to participate in the market, as opposed to transacting outside of the market, or having to engage in costly and risky strategic behavior. I'll draw on recent examples of market design ranging from labor markets for doctors and new economists, to kidney exchange, and school choice in New York City and Boston.
The Speed of Learning in Noisy Games: Partial Reinforcement and the Sustainability of Cooperation
Abstract

Cited by 39 (2 self)
In an experiment, players’ ability to learn to cooperate in the repeated prisoner’s dilemma was substantially diminished when the payoffs were noisy, even though players could monitor one another’s past actions perfectly. In contrast, in one-time play against a succession of opponents, noisy payoffs increased cooperation, by slowing the rate at which cooperation decays. These observations are consistent with the robust observation from the psychology literature that partial reinforcement (adding randomness to the link between an action and its consequences while holding expected payoffs constant) slows learning. This effect is magnified in the repeated game: when others are slow to learn to cooperate, the benefits of cooperation are reduced, which further hampers cooperation. These results show that a small change in the payoff environment, which changes the speed of individual learning, can have a large effect on collective behavior. And they show that there may be interesting comparative dynamics that can be derived from careful attention to the fact that at least some economic behavior is learned from experience.
Making Decisions Based on the Preferences of Multiple Agents
Abstract

Cited by 28 (8 self)
People often have to reach a joint decision even though they have conflicting preferences over the alternatives. Examples range from the mundane—such as allocating chores among the members of a household—to the sublime—such as electing a government and thereby charting the course for a country. The joint decision can be reached by an informal negotiating process or by a carefully specified protocol. Philosophers, mathematicians, political scientists, economists, and others have studied the merits of various protocols for centuries. More recently, especially over the course of the past decade or so, computer scientists have also become deeply involved in this study. The perhaps surprising arrival of computer scientists on this scene is due to a variety of reasons, including the following. 1. Computer networks provide a new platform for communicating
The dynamics of law clerk matching: An experimental and computational investigation of proposals for reform of the market
, 2006
Local Matching Dynamics in Social Networks
, 2012
Abstract

Cited by 24 (7 self)
We study stable marriage and roommates problems under locality constraints. Each player is a vertex in a social network and strives to be matched to other players. The value of a match is specified by an edge weight. Players explore possible matches only based on their current neighborhood. We study convergence of natural better-response dynamics that converge to locally stable matchings – matchings that allow no incentive to deviate with respect to their imposed information structure in the social network. If we have global information and control to steer the convergence process, then quick convergence is possible, and for every starting state we can construct in polynomial time a sequence of polynomially many better-response moves to a locally stable matching. In contrast, for a large class of oblivious dynamics including random and concurrent better-response, the convergence time turns out to be exponential. In such distributed settings, a small amount of random memory can ensure polynomial convergence time, even for many-to-many matchings and more general notions of neighborhood. Here the type of memory is crucial, as for several variants of cache memory we provide exponential lower bounds on convergence times.
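A minimal sketch of better-response dynamics for weighted matching, assuming global information and ignoring the paper's social-network locality constraint: repeatedly find a blocking edge — one both endpoints prefer to their current matches — and resolve it. All names are hypothetical; termination here relies on distinct positive edge weights.

```python
def better_response_dynamics(weights, max_steps=1000):
    """weights: dict {(u, v): w} on undirected edges with u < v.
    Repeatedly resolve a blocking edge (both endpoints strictly
    prefer it to their current match value; unmatched = value 0).
    Returns the matching as a dict node -> partner."""
    match = {}
    def val(u):
        p = match.get(u)
        return 0 if p is None else weights[(min(u, p), max(u, p))]
    for _ in range(max_steps):
        blocking = None
        for (u, v), w in weights.items():
            if match.get(u) != v and w > val(u) and w > val(v):
                blocking = (u, v)
                break
        if blocking is None:
            return match          # no blocking edge: matching is stable
        u, v = blocking
        for x in (u, v):          # divorce current partners, if any
            p = match.pop(x, None)
            if p is not None:
                match.pop(p, None)
        match[u], match[v] = v, u
    return match
```

Restricting the inner search to a player's current neighborhood — as the paper's local model does — is exactly what can make such dynamics exponentially slow.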
Mix and Match
, 2010
Abstract

Cited by 23 (8 self)
Consider a matching problem on a graph where disjoint sets of vertices are privately owned by self-interested agents. An edge between a pair of vertices indicates compatibility and allows the vertices to match. We seek a mechanism to maximize the number of matches despite self-interest, with agents that each want to maximize the number of their own vertices that match. Each agent can choose to hide some of its vertices, and then privately match the hidden vertices with any of its own vertices that go unmatched by the mechanism. A prominent application of this model is kidney exchange, where agents correspond to hospitals and vertices to donor-patient pairs. Here hospitals may game an exchange by holding back pairs and harm social welfare. In this paper we seek to design mechanisms that are strategyproof, in the sense that agents cannot benefit from hiding vertices, and approximately maximize efficiency, i.e., produce a matching that is close in cardinality to the maximum-cardinality matching. Our main result is the design and analysis of the eponymous Mix-and-Match mechanism; we show that this randomized mechanism is strategyproof and provides a 2-approximation. Lower bounds establish that the mechanism is near optimal.
Individual Rationality and Participation in Large Scale, Multi-Hospital Kidney Exchange
, 2011
"Almost stable" matchings in the roommates problem
 IN APPROXIMATION AND ONLINE ALGORITHMS
, 2006
Abstract

Cited by 19 (7 self)
An instance of the classical Stable Roommates problem (SR) need not admit a stable matching. This motivates the problem of finding a matching that is “as stable as possible”, i.e. admits the fewest number of blocking pairs. In this paper we prove that, given an SR instance with n agents, in which all preference lists are complete, the problem of finding a matching with the fewest number of blocking pairs is NP-hard and not approximable within n^{1/2−ε}, for any ε > 0, unless P = NP. If the preference lists contain ties, we improve this result to n^{1−ε}. Also, we show that, given an integer K and an SR instance I in which all preference lists are complete, the problem of deciding whether I admits a matching with exactly K blocking pairs is NP-complete. By contrast, if K is constant, we give a polynomial-time algorithm that finds a matching with at most (or exactly) K blocking pairs, or reports that no such matching exists. Finally, we give upper and lower bounds for the minimum number of blocking pairs over all matchings in terms of some properties of a stable partition, given an SR instance I.
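Counting the blocking pairs of a given matching — the quantity this paper minimizes — is straightforward. A sketch assuming strict preference lists and integer-labeled agents (`blocking_pairs` is a hypothetical name, not the paper's code):

```python
def blocking_pairs(prefs, matching):
    """prefs: {agent: [other agents in strict preference order]};
    matching: {agent: partner or None}.
    A pair {u, v} blocks if each strictly prefers the other to their
    current partner (being unmatched is worse than any partner)."""
    def prefers(a, b):
        # Does a prefer b to a's current partner?
        p = matching.get(a)
        rank = prefs[a].index
        return p is None or rank(b) < rank(p)
    blocks = set()
    for u in prefs:
        for v in prefs[u]:
            # u < v avoids counting each unordered pair twice
            if u < v and matching.get(u) != v \
                    and prefers(u, v) and prefers(v, u):
                blocks.add((u, v))
    return blocks
```

On the classic 3-agent cyclic-preference instance, which admits no stable matching, every matching leaves at least one blocking pair — the situation that motivates the paper's "almost stable" objective.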