Deferred Acceptance Algorithms: History, Theory, Practice, and Open Questions
 INTERNATIONAL JOURNAL OF GAME THEORY, SPECIAL ISSUE IN HONOR OF DAVID GALE'S 85TH BIRTHDAY
, 2007
Abstract

Cited by 78 (7 self)
The deferred acceptance algorithm proposed by Gale and Shapley (1962) has had a profound influence on market design, both directly, by being adapted into practical matching mechanisms, and, indirectly, by raising new theoretical questions. Deferred acceptance algorithms are at the basis of a number of labor market clearinghouses around the world, and have recently been implemented in school choice systems in Boston and New York City. In addition, the study of markets that have failed in ways that can be fixed with centralized mechanisms has led to a deeper understanding of some of the tasks a marketplace needs to accomplish to perform well. In particular, marketplaces work well when they provide thickness to the market, help it deal with the congestion that thickness can bring, and make it safe for participants to act effectively on their preferences. Centralized clearinghouses organized around the deferred acceptance algorithm can have these properties, and this has sometimes allowed failed markets to be reorganized.
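The deferred acceptance algorithm the abstract refers to is compact enough to sketch directly. Below is a minimal Python version of the proposer-optimal variant; the preference lists are invented for illustration.

```python
# Minimal sketch of Gale-Shapley deferred acceptance (proposer-optimal).
# Preference data below is hypothetical.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Return a stable matching {proposer: receiver}.

    Each prefs dict maps an agent to a list of agents on the other side,
    ordered from most to least preferred.
    """
    # rank[r][p] = position of proposer p in receiver r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next index to propose to
    engaged_to = {}                               # receiver -> tentatively held proposer
    free = list(proposer_prefs)

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = engaged_to.get(r)
        if current is None:
            engaged_to[r] = p                     # r tentatively accepts p
        elif rank[r][p] < rank[r][current]:
            engaged_to[r] = p                     # r trades up; old partner re-enters
            free.append(current)
        else:
            free.append(p)                        # r rejects p; p proposes again later
    return {p: r for r, p in engaged_to.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(men, women))  # {'m2': 'w1', 'm1': 'w2'}
```

Acceptances are "deferred" because a receiver only holds its best proposal so far, releasing it whenever someone better proposes; the matching in the example has no blocking pair.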
Mathematical statistics
 Test
, 1977
Abstract

Cited by 56 (0 self)
This paper is a selective review of the regularization methods scattered in the statistics literature. We introduce a general conceptual approach to regularization and fit most existing methods into it. We have tried to focus on the importance of regularization when dealing with today’s high-dimensional objects: data and models. A wide range of examples are discussed, including nonparametric regression, boosting, covariance matrix estimation, principal component estimation, and subsampling.
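The core idea the survey organizes around can be shown in the smallest possible case: one-dimensional ridge regression, where a penalty shrinks the least-squares coefficient. This is only an illustrative sketch with made-up data, not a method from the paper itself.

```python
# One-dimensional ridge regression: minimize sum_i (y_i - b*x_i)^2 + lam * b^2.
# Setting the derivative to zero gives b = <x, y> / (<x, x> + lam).
# Data and penalty values are invented.

def ridge_1d(x, y, lam):
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

x = [1.0, 2.0, 3.0]
y = [2.0, 4.1, 5.9]          # roughly y = 2x
print(ridge_1d(x, y, 0.0))   # ordinary least squares: close to 2
print(ridge_1d(x, y, 10.0))  # a heavier penalty shrinks the estimate toward 0
```

The same shrink-toward-simplicity pattern underlies the higher-dimensional examples the review covers.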
What Have We Learned From Market Design?
 ECONOMIC JOURNAL
, 2008
Abstract

Cited by 43 (12 self)
This essay discusses some things we have learned about markets, in the process of designing marketplaces to fix market failures. To work well, marketplaces have to provide thickness, i.e. they need to attract a large enough proportion of the potential participants in the market; they have to overcome the congestion that thickness can bring, by making it possible to consider enough alternative transactions to arrive at good ones; and they need to make it safe and sufficiently simple to participate in the market, as opposed to transacting outside of the market, or having to engage in costly and risky strategic behavior. I'll draw on recent examples of market design ranging from labor markets for doctors and new economists, to kidney exchange, and school choice in New York City and Boston.
Making Decisions Based on the Preferences of Multiple Agents
Abstract

Cited by 28 (8 self)
People often have to reach a joint decision even though they have conflicting preferences over the alternatives. Examples range from the mundane—such as allocating chores among the members of a household—to the sublime—such as electing a government and thereby charting the course for a country. The joint decision can be reached by an informal negotiating process or by a carefully specified protocol. Philosophers, mathematicians, political scientists, economists, and others have studied the merits of various protocols for centuries. More recently, especially over the course of the past decade or so, computer scientists have also become deeply involved in this study. The perhaps surprising arrival of computer scientists on this scene is due to a variety of reasons, including the following. 1. Computer networks provide a new platform for communicating
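One concrete instance of the protocols the abstract alludes to is the Borda count, a classical positional voting rule; the sketch below uses invented ballots and is chosen only as a representative example, not as the paper's own method.

```python
# Borda count: a ballot ranking m alternatives gives m-1 points to its top
# choice, m-2 to the next, and so on; the highest total score wins.
# Ballots below are hypothetical.

def borda_winner(ballots):
    scores = {}
    for ranking in ballots:
        m = len(ranking)
        for position, alt in enumerate(ranking):
            scores[alt] = scores.get(alt, 0) + (m - 1 - position)
    # Ties broken alphabetically for determinism
    return max(sorted(scores), key=scores.get), scores

ballots = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
winner, scores = borda_winner(ballots)
print(winner, scores)  # a wins with 4 points (b: 3, c: 2)
```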
Mix and Match
, 2010
Abstract

Cited by 23 (8 self)
Consider a matching problem on a graph where disjoint sets of vertices are privately owned by self-interested agents. An edge between a pair of vertices indicates compatibility and allows the vertices to match. We seek a mechanism to maximize the number of matches despite self-interest, with agents that each want to maximize the number of their own vertices that match. Each agent can choose to hide some of its vertices, and then privately match the hidden vertices with any of its own vertices that go unmatched by the mechanism. A prominent application of this model is to kidney exchange, where agents correspond to hospitals and vertices to donor-patient pairs. Here hospitals may game an exchange by holding back pairs, harming social welfare. In this paper we seek to design mechanisms that are strategyproof, in the sense that agents cannot benefit from hiding vertices, and approximately maximize efficiency, i.e., produce a matching that is close in cardinality to the maximum cardinality matching. Our main result is the design and analysis of the eponymous Mix-and-Match mechanism; we show that this randomized mechanism is strategyproof and provides a 2-approximation. Lower bounds establish that the mechanism is near optimal.
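The efficiency benchmark in this setting is the maximum-cardinality matching. A brute-force helper for tiny instances makes the benchmark concrete; this is an exponential-time illustration only, and it does not implement the randomized Mix-and-Match mechanism itself. The graph is invented.

```python
# Brute-force maximum-cardinality matching: try edge subsets from largest
# to smallest and return the first pairwise-disjoint one. Exponential time;
# for tiny illustrative graphs only.

from itertools import combinations

def max_matching(vertices, edges):
    for k in range(len(vertices) // 2, 0, -1):
        for subset in combinations(edges, k):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):   # no vertex matched twice
                return set(subset)
    return set()

# Hypothetical compatibility graph: a path a-b-c-d.
vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d")]
matching = max_matching(vertices, edges)
print(len(matching))  # 2: the two end edges can match simultaneously
```

An agent hiding a vertex simply deletes it (and its edges) from the input the mechanism sees, which is what makes the strategyproofness question nontrivial.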
Individual Rationality and Participation in Large-Scale, Multi-Hospital Kidney Exchange
, 2011
Online Stochastic Optimization in the Large: Application to Kidney Exchange
Abstract

Cited by 20 (3 self)
Kidneys are the most prevalent organ transplants, but demand dwarfs supply. Kidney exchanges enable willing but incompatible donor-patient pairs to swap donors. These swaps can include cycles longer than two pairs, as well as chains triggered by altruistic donors. Current kidney exchanges address clearing (deciding who gets kidneys from whom) as an offline problem: they optimize the current batch. In reality, clearing is an online problem where patient-donor pairs and altruistic donors appear and expire over time. In this paper, we study trajectory-based online stochastic optimization algorithms (which use a recent scalable optimal offline solver as a subroutine) for this problem. We identify tradeoffs in these algorithms between different parameters. We also uncover the need to set the batch size that the algorithms treat as an atomic unit. We develop an experimental methodology for setting these parameters, and conduct experiments on real and generated data. We adapt the REGRETS algorithm of Bent and van Hentenryck for the setting. We then develop a better algorithm. We also show that the AMSAA algorithm of Mercier and van Hentenryck does not scale to the nationwide level. Our best online algorithm saves significantly more lives than the current practice of solving each batch separately.
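The trajectory-sampling idea can be sketched on a single decision: match a compatible pair now, or wait in the hope a better-matched donor arrives before the patient expires. Everything below (the probabilities, values, and the `decide` helper) is invented to illustrate the Monte Carlo structure, not the paper's algorithms.

```python
# Toy trajectory-based decision: score "wait" by averaging over sampled
# futures, then compare against the value of matching now. All parameters
# are hypothetical.

import random

def decide(match_now_value, n_trajectories, p_better, better_value, p_expire, rng):
    wait_total = 0.0
    for _ in range(n_trajectories):
        if rng.random() < p_expire:        # patient leaves before next period
            wait_total += 0.0
        elif rng.random() < p_better:      # a better-matched donor arrives
            wait_total += better_value
        else:
            wait_total += match_now_value  # fall back to today's option
    wait_value = wait_total / n_trajectories
    return "match now" if match_now_value >= wait_value else "wait"

rng = random.Random(0)
# With a high expiry risk, acting now beats waiting:
print(decide(1.0, 5000, p_better=0.2, better_value=1.5, p_expire=0.5, rng=rng))
```

The real problem samples whole arrival/expiry trajectories and re-solves an offline clearing problem on each one, which is why the batch size treated as atomic matters.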
Optimizing kidney exchange with transplant chains: Theory and reality
 In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS)
, 2012
Abstract

Cited by 19 (6 self)
Kidney exchange, where needy patients swap incompatible donors with each other, offers a lifesaving alternative to waiting for an organ from the deceased-donor waiting list. Recently, chains—sequences of transplants initiated by an altruistic kidney donor—have shown marked success in practice, yet remain poorly understood. We provide a theoretical analysis of the efficacy of chains in the most widely used kidney exchange model, proving that long chains do not help beyond chains of length 3 in the large. This completely contradicts our real-world results gathered from the budding nationwide kidney exchange in the United States; there, solution quality improves by increasing the chain length cap to 13 or beyond. We analyze reasons for this gulf between theory and practice, motivated by our experiences running the only nationwide kidney exchange. We augment the standard kidney exchange model to include a variety of real-world features. Experiments in the static setting support the theory and help determine how large is really “in the large”. Experiments in the dynamic setting cannot be conducted in the large due to computational limitations, but with up to 460 candidates, a chain cap of 4 was best (in fact, better than 5).
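The chain-cap object under study is easy to make concrete: in a directed compatibility graph, a chain is a simple path starting at the altruistic donor, and the cap bounds how many transplants it may contain. This hypothetical depth-first helper, on an invented graph, shows how a cap truncates an otherwise longer chain.

```python
# Longest chain from an altruistic donor under a length cap, by brute-force
# DFS. graph maps each vertex to the pairs its donor is compatible with.
# The instance is invented.

def best_chain(graph, altruist, cap):
    best = [altruist]

    def dfs(path):
        nonlocal best
        if len(path) > len(best):
            best = list(path)
        if len(path) - 1 == cap:          # chain length = number of transplants
            return
        for nxt in graph.get(path[-1], []):
            if nxt not in path:           # each pair transplants at most once
                dfs(path + [nxt])

    dfs([altruist])
    return best

graph = {"A": ["p1"], "p1": ["p2"], "p2": ["p3"], "p3": []}
print(best_chain(graph, "A", cap=2))   # ['A', 'p1', 'p2'] — the cap binds
print(best_chain(graph, "A", cap=13))  # ['A', 'p1', 'p2', 'p3']
```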
Dynamic Matching via Weighted Myopia with Application to Kidney Exchange
, 2012
Abstract

Cited by 16 (8 self)
In many dynamic matching applications—especially high-stakes ones—the competitive ratios of prior-free online algorithms are unacceptably poor. The algorithm should take distributional information about possible futures into account in deciding what action to take now. This is typically done by drawing sample trajectories of possible futures at each time period, but may require a prohibitively large number of trajectories or prohibitive memory and/or computation to decide what action to take. Instead, we propose to learn potentials of elements (e.g., vertices) of the current problem. Then, at run time, we simply run an offline matching algorithm at each time period, but subtract from the objective the potentials of the elements used up in the matching. We apply the approach to kidney exchange. Kidney exchanges enable willing but incompatible patient-donor pairs (vertices) to swap donors. These swaps typically include cycles longer than two pairs and chains triggered by altruistic donors. Fielded exchanges currently match myopically, maximizing the number of patients who get kidneys in an offline fashion at each time period. Myopic matching is suboptimal; the clearing problem is dynamic since patients, donors, and altruists appear and expire over time. We theoretically compare the power of using potentials on increasingly large elements: vertices, edges, cycles, and the entire graph (optimum). Then, experiments show that by learning vertex potentials, our algorithm matches more patients than the current practice of clearing myopically. It scales to exchanges orders of magnitude beyond those handled by the prior dynamic algorithm.
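The vertex-potential idea admits a very small sketch: run an offline matcher, but discount each matched vertex by a potential representing its estimated future value if left unmatched. The brute-force matcher, graph, and potential values below are all invented, and real exchanges optimize over cycles and chains rather than plain edges.

```python
# Offline matching that maximizes sum over matched vertices of
# (1 - potential[v]): a vertex with a high potential is worth "saving"
# for a future period. Brute force; illustrative data only.

from itertools import combinations

def match_with_potentials(edges, potential):
    def value(matching):
        return sum(1.0 - potential[v] for e in matching for v in e)
    best, best_val = (), 0.0
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)) and value(subset) > best_val:
                best, best_val = subset, value(subset)
    return set(best)   # empty if every feasible matching has negative value

edges = [("a", "b"), ("b", "c")]
# Vertex c is easy to match later, so it carries a high learned potential:
potential = {"a": 0.1, "b": 0.1, "c": 0.9}
print(match_with_potentials(edges, potential))  # {('a', 'b')}: c is saved
```

With zero potentials this reduces to myopic maximum matching, which is exactly the current fielded practice the paper improves on.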
Anarchy, stability, and utopia: Creating better matchings
, 2011
Abstract

Cited by 13 (5 self)
We consider the loss in social welfare caused by individual rationality in matching scenarios. We give both theoretical and experimental results comparing stable matchings with socially optimal ones, as well as studying the convergence of various natural algorithms to stable matchings. Our main goal is to design mechanisms in order to incentivize agents to participate in matchings that are socially desirable. We show that theoretically, the loss in social welfare caused by strategic behavior can be substantial. We analyze some natural distributions of utilities that agents receive from matchings, and find that in most cases the stable matching attains close to the optimal social welfare. Furthermore, for certain graph structures, simple greedy algorithms for partner-switching (some without convergence guarantees) converge to stability remarkably quickly in expectation. Even when stable matchings are significantly socially suboptimal, slight changes in incentives can provide good solutions. We derive conditions for the existence of approximately stable matchings that are also close to socially optimal, which demonstrates that adding small switching costs can make socially optimal matchings stable. We also show that introducing heterogeneity in tastes can greatly improve social outcomes.
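A generic form of the partner-switching dynamics studied here: start from an arbitrary matching and repeatedly let some blocking pair match, divorcing their current partners, until no blocking pair remains. This sketch is illustrative only (convergence is not guaranteed in general, as the abstract notes); the two-sided preferences are invented.

```python
# Blocking-pair dynamics in a tiny two-sided market. A pair (m, w) blocks
# when both prefer each other to their current partners (or are unmatched).
# Hypothetical instance; step cap guards against non-convergence.

def blocking_pairs(matching, men_prefs, women_prefs):
    wife = dict(matching)
    husband = {w: m for m, w in matching}
    pairs = []
    for m, prefs in men_prefs.items():
        for w in prefs:
            if w == wife.get(m):
                break                      # m only blocks with women above his wife
            wp = women_prefs[w]
            if husband.get(w) is None or wp.index(m) < wp.index(husband[w]):
                pairs.append((m, w))
    return pairs

def switch_until_stable(matching, men_prefs, women_prefs, max_steps=1000):
    matching = set(matching)
    for _ in range(max_steps):
        blocks = blocking_pairs(matching, men_prefs, women_prefs)
        if not blocks:
            return matching                # stable: no blocking pair remains
        m, w = blocks[0]
        matching = {(a, b) for a, b in matching if a != m and b != w}
        matching.add((m, w))               # the blocking pair elopes
    raise RuntimeError("did not converge within max_steps")

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
result = switch_until_stable({("m1", "w1"), ("m2", "w2")}, men, women)
print(sorted(result))  # [('m1', 'w2'), ('m2', 'w1')]
```

Measuring the total utility of the endpoint against the welfare-maximizing matching is exactly the price-of-anarchy comparison the paper makes.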