CMSA Colloquium: Strategyproofness-Exposing Mechanism Descriptions

September 14, 2022 12:00 pm - 1:00 pm
CMSA, 20 Garden St, G10
Speaker: Yannai Gonczarowski (Harvard)

One of the crowning achievements of the field of Mechanism Design has been the design and usage of the so-called "Deferred Acceptance" matching algorithm. Designed in 1962 and recognized with a Nobel Prize in 2012, this algorithm has been used around the world in settings ranging from matching students to schools to matching medical doctors to residencies. A hallmark of this algorithm is that, unlike many other matching algorithms, it is "strategyproof": participants can never gain by misreporting their preferences (say, over schools) to the algorithm. Alas, this property is far from apparent from the algorithm's description. Its mathematical proof is so delicate and complex that (for example) school districts in which the algorithm is implemented do not even attempt to explain to students and parents why the property holds, but rather resort to an appeal to authority: Nobel laureates have proven it, so one should listen to them. Unsurprisingly, perhaps, there is a growing body of evidence that participants in Deferred Acceptance attempt (unsuccessfully) to "game" it, which results in a suboptimal match for themselves and for others.
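
For readers unfamiliar with the algorithm itself, the following is a minimal Python sketch of student-proposing Deferred Acceptance. The school-choice framing, the variable names, and the assumption that every school ranks every student are illustrative choices, not details taken from the talk.

def deferred_acceptance(student_prefs, school_prefs, capacities):
    """student_prefs: student -> list of schools, most preferred first.
       school_prefs:  school  -> list of students, most preferred first (every student ranked).
       capacities:    school  -> number of seats."""
    next_choice = {s: 0 for s in student_prefs}   # next list position each student will propose to
    held = {c: [] for c in school_prefs}          # students each school tentatively holds
    free = list(student_prefs)                    # students not currently held by any school

    while free:
        student = free.pop()
        prefs = student_prefs[student]
        if next_choice[student] >= len(prefs):
            continue                              # list exhausted: student stays unmatched
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        held[school].append(student)
        # The school keeps its favorite held students up to capacity and rejects the rest.
        rank = {st: i for i, st in enumerate(school_prefs[school])}
        held[school].sort(key=lambda st: rank[st])
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())       # least-preferred held student is rejected
    return held

# A toy instance: two schools with one seat each, three students.
print(deferred_acceptance(
    student_prefs={"a": ["X", "Y"], "b": ["X", "Y"], "c": ["Y", "X"]},
    school_prefs={"X": ["b", "a", "c"], "Y": ["a", "c", "b"]},
    capacities={"X": 1, "Y": 1},
))  # {'X': ['b'], 'Y': ['a']} -- student "c" ends up unmatched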

By developing a novel framework of algorithm-description simplicity, grounded in the intersection of Economics and Computer Science, we present a starkly different, yet equivalent, description of the Deferred Acceptance algorithm which, in a precise sense, makes its strategyproofness far more apparent. Our description does have a downside, though: some of the algorithm's other fundamental properties (for instance, that no school exceeds its capacity) are far less apparent from it than from traditional descriptions. Using the theoretical framework that we develop, we mathematically address whether, and to what extent, this downside is unavoidable, providing a possible explanation for why our description of the algorithm has eluded discovery for over half a century. Indeed, it seems that all traditional descriptions of the algorithm were designed with the implicit premise that properties such as no capacity being exceeded should be apparent. Our description instead emphasizes the property that participants must grasp in order to interact correctly with the algorithm, at the expense of properties that are mostly of interest to policy makers, and thus has the potential to vastly improve access to opportunity for many populations. Our theory provides a principled way of recasting algorithm descriptions so that certain properties of interest become easier to explain and grasp, which we also support with behavioral experiments in the lab.
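
As a rough illustration of what such a recasting can look like in general (not necessarily the description presented in the talk), the outcome for any single student can be phrased as "your favorite school on a menu computed without ever reading your preference list." The brute-force menu computation below is an assumed, purely illustrative construction that reuses the deferred_acceptance sketch above; the fact that it reproduces the student's Deferred Acceptance assignment is exactly the algorithm's strategyproofness, and in this form that property is apparent, since the student's report only selects from a report-independent menu.

def menu(student, other_prefs, school_prefs, capacities):
    """Schools the student could obtain, computed from the other students' reports only."""
    options = set()
    for school in school_prefs:
        trial = {**other_prefs, student: [school]}   # the student lists just this one school
        outcome = deferred_acceptance(trial, school_prefs, capacities)
        if student in outcome[school]:
            options.add(school)
    return options

def menu_style_assignment(student, student_prefs, school_prefs, capacities):
    other_prefs = {s: p for s, p in student_prefs.items() if s != student}
    options = menu(student, other_prefs, school_prefs, capacities)
    for school in student_prefs[student]:    # the student's own report is consulted only here
        if school in options:
            return school                    # favorite school on the menu
    return None                              # nothing on the menu is acceptable: unmatched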

Joint work with Ori Heffetz and Clayton Thomas.