Mechanism design: from partial to probabilistic verification

Bibliographic Details
Main Authors: Caragiannis, Ioannis, Szegedy, Mario, Yu, Lan, Elkind, Edith
Other Authors: School of Physical and Mathematical Sciences
Format: Conference or Workshop Item
Language: English
Published: 2013
Online Access:https://hdl.handle.net/10356/98788
http://hdl.handle.net/10220/12630
Institution: Nanyang Technological University
Description
Summary: Algorithmic mechanism design is concerned with designing algorithms for settings where inputs are controlled by selfish agents, and the center needs to motivate the agents to report their true values. In this paper, we study scenarios where the center may be able to verify whether the agents report their preferences (types) truthfully. We first consider the standard model of mechanism design with partial verification, where the set of types that an agent can report is a function of his true type. We explore inherent limitations of this model; in particular, we show that the famous Gibbard-Satterthwaite impossibility result holds even if a manipulator can only lie by swapping two adjacent alternatives in his vote. Motivated by these negative results, we then introduce a richer model of verification, which we term mechanism design with probabilistic verification. In our model, an agent may report any type, but will be caught with some probability that may depend on his true type, the reported type, or both; if an agent is caught lying, he will not get his payment and may be fined. We characterize the class of social choice functions that can be truthfully implemented in this model. We then proceed to study the complexity of finding an optimal individually rational implementation, i.e., one that minimizes the center's expected payment while guaranteeing non-negative utility to the agent, both for truthful and for non-truthful implementation. Our hardness result for non-truthful implementation answers an open question recently posed by Auletta et al. [2011].
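
To make the probabilistic-verification setting concrete, below is a minimal, purely illustrative sketch that is not taken from the paper. It assumes a hypothetical single agent with two types, made-up valuations, payments, catch probabilities, and a fine, and it assumes the chosen outcome is implemented even when the agent is caught; the paper's exact model may differ. The sketch computes the agent's expected utility for each (true type, reported type) pair and checks whether truth-telling is an expected-utility maximizer.

```python
# Illustrative sketch only: one plausible formalization of probabilistic
# verification, not necessarily the exact model used in the paper.
# All numbers (types, valuations, payments, catch probabilities, fine)
# are hypothetical and chosen for illustration.

from itertools import product

types = ["low", "high"]                      # possible agent types
outcome = {"low": "A", "high": "B"}          # social choice function f
payment = {"low": 1.0, "high": 2.0}          # payment for each reported type
value = {                                    # agent's value for each outcome
    ("low", "A"): 3.0, ("low", "B"): 1.0,
    ("high", "A"): 1.0, ("high", "B"): 4.0,
}
# Probability of being caught when the true type is t and the report is r
# (truthful reports are never flagged in this toy example)
catch_prob = {(t, r): 0.0 if t == r else 0.6 for t, r in product(types, types)}
FINE = 2.0                                   # fine charged if caught lying


def expected_utility(true_type, reported_type):
    """Expected utility of reporting `reported_type` with true type `true_type`:
    with probability 1 - q the report is accepted and the agent receives the
    payment; with probability q the agent is caught, forfeits the payment and
    pays the fine. The outcome f(reported_type) is assumed to be implemented
    either way (an assumption of this sketch)."""
    q = catch_prob[(true_type, reported_type)]
    v = value[(true_type, outcome[reported_type])]
    return v + (1 - q) * payment[reported_type] - q * FINE


def is_truthful():
    """Incentive compatibility in expectation: truth-telling maximizes
    expected utility for every true type."""
    return all(
        expected_utility(t, t) >= expected_utility(t, r)
        for t in types for r in types
    )


if __name__ == "__main__":
    for t in types:
        for r in types:
            print(f"true={t:4s} report={r:4s} EU={expected_utility(t, r):.2f}")
    print("truthful:", is_truthful())
```

With these example numbers, lying lowers expected utility for both types, so the sketch reports a truthful implementation; raising the payments or lowering the catch probabilities can break that, which is the trade-off the paper's optimization question is about.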