PlumX Metrics

Mechanisms for making crowds truthful

Journal of Artificial Intelligence Research, ISSN 1076-9757, Vol. 34 (2009), pp. 209-253
  • Citations: 92
  • Usage: 0
  • Captures: 63
  • Mentions: 1
  • Social Media: 0


Most Recent News

Algorithmic Contract Design for Crowdsourced Ranking: Conclusions, Future Directions, and References

This paper is available on arXiv under a CC 4.0 license. Authors: (1) Kiriaki Frangias; (2) Andrew Lin; (3) Ellen Vitercik; (4) Manolis Zampetakis.

Article Description

We consider schemes for obtaining truthful reports on a common but hidden signal from large groups of rational, self-interested agents. One example is online feedback mechanisms, where users provide observations about the quality of a product or service so that other users can have an accurate idea of what quality they can expect. However, (i) providing such feedback is costly, and (ii) there are many motivations for providing incorrect feedback. Both problems can be addressed by reward schemes which (i) cover the cost of obtaining and reporting feedback, and (ii) maximize the expected reward of a rational agent who reports truthfully. We address the design of such incentive-compatible rewards for feedback generated in environments with pure adverse selection. Here, the correlation between the true knowledge of an agent and her beliefs regarding the likelihoods of reports of other agents can be exploited to make honest reporting a Nash equilibrium. In this paper we extend existing methods for designing incentive-compatible rewards by also considering collusion. We analyze different scenarios, where, for example, some or all of the agents collude. For each scenario we investigate whether a collusion-resistant, incentive-compatible reward scheme exists, and use automated mechanism design to specify an algorithm for deriving an efficient reward mechanism. ©2009 AI Access Foundation. All rights reserved.
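The core idea in the abstract, that the correlation between an agent's own signal and her posterior beliefs about a peer's report can make honest reporting a Nash equilibrium, can be illustrated with a toy peer-prediction check. The two-state world, the probability numbers, and the log-scoring payment below are all illustrative assumptions for this sketch, not the reward mechanism derived in the paper.

```python
import math

# Toy setting: two hidden world states with a common prior, binary signals.
# cond[w][s] = P(signal = s | state = w); signals are conditionally
# independent across agents given the state.
prior = [0.5, 0.5]
cond = [[0.8, 0.2],   # signal distribution in state 0
        [0.3, 0.7]]   # signal distribution in state 1

def posterior_peer(my_signal):
    """P(peer's signal = t | my signal), obtained by Bayes' rule on the state."""
    joint = [prior[w] * cond[w][my_signal] for w in range(2)]
    z = sum(joint)
    p_state = [j / z for j in joint]
    return [sum(p_state[w] * cond[w][t] for w in range(2)) for t in range(2)]

def reward(report, peer_report):
    """Payment: log score of the peer's report under the posterior implied
    by the agent's own report (a strictly proper scoring rule)."""
    return math.log(posterior_peer(report)[peer_report])

def expected_reward(signal, report):
    """Expected payment of submitting `report` after observing `signal`,
    assuming the peer reports her signal truthfully."""
    p = posterior_peer(signal)
    return sum(p[t] * reward(report, t) for t in range(2))

# Truthful reporting is a Nash equilibrium: for each observed signal,
# reporting it honestly strictly beats misreporting.
for s in (0, 1):
    assert expected_reward(s, s) > expected_reward(s, 1 - s)
```

Because the log scoring rule is strictly proper, the gain from truth-telling equals the KL divergence between the two posteriors, which is positive whenever the signals are stochastically relevant (the posteriors differ). A scheme of this kind is not automatically collusion-resistant, which is precisely the gap the paper's automated mechanism design addresses.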
