Know when to run : recommendations in crowdsourcing contests

Bibliographic Details
Main Authors: Mo, Jiahui; Sarkar, Sumit; Menon, Syam
Other Authors: Nanyang Business School
Format: Article
Language: English
Published: 2018
Subjects: Performance; Competition; DRNTU::Business::General
Online Access: https://hdl.handle.net/10356/89091
http://hdl.handle.net/10220/46105
Institution: Nanyang Technological University
Version: Published version
Citation: Mo, J., Sarkar, S., & Menon, S. (2018). Know When to Run: Recommendations in Crowdsourcing Contests. MIS Quarterly, 42(3), 919-944. doi:10.25300/MISQ/2018/14103
ISSN: 0276-7783
DOI: 10.25300/MISQ/2018/14103
Rights: © 2018 Management Information Systems Research Center. This paper was published in MIS Quarterly and is made available as an electronic reprint (preprint) with permission of Management Information Systems Research Center. The published version is available at http://dx.doi.org/10.25300/MISQ/2018/14103. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper is prohibited and is subject to penalties under law.
Extent: 26 p. (application/pdf)
Description: Crowdsourcing contests have emerged as an innovative way for firms to solve business problems by acquiring ideas from participants external to the firm. As the number of participants on crowdsourcing contest platforms has increased, so has the number of tasks that are open at any time. This has made it difficult for solvers to identify tasks in which to participate. We present a framework to recommend tasks to solvers who wish to participate in crowdsourcing contests. The existence of competition among solvers is an important and unique aspect of this environment, and our framework considers the competition a solver would face in each open task. As winning a task depends on performance, we identify a theory of performance and reinforce it with theories from learning, motivation, and tournaments. This augmented theory of performance guides us to variables specific to crowdsourcing contests that could impact a solver's winning probability. We use these variables as input into various probability prediction models adapted to our context, and make recommendations based on the probability or the expected payoff of the solver winning an open task. We validate our framework using data from a real crowdsourcing platform. The recommender system is shown to have the potential to improve the success rates of solvers across all abilities. Recommendations have to be made for open tasks, and we find that the relative rankings of tasks at similar stages of their timelines remain remarkably consistent when the tasks close. Further, we show that deploying such a system should benefit not only the solvers, but also the seekers and the platform itself.
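The final step the abstract describes, ranking open tasks for a solver by either predicted winning probability or expected payoff, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the task names, prize amounts, and winning probabilities are hypothetical, and in the paper the probabilities would come from the prediction models built on the performance-theory variables.

```python
from dataclasses import dataclass

@dataclass
class OpenTask:
    name: str      # hypothetical task identifier
    prize: float   # award paid to the winning solver
    p_win: float   # this solver's predicted probability of winning the task
                   # (in the paper, the output of a probability prediction model)

def recommend(tasks, by="expected_payoff", top_n=3):
    """Rank open tasks for one solver by winning probability or expected payoff."""
    if by == "probability":
        key = lambda t: t.p_win
    else:  # expected payoff = P(win) * prize
        key = lambda t: t.p_win * t.prize
    return sorted(tasks, key=key, reverse=True)[:top_n]

# Hypothetical open tasks a solver might consider:
tasks = [
    OpenTask("logo-design", prize=300.0, p_win=0.10),  # expected payoff 30
    OpenTask("slogan",      prize=100.0, p_win=0.40),  # expected payoff 40
    OpenTask("web-banner",  prize=500.0, p_win=0.09),  # expected payoff 45
]

by_probability = recommend(tasks, by="probability")
by_payoff = recommend(tasks, by="expected_payoff")
```

Note that the two criteria can disagree: here the "slogan" task has the highest winning probability, while the "web-banner" task, with a lower chance of winning but a larger prize, has the highest expected payoff. This is why the framework supports both ranking modes.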