Context-aware personalized crowdtesting task recommendation

Junjie Wang, Ye Yang, Song Wang, Chunyang Chen, Dandan Wang, Qing Wang

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Crowdsourced software testing (crowdtesting for short) is a special type of crowdsourcing: it requires that crowdworkers master appropriate skill sets and commit significant effort to complete a task. Abundant uncertainty may arise during a crowdtesting process due to imperfect information between the task requester and crowdworkers. For example, in the crowdtesting context a worker frequently chooses tasks in an ad hoc manner, and an inappropriate task selection may lead to the worker failing to detect any bugs, leaving significant testing effort unpaid and wasted. Recent studies have explored methods for supporting task requesters in making informed decisions on task pricing, worker recommendation, and so on. Unfortunately, very few studies offer decision-making support from the crowdworkers' perspective. We motivate this study through a pilot study, which reveals that a large portion (74%) of crowdworkers' effort goes unpaid due to inappropriate task choices. Drawing on our previous work on context-aware crowdworker recommendation, we advocate that a more effective alternative to manual task selection would be contextualized and personalized task recommendation that considers the diverse distribution of worker preference and expertise, with the objectives of increasing workers' winning chances and potentially reducing the frequency of unpaid crowd work. This paper proposes a context-aware personalized task recommendation approach, PTRec, consisting of a testing context model and a learning-based task recommendation model to aid workers' dynamic decisions in selecting crowdtesting tasks. The testing context model is constructed from two perspectives, i.e., process context and resource context, to capture in-process progress-oriented information and crowdworkers' characteristics, respectively.
Built on top of this context model, the learning-based task recommendation model automatically extracts 60 features and employs a random forest learner to generate dynamic, personalized task recommendations that match workers' expertise and interests. The evaluation is conducted on 636 crowdtesting tasks involving 2,404 crowdworkers from one of the largest crowdtesting platforms, and the results show the approach's potential to recommend proper tasks to workers so as to improve bug detection efficiency and increase their monetary earnings.
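As a rough illustration of the recommendation idea described above, the following sketch trains a random forest on (worker, task) feature vectors and ranks open tasks for a worker by predicted bug-detection probability. This is a minimal sketch assuming scikit-learn; the synthetic feature matrix, labels, and the `recommend_tasks` helper are placeholders, not the paper's actual 60-feature context model or its implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic (worker, task) feature vectors standing in for the 60 context
# features (process context + resource context); each label marks whether
# the worker detected a bug in that task (hypothetical training signal).
X_train = rng.random((500, 60))
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def recommend_tasks(task_features, task_ids, top_k=3):
    """Rank a worker's open tasks by predicted bug-detection probability."""
    probs = model.predict_proba(task_features)[:, 1]  # P(bug detected)
    order = np.argsort(probs)[::-1]                   # highest probability first
    return [task_ids[i] for i in order[:top_k]]

# Score ten currently open tasks for one worker and keep the top three.
open_tasks = rng.random((10, 60))
top = recommend_tasks(open_tasks, [f"task-{i}" for i in range(10)])
print(top)
```

Ranking by the positive-class probability rather than the hard 0/1 prediction is the usual choice here, since it yields a full ordering of tasks even when the classifier labels several of them the same way.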

Original language: English
Number of pages: 13
Journal: IEEE Transactions on Software Engineering
Publication status: Accepted/In press - 18 May 2021

Keywords

  • Computer bugs
  • Context modeling
  • Crowdsourced testing
  • Crowdsourcing
  • Feature extraction
  • Task analysis
  • Task recommendation
  • Testing
  • Testing context model
  • Videos
