TY - CHAP
T1 - Evaluation in the crowd
T2 - an introduction
AU - Archambault, Daniel
AU - Purchase, Helen C.
AU - Hoßfeld, Tobias
N1 - Publisher Copyright:
© 2017, Springer International Publishing AG.
PY - 2017
Y1 - 2017
AB - Human-centred empirical evaluations play important roles in the fields of human-computer interaction, visualisation, and graphics. The advent of crowdsourcing platforms, such as Amazon Mechanical Turk, has provided a revolutionary methodology for conducting human-centred experiments. Through such platforms, experiments can now collect data from hundreds, even thousands, of participants drawn from a diverse user community in a matter of weeks, greatly increasing the ease with which we can collect data as well as the power and generalisability of experimental results. However, such platforms do not come without problems: ensuring participant investment in the task, defining experimental controls, and understanding the ethics of deploying such experiments en masse. This book is intended as a primer for computer science researchers who wish to use crowdsourcing technology for human-centred experiments. It focuses on methodological considerations when running such experiments on crowdsourcing platforms, particularly in the areas of visualisation and of quality of experience (QoE) for online video delivery.
UR - http://www.scopus.com/inward/record.url?scp=85031496379&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-66435-4_1
DO - 10.1007/978-3-319-66435-4_1
M3 - Chapter (Book)
AN - SCOPUS:85031496379
SN - 9783319664347
T3 - Lecture Notes in Computer Science
SP - 1
EP - 5
BT - Evaluation in the Crowd
A2 - Archambault, Daniel
A2 - Purchase, Helen C.
A2 - Hoßfeld, Tobias
PB - Springer
CY - Cham, Switzerland
ER -