Computing Reviews
Rehumanized crowdsourcing: a labeling framework addressing bias and ethics in machine learning
Barbosa N., Chen M. CHI 2019 (Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, May 4-9, 2019) 1-12. 2019. Type: Proceedings
Date Reviewed: Jun 1 2021

Crowdsourcing is the practice of obtaining information or input on a task from a large number of people, paid or unpaid, typically via the Internet. With its rapid growth, crowdsourcing has produced large volumes of data manually labeled by human crowds. By processing this data with various machine learning algorithms, people expect to derive meaningful information that meets their objectives. The authors refer to the “dehumanization effects” of crowdsourcing because both data collection and processing are carried out by machines.

Due to the open nature of crowdsourcing, the data collected is prone to biases arising from various human factors, such as age, country of residence, culture, ethics, gender, and knowledge level. Data with human bias may affect, positively or negatively, the quality of the information derived from it. After presenting strong evidence of skewed results caused by biased labels, the authors propose a labeling framework that takes human factors into consideration to improve the efficacy of crowdsourcing. The key idea behind the framework is that different tasks have different preferences with respect to human factors; therefore, a requester should transparently specify the relevant settings before launching a task. Making decisions about the tradeoffs in such specifications is a kind of rehumanization.
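To make the idea concrete, a requester-side task specification might look like the following minimal Python sketch. All names here (TaskSpec, allows, the particular fields) are hypothetical illustrations of the concept, not the authors' actual API:

```python
# Hypothetical sketch of a "rehumanized" task specification: the requester
# declares human-factor preferences and pay transparently before launch.
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    name: str
    # Human-factor targets, e.g. {"country": {"US", "UK"}}; empty = no constraint.
    target_demographics: dict = field(default_factory=dict)
    min_hourly_pay: float = 7.25       # appropriate pay, declared up front
    judgments_per_item: int = 3        # redundancy to dampen individual label bias

    def allows(self, worker_profile: dict) -> bool:
        """Check whether a worker's self-reported factors match the spec."""
        return all(worker_profile.get(factor) in accepted
                   for factor, accepted in self.target_demographics.items())

spec = TaskSpec(name="image-labeling",
                target_demographics={"country": {"US", "UK"}})
print(spec.allows({"country": "UK"}))  # True
print(spec.allows({"country": "BR"}))  # False
```

Because the spec is explicit data rather than an implicit platform default, the tradeoffs it encodes (who labels, at what pay) are visible and auditable.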

Furthermore, because the framework is transparent, requesters are made aware of any potential issues it introduces and can mitigate biases at any point after a task is launched. Deploying the framework, implemented in Python, on a popular crowdsourcing platform, the authors report “experiments with 1,919 workers collecting 160,345 human judgments.” The authors explain:

By routing microtasks to workers based on demographics and appropriate pay, our framework mitigates biases in the contributor sample and increases the hourly pay given to contributors.
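One way such demographic routing could work is quota-based: accept a worker's judgments only while their group is still underrepresented in the sample. The sketch below is an illustrative assumption about the mechanism, not the authors' code:

```python
# Illustrative quota-based router: microtasks are accepted from a worker
# only while that worker's demographic group has remaining quota, which
# balances the contributor sample across groups.
from collections import Counter

class QuotaRouter:
    def __init__(self, quotas: dict):
        self.quotas = quotas          # e.g. {"US": 50, "UK": 50}
        self.counts = Counter()       # judgments accepted per group so far

    def route(self, group: str) -> bool:
        """Accept a judgment if the worker's group quota is not yet full."""
        if self.counts[group] < self.quotas.get(group, 0):
            self.counts[group] += 1
            return True
        return False

router = QuotaRouter({"US": 1, "UK": 1})
print(router.route("US"))  # True  (first US worker accepted)
print(router.route("US"))  # False (US quota already filled)
print(router.route("UK"))  # True  (UK quota still open)
```

Combining such routing with the declared minimum pay is what lets the framework both balance the sample and raise contributors' effective hourly pay.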

The quality of crowdsourcing work depends on the quality of the labels collected. While popular mobile computing devices and broadband networks make it easy to collect input from the public, controlling data quality remains a challenge. This paper provides a practical approach to managing human factors in crowdsourcing, with convincing results. Researchers and practitioners working in socially aware computing and machine learning should benefit from reading it.

Reviewer: Chenyi Hu | Review #: CR147277 (2111-0271)
Human Factors (H.1.2 ... )
Other reviews under "Human Factors":
 A survey on end-edge-cloud orchestrated network computing paradigms: transparent computing, mobile edge computing, fog computing, and cloudlet
Ren J., Zhang D., He S., Zhang Y., Li T. ACM Computing Surveys 52(6): 1-36, 2019. Type: Article
Jun 17 2022
Website visual design qualities: a threefold framework
Hartono E., Holsapple C. ACM Transactions on Management Information Systems 10(1): 1-21, 2019. Type: Article
May 19 2022
Widar2.0: passive human tracking with a single Wi-Fi link
Qian K., Wu C., Zhang Y., Zhang G., Yang Z., Liu Y. MobiSys 2018 (Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services, Munich, Germany, Jun 10-15, 2018) 350-361, 2018. Type: Proceedings
Jun 22 2021

Reproduction in whole or in part without permission is prohibited. Copyright © 2000-2022 ThinkLoud, Inc.