Performance Measurement towards Crowd-Workers Reliability in Crowd Computing using Bayesian Probability Model

Authors

  • Masnida Hussin
  • Nur Aliya Rozlan

DOI:

https://doi.org/10.14419/ijet.v7i4.19.23184

Published:

2018-11-27

Keywords:

Crowd computing, Worker reliability, Bayesian probability model, Crowdsourcing

Abstract

Crowd computing has become increasingly popular among Internet users. It is adopted as a distributed processing mechanism for solving computing problems with fewer policy constraints and better business value; among its advantages are significant savings in time and cost. In crowd computing, users' tasks are completed by crowd workers, and these workers differ in the skill, knowledge, and style they bring to the tasks allocated to them. Satisfaction with completed tasks is challenging to measure in crowdsourcing because of its dynamic environment, and the difficulty grows once the behaviour and commitment of crowd workers in providing services come into question: can they be trusted or not? In this work, we propose a Bayesian probability model for assessing worker reliability in crowd computing. Specifically, we formulate a trust factor using the Bayesian model to indicate the reliability of the workers available on a crowdsourcing platform. The processes of predicting and hypothesising workers' commitment are identified and related to the crowd-sourced computing system. We design significant behavioural factors for measuring worker performance against user satisfaction, and we then develop an automated measurement system to verify the Bayesian formulation of worker performance. This web-based system identifies workers' reliability values from the input and responses of users. We expect the Bayesian probability model to provide guidelines for designing trustworthy crowdsourcing platforms.
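The exact formulation appears in the full paper rather than on this page, but a standard Bayesian treatment of worker reliability is a Beta-Bernoulli update: each user response (task satisfactory or not) updates a Beta posterior over a worker's probability of delivering a satisfactory result, and the posterior mean serves as the trust factor. The Python sketch below illustrates that idea only; the class name WorkerReliability, the uniform Beta(1, 1) prior, and the boolean satisfaction signal are our assumptions, not the authors' implementation.

    # Illustrative sketch of a Beta-Bernoulli worker-reliability estimate.
    # Names, prior, and feedback signal are assumptions, not the paper's code.
    from dataclasses import dataclass

    @dataclass
    class WorkerReliability:
        alpha: float = 1.0  # pseudo-count of satisfactory outcomes (Beta prior)
        beta: float = 1.0   # pseudo-count of unsatisfactory outcomes

        def update(self, satisfied: bool) -> None:
            # Bayesian update: each user response shifts the Beta posterior.
            if satisfied:
                self.alpha += 1.0
            else:
                self.beta += 1.0

        @property
        def reliability(self) -> float:
            # Posterior mean: estimated probability of a satisfactory task.
            return self.alpha / (self.alpha + self.beta)

    # Example: 8 satisfactory and 2 unsatisfactory user responses.
    worker = WorkerReliability()
    for outcome in [True] * 8 + [False] * 2:
        worker.update(outcome)
    print(f"Estimated reliability: {worker.reliability:.2f}")  # 0.75 under a uniform prior

Under a uniform prior, a new worker starts at an estimated reliability of 0.5 and the value converges toward the observed satisfaction rate as feedback accumulates; a platform could instead seed the prior from qualification tests.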


How to Cite

Hussin, M., & Aliya Rozlan, N. (2018). Performance Measurement towards Crowd-Workers Reliability in Crowd Computing using Bayesian Probability Model. International Journal of Engineering & Technology, 7(4.19), 447–453. https://doi.org/10.14419/ijet.v7i4.19.23184
Received 2018-12-05
Accepted 2018-12-05
Published 2018-11-27