http://dx.doi.org/10.3745/JIPS.04.0064

Task Assignment Model for Crowdsourcing Software Development: TAM  

Tunio, Muhammad Zahid (School of Software Engineering, Beijing University of Posts and Telecommunications)
Luo, Haiyong (Institute of Computing Technology, Chinese Academy of Sciences, Beijing Key Laboratory of Mobile Computing and Pervasive Devices)
Wang, Cong (School of Software Engineering, Beijing University of Posts and Telecommunications)
Zhao, Fang (School of Software Engineering, Beijing University of Posts and Telecommunications)
Gilal, Abdul Rehman (Dept. of Computer Science, Sukkur IBA University)
Shao, Wenhua (School of Software Engineering, Beijing University of Posts and Telecommunications)
Publication Information
Journal of Information Processing Systems, vol. 14, no. 3, 2018, pp. 621-630
Abstract
Selecting a suitable task from the large set of available tasks is an intricate job for developers in crowdsourcing software development (CSD). Likewise, evaluating the thousands of submissions that developers produce is a tiring and time-consuming job for the platform. Previous studies have stated that managerial and technical aspects are of prime importance to the success of software development projects; however, these two aspects become more effective when combined with human aspects. The main purpose of this paper is to present a conceptual framework, based on personality types, for a task assignment model as a basis for future research. The framework provides a basic structure that helps CSD workers find suitable tasks and allows the platform to assign tasks directly, matching each worker's personality to the task, because personality is an internal force that shapes the behavior of developers. Accordingly, this research presents a Task Assignment Model (TAM) from the developer's point of view and also gives the platform the opportunity to assign tasks to CSD workers directly according to their personality types.
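The abstract describes TAM only at a conceptual level, but the core idea, matching a worker's personality type to a task before assignment, can be made concrete. The following is a minimal, purely hypothetical sketch and not the authors' model: it assumes an MBTI-style personality code per worker and an illustrative lookup from personality types to preferred task categories. All names (PERSONALITY_TASK_PREFERENCES, recommend_tasks, Task, Worker) and the mapping itself are assumptions introduced here for illustration.

```python
# Hypothetical sketch: match crowdsourcing workers to tasks by personality type.
# The MBTI-to-category preferences below are illustrative assumptions, not the
# mapping proposed in the TAM paper.

from dataclasses import dataclass

# Assumed mapping from MBTI personality types to preferred task categories.
PERSONALITY_TASK_PREFERENCES = {
    "ISTJ": ["testing", "maintenance"],
    "ENTP": ["design", "requirements"],
    "INTJ": ["architecture", "design"],
    "ESFJ": ["documentation", "support"],
}

@dataclass
class Task:
    task_id: int
    category: str   # e.g., "design", "testing"
    reward: float

@dataclass
class Worker:
    worker_id: int
    personality: str  # MBTI code, e.g., "INTJ"

def recommend_tasks(worker: Worker, tasks: list[Task]) -> list[Task]:
    """Return tasks whose category matches the worker's assumed
    personality preferences, highest reward first."""
    preferred = PERSONALITY_TASK_PREFERENCES.get(worker.personality, [])
    matching = [t for t in tasks if t.category in preferred]
    return sorted(matching, key=lambda t: t.reward, reverse=True)

if __name__ == "__main__":
    open_tasks = [Task(1, "design", 500.0), Task(2, "testing", 300.0), Task(3, "design", 800.0)]
    dev = Worker(worker_id=42, personality="INTJ")
    for task in recommend_tasks(dev, open_tasks):
        print(f"Worker {dev.worker_id} -> task {task.task_id} ({task.category}, ${task.reward:.0f})")
```

A real platform would replace the static lookup with whatever personality-to-task evidence the framework accumulates, and would combine the personality match with skill, availability, and reward constraints before making a direct assignment.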
Keywords
Crowdsourced; Human Factor; Personality Type; Software Development; Task Assignment
Citations & Related Records
Times Cited By KSCI : 1  (Citation Analysis)
연도 인용수 순위