Implementation of AIoT Edge Cluster System via Distributed Deep Learning Pipeline

  • Jeon, Sung-Ho (Dept. of Applied IT and Engineering, Pusan National University) ;
  • Lee, Cheol-Gyu (Dept. of Applied IT and Engineering, Pusan National University) ;
  • Lee, Jae-Deok (Dept. of Applied IT and Engineering, Pusan National University) ;
  • Kim, Bo-Seok (Dept. of Applied IT and Engineering, Pusan National University) ;
  • Kim, Joo-Man (Dept. of Applied IT and Engineering, Pusan National University)
  • Received : 2021.12.06
  • Accepted : 2021.12.09
  • Published : 2021.12.31

Abstract

Most recent IoT systems are cloud-based: continuous, large-scale data collected from sensor nodes is forwarded through the cloud and processed on a data server. In such a centralized, large-scale cloud configuration, however, there is a growing need to perform computation close to the physical location where data is collected, and edge computing is increasingly adopted to reduce the network load on the cloud system. In this paper, we construct a cluster of six inexpensive Raspberry Pi boards for fast data processing and propose a "Kubernetes cluster system (KCS)" that handles large-scale data collection and analysis through model distribution and a data-pipeline approach. To evaluate the proposed system, we also built a deep learning ensemble model and compared the two in terms of accuracy, processing performance, and processing time. The ensemble model achieved higher accuracy, while the KCS implemented as a data pipeline proved superior in processing speed.
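The abstract describes distributing a deep learning model across Kubernetes pods on Raspberry Pi nodes and chaining them as a data pipeline. Below is a minimal, illustrative sketch of one such pipeline stage in Python; the HTTP forwarding scheme, the NEXT_STAGE_URL environment variable, the port, and the run_partition placeholder are assumptions made for illustration, not the authors' actual KCS implementation.

```python
# Illustrative sketch of one stage in a model-partitioned data pipeline,
# in the spirit of the KCS described in the abstract (model distribution
# across Kubernetes pods on Raspberry Pi nodes). Stage wiring, port, and
# the NEXT_STAGE_URL variable are assumptions, not the paper's code.
import json
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Address of the next pipeline stage; assumed to be injected via the pod spec.
NEXT_STAGE_URL = os.environ.get("NEXT_STAGE_URL")


def run_partition(features):
    """Placeholder for this node's slice of the model (e.g. a few CNN layers)."""
    return [x * 0.5 for x in features]  # dummy computation


class StageHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        result = run_partition(features)

        if NEXT_STAGE_URL:
            # Intermediate stage: forward intermediate results downstream.
            req = urllib.request.Request(
                NEXT_STAGE_URL,
                data=json.dumps({"features": result}).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                reply = resp.read()
        else:
            # Final stage: return the prediction to the caller.
            reply = json.dumps({"prediction": result}).encode()

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StageHandler).serve_forever()
```

Under this sketch, each stage could be packaged as its own container image and run as a Kubernetes Deployment with a Service on a Raspberry Pi node, with the downstream stage's Service address injected through NEXT_STAGE_URL in the pod spec.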

Acknowledgement

This work was supported by a 2-Year Research Grant of Pusan National University.
