Flip Side of Artificial Intelligence Technologies: New Labor-Intensive Industry of the 21st Century

Digital Light Industry in the Era of the Fourth Industrial Revolution

  • Heo, Seokjae (Remodeling Research Center, Dankook University) ;
  • Na, Seunguk (Department of Architectural Engineering, Dankook University) ;
  • Han, Sehee (Department of Architectural Engineering, Dankook University) ;
  • Shin, Yoonsoo (Department of Architectural Engineering, Dankook University) ;
  • Lee, Sanghyun (Department of Architectural Engineering, Dankook University)
  • Received : 2021.08.17
  • Accepted : 2021.09.24
  • Published : 2021.10.31

Abstract

This paper recognizes that the research and development (R&D) of artificial intelligence (AI) requires a large amount of human resources, and discusses the factors to consider in the current development approach. In conclusion, improving the efficiency of AI development appears feasible through a division of labour between a small number of managers and a large number of ordinary workers, in a form resembling light industry. The research team therefore names this AI development process, which maximizes production efficiency by handling the digital resource called 'data' with the mechanical equipment called the 'computer', the digital light industry of the Fourth Industrial Revolution. As experienced in previous industrial revolutions, if human resources are efficiently allocated and utilized, the digital light industry can be expected to advance no less than the Second Industrial Revolution did, and training the workforce for it is an urgent task.
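The division of labour described above is most visible in the data-annotation stage of AI development, where many ordinary workers label data and a few managers control quality. The following is a minimal, illustrative Python sketch of such a workflow under our own assumptions; the names (AnnotationTask, distribute, spot_check) and the quality rule are hypothetical and do not come from the paper.

```python
import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AnnotationTask:
    image_id: str
    label: Optional[str] = None   # filled in by an ordinary worker (annotator)
    approved: bool = False        # set by a manager during spot-checking

def distribute(tasks: List[AnnotationTask], n_workers: int) -> List[List[AnnotationTask]]:
    """Round-robin assignment of labelling tasks to a pool of workers."""
    queues: List[List[AnnotationTask]] = [[] for _ in range(n_workers)]
    for i, task in enumerate(tasks):
        queues[i % n_workers].append(task)
    return queues

def spot_check(tasks: List[AnnotationTask], sample_rate: float = 0.1) -> List[AnnotationTask]:
    """A manager reviews a random sample of completed labels."""
    sample = random.sample(tasks, max(1, int(len(tasks) * sample_rate)))
    for task in sample:
        task.approved = task.label is not None  # stand-in for a real quality rule
    return sample

if __name__ == "__main__":
    tasks = [AnnotationTask(f"img_{i:04d}.jpg") for i in range(1000)]
    queues = distribute(tasks, n_workers=20)        # many ordinary workers
    for queue in queues:
        for task in queue:
            task.label = "crack"                    # placeholder annotation
    reviewed = spot_check(tasks, sample_rate=0.05)  # a few managers
    print(f"{len(queues)} workers labelled {len(tasks)} tasks; "
          f"{len(reviewed)} spot-checked, "
          f"{sum(t.approved for t in reviewed)} approved")
```

The point of the sketch is the ratio, not the code: annotation throughput scales with the number of workers, while quality control remains a small, sampled managerial task.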

Acknowledgement

This research was supported by the University Priority Research Institute Support Program (No. NRF-2018R1A6A1A07025819) and the New Researcher Support Program (No. NRF-2020R1C1C1005406) administered by the Korea Science Foundation.
