
The Development of Interactive Artificial Intelligence Blocks for Image Classification

  • Park, Youngki (Department of Computer Education, Chuncheon National University of Education)
  • Shin, Youhyun (Department of Computer Science and Engineering, Incheon National University)
  • Received : 2021.12.07
  • Accepted : 2021.12.08
  • Published : 2021.12.31

Abstract

There are various educational programming environments, such as Entry, Machine Learning for Kids, and Teachable Machine, in which students can train artificial intelligence (AI) models for use with block-based programming languages. However, these environments are designed so that students train the AI through a separate menu and then use the trained model in the code editor. This approach has the advantage that students can observe the training process more intuitively, but the disadvantage that they must switch between the training menu and the code editor. In this paper, we present novel AI blocks that support both AI training and programming within the code editor. Although these blocks are presented as Scratch blocks, the actual training is performed on a Python server. We describe the blocks in detail by walking through the training of two models: one that classifies a blue pen versus a red pen, and one that classifies a dental mask versus a KF94 mask. We also show experimentally that our approach does not differ significantly from Teachable Machine in classification performance.

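The workflow the abstract describes — a "train" block that sends labeled images from the Scratch editor to a Python server, and a "classify" block that queries the trained model — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class and method names are hypothetical, and a toy nearest-centroid classifier over precomputed feature vectors stands in for the actual image model.

```python
# Hypothetical server-side sketch of the proposed AI blocks' backend.
# Each "train" block invocation would POST one labeled feature vector;
# each "classify" block invocation would query the nearest class.
import math
from collections import defaultdict

class ImageClassifier:
    """Toy nearest-centroid classifier standing in for the real model."""

    def __init__(self):
        # label -> list of feature vectors seen so far
        self.samples = defaultdict(list)

    def train(self, label, features):
        # Called once per training image sent from the block editor.
        self.samples[label].append(features)

    def _centroid(self, vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def classify(self, features):
        # Return the label whose class centroid is closest to the query.
        best_label, best_dist = None, math.inf
        for label, vectors in self.samples.items():
            d = math.dist(self._centroid(vectors), features)
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label

# Mimicking the paper's pen example with 2-D toy feature vectors:
clf = ImageClassifier()
clf.train("blue pen", [0.1, 0.9])
clf.train("red pen", [0.9, 0.1])
print(clf.classify([0.2, 0.8]))  # -> blue pen
```

In the actual system, the feature vectors would come from images captured in the editor, and the train/classify calls would travel over HTTP between the Scratch extension and the Python server.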

Acknowledgement

This work was supported by the Basic Research Program of the National Research Foundation of Korea (NRF), funded by the Korean government (Ministry of Education) in 2021 (No. NRF-2020R1I1A3068836).

References

  1. Lane, D. (2021). Machine Learning for Kids: An Interactive Introduction to Artificial Intelligence. No Starch Press.
  2. Carney, M., Webster, B., Alvarado, I., Phillips, K., Howell, N., Griffith, J., Jongejan, J., Pitaru, A., and Chen, A. (2020). Teachable Machine: Approachable web-based tool for exploring machine learning classification. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1-8.
  3. Entry, https://playentry.org/
  4. Druga, S. (2018). Growing up with AI: Cognimates: From coding to teaching machines. Ph.D. dissertation, Massachusetts Institute of Technology.
  5. Park, Y. and Shin, Y. (2021). Tooee: A Novel Scratch Extension for K-12 Big Data and Artificial Intelligence Education Using Text-Based Visual Blocks. IEEE Access, 9, 149630-149646. https://doi.org/10.1109/ACCESS.2021.3125060
  6. Tsur, M. and Rusk, N. (2018). Scratch microworlds: Designing project-based introductions to coding. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education, 894-899.
  7. Resnick, M., Maloney J., Monroy-Hernandez, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum, E., Silver, J., Silverman, B., and Kafai, Y. (2009). Scratch: Programming for all. Communications of the ACM, 52(11), 60-67. https://doi.org/10.1145/1592761.1592779
  8. Maloney, J., Resnick, M., Rusk, N., Silverman, B., and Eastmond, E. (2010). The Scratch programming language and environment. ACM Transactions on Computing Education, 10(4), 1-15.
  9. Park, Y. and Shin, Y. (2019). Comparing the effectiveness of scratch and app inventor with regard to learning computational thinking concepts. Electronics, 8(11), 1269-1280. https://doi.org/10.3390/electronics8111269
  10. Teachable Machine v1, https://www.infoq.com/news/2017/10/teachable-machine/
  11. Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., and Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360.
  12. Teachable Machine v2, https://teachablemachine.withgoogle.com/
  13. Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, arXiv preprint arXiv:1704.04861.
  14. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L. C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4510-4520.
  15. Learning Data & Test Data, GitHub, https://github.com/TooeeAI/kaie2021/