Abnormal Detection with Microscope through Deep Learning

Deep Learning-Based Anomaly Detection in Microscope Images

  • 정희용 (Major in IoT Artificial Intelligence Convergence, College of AI Convergence, Chonnam National University)
  • 고정원 (Major in IoT Artificial Intelligence Convergence, College of AI Convergence, Chonnam National University)
  • 신춘성 (Major in Media Arts and Engineering, Graduate School of Culture, Chonnam National University)
  • Received : 2021.01.06
  • Accepted : 2021.04.06
  • Published : 2021.04.30

Abstract

Although everyone knows that cigarettes are harmful to human health, the success rate of smoking-cessation campaigns remains low. The results of regular health examinations and cancer screenings at the hospital help strengthen a person's resolve to quit smoking; however, these methods are difficult to use in daily life because they rely on large, specialized equipment such as PET (positron emission tomography). This study therefore proposed a non-invasive method that detects the difference between smokers and non-smokers through deep-learning-based analysis of microscope images. First, the tongue surface was chosen as the observation site. Next, a data set (1,000 images in total) was built by measuring the tongue surface at 410x magnification for 10 smokers and 10 non-smokers. 80% of the data set was used for training and the remaining 20% for prediction. As a result, classification with EfficientNet, whose compound scaling combines the three methods of width scaling, depth scaling, and resolution scaling, performed much better than models such as VGG, ResNet, and DenseNet, which apply only a single scaling method.
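
For context, the compound scaling the abstract refers to was introduced by Tan and Le (2019) for EfficientNet: a single compound coefficient φ scales network depth, width, and input resolution together instead of scaling one dimension in isolation. A minimal statement of that rule, with the constants α, β, γ fixed by a small grid search in the original paper, is:

    \begin{align}
      \text{depth:}\quad      d &= \alpha^{\phi} \\
      \text{width:}\quad      w &= \beta^{\phi} \\
      \text{resolution:}\quad r &= \gamma^{\phi} \\
      \text{subject to}\quad  \alpha \cdot \beta^{2} \cdot \gamma^{2} &\approx 2,
        \qquad \alpha \ge 1,\; \beta \ge 1,\; \gamma \ge 1
    \end{align}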

Even though there is hardly a smoker who does not know that cigarettes are harmful to the human body, the success rate of quitting is not high. The results of hospital health examinations and of cancer screening with PET (positron emission tomography) images help keep the will to quit firm, but they are not methods that can be carried out easily in daily life. This study proposed a non-invasive method that can detect the difference between smokers and non-smokers through deep-learning-based measurement and analysis of microscope images of a body part that can be observed in daily life. First, the observation site was set to the surface of the tongue, which comes into direct contact with the cigarette during smoking. Next, a data set (1,000 images in total) was built by imaging the tongue surface (410x magnification) with a microscope for 10 smokers and 10 non-smokers; 80% of the data set was used to train the deep learning models and the remaining 20% for prediction. By comparing VGG, ResNet, and DenseNet, which each apply only one of the model-scaling methods (width scaling, depth scaling, resolution scaling), against EfficientNet, which applies all three through compound scaling, the superiority of EfficientNet for this capillary image processing task was confirmed.
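
The paper does not state which software framework was used. As an illustration only, the following sketch shows how the pipeline described above (an 80/20 split of roughly 1,000 two-class microscope images and a fine-tuned, ImageNet-pretrained EfficientNet) could look in PyTorch/torchvision; the directory name "tongue_images", the B0 variant, and all hyperparameters are assumptions, not the authors' settings.

    # Minimal sketch, not the authors' code: fine-tune EfficientNet-B0 for
    # binary smoker / non-smoker classification with an 80/20 split.
    # Requires torchvision >= 0.13 for the weights enum.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, random_split
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),   # EfficientNet-B0 default input size
        transforms.ToTensor(),
    ])

    # Hypothetical layout: tongue_images/smoker/*.jpg, tongue_images/non_smoker/*.jpg
    dataset = datasets.ImageFolder("tongue_images", transform=transform)
    n_train = int(0.8 * len(dataset))    # 80% for training, 20% for prediction
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

    # Load ImageNet-pretrained weights and replace the final layer with a 2-class head.
    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(10):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

    # Evaluate on the held-out 20%.
    model.eval()
    test_loader = DataLoader(test_set, batch_size=16)
    correct = 0
    with torch.no_grad():
        for images, labels in test_loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
    print("prediction accuracy:", correct / len(test_set))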

Acknowledgement

This study was supported by the intramural research fund for new faculty under the National University Development Project of Chonnam National University (project number: 2020-2020) and by a research grant from the Korea Evaluation Institute of Industrial Technology (project number: 2020-3414). The authors also express their deep gratitude for the continued interest and support of the AI Convergence Education and Research Group of the 4th-stage BK21 Project at Chonnam National University.
