Image Denoising Methods based on DAECNN for Medication Prescriptions


  • Khongorzul, Dashdondov (Dept. Computer Engineering, Chungbuk National University) ;
  • Lee, Sang-Mu (Dept. Computer Engineering, Chungbuk National University) ;
  • Kim, Yong-Ki (Dept. Computer Engineering, Chungbuk National University) ;
  • Kim, Mi-Hye (Dept. Computer Engineering, Chungbuk National University)
  • Received : 2019.03.11
  • Accepted : 2019.05.20
  • Published : 2019.05.28

Abstract

We aim to build a patient-centered allergy prevention system using a smartphone, focusing on region-of-interest (ROI) extraction for optical character recognition (OCR) in a general environment. The current ROI extraction method performs well in an experimental setting, but its performance in real environments degrades because of noisy backgrounds. In this paper, we therefore compare methods for reducing background noise to address the ROI extraction problem. Five methods are evaluated: the standard median filter (SMF), DIN, the denoising autoencoder (DAE), a DAE combined with a convolutional neural network (DAECNN), and a median filter followed by DAECNN (MF+DAECNN). The proposed DAECNN and MF+DAECNN methods each achieve 69%, which is higher than the 55% of the conventional DAE method. The performance improvement is verified with MSE, PSNR, and SSIM. The system is implemented with OpenCV, C++, and Python, and its performance is tested on real images.
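Since the abstract verifies the improvement with MSE, PSNR, and SSIM, the following is a minimal Python sketch of that evaluation step. The file names, the 8-bit grayscale assumption, and the use of scikit-image for SSIM are our assumptions for illustration; they are not specified by the paper.

```python
# Minimal sketch of the MSE/PSNR/SSIM verification step (assumed 8-bit grayscale images).
import numpy as np
import cv2
from skimage.metrics import structural_similarity

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)

# Hypothetical file names; any clean/denoised prescription image pair works.
original = cv2.imread("prescription_clean.png", cv2.IMREAD_GRAYSCALE)
denoised = cv2.imread("prescription_denoised.png", cv2.IMREAD_GRAYSCALE)

print("MSE :", mse(original, denoised))
print("PSNR:", psnr(original, denoised))
print("SSIM:", structural_similarity(original, denoised))
```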


Keywords


Fig. 1. Dataset structure


Fig. 2. Sample Images of Real Dataset


Fig. 3. System architecture


Fig. 4. Standard Median Filter
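Fig. 4 refers to the standard median filter (SMF). A minimal OpenCV sketch of this step is shown below; the 3x3 kernel size and the file names are assumptions for illustration, not values reported in the paper.

```python
# Minimal SMF sketch: replace each pixel by the median of its 3x3 neighborhood.
import cv2

noisy = cv2.imread("prescription_noisy.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
smf = cv2.medianBlur(noisy, 3)                                      # kernel size 3 is an assumption
cv2.imwrite("prescription_smf.png", smf)
```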


Fig. 5. Structure of light illumination normalization
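The exact illumination-normalization (DIN) procedure behind Fig. 5 is not reproduced in this excerpt. Below is only a generic background-division sketch in the spirit of the morphological approach of reference [3]; the structuring-element size and file names are our assumptions, not the paper's method.

```python
# Generic illumination-normalization sketch (not the paper's exact DIN method):
# estimate the bright paper background with a morphological closing, then divide it out.
import cv2

gray = cv2.imread("prescription_noisy.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))     # kernel size is an assumption
background = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)        # estimated illumination field
normalized = cv2.divide(gray, background, scale=255)                # flatten uneven lighting
cv2.imwrite("prescription_normalized.png", normalized)
```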


Fig. 6. A Denoising Autoencoder
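Fig. 6 depicts a denoising autoencoder, which is trained to map artificially corrupted inputs back to their clean versions. The sketch below is a minimal tf.keras illustration of that idea; the layer widths, 32x32 patch size, Gaussian noise level, and placeholder training data are assumptions, not the architecture reported in the paper (the paper cites TensorFlow [14] and the Adam optimizer [13]).

```python
# Minimal denoising-autoencoder sketch; all sizes and the noise level are illustrative only.
import numpy as np
import tensorflow as tf

PATCH = 32 * 32  # flattened grayscale patch length (assumed)

inputs = tf.keras.Input(shape=(PATCH,))
encoded = tf.keras.layers.Dense(256, activation="relu")(inputs)        # encoder
decoded = tf.keras.layers.Dense(PATCH, activation="sigmoid")(encoded)  # decoder
dae = tf.keras.Model(inputs, decoded)
dae.compile(optimizer="adam", loss="mse")  # Adam as cited in [13]; MSE reconstruction loss

# Placeholder training data: corrupt clean patches with Gaussian noise, learn to undo it.
clean = np.random.rand(1000, PATCH).astype("float32")
noisy = np.clip(clean + 0.1 * np.random.randn(*clean.shape), 0.0, 1.0).astype("float32")
dae.fit(noisy, clean, epochs=5, batch_size=64)

restored = dae.predict(noisy[:1])  # denoise one patch
```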


Fig. 7. Testing loss comparison of DAE, DAECNN and MF+DAECNN methods


Fig. 8. Sample of noisy and denoised images for different denoising methods

Table 1. MSE comparison of the denoising methods against the original image, computed with Eq. 2


Table 2. PSNR comparison of the denoising methods against the original image, computed with Eq. 3


Table 3. SSIM comparison of the denoising methods against the original image, computed with Eq. 4
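Eq. 4 itself is not reproduced on this page; for reference, the standard SSIM index between image windows x and y (presumably what Eq. 4 denotes) is

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)}$$

where $\mu_x,\mu_y$ are local means, $\sigma_x^2,\sigma_y^2$ local variances, $\sigma_{xy}$ the covariance, and $C_1,C_2$ small stabilizing constants.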


References

  1. M. Mirmehdi, P. Clark & J. Lam. (2001). Extracting low resolution text with an active camera for OCR, Proc. of the 9th Spanish Symposium on Pattern Recognition and Image Processing, 43-48.
  2. S. Suman. (2014). Image Denoising using New Adaptive Based Median Filter, Signal and Image Processing: An International Journal, 5(4). DOI: 10.5121/sipij.2014.5401
  3. G. Wang, Y. Wang, H. Li, et al. (2014). Morphological background detection and illumination normalization of text image with poor lighting, PLoS One, 9(11), e110991. DOI: 10.1371/journal.pone.0110991
  4. Y. Luo, Y. P. Guan & C. Q. Zhang. (2013). A robust illumination normalization method based on mean estimation for face recognition, ISRN Machine Vision, 1-10. Article ID 516052. DOI: 10.1155/2013/516052
  5. K. G. Lore, A. Akintayo & S. Sarkar (2017). LLNet: A deep autoencoder approach to natural low-light image enhancement, Pattern Recognition, 61, 650-662. DOI: 10.1016/j.patcog.2016.06.008.
  6. P. Vincent, et al. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, Journal of Machine Learning Research, 11, 3371-3408.
  7. K. H. Cho. (2013). Boltzmann machines and denoising autoencoders for image denoising. arXiv:1301.3468.
  8. T. Remez, et al. (2017). Deep convolutional denoising of low-light images. arXiv:1701.01687.
  9. L. Gondara. (2016). Medical Image Denoising Using Convolutional Denoising Autoencoders. IEEE 16th Int. Conf. on Data Mining Workshops, pp. 241-246, Barcelona. DOI: 10.1109/ICDMW.2016.0041
  10. X. Glorot & Y. Bengio. (2010). Understanding the difficulty of training deep feedforward neural networks, Proc. of the 13th Int. Conf. on Artificial Intelligence and Statistics, 9, 249-256.
  11. W. Chen, M. J. Er & S. Wu. (2006). Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain, IEEE Transactions on Systems, Man, and Cybernetics, 36(2), 458-466. DOI: 10.1109/TSMCB.2005.857353
  12. T. Amarbayasgalan, B. Jargalsaikhan & K. H. Ryu. (2018). Unsupervised Novelty Detection Using Deep Autoencoders with Density Based Clustering, Appl. Sci., 8(9), 1468. DOI: 10.3390/app8091468
  13. D. P. Kingma & J. Ba. (2014). Adam: A method for stochastic optimization, Proc. of the 3rd Int. Conf. on Learning Representations, San Diego, CA, USA. arXiv:1412.6980.
  14. M. Abadi, et al. (2016). Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467.
  15. Q. Shan, et al., (2010). Using optical defocus to denoise. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 561-568, San Francisco, CA. DOI: 10.1109/CVPR.2010.5540164