Camera Model Identification Based on Deep Learning

  • Soo-Hyeon Lee (Dept. of Software Engineering, Kumoh National Institute of Technology) ;
  • Dong-Hyun Kim (Dept. of Software Engineering, Kumoh National Institute of Technology) ;
  • Hae-Yeoun Lee (Dept. of Computer Software Engineering, Kumoh National Institute of Technology)
  • Received : 2019.03.05
  • Accepted : 2019.08.10
  • Published : 2019.10.31

Abstract

Camera model identification has been a subject of steady study in the field of digital forensics. Among increasingly sophisticated crimes, offenses such as illegal filming account for a large share of cases because shrinking camera sizes make them difficult for victims to detect. A technique that can determine which camera captured a particular image could therefore serve as evidence when a suspect denies his or her criminal behavior. This paper proposes a deep learning model that identifies the camera model used to acquire an image. The proposed model consists of four convolution layers and two fully connected layers, and a high-pass filter is applied for data pre-processing. To verify the performance of the proposed model, the Dresden Image Database was used, and the dataset was generated with a sequential partition method. The proposed model is compared with existing approaches, including a three-layer model and a GLCM-based model, and achieves 98% accuracy, which is comparable to state-of-the-art results.

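The pipeline described in the abstract (a fixed high-pass pre-processing filter followed by four convolution layers and two fully connected layers) can be sketched as below. This is a minimal illustration assuming TensorFlow 2.x / Keras, not the authors' code: the patch size, channel widths, kernel sizes, dropout rate, the 5x5 high-pass kernel, and the number of camera classes are assumptions made for the example.

```python
# Minimal sketch (not the authors' released code) of the architecture described in
# the abstract: a fixed high-pass pre-processing filter followed by a CNN with four
# convolution layers and two fully connected layers. All hyperparameters below are
# illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# A 5x5 residual (high-pass) kernel commonly used in media forensics to suppress
# scene content and expose sensor/processing noise; applied per color channel.
HPF_KERNEL = np.array([[-1,  2,  -2,  2, -1],
                       [ 2, -6,   8, -6,  2],
                       [-2,  8, -12,  8, -2],
                       [ 2, -6,   8, -6,  2],
                       [-1,  2,  -2,  2, -1]], dtype=np.float32) / 12.0


def high_pass(x):
    """Depthwise high-pass filtering of a batch of RGB patches."""
    kernel = np.zeros((5, 5, 3, 1), dtype=np.float32)
    for c in range(3):
        kernel[:, :, c, 0] = HPF_KERNEL
    return tf.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1], padding="SAME")


def build_model(num_classes, patch_size=64):
    inputs = layers.Input(shape=(patch_size, patch_size, 3))
    x = layers.Lambda(high_pass)(inputs)                 # fixed pre-processing filter
    for filters in (32, 64, 128, 256):                   # four convolution layers
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)          # fully connected layer 1
    x = layers.Dropout(0.5)(x)                           # dropout against overfitting
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # fully connected layer 2
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Hypothetical usage: the class count depends on which Dresden camera models are used.
model = build_model(num_classes=14)
model.summary()
```

In this kind of pipeline, the fixed high-pass front end suppresses image content so that the trainable layers can concentrate on the camera-specific noise residues that distinguish one model from another.
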
References

  1. M. Kharrazi, H.-T. Sencar, and N. Memon, "Blind source camera identification," Proceedings of the International Conference on Image Processing, Vol.1, pp.709-712, 2004.
  2. S. Bayram, H. Sencar, N. Memon, and I. Avcibas, "Source camera identification based on CFA interpolation," Proceedings of the IEEE International Conference on Image Processing, Vol.3, pp.III-69, 2005.
  3. A. Popescu and H. Farid, "Exposing Digital Forgeries by Detecting Traces of Re-sampling," IEEE Transactions on Signal Processing, Vol.53, No.2, 2005.
  4. K.-S. Choi, E.-Y. Lam, and K.-K. Wong, "Source camera identification using footprints from lens aberration," Proceedings of SPIE, Digital Photography II, Vol.6069, pp. 60690J, 2006.
  5. J. Lukas, J. Fridrich, and M. Goljan, "Digital camera identification from sensor pattern noise," IEEE Transactions on Information Forensics and Security, Vol.1, No.2, pp.205-214, 2006. https://doi.org/10.1109/TIFS.2006.873602
  6. K. Bolouri, A. Azmoodeh, A. Dehghantanha, and M. Firouzmand, "Internet of things camera identification algorithm based on sensor pattern noise using color filter array and wavelet transform," In Handbook of Big Data and IoT Security, Springer, Cham, pp.211-223, 2019.
  7. A. Tuama, F. Comby, and M. Chaumont, "Camera model identification with the use of deep convolutional neural network," Proceedings of the IEEE International Workshop on Information Forensics and Security, pp.1-6, 2016.
  8. A. Krizhevsky, I. Sutskever, and G.-E. Hinton, "Imagenet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
  9. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S.-E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.
  10. L. Bondi, L. Baroffio, D. Guera, P. Bestagini, E.-J. Delp, and S. Tubaro, "First steps toward camera model identification with convolutional neural networks," IEEE Signal Processing Letters, Vol.24, No.3, pp.259-263, 2017. https://doi.org/10.1109/LSP.2016.2641006
  11. D. Freire-Obregon, F. Narducci, S. Barra, and M. Castrillon-Santana, "Deep learning for source camera identification on mobile devices," Pattern Recognition Letters, Vol.126, pp.86-91, 2018. https://doi.org/10.1016/j.patrec.2018.01.005
  12. S.-H. Lee and H.-Y. Lee, "Printer Identification Methods Using Global and Local Feature-Based Deep Learning," KIPS Transactions on Software and Data Engineering, Vol. 8, No.1, pp.37-44, 2019. https://doi.org/10.3745/KTSDE.2019.8.1.37
  13. J.-Y. Baek, H.-S. Lee, S.-G. Kong, J.-H. Choi, Y.-M. Yang, and H.-Y. Lee, "Color Laser Printer Identification through Discrete Wavelet Transform and Gray Level Co-occurrence Matrix," The KIPS Transactions: Part B, Vol.17, No.3, pp.197-206, 2010.
  14. B. Hosler, O. Mayer, B. Bayar, X. Zhao, C. Chen, J.-A. Shackleford, and M.-C. Stamm, "A Video Camera Model Identification System Using Deep Learning and Fusion," Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.8271-8275, 2019.
  15. V. Nair and G.-E. Hinton, "Rectified linear units improve restricted boltzmann machines," Proceedings of the International Conference on Machine Learning, pp.807-814, 2010.
  16. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, Vol.15, No.1, pp.1929-1958, 2014.
  17. Dresden Image Database, [Internet], http://forensics.inf.tudresden.de/ddimgdb/