Performance Analysis of Optical Camera Communication with Applied Convolutional Neural Network

  • 김종인 (Korea Photonics Technology Institute, Micro-LED Display Research Center) ;
  • 박현선 (Korea Photonics Technology Institute, Micro-LED Display Research Center) ;
  • 김정현 (Korea Photonics Technology Institute, Micro-LED Display Research Center)
  • Received : 2023.03.15
  • Accepted : 2023.04.17
  • Published : 2023.04.30

Abstract

Optical Camera Communication (OCC), regarded as a next-generation wireless communication technology, is under extensive research. The performance of OCC is strongly affected by the communication environment, and various strategies are being studied to improve it. Among them, the most prominent is applying a convolutional neural network (CNN) to the OCC receiver. In most studies, however, the CNN is used only to detect the transmitter. In this paper, we apply CNNs not only to transmitter detection but also to the receiver (Rx) demodulation stage. We hypothesize that, because the data images of an OCC system are relatively simple to classify compared with other image datasets, most CNN models will achieve high accuracy. To test this hypothesis, we designed and implemented an OCC system, collected data with it, and evaluated 12 different CNN models on the collected images. The results show that not only high-capacity CNN models with many parameters but also lightweight CNN models achieve an accuracy of over 99%, confirming the feasibility of running the OCC system in real time on mobile devices such as smartphones.

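As a concrete illustration of the demodulation approach summarized in the abstract, the sketch below (not the authors' implementation) casts Rx demodulation as image classification: each cropped transmitter image is labeled with the symbol it encodes, and a pretrained lightweight CNN (MobileNetV2 here) is fine-tuned on those labels. The dataset path, the 16-symbol alphabet, and the training schedule are illustrative assumptions; the paper itself evaluates 12 different CNN models on its own collected data.

```python
# Hedged sketch of CNN-based OCC symbol demodulation (PyTorch / torchvision).
# Assumption: cropped Tx images are stored as data/train/<symbol_id>/*.png,
# where each folder name is one of NUM_SYMBOLS symbol classes (hypothetical layout).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_SYMBOLS = 16  # assumed 4-bit symbol alphabet; the real alphabet depends on the modulation
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# OCC data images are visually simple, so plain resizing and normalization suffice here.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=2)

# Lightweight backbone: the paper's finding is that such models can also exceed 99% accuracy.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, NUM_SYMBOLS)
model = model.to(DEVICE)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # short fine-tuning run, purely for illustration
    correct, total = 0, 0
    for images, labels in train_loader:
        images, labels = images.to(DEVICE), labels.to(DEVICE)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch}: training accuracy = {correct / total:.4f}")
```

Other lightweight backbones (e.g., SqueezeNet or ShuffleNetV2) can be swapped in by replacing only the classifier head, which is what makes a per-model accuracy comparison of this kind straightforward.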

Acknowledgement

This research was supported by a grant from the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Evaluation Institute of Industrial Technology (KEIT) in 2020 ('20009767').
