Development of IoT System Based on Context Awareness to Assist the Visually Impaired

  • Song, Mi-Hwa (School of Information and Communication Sciences, Semyung University)
  • Received : 2021.10.31
  • Accepted : 2021.12.07
  • Published : 2021.12.31

Abstract

As the number of visually impaired people steadily increases, so does interest in independent walking. At present, however, independent walking still involves many inconveniences that reduce the quality of life of the visually impaired. The white cane, the most common existing walking aid, cannot detect overhead obstacles or obstacles beyond its effective range. In addition, crossing the street is difficult because the audible signals meant to help the visually impaired cross crosswalks are often missing or damaged. These factors make independent walking difficult. We therefore propose the design of an embedded system that recognizes traffic lights through object recognition technology, provides voice guidance using TTS, and detects overhead obstacles with ultrasonic sensors, so that the visually impaired can achieve safe, high-quality independent walking.
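As a rough illustration of the overhead-obstacle branch of such a system, the sketch below pairs an HC-SR04-style ultrasonic sensor with offline TTS on an embedded Linux board. It is a minimal sketch only: the pin numbers, the 1.2 m warning threshold, and the choice of the Jetson.GPIO and pyttsx3 libraries are assumptions for illustration, not the implementation described in the paper.

```python
# Hypothetical sketch: warn the user by voice when an overhead obstacle is
# detected by an ultrasonic sensor. Pins, threshold, and libraries are
# illustrative assumptions (Jetson.GPIO mirrors the RPi.GPIO API; the echo
# line is assumed to be level-shifted to 3.3 V).
import time

import Jetson.GPIO as GPIO
import pyttsx3

TRIG_PIN = 16                # assumed trigger pin (BOARD numbering)
ECHO_PIN = 18                # assumed echo pin
WARN_DISTANCE_CM = 120       # warn when an obstacle is closer than ~1.2 m


def measure_distance_cm():
    """Return one distance reading from the ultrasonic sensor in centimetres."""
    # 10 microsecond trigger pulse starts a measurement.
    GPIO.output(TRIG_PIN, GPIO.HIGH)
    time.sleep(10e-6)
    GPIO.output(TRIG_PIN, GPIO.LOW)

    # Measure the width of the echo pulse, with a timeout to avoid hanging.
    timeout = time.time() + 0.05
    while GPIO.input(ECHO_PIN) == GPIO.LOW and time.time() < timeout:
        pass
    start = time.time()
    while GPIO.input(ECHO_PIN) == GPIO.HIGH and time.time() < timeout:
        pass
    end = time.time()

    # Sound travels roughly 34300 cm/s; the echo covers the distance twice.
    return (end - start) * 34300 / 2


def main():
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(TRIG_PIN, GPIO.OUT, initial=GPIO.LOW)
    GPIO.setup(ECHO_PIN, GPIO.IN)
    tts = pyttsx3.init()
    try:
        while True:
            distance = measure_distance_cm()
            if 0 < distance < WARN_DISTANCE_CM:
                tts.say(f"Obstacle ahead, about {int(distance)} centimetres")
                tts.runAndWait()
            time.sleep(0.2)
    finally:
        GPIO.cleanup()


if __name__ == "__main__":
    main()
```

In a complete system of the kind the abstract describes, a loop like this would run alongside a camera-based traffic-light detector (for example a YOLO-family model) whose results are spoken through the same TTS engine.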
