• Title/Summary/Keyword: Braille Training System

Search results: 2

OnDot: Braille Training System for the Blind (시각장애인을 위한 점자 교육 시스템)

  • Kim, Hak-Jin; Moon, Jun-Hyeok; Song, Min-Uk; Lee, Se-Min; Kong, Ki-sok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.41-50 / 2020
  • This paper presents a braille education system that addresses the shortcomings of existing braille learning products. An application dedicated to blind users performs all functions through touch gestures and voice guidance for convenience, and an educational braille kit is built with an Arduino and 3D printing. The system supports three functions: first, learning the most basic braille elements, such as initial consonants, final consonants, vowels, and abbreviations; second, checking what has been learned by solving staged quizzes; third, braille translation. Experiments confirmed the touch-gesture recognition rate and the accuracy of the braille output, and translations were produced as intended. The system allows blind people to learn braille efficiently. (A hedged sketch of how such a translator might drive the braille kit follows this entry.)
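The abstract does not give the authors' implementation, so the following is only a minimal sketch of how an app-side translator could map characters to 6-dot braille cells and drive an Arduino-based kit over a serial link. The dot assignments in `DOT_TABLE`, the `/dev/ttyUSB0` port, and the one-byte-per-cell protocol are all assumptions for illustration; a real system would carry the full Korean braille tables for initial consonants, final consonants, vowels, and abbreviations.

```python
# Sketch only: character -> 6-dot cell -> serial byte for an Arduino braille kit.
import serial  # pyserial

# Illustrative lookup table: character -> set of raised dots (1..6).
# Placeholder patterns, not the real Korean braille code.
DOT_TABLE = {
    "a": {1},
    "b": {1, 2},
}

def dots_to_bitmask(dots):
    """Pack raised dots 1..6 into one byte (bit 0 = dot 1 ... bit 5 = dot 6)."""
    mask = 0
    for d in dots:
        mask |= 1 << (d - 1)
    return mask

def dots_to_unicode(dots):
    """Unicode braille patterns start at U+2800; dot n sets bit n-1."""
    return chr(0x2800 + dots_to_bitmask(dots))

def send_cell(port, char):
    """Send one braille cell to the kit as a single byte (assumed protocol)."""
    dots = DOT_TABLE.get(char)
    if dots is not None:
        port.write(bytes([dots_to_bitmask(dots)]))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as kit:
        for ch in "ab":
            print(ch, "->", dots_to_unicode(DOT_TABLE[ch]))
            send_cell(kit, ch)
```

The same bitmask can feed both the app's on-screen display (via the Unicode braille block) and the kit's solenoid or pin drivers, which keeps the translation logic in one place.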

A Beverage Can Recognition System Based on Deep Learning for the Visually Impaired (시각장애인을 위한 딥러닝 기반 음료수 캔 인식 시스템)

  • Lee Chanbee; Sim Suhyun; Kim Sunhee
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.1 / pp.119-127 / 2023
  • Recently, deep learning has been used to develop various devices and services that help visually impaired people in their daily lives. This is needed because few products and facility guides are labeled in braille, and fewer than 10% of the visually impaired can read braille. This paper proposes a system that recognizes beverage cans in real time and announces each can's name by voice for the convenience of the visually impaired. Five commercially available beverage cans were selected, and a CNN model and a YOLO model were designed to recognize them; after augmenting the image data, the models were trained. The proposed CNN and YOLO models reach accuracies of 91.2% and 90.8%, respectively. For practical verification, a system was built by attaching a camera and speaker to a Raspberry Pi, with the YOLO model deployed on it, and it was confirmed that beverage cans were recognized and announced by voice in real time in various environments. (A hedged sketch of such a capture-detect-announce loop follows this entry.)
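The paper's own code and YOLO variant are not given, so the following is only a sketch of the kind of capture-detect-announce loop it describes. It assumes an Ultralytics YOLO weights file ("cans.pt") fine-tuned on the five can classes and uses pyttsx3 for speech output; the file name, the libraries, and the repeat-suppression logic are all assumptions, not the authors' implementation.

```python
# Sketch only: webcam frames -> YOLO detection -> spoken can name on a Raspberry Pi.
import cv2
import pyttsx3
from ultralytics import YOLO

model = YOLO("cans.pt")        # assumed: weights trained on the 5 beverage can classes
tts = pyttsx3.init()
cap = cv2.VideoCapture(0)      # Raspberry Pi camera exposed as the default video device

last_spoken = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    if len(result.boxes) > 0:
        # Announce the highest-confidence detection, avoiding repeated announcements.
        best = max(result.boxes, key=lambda b: float(b.conf))
        name = model.names[int(best.cls)]
        if name != last_spoken:
            tts.say(name)
            tts.runAndWait()
            last_spoken = name

cap.release()
```

Blocking on `runAndWait()` keeps the example simple; a deployed system would likely speak on a separate thread so detection keeps running while the name is being announced.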