• Title/Summary/Keyword: ALS Communication Systems


Communication Support System for Persons with Language Disabilities (Korean title: Communication System for Persons with Severe Language Disabilities)

  • Hong Seung-Wook; Park Su-Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2006.05a, pp. 324-327, 2006
  • A person with ALS (Amyotrophic Lateral Sclerosis) suffers from both a language disability and a physical disability. A common first symptom is painless weakness in a hand, foot, arm, or leg, which occurs in more than half of all cases; other early symptoms include weakness of the muscles used for speech. In the early stage of the disease patients can still communicate with others, but this becomes increasingly difficult. In our research we designed and implemented communication tools for these patients: we implemented Chunjiin (the Korean keypad input method) on a PDA (personal digital assistant), together with software built around frequently used words.

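The distinctive part of this entry is the Chunjiin (Cheonjiin) input method, which composes every Hangul vowel from three strokes. Below is a minimal Python sketch of that vowel composition; the stroke table is a commonly cited subset and the function name is our own, since the paper's PDA implementation is not reproduced here.

```python
# Minimal sketch of Cheonjiin-style vowel composition (illustrative only;
# the stroke table is a common subset, not the paper's actual code).
# Cheonjiin enters every Hangul vowel as a sequence of three strokes:
# ㆍ (heaven), ㅡ (earth), and ㅣ (person).

VOWELS = {
    ("ㅣ", "ㆍ"): "ㅏ",
    ("ㅣ", "ㆍ", "ㆍ"): "ㅑ",
    ("ㆍ", "ㅣ"): "ㅓ",
    ("ㆍ", "ㆍ", "ㅣ"): "ㅕ",
    ("ㆍ", "ㅡ"): "ㅗ",
    ("ㅡ", "ㆍ"): "ㅜ",
    ("ㅡ",): "ㅡ",
    ("ㅣ",): "ㅣ",
    ("ㅣ", "ㆍ", "ㅣ"): "ㅐ",
    ("ㆍ", "ㅣ", "ㅣ"): "ㅔ",
}

def compose_vowel(strokes):
    """Return the vowel for a stroke sequence, or None while incomplete."""
    return VOWELS.get(tuple(strokes))

if __name__ == "__main__":
    print(compose_vowel(["ㅣ", "ㆍ"]))  # ㅏ: person stroke, then heaven stroke
    print(compose_vowel(["ㅡ", "ㆍ"]))  # ㅜ: earth stroke, then heaven stroke
```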

Implementation of a human-computer interface system with motion tracking using OpenCV and FPGA (Korean title: Communication System via Eye-Motion Recognition Using FPGA and OpenCV)

  • Lee, Hee Bin; Heo, Seung Won; Lee, Seung Jun; Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2018.05a, pp. 696-699, 2018
  • This paper introduces a system that tracks the pupils of patients with amyotrophic lateral sclerosis (ALS) who cannot move freely and enables them to communicate. The face and pupils are tracked using OpenCV, and eye movements are detected on a DE1-SoC board. Using a webcam, the system tracks the pupil, identifies its movement from the pupil's coordinate values, and selects characters according to the user's intention. The proposed system has a relatively low development cost, the FPGA design is reusable, and selected text can be sent easily to a mobile phone over Bluetooth.

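As a rough illustration of the OpenCV half of this pipeline, the sketch below detects the face and eyes with Haar cascades, thresholds the eye patch to find the dark pupil, and classifies its horizontal position. The cascade files, threshold value, and three-way split are assumptions; the authors' DE1-SoC eye-movement detection and Bluetooth link are omitted.

```python
# Minimal pupil-tracking sketch with OpenCV (assumptions: standard Haar
# cascade files, a fixed threshold of 40, and a simple left/center/right
# split; the paper's FPGA stage and Bluetooth output are not shown).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def pupil_direction(eye_gray):
    """Classify the pupil's horizontal position within an eye patch."""
    # The pupil is the darkest blob: invert-threshold, then take the
    # centroid of the largest contour.
    _, mask = cv2.threshold(eye_gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    cx = m["m10"] / m["m00"]
    third = eye_gray.shape[1] / 3.0
    return "left" if cx < third else "right" if cx > 2 * third else "center"

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
            d = pupil_direction(face[ey:ey + eh, ex:ex + ew])
            if d:
                print(d)  # a full system would map this to character selection
    cv2.imshow("gaze", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```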

Communication Support System for ALS Patients Based on a Text Input Interface Using Eye Tracking and Deep-Learning-Based Sound Synthesis (Korean title: Communication Support System for ALS Patients Applying Eye-Tracking-Based Input and Deep-Learning-Based Speech Synthesis)

  • Park Hyunjoo; Jeong Seungdo
    • Journal of Korea Society of Digital Industry and Information Management, v.20 no.2, pp. 27-36, 2024
  • Accidents or disease can lead to acquired dysphonia. For such patients, we propose a new input interface based on eye movements to facilitate communication. Unlike existing methods that present the English alphabet as-is, we reorganized the keyboard layout to support the Korean alphabet and designed it so that patients can enter words by themselves using only eye movements, gaze, and blinking. The proposed interface not only reduces fatigue by minimizing eye movement but also allows easy and quick input through an intuitive arrangement. For natural communication, we also implemented a system that lets patients who are unable to speak communicate in their own voice: the system tracks eye movements to record what the patient is trying to say, then uses Glow-TTS and Multi-band MelGAN, trained on recordings of the patient's voice, to synthesize the output speech in that voice.
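To make the input side concrete, here is a minimal sketch of a dwell-and-blink selection loop of the kind this abstract describes: resting the gaze on a key selects it after a dwell period, and a deliberate blink confirms the word. The dwell and blink thresholds, class name, and key set are assumptions; the paper's actual Hangul layout and the Glow-TTS / Multi-band MelGAN back end are not reproduced.

```python
# Minimal dwell-and-blink gaze keyboard sketch (thresholds and key set
# are assumptions, not values from the paper).
import time

DWELL_SECONDS = 1.0   # assumed gaze-hold time required to select a key
BLINK_CONFIRM = 0.3   # assumed minimum duration of a deliberate blink

class GazeKeyboard:
    def __init__(self, keys):
        self.keys = keys
        self.focused = None
        self.focus_since = None
        self.typed = []

    def on_gaze(self, key, now):
        """Called whenever the tracker reports the gaze resting on a key."""
        if key != self.focused:
            self.focused, self.focus_since = key, now
        elif now - self.focus_since >= DWELL_SECONDS:
            self.typed.append(key)   # dwell long enough: select the key
            self.focus_since = now   # re-arm so the key can repeat

    def on_blink(self, duration):
        """A deliberate (long) blink confirms the word for synthesis."""
        if duration >= BLINK_CONFIRM and self.typed:
            word = "".join(self.typed)
            self.typed.clear()
            return word  # would be handed to the TTS back end
        return None

kb = GazeKeyboard(keys=list("ㄱㄴㄷㅏㅓㅗ"))
t = time.time()
kb.on_gaze("ㄱ", t)
kb.on_gaze("ㄱ", t + 1.1)   # dwell exceeded -> 'ㄱ' selected
print(kb.on_blink(0.5))     # -> "ㄱ", to be spoken in the user's own voice
```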