• Title/Summary/Keyword: Hearing impaired

Development of robotic hands of signbot, advanced Malaysian sign-language performing robot

  • Al-Khulaidi, Rami Ali;Akmeliawati, Rini;Azlan, Norsinnira Zainul;Bakr, Nuril Hana Abu;Fauzi, Norfatehah M.
    • Advances in robotics research / v.2 no.3 / pp.183-199 / 2018
  • This paper presents the development of the 3D-printed humanoid robotic hands of SignBot, which can perform Malaysian Sign Language (MSL). The study is the first attempt to ease communication between the general community and hearing-impaired individuals in Malaysia. The robot performs the signed motions in this work with both hands. Unlike previously reported work, the designed system includes a speech recognition system that integrates feasibly with the robot's control platform. Furthermore, the design takes into account the grammar of MSL, which differs from that of spoken Malay; this reduces redundancy and makes the design more efficient and effective. The robot hands are built with detailed finger joints. Micro servo motors, controlled by an Arduino Mega, actuate the relevant joints for selected alphabetical and numerical signs as well as phrases for emergency contexts from MSL. A database of the selected signs is developed in which the sequential movements of the servo motor arrays are stored. The results showed that the system performed well, as the selected signs could be understood by hearing-impaired individuals.
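
The abstract describes a database of sequential servo movements driven by an Arduino Mega. As a rough illustration of that idea only, the Python sketch below plays back hypothetical joint-angle frames for a sign over a serial link; the sign names, angle values, and message format are assumptions, not the authors' actual implementation.

```python
import time
import serial  # pyserial, for talking to an Arduino-style servo controller over USB

# Hypothetical sign database: each sign maps to a sequence of servo-angle
# frames (one angle per finger/wrist joint) that are played back in order.
SIGN_DATABASE = {
    "hello": [[90, 90, 90, 90, 90], [40, 60, 80, 60, 40], [90, 90, 90, 90, 90]],
    "7":     [[0, 180, 180, 0, 0]],
}

def play_sign(port, sign, frame_delay=0.3):
    """Send one frame of joint angles at a time to the servo controller."""
    with serial.Serial(port, 9600, timeout=1) as link:
        for frame in SIGN_DATABASE[sign]:
            link.write((",".join(map(str, frame)) + "\n").encode())
            time.sleep(frame_delay)  # give the servos time to reach the pose

# Example (assumed port name): play_sign("/dev/ttyACM0", "hello")
```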

Estimation of Automatic Video Captioning in Real Applications using Machine Learning Techniques and Convolutional Neural Network

  • Vaishnavi, J;Narmatha, V
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.316-326 / 2022
  • The rapid development of online video services has displaced television media in popularity within a short period. Online videos are used more widely because the captions displayed along with the scenes improve understandability. Beyond entertainment, marketing companies and other organizations also use captioned videos for product promotion. Captions serve hearing-impaired and non-native viewers in many ways, and research continues into automatically displaying appropriate messages for videos uploaded as shows, movies, educational videos, online classes, websites, etc. This paper addresses two concerns. The first part deals with a machine learning method: the videos are preprocessed into frames and resized, and the resized frames are classified into multiple actions after feature extraction, for which statistical methods, GLCM, and Hu moments are used. The second part deals with a deep learning method in which a CNN architecture is used to obtain the results. Finally, both results are compared to find the best accuracy, with the CNN giving the top accuracy of 96.10% in classification.
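
For readers unfamiliar with the feature extraction step described above, the following Python sketch computes Hu moments and GLCM texture statistics for a single resized frame using OpenCV and scikit-image; the frame size, grey-level quantisation, and chosen GLCM properties are illustrative assumptions rather than the parameters used in the paper.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19 spelling

def frame_features(frame, size=(128, 128)):
    """Extract Hu moments and GLCM texture statistics from one video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size)

    # Hu moments: seven rotation/scale-invariant shape descriptors.
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()

    # GLCM statistics: texture descriptors over a quantised grey-level image.
    quant = (gray // 32).astype(np.uint8)  # 8 grey levels (assumed quantisation)
    glcm = graycomatrix(quant, distances=[1], angles=[0, np.pi / 2],
                        levels=8, symmetric=True, normed=True)
    stats = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]

    return np.hstack([hu, stats])
```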

A Study on the Development of Interactive Smart Clothing for Non-Verbal Communication between People with Hearing Impairment (청각장애인 간의 비언어적 커뮤니케이션을 위한 인터랙티브 스마트의류 개발연구)

  • IM, Mi Ji;Kim, Youn Hee;Lee, Jae Jung
    • Journal of the Korean Society of Costume / v.66 no.2 / pp.61-75 / 2016
  • The purpose of this study was to develop interactive smart clothing based on visual and tactile sensitivities to promote non-verbal communication between people with hearing impairments. The study analyzed various cases of interactive smart clothing and different non-verbal communication tools, as well as the results of a user demand survey, to extract essential factors. These factors were then categorized under a technology or design concept. The technological aspect of the development considered usability, detachability, purposiveness, and economic feasibility; the design aspect considered usability, detachability, formativeness, and wearability. A prototype was designed considering the users' requirements. The developed prototype had sensors and Bluetooth technology and provided access to wireless communication in order to enable non-verbal communication between people with hearing impairments.

A study on speech analysis of person with presbycusis (노인성 난청인의 음성특성에 관한 연구)

  • Lee, S.M.;Song, C.G.;Woo, H.C.;Lee, Y.M.;Kim, W.K.
    • Proceedings of the KOSOMBE Conference / v.1997 no.11 / pp.67-70 / 1997
  • In this paper, we evaluated the speech characteristics of hearing-impaired persons (HIP) who acquired their hearing loss after youth. Severe hearing impairment is usually observed to degrade not only speech perception but also vocalization, so sensitive and quantitative measures are needed for the assessment of HIP speech to serve both diagnostic and prognostic purposes. Seven HIP and 12 normal-hearing persons (NHP) were studied with a pure-tone test and a speaking test using a word/sentence table consisting of the vowel /a:/, one- and two-syllable words, and a sentence. We analyzed the formant frequencies, pitch, sound intensity, and speech duration of HIP and NHP speech. According to the results, in HIP speech the formant frequencies were shifted, the first-formant prominence was reduced, the dynamic range of sound intensity was decreased, and the speech duration was prolonged. We expect that the correlation between hearing and the speech characteristics of HIP will be clarified through the analysis of more acoustic parameters and a more precise selection of the HIP group.
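
The measures analysed in this study (formant frequencies, pitch, intensity range, and duration) can be estimated roughly from a recording. Below is a minimal Python sketch using librosa and NumPy; the LPC-based formant estimate assumes a clean sustained vowel and is only an approximation, not the authors' analysis procedure.

```python
import numpy as np
import librosa

def speech_measures(path, sr=16000, n_formants=3):
    """Rough estimates of duration, intensity range, pitch, and formants."""
    y, sr = librosa.load(path, sr=sr)

    # Duration and dynamic range of intensity (frame RMS in dB).
    duration = librosa.get_duration(y=y, sr=sr)
    rms_db = librosa.amplitude_to_db(librosa.feature.rms(y=y)[0])
    dyn_range = rms_db.max() - rms_db.min()

    # Fundamental frequency (pitch) via probabilistic YIN.
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    mean_f0 = np.nanmean(f0)

    # Formants from LPC roots (very rough; assumes a clean sustained vowel).
    a = librosa.lpc(y, order=int(2 + sr / 1000))
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(f for f in np.angle(roots) * sr / (2 * np.pi) if f > 90)
    formants = freqs[:n_formants]

    return {"duration_s": duration, "intensity_range_db": dyn_range,
            "mean_f0_hz": mean_f0, "formants_hz": formants}
```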

A Study on the Self-voice Suppression Algorithm in a ZigBee CROS Hearing Aid (지그비 크로스 보청기에서의 자기음성 억제 알고리즘 연구)

  • Im, Won-Jin;Goh, Young-Hwan;Jeon, Yu-Yong;Kil, Se-Kee;Yoon, Kwang-Sub;Lee, Sang-Min
    • Journal of IKEEE / v.13 no.3 / pp.62-71 / 2009
  • In this study, we developed a wireless CROS (contralateral routing of signal) hearing aid for people with unilateral hearing impairment. A CROS hearing aid picks up sound at the ear with poorer hearing and transmits it to the ear with better hearing. Generally, the wearer's own voice delivered through the receiver of a CROS hearing aid can be very loud, making it hard to perceive the target speech. To compensate for this, a self-voice suppression algorithm was developed. We performed a speech discrimination test (SDT) to evaluate the self-voice suppression algorithm. One-syllable words were used as test speech and were recorded together with the self-voice at a distance of 1 m. As a result, the SDT score improved by about 11% when the self-voice suppression algorithm was applied, verifying that the algorithm helps speech perception when communicating with others.

Carrier frequency of SLC26A4 mutations causing inherited deafness in the Korean population

  • Kim, Hyogyeong;Lim, Hwan-Sub;Ryu, Jae-Song;Kim, Hyun-Chul;Lee, Sanghoo;Kim, Yun-Tae;Kim, Young-Jin;Lee, Kyoung-Ryul;Park, Hong-Joon;Han, Sung-Hee
    • Journal of Genetic Medicine / v.11 no.2 / pp.63-68 / 2014
  • Purpose: The mutation of the SLC26A4 gene is the second most common cause of congenital hearing loss after GJB2 mutations. It has been identified as a major cause of autosomal recessive nonsyndromic hearing loss associated with enlarged vestibular aqueduct and Pendred syndrome. Although most studies of SLC26A4 mutations have dealt with hearing-impaired patients, there are a few reports on the frequency of these mutations in the general population. The purpose of this study was to evaluate the prevalence of SLC26A4 mutations that cause inherited deafness in the general Korean population. Materials and Methods: We obtained blood samples from 144 Korean individuals with normal hearing. The samples were subjected to polymerase chain reaction to amplify the entire coding region of the SLC26A4 gene, followed by direct DNA sequencing. Results: Sequencing analysis of this gene identified 5 different variants (c.147C>G, c.225G>C, c.1723A>G, c.2168A>G, and c.2283A>G). The pathogenic mutation c.2168A>G (p.H723R) was identified in 1.39% (2/144) of the subjects with normal hearing. Conclusion: These data provide information about carrier frequency for SLC26A4 mutation-associated hearing loss and have important implications for genetic diagnostic testing for inherited deafness in the Korean population.
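
As a small worked example of the reported figure, the snippet below reproduces the 1.39% carrier frequency (2/144) and, under Hardy-Weinberg assumptions made purely for illustration, derives a rough allele frequency and expected proportion of affected newborns; these derived numbers are not from the paper.

```python
# Carrier frequency observed in the study: 2 of 144 normal-hearing subjects.
carriers, n = 2, 144
carrier_freq = carriers / n                      # 0.0139 -> 1.39 %

# Illustrative Hardy-Weinberg estimate (not from the paper): the allele
# frequency q is roughly half the carrier frequency, and q**2 approximates
# the expected proportion of affected (homozygous) newborns.
q = carrier_freq / 2
expected_affected = q ** 2

print(f"carrier frequency : {carrier_freq:.2%}")
print(f"allele frequency  : {q:.3%}")
print(f"expected affected : ~{expected_affected * 1e5:.1f} per 100,000 births")
```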

Development of Korean Consonant Perception Test (자음지각검사 (KCPT)의 개발)

  • Kim, Jin-Sook;Shin, Eun-Yeong;Shin, Hyun-Wook;Lee, Ki-Do
    • The Journal of the Acoustical Society of Korea / v.30 no.5 / pp.295-302 / 2011
  • The purpose of this study was to develop the Korean Consonant Perception Test (KCPT), a phoneme-level test providing baseline data for evaluating the speech and consonant perception ability of normal-hearing and hearing-impaired listeners qualitatively and quantitatively. The KCPT was assembled from meaningful monosyllabic words selected out of all possible Korean monosyllabic words, considering articulation characteristics, the degree of difficulty, and the frequency of phonemic appearance. Tentative initial- and final-consonant test items were constructed using a four-alternative multiple-choice method, applying the seven-final-consonant rule and controlling for the familiarity of the target words. Conclusively, a final set of 300 items was developed, comprising 200 initial-consonant and 100 final-consonant test items, based on evaluation with 20 normal-hearing adults. Through this process the final KCPT was composed according to colloquial frequency, after confirming that there were no statistically significant speaker variances and eliminating the highly difficult items. Thirty hearing-impaired listeners were then tested with the KCPT; the half lists A and B did not differ statistically, and the initial- and final-consonant test items were found appropriate for evaluating initial and final consonants, respectively.

HaptiSole: Wearable Haptic System in Vibrotactile Guidance Shoes for Visually Impaired Wayfinding

  • Slim Kammoun;Rahma Bouaziz;Faisal Saeed;Sultan Noman Qasem;Tawfik Al-Hadhrami
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.11 / pp.3064-3082 / 2023
  • During the last decade, several Electronic Orientation Aid devices have been proposed to address the autonomy problems of visually impaired people. Because hearing is the primary sense for visually impaired (VI) people and is generally occupied by the environment, the tactile sense can be used to transmit directional information. This paper presents a new wearable haptic system based on four motors embedded in shoes, with which six directions can be rendered. The study aims to introduce an interface design and investigate an appropriate means of delivering spatial information through the haptic sense. The first experiment with the proposed system was performed with 15 users in an indoor environment. The results showed that the users were able to recognize, with high accuracy, the directions displayed on their feet. The second experiment was conducted in an outdoor environment with five blindfolded users who were guided along 120 meters. The users, guided only by the haptic system, successfully reached their destinations. The potential of tactile-foot stimulation to help VI users understand Electronic Orientation Aid (EOA) instructions is discussed, and future challenges are defined.
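
The paper describes four shoe-mounted motors rendering six directions. A minimal sketch of one possible direction-to-motor encoding is shown below in Python; the motor layout, the chosen direction set, and the pulse timing are hypothetical and may differ from the authors' design.

```python
import time

# Hypothetical motor layout: indices 0-3 = front, right, back, left of the insole.
DIRECTION_TO_MOTORS = {
    "front":       [0],
    "right":       [1],
    "back":        [2],
    "left":        [3],
    "front-right": [0, 1],  # diagonal cues use two motors at once
    "front-left":  [0, 3],
}

def play_direction(direction, pulse_s=0.4, set_motor=lambda i, on: None):
    """Pulse the motors for one direction cue; set_motor is a hardware stub."""
    for i in DIRECTION_TO_MOTORS[direction]:
        set_motor(i, True)
    time.sleep(pulse_s)
    for i in DIRECTION_TO_MOTORS[direction]:
        set_motor(i, False)

# Example: play_direction("front-left")
```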

Real Time Environmental Classification Algorithm Using Neural Network for Hearing Aids (인공 신경망을 이용한 보청기용 실시간 환경분류 알고리즘)

  • Seo, Sangwan;Yook, Sunhyun;Nam, Kyoung Won;Han, Jonghee;Kwon, See Youn;Hong, Sung Hwa;Kim, Dongwook;Lee, Sangmin;Jang, Dong Pyo;Kim, In Young
    • Journal of Biomedical Engineering Research / v.34 no.1 / pp.8-13 / 2013
  • Persons with sensorineural hearing impairment have trouble hearing in noisy environments because of their deteriorated hearing levels and the low spectral resolution of the auditory system; therefore, they use hearing aids to compensate for their weakened hearing abilities. Various algorithms for hearing loss compensation and environmental noise reduction have been implemented in hearing aids; however, the performance of these algorithms varies with the external sound situation, so it is important to tune the operation of the hearing aid appropriately for a wide variety of sound situations. In this study, a sound classification algorithm that can be applied to hearing aids was proposed. The algorithm classifies sound situations into four categories: 1) speech-only, 2) noise-only, 3) speech-in-noise, and 4) music-only. It consists of two sub-parts: a feature extractor and a speech-situation classifier. The former extracts seven characteristic features (short-time energy and zero-crossing rate in the time domain; spectral centroid, spectral flux, and spectral roll-off in the frequency domain; mel-frequency cepstral coefficients; and mel-band power values) from the recent input signals of two microphones, and the latter classifies the current speech situation. The experimental results showed that the proposed algorithm could classify sound situations with an accuracy of over 94.4%. Based on these results, we believe the proposed algorithm can be applied to hearing aids to improve speech intelligibility in noisy environments.
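
To make the two-part structure (feature extractor plus classifier) concrete, here is a minimal Python sketch that computes features in the spirit of those listed in the abstract and trains a small neural-network classifier; the exact feature definitions, window sizes, and network size used in the paper are not specified here, so all parameters below are assumptions.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

CLASSES = ["speech", "noise", "speech_in_noise", "music"]

def extract_features(y, sr):
    """Summary features per audio clip, loosely following the abstract's list."""
    S = np.abs(librosa.stft(y))
    scalars = [
        librosa.feature.rms(y=y).mean(),                        # short-time energy
        librosa.feature.zero_crossing_rate(y).mean(),           # zero-crossing rate
        librosa.feature.spectral_centroid(S=S, sr=sr).mean(),   # spectral centroid
        np.mean(np.sum(np.diff(S, axis=1) ** 2, axis=0)),       # spectral flux
        librosa.feature.spectral_rolloff(S=S, sr=sr).mean(),    # spectral roll-off
    ]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=8).mean(axis=1)
    return np.hstack([scalars, mfcc, mel])

# Small neural-network classifier over labelled clips
# (X: rows from extract_features, y_labels: indices into CLASSES).
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
# clf.fit(X, y_labels); clf.predict([extract_features(clip, sr)])
```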

Design of External Coil System for Reducing Artifact of MR Image due to Implantable Hearing Aid (이식형 보청기에 의한 자기공명 영상의 인공음영 축소를 위한 외부 코일 시스템 설계)

  • Ahn, Hyoung Jun;Lim, Hyung-Gyu;Kim, Myoung Nam;Cho, Jin-Ho
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.375-385 / 2016
  • Recently, several implantable hearing aids, such as cochlear implants and middle ear implants, which have a module receiving power and signals from outside the body, have been frequently used to treat hearing-impaired patients. Most implantable hearing aids adopt permanent magnet pairs to couple the internal and external devices and enhance power transmission. Generally, the internal device, which contains the magnet in the center of the receiving coil, is implanted under the skin of the temporal bone. When a patient with an implantable hearing aid undergoes MRI scanning, however, the homogeneous magnetic field of the MRI can be disturbed by the implanted magnet. As a result, the MR image is degraded by a large artifact area, making diagnosis almost impossible in the deteriorated region. In this paper, we propose an external coil system that can reduce the MR image artifact caused by the internal coupling magnet. Optimal coil parameters were extracted by finite element analysis estimating the MR artifact area while varying the current and shape of the external coil. Finally, the effectiveness of the proposed external coil system was verified by examining the artifact in a real MRI scan.
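
The paper optimises the external coil by finite element analysis. For a first-order sanity check before FEA, the on-axis flux density of a circular coil can be estimated analytically from the Biot-Savart law, as in the Python sketch below; the example current, radius, and turn count are arbitrary illustrative values, not the paper's design.

```python
import numpy as np

MU_0 = 4 * np.pi * 1e-7  # vacuum permeability [T*m/A]

def on_axis_field(current_a, radius_m, turns, z_m):
    """On-axis flux density of a circular coil (Biot-Savart law), in tesla."""
    return (MU_0 * turns * current_a * radius_m ** 2 /
            (2 * (radius_m ** 2 + z_m ** 2) ** 1.5))

# Example: a 50-turn, 2 cm radius coil carrying 0.5 A, evaluated 5 mm away.
print(on_axis_field(0.5, 0.02, 50, 0.005))  # ~7e-4 T (illustrative numbers only)
```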