• Title/Summary/Keyword: hard of hearing

Search Results: 38

The Trend of Damping Alloys Patent (방진합금기술의 특허동향)

  • Kim, Jong-Heon;Kim, Chang-Gyu;Kwak, Hee-Hwan
    • Journal of Korea Foundry Society
    • /
    • v.32 no.6
    • /
    • pp.305-312
    • /
    • 2012
  • As industrial civilization develops, humankind enjoys greater convenience and abundance, but the various by-products it leaves behind pollute and threaten the natural environment. Noise and vibration in particular cause mental instability and hearing loss. On the industrial side, they also degrade the performance of precision instruments and cause premature fatigue fracture of their parts. Interest in damping technology and damping alloys has therefore been increasing recently. To grasp the state of the art in damping alloys, this paper analyzes global techniques and patent information.

A construction of dictionary for Korean Text to Sign Language Translation (한글문장-수화 번역기를 위한 사전구성)

  • 권경혁;민홍기
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.841-844
    • /
    • 1998
  • A Korean text-to-sign-language translator can be applied both to help deaf and hard-of-hearing people learn written language and to support conversation with hearing people. This paper describes several dictionaries useful for developing such a translator: a base sign language dictionary, a compound sign language dictionary, and a resemble (similar-word) sign language dictionary. Since Korean Sign Language consists of only about 6,000 words, additional dictionaries are required to match them to written Korean. We design a base sign language dictionary composed of the basic symbols and motion video of Korean Sign Language, and define a compound sign language dictionary whose entries are composed of symbols from the base dictionary. In addition, the resemble sign language dictionary offers sign symbols for words that are used with the same meaning in conversation. Using these dictionaries, sign language can be searched quickly during the translation process and storage space is saved. They also mitigate the shortage of sign language words encountered during translation.

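The three-dictionary lookup the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code; the entries, symbol names, and fallback order are all assumptions.

```python
# Hypothetical sketch of the three dictionaries: a base dictionary mapping
# a Korean word to sign symbols, a compound dictionary built from sequences
# of base symbols, and a "resemble" dictionary mapping a word to an
# existing word signed the same way. All entries are illustrative.

base_dict = {"학교": ["SCHOOL"], "가다": ["GO"]}    # word -> base sign symbols
compound_dict = {"등교": ["SCHOOL", "GO"]}          # word -> sequence of base symbols
resemble_dict = {"등교하다": "등교"}                # word -> word with the same sign

def translate_word(word):
    """Resolve a Korean word to a sequence of base sign symbols."""
    if word in base_dict:
        return base_dict[word]
    if word in compound_dict:
        return compound_dict[word]
    if word in resemble_dict:               # fall back to a similar word
        return translate_word(resemble_dict[word])
    return []                               # unknown word: no sign found

print(translate_word("등교하다"))  # ['SCHOOL', 'GO']
```

Because compound entries reuse base symbols and resemble entries reuse whole words, each sign's motion data is stored only once, which matches the storage-saving argument in the abstract.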

Impedance Audiometry in Children (학동기 아동의 Impedance Audiometry에 대한 연구)

  • 소진명;전승하;장혁순
    • Proceedings of the KOR-BRONCHOESO Conference
    • /
    • 1979.05a
    • /
    • pp.4.2-4
    • /
    • 1979
  • Since Metz introduced impedance audiometry in 1946, many investigations have been carried out. Brooks, Jerger, and Cooper reported and evaluated clinical studies of impedance audiometry and its use as a screening test. Recently, studies of impedance audiometry have also been reported in Korea. We analysed 100 children aged 7-16 years who visited the outpatient E.N.T. department with complaints of nasal obstruction and hearing impairment from November 1977 to February 1979. Using an otoscope and impedance audiometry, we evaluated the tympanogram types, static compliance, and the acoustic reflex. This paper presents a statistical study of the impedance audiometry results together with a review of the literature.


Face Recognition and Preprocessing Technique for Speaker Identification in hard of hearing broadcasting (청각장애인용 방송에서 화자 식별을 위한 얼굴 인식 알고리즘 및 전처리 연구)

  • Kim, Nayeon;Cho, Sukhee;Bae, Byungjun;Ahn, ChungHyun
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.450-452
    • /
    • 2020
  • In this paper, we review deep-learning-based face recognition algorithms and apply them to actor face recognition for identifying speakers and displaying emotion-expressing captions in broadcasting for the hearing impaired. First, as an approach to actor face recognition, we examine the structure of the VGGFace2 model based on ResNet-50, a one-shot-learning deep face recognition algorithm. Then, based on this model, we apply various preprocessing methods and measure the resulting accuracy in order to explore how to recognize actor faces in actual broadcasting for the hearing impaired.

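The kind of preprocessing applied before a ResNet-50/VGGFace2-style recognizer can be sketched as below. This is an assumed illustration, not the paper's pipeline; the crop box, mean, and scale values are hypothetical, and a real system would also resize the crop to the model's input size.

```python
# Illustrative preprocessing sketch: crop the detected face box from the
# frame, then normalize pixel values to roughly [-1, 1], a common input
# range for deep face models. Values and sizes here are assumptions.

def crop(frame, box):
    """Crop a face bounding box (x, y, w, h) from a 2-D pixel grid."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

def normalize(face, mean=127.5, scale=127.5):
    """Shift and scale pixel values from [0, 255] to about [-1, 1]."""
    return [[(p - mean) / scale for p in row] for row in face]

frame = [[float(c) for c in range(8)] for _ in range(8)]   # dummy 8x8 "image"
face = crop(frame, (2, 2, 4, 4))
print(len(face), len(face[0]))   # 4 4
```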

Implementation of MP3 decoder with TMS320C541 DSP (TMS320C541 DSP를 이용한 MP3 디코더 구현)

  • 윤병우
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.3
    • /
    • pp.7-14
    • /
    • 2003
  • The MPEG-1 audio standard is an algorithm for the compression of high-quality digital audio signals. The standard specifies the functions of an encoder-decoder pair and defines three layers of increasing encoder and decoder complexity and performance. In this paper, we implement a real-time MPEG-1 Audio Layer III (MP3) decoder on the TMS320C541 fixed-point DSP chip. The MP3 algorithm exploits the psychoacoustic characteristics of the human auditory system, reducing the amount of data by eliminating signals that are hard for human hearing to perceive. Implementing an MP3 decoder on a fixed-point DSP is difficult because of the signal's broad dynamic range. We built a real-time system on the fixed-point DSP chip by using weighted look-up tables to reduce the amount of computation and to handle the broad dynamic range.

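A classic place where MP3 decoders use look-up tables is dequantization, which needs |x|**(4/3); the sketch below shows the table idea in miniature. It is an assumed illustration of the general technique, not the paper's implementation; the table size and the Q2 fixed-point scaling are hypothetical choices.

```python
# Sketch of the look-up-table idea: precompute |x|**(4/3) in fixed point
# so the decoder replaces a costly power function with one memory read.
# Table size (256) and scaling (Q2, i.e. x4) are illustrative assumptions.

POW43 = [round((i ** (4.0 / 3.0)) * 4) for i in range(256)]  # Q2 fixed point

def dequantize(sample):
    """Dequantize a quantized spectral sample via table look-up."""
    sign = -1 if sample < 0 else 1
    return sign * POW43[abs(sample)]          # one memory read, no pow()

print(dequantize(2), dequantize(-3))          # 10 -17
```

Weighting the table entries (here, the x4 scaling) is one way to keep small values representable while staying inside a 16-bit fixed-point word, which is the dynamic-range problem the abstract mentions.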

Web-based Text-To-Sign Language Translating System (웹기반 청각장애인용 수화 웹페이지 제작 시스템)

  • Park, Sung-Wook;Wang, Bo-Hyeun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.3
    • /
    • pp.265-270
    • /
    • 2014
  • Hearing-impaired people have difficulty hearing, so it is also hard for them to learn letters that represent sounds and text that conveys complex and abstract concepts. Sign language, which employs facial expressions and hand and body motion, has therefore been the natural choice of communication for hearing-impaired people. However, the major communication media in daily life are text and speech, which are big obstacles for hearing-impaired people in accessing information, learning, engaging in intellectual activities, and finding jobs. As delivering information via the internet becomes common, hearing-impaired people experience even more difficulty, since the internet presents information mostly in text form; this intensifies the imbalance in information accessibility. This paper reports a web-based text-to-sign-language translating system that helps web designers use sign language in web page design. Because the system is web-based, web designers need only a common internet-browsing environment to use it. The system takes the form of a bulletin board as its user interface. When web designers write paragraphs and post them through the bulletin board to the translating server, the server translates the incoming text into sign language, animates it with a 3D avatar, and records the animation as an MP4 file. The file addresses are fetched by the bulletin board, which enables web designers to embed the translated sign language file in their web pages using HTML5 or JavaScript. We also analyzed the text used on public-service web pages, identified words new to the translating system, and added them to improve translation. This addition is expected to encourage wide and easy acceptance of public-service web pages by hearing-impaired people.
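The last step of the workflow above, embedding the translated MP4 with HTML5, can be sketched as follows. The function name and the URL are hypothetical; only the HTML5 `<video>`/`<source>` tag usage reflects what the abstract describes.

```python
# Sketch (with assumed names) of the embedding step: the bulletin board
# returns the address of the translated sign-language MP4, and the web
# designer drops an HTML5 video tag into the page.

def sign_video_embed(mp4_url, width=320):
    """Build an HTML5 video tag for a translated sign-language clip."""
    return (f'<video width="{width}" controls>'
            f'<source src="{mp4_url}" type="video/mp4">'
            f'</video>')

print(sign_video_embed("/media/notice_001.mp4"))
```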

Implementation of closed caption service S/W module on DTV receiver (DTV 수신기의 자막방송 S/W 모듈의 구현)

  • Kim Sun-Gwon;No Seung-Yong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.1
    • /
    • pp.69-76
    • /
    • 2004
  • Recently, the development of DTV receivers and the need for additional services on them have increased vastly. In this paper, we implement a new closed-caption engine on a DTV receiver for deaf and hard-of-hearing viewers and for language study. The domestic closed-caption specification largely adopts EIA-608A. Fully following the specification, we present how to implement the closed-caption functions with a new algorithm; the functions include paint-on, pop-on, and roll-up/down. Experimental results show that the proposed technique provides satisfactory performance on a DTV receiver.
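Of the caption modes named above, roll-up is the easiest to illustrate: new lines appear at the bottom of a fixed-height window and older lines scroll off the top. The sketch below is a minimal, assumed illustration of that behavior, not the paper's engine; the class name and window height are hypothetical.

```python
# Illustrative sketch of EIA-608-style "roll-up" captioning: a bounded
# window of caption rows where a carriage return pushes a finished line
# in at the bottom and rolls the oldest line off the top.

from collections import deque

class RollUpCaption:
    def __init__(self, rows=3):
        self.window = deque(maxlen=rows)    # keeps only the last `rows` lines

    def carriage_return(self, line):
        """Append a finished caption line; the oldest line rolls off."""
        self.window.append(line)
        return list(self.window)

cap = RollUpCaption(rows=2)
cap.carriage_return("HELLO")
print(cap.carriage_return("WORLD"))         # ['HELLO', 'WORLD']
print(cap.carriage_return("AGAIN"))         # ['WORLD', 'AGAIN']
```

Pop-on and paint-on differ mainly in buffering: pop-on composes a full caption off-screen and swaps it in at once, while paint-on draws characters directly as they arrive.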

Voice Frequency Synthesis using VAW-GAN based Amplitude Scaling for Emotion Transformation

  • Kwon, Hye-Jeong;Kim, Min-Jeong;Baek, Ji-Won;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.2
    • /
    • pp.713-725
    • /
    • 2022
  • Artificial intelligence mostly shows no definite change in emotion, which makes it hard for it to demonstrate empathy in communication with humans. If frequency modification is applied to a neutral emotion, or a different emotional frequency is added to it, artificial intelligence with emotions can be developed. This study proposes emotion conversion using voice frequency synthesis based on a Generative Adversarial Network (GAN). The proposed method extracts frequencies from the speech data of twenty-four actors and actresses: it extracts the voice features of their different emotions, preserves the linguistic features, and converts only the emotion. It then generates frequencies with a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN) to shape prosody while preserving linguistic information, which makes it possible to learn speech features in parallel. Finally, it corrects the frequency by amplitude scaling; through spectral conversion on a logarithmic scale, the frequency is converted in a way that takes human hearing characteristics into account. The proposed technique thus provides emotion conversion of speech, so that artificially generated voices can express emotion.
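The log-scale amplitude correction described above can be sketched in miniature as below. This is a guess at the general technique (scaling spectral magnitudes in the log domain, which aligns better with human loudness perception than linear scaling), not the paper's formula; the function name and scaling factor are assumptions.

```python
# Assumed sketch of amplitude scaling on a logarithmic scale: multiply
# the log-magnitude by a factor, then map back, i.e. exp(factor*log(m)).
# The factor (0.5, a square-root compression) is purely illustrative.

import math

def log_amplitude_scale(magnitudes, factor):
    """Scale spectral magnitudes in the log domain."""
    return [math.exp(factor * math.log(m)) for m in magnitudes]

scaled = log_amplitude_scale([1.0, 4.0, 9.0], 0.5)   # square root of each
print(scaled)
```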

A Low-Cost Speech to Sign Language Converter

  • Le, Minh;Le, Thanh Minh;Bui, Vu Duc;Truong, Son Ngoc
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.3
    • /
    • pp.37-40
    • /
    • 2021
  • This paper presents the design of a speech-to-sign-language converter for deaf and hard-of-hearing people. The device is low-cost, has low power consumption, and can work entirely offline. Speech recognition is implemented using an open-source API, the Pocketsphinx library. In this work, we propose a context-oriented language model that measures the similarity between the recognized speech and predefined sentences to decide the output. The output is selected from the recommended sentences stored in the database as the best match to the recognized speech. The proposed context-oriented language model improves the speech recognition rate by 21% while working entirely offline. A decision module based on the Levenshtein distance between the two texts decides the output sign language, which is generated as a set of sequential images corresponding to the recognized speech. The converter is deployed on a Raspberry Pi Zero board as a low-cost assistive device for the deaf.
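The Levenshtein-distance decision step can be sketched as below. The edit-distance algorithm itself is standard; the sentence list and the idea of picking the minimum-distance predefined sentence follow the abstract, while the example strings are illustrative.

```python
# Sketch of the decision module: compute the Levenshtein (edit) distance
# between the recognized text and each predefined sentence, and output
# the closest stored sentence. Example sentences are illustrative.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_match(recognized, predefined):
    """Return the predefined sentence closest to the recognized speech."""
    return min(predefined, key=lambda s: levenshtein(recognized, s))

print(best_match("turn on the lite",
                 ["turn on the light", "turn off the light"]))
# turn on the light
```

Restricting the output to a fixed sentence inventory is what lets a noisy offline recognizer still produce a usable result: a misrecognized word only has to be closer to the right sentence than to the wrong ones.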

Sign Language Translation Using Deep Convolutional Neural Networks

  • Abiyev, Rahib H.;Arslan, Murat;Idoko, John Bush
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.631-653
    • /
    • 2020
  • Sign language is a natural, visually oriented, non-verbal communication channel between people that facilitates communication through facial and bodily expressions, postures, and a set of gestures. It is used primarily for communication with people who are deaf or hard of hearing. To understand such communication quickly and accurately, this paper considers the design of a sign language translation system. The proposed system includes object detection and classification stages. First, the Single Shot MultiBox Detector (SSD) architecture is utilized for hand detection; then a deep learning structure based on Inception v3 plus a Support Vector Machine (SVM), combining the feature extraction and classification stages, is proposed to translate the detected hand gestures. A sign language fingerspelling dataset is used to design the proposed model. The obtained results and a comparative analysis demonstrate the efficiency of the proposed hybrid structure for sign language translation.
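The two-stage pipeline described above can be sketched schematically as follows. Every function here is a stand-in with an assumed interface (no real SSD, Inception v3, or SVM is invoked); the sketch only shows how detection, feature extraction, and classification chain together.

```python
# Schematic sketch (assumed interfaces, not the paper's code) of the
# pipeline: an SSD-style detector finds hand boxes, an Inception-v3-style
# backbone turns each crop into features, and an SVM labels the sign.

def detect_hands(frame):
    """Stand-in for the SSD hand detector: returns bounding boxes."""
    return [(10, 10, 64, 64)]                # one dummy box (x, y, w, h)

def extract_features(frame, box):
    """Stand-in for the Inception v3 backbone: features for one crop."""
    x, y, w, h = box
    return [float(x + y), float(w * h)]      # dummy 2-D feature vector

def svm_classify(features):
    """Stand-in for the SVM stage: maps features to a sign label."""
    return "A" if features[1] > 1000 else "B"

def translate(frame):
    """Run detection, then feature extraction + classification per hand."""
    return [svm_classify(extract_features(frame, b)) for b in detect_hands(frame)]

print(translate(None))   # ['A']
```

Separating the backbone from the classifier this way is what lets the paper swap an SVM in for the network's usual softmax head: the SVM trains on fixed feature vectors, independently of the detector.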