• Title/Summary/Keyword: Speech Recognition Technology

Spoken-to-written text conversion for enhancement of Korean-English readability and machine translation

  • HyunJung Choi;Muyeol Choi;Seonhui Kim;Yohan Lim;Minkyu Lee;Seung Yun;Donghyun Kim;Sang Hun Kim
    • ETRI Journal / v.46 no.1 / pp.127-136 / 2024
  • The Korean language has written (formal) and spoken (phonetic) forms that differ in their application, which can lead to confusion, especially when dealing with numbers and embedded Western words and phrases. This makes it difficult to train Korean speech recognition models, which require a consistently transcribed training dataset. Because such datasets are frequently constructed from broadcast audio and its accompanying transcriptions, they do not follow a discrete rule-based matching pattern, and these mismatches are exacerbated over time by changing tacit policies. To mitigate this problem, we introduce a data-driven Korean spoken-to-written transcription conversion technique that enhances the automatic conversion of numbers and Western phrases to improve automatic translation model performance.
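
The paper's method is data-driven, but the underlying task can be illustrated with a toy rule-based converter. The sketch below is limited to Sino-Korean numerals up to 9999 and is illustrative only, not the authors' model:

```python
# Toy rule-based converter: Sino-Korean spoken numerals -> written digits.
# Illustrative only; the paper's method is data-driven, not rule-based.

DIGITS = {"영": 0, "일": 1, "이": 2, "삼": 3, "사": 4,
          "오": 5, "육": 6, "칠": 7, "팔": 8, "구": 9}
UNITS = {"십": 10, "백": 100, "천": 1000}

def spoken_to_number(text: str) -> int:
    """Convert a spoken numeral such as '백이십칠' to 127 (supports up to 9999)."""
    total, digit = 0, 0
    for ch in text:
        if ch in DIGITS:
            digit = DIGITS[ch]
        elif ch in UNITS:
            total += (digit or 1) * UNITS[ch]  # bare '십' means 10, not 0
            digit = 0
    return total + digit

assert spoken_to_number("백이십칠") == 127      # "(one) hundred twenty-seven"
assert spoken_to_number("천구백구십구") == 1999
```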

Speaker Identification using Phonetic GMM (음소별 GMM을 이용한 화자식별)

  • Kwon Sukbong;Kim Hoi-Rin
    • Proceedings of the KSPS conference / 2003.10a / pp.185-188 / 2003
  • In this paper, we construct phonetic GMMs for a text-independent speaker identification system. The basic idea is to combine the advantages of the baseline GMM and the HMM: GMMs are better suited to text-independent speaker identification, whereas HMMs work better in text-dependent systems. A phonetic GMM represents a more sophisticated text-dependent speaker model built on top of a text-independent speaker model. In speaker identification, phonetic GMMs based on HMM speaker-independent phoneme recognition yield better performance than the baseline GMM. In addition, an N-best recognition algorithm is used to decrease the computational complexity and to make the system applicable to new speakers.
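
As a rough illustration of the idea, not the authors' exact setup, the following sketch trains per-phoneme GMMs per speaker and scores test frames against them; MFCC features and frame-level phoneme labels (e.g., from an HMM recognizer) are assumed to be given:

```python
# Rough sketch of phonetic-GMM speaker identification with scikit-learn.
# Assumes MFCC frames and frame-level phoneme labels (e.g., from an HMM
# phoneme recognizer) are already available; not the authors' exact setup.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_phonetic_gmms(frames_by_phoneme, n_components=4):
    """Fit one GMM per phoneme for a single speaker.

    frames_by_phoneme: {phoneme: (n_frames, n_mfcc) array}
    """
    return {ph: GaussianMixture(n_components=n_components,
                                covariance_type="diag", random_state=0).fit(X)
            for ph, X in frames_by_phoneme.items()}

def identify(speaker_models, frames, phoneme_labels):
    """Return the speaker whose phoneme GMMs best explain the test frames."""
    labels = np.asarray(phoneme_labels)
    scores = {}
    for spk, gmms in speaker_models.items():
        ll = 0.0
        for ph in set(labels):
            if ph in gmms:
                ll += gmms[ph].score_samples(frames[labels == ph]).sum()
        scores[spk] = ll
    return max(scores, key=scores.get)

# Tiny synthetic demo (13-dim "MFCC" frames for two phonemes).
rng = np.random.default_rng(0)
train = {"a": rng.normal(0.0, 1.0, (50, 13)), "i": rng.normal(2.0, 1.0, (50, 13))}
models = {"spk1": train_phonetic_gmms(train)}
print(identify(models, rng.normal(0.0, 1.0, (20, 13)), ["a"] * 20))
```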

Pronunciation Dictionary for English Pronunciation Tutoring System (영어 발음교정시스템을 위한 발음사전 구축)

  • Kim Hyosook;Kim Sunju
    • Proceedings of the KSPS conference / 2003.05a / pp.168-171 / 2003
  • This study concerns modeling the pronunciation dictionary necessary for PLU (phoneme-like unit) level word recognition. Recognizing nonnative speakers' pronunciation enables the automatic diagnosis and error detection that form the core of an English pronunciation tutoring system. Such a system needs two pronunciation dictionaries: one representing standard English pronunciation, the other representing Korean speakers' English pronunciation. Both dictionaries are integrated to generate pronunciation networks for the variants.
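
A toy sketch of how two such dictionaries might be merged into a variant network; the phone labels and variants below are illustrative, not the paper's PLU inventory:

```python
# Toy merge of a standard and a Korean-accented pronunciation dictionary
# into one variant network. Phone labels and variants are illustrative,
# not the paper's PLU inventory.

standard = {"right": [["R", "AY", "T"]]}
korean_accented = {"right": [["L", "AY", "T"], ["R", "AY", "T", "EU"]]}

def build_network(*dictionaries):
    """Union the pronunciation variants per word, dropping duplicates."""
    network = {}
    for d in dictionaries:
        for word, variants in d.items():
            merged = network.setdefault(word, [])
            for v in variants:
                if v not in merged:
                    merged.append(v)
    return network

# The recognizer scores all variants; matching an accented one flags an error.
print(build_network(standard, korean_accented)["right"])
```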

Performance Improvement on Hearing Aids Via Environmental Noise Reduction (배경 잡음 제거를 통한 보청 시스템의 성능 향상)

  • 박선준;윤대희;김동욱;박영철
    • The Journal of the Acoustical Society of Korea / v.19 no.2 / pp.61-67 / 2000
  • Recent progress in digital and VLSI technology has opened new possibilities for a noticeable advance in hearing aids. Yet environmental noise remains one of the major problems for hearing aid users. This paper reports results in which speech recognition and speech discrimination performance were measured for listeners with sensorineural hearing loss in speech-band noise. In addition, to improve the listening environment of hearing-impaired listeners, environmental noise reduction using speech enhancement techniques is investigated as a front end to conventional hearing aids. The speech enhancement techniques are implemented in a real-time system equipped with a DSP board. The clinical test results suggest that, as a preprocessing algorithm for digital hearing aids, the speech enhancement technique may work in synergy with gain functions for greater SNR improvement.
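
The abstract does not name the specific enhancement algorithm used, so as one classic example of such a noise-reduction front end, here is a minimal spectral-subtraction sketch in NumPy/SciPy:

```python
# Minimal spectral-subtraction front end, one classic speech-enhancement
# technique; the abstract does not name the algorithm actually used, so
# this is illustrative. Assumes mono 16 kHz audio in a NumPy array.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs=16000, noise_seconds=0.25, nperseg=512):
    """Estimate the noise spectrum from the leading frames and subtract it."""
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(X), np.angle(X)
    hop = nperseg // 2                      # default overlap in scipy's stft
    n_noise = max(1, int(noise_seconds * fs / hop))
    noise_mag = mag[:, :n_noise].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, 0.05 * noise_mag)  # spectral floor
    _, y = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return y
```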

Deep learning-based speech recognition for Korean elderly speech data including dementia patients (치매 환자를 포함한 한국 노인 음성 데이터 딥러닝 기반 음성인식)

  • Jeonghyeon Mun;Joonseo Kang;Kiwoong Kim;Jongbin Bae;Hyeonjun Lee;Changwon Lim
    • The Korean Journal of Applied Statistics / v.36 no.1 / pp.33-48 / 2023
  • In this paper we consider automatic speech recognition (ASR) for Korean speech data in which elderly persons randomly speak a sequence of words, such as animals and vegetables, for one minute. Most of the speakers are over 60 years old, and some of them are dementia patients. The goal is to compare deep-learning-based ASR models on such data and to find models with good performance. ASR is a technology by which computers recognize spoken words and convert them into written text. Recently, many deep-learning models with good performance have been developed for ASR, but their training data mostly consist of full sentences spoken with accurate pronunciation. In our data, by contrast, the speakers are elderly, often pronounce words incorrectly, and say random series of words rather than sentences. Pre-trained models based on typical training data may therefore not be suitable for our data, so we train deep-learning-based ASR models from scratch on it. We also apply data augmentation methods because of the small data size.
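
The abstract does not specify which augmentation methods were applied; as one common example for small ASR datasets, here is a SpecAugment-style time/frequency masking sketch:

```python
# SpecAugment-style masking, one common augmentation for small ASR datasets;
# the abstract does not specify which augmentation methods were applied.
import numpy as np

def spec_augment(mel, n_freq_masks=2, n_time_masks=2, F=8, T=20, rng=None):
    """Zero random frequency bands and time spans of a (n_mels, n_frames) spectrogram."""
    rng = rng or np.random.default_rng()
    mel = mel.copy()
    n_mels, n_frames = mel.shape
    for _ in range(n_freq_masks):
        f = int(rng.integers(0, F + 1))
        f0 = int(rng.integers(0, max(1, n_mels - f)))
        mel[f0:f0 + f, :] = 0.0
    for _ in range(n_time_masks):
        t = int(rng.integers(0, T + 1))
        t0 = int(rng.integers(0, max(1, n_frames - t)))
        mel[:, t0:t0 + t] = 0.0
    return mel

augmented = spec_augment(np.random.rand(80, 300))  # e.g. an 80-mel, 3 s clip
```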

A Study on Embedded DSP Implementation of Keyword-Spotting System using Call-Command (호출 명령어 방식 핵심어 검출 시스템의 임베디드 DSP 구현에 관한 연구)

  • Song, Ki-Chang;Kang, Chul-Ho
    • Journal of Korea Multimedia Society / v.13 no.9 / pp.1322-1328 / 2010
  • Recently, keyword spotting systems have come into the limelight as a UI (user interface) technology for ubiquitous home network systems. Keyword spotting is vulnerable to non-stationary noises such as TV, radio, and dialogue. In particular, the speech recognition rate drops drastically in embedded DSP (digital signal processor) environments because their computational capability is too low to process input speech in real time. In this paper, we propose a new keyword spotting system using the call-command method, which consists of a small number of recognition networks. We select call commands such as 'narae' and 'home manager' and compose a small network of tokens consisting of silence-with-noise and the call commands, so that recognition runs continuously in real time on the input speech.
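
A toy model of the network topology described, with a silence/noise filler looping around a few command tokens; a real decoder would attach acoustic models to each token, and the token names are illustrative:

```python
# Toy model of the call-command network topology: a silence/noise filler
# token looping around a few command tokens. A real decoder attaches
# acoustic models (HMM states) to each token; names here are illustrative.
CALL_COMMANDS = ["narae", "home manager"]
FILLER = "<sil/noise>"

# Adjacency lists: stay in the filler until a command is hypothesized, then
# return to the filler so recognition runs continuously on the input stream.
network = {FILLER: [FILLER] + CALL_COMMANDS}
network.update({cmd: [FILLER] for cmd in CALL_COMMANDS})

def spot(decoded_tokens):
    """Emit a command whenever the decoder's best token is a call command."""
    for token in decoded_tokens:
        if token in CALL_COMMANDS:
            yield token

print(list(spot([FILLER, FILLER, "narae", FILLER])))  # -> ['narae']
```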

Traffic Signal Recognition System Based on Color and Time for Visually Impaired

  • P. Kamakshi
    • International Journal of Computer Science & Network Security / v.23 no.4 / pp.48-54 / 2023
  • Nowadays, blind people find it very difficult to cross roads and must be vigilant with every step they take. To address this problem, a Convolutional Neural Network (CNN) is an effective method to analyse the data and automate the model without human intervention. In this work, a traffic signal recognition system is designed using a CNN for the visually impaired. To provide a safe walking environment, a voice message is generated according to the light state and the timer state at that instant. The developed model consists of two phases. In the first phase, the CNN model is trained to classify images captured from traffic signals; the Common Objects in Context (COCO) labelled dataset is used, which includes images of classes such as traffic lights, bicycles, and cars, and the traffic light object is detected with the help of an object detection model. The CNN model then detects the color of the traffic light and the timer displayed in the traffic image. In the second phase, a text message is generated from the detected light color and timer value and sent to a text-to-speech conversion model to provide voice guidance for the blind person. The developed model recognizes the traffic light color and the countdown timer displayed on the signal for safe crossing; the countdown timer, which is very useful, was not considered in existing models. The proposed model gave accurate results in different scenarios when compared to other models.
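
The second phase reduces to a simple mapping from detected state to a guidance string; a minimal sketch, with threshold values that are illustrative assumptions rather than values from the paper:

```python
# Sketch of the second phase: mapping the detected light color and countdown
# value to a guidance string for the text-to-speech model. The thresholds
# are illustrative assumptions, not values from the paper.
def guidance_message(color: str, seconds_left: int) -> str:
    if color == "green" and seconds_left >= 10:
        return f"Green light, {seconds_left} seconds left. Safe to cross."
    if color == "green":
        return f"Only {seconds_left} seconds left. Do not start crossing."
    return "Red light. Please wait."

print(guidance_message("green", 25))  # passed to the TTS engine for voice output
```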

The Korean Word Length Effect on Auditory Word Recognition (청각단어 재인에서 나타난 한국어 단어 길이 효과)

  • Choi Wonil;Nam Kichun
    • MALSORI / no.44 / pp.33-46 / 2002
  • This study was conducted to examine the effect of word length on auditory word recognition. Word length can be defined by several sublexical units, such as letters, phonemes, and syllables. To find out which sublexical units are influential in auditory word recognition, an auditory lexical decision task was used. In Experiment 1, we examined the partial correlation between reaction time and the number of each sublexical unit, and in Experiment 2 we ran an ANOVA to determine which length variable was influential. Through these two experiments, we concluded that syllable length is the most important variable in auditory word recognition.
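
Experiment 1's analysis can be outlined as a partial correlation between reaction time and one length measure, controlling for another, computed from residuals of least-squares fits; the data below are synthetic:

```python
# Outline of Experiment 1's analysis: the partial correlation between
# reaction time and one length measure, controlling for another, computed
# from residuals of least-squares fits. The data here are synthetic.
import numpy as np
from scipy import stats

def partial_corr(y, x, controls):
    """Pearson r between residuals of y and x after regressing out controls."""
    Z = np.column_stack([np.ones(len(y))] + list(controls))
    res_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    res_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    return stats.pearsonr(res_x, res_y)

rng = np.random.default_rng(1)
syllables = rng.integers(1, 5, 200).astype(float)
phonemes = 2 * syllables + rng.normal(0, 1, 200)    # correlated length measures
rt = 400 + 60 * syllables + rng.normal(0, 30, 200)  # RT driven by syllable count
print(partial_corr(rt, syllables, [phonemes]))      # stays large after control
```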

Recognition of Korean Vowels using Bayesian Classification with Mouth Shape (베이지안 분류 기반의 입 모양을 이용한 한글 모음 인식 시스템)

  • Kim, Seong-Woo;Cha, Kyung-Ae;Park, Se-Hyun
    • Journal of Korea Multimedia Society / v.22 no.8 / pp.852-859 / 2019
  • With the development of IT technology and smart devices, various applications utilizing image information are being developed. To provide an intuitive interface for pronunciation recognition, there is a growing need for research on pronunciation recognition using mouth-shape features. In this paper, we propose a system that distinguishes Korean vowel pronunciations by detecting feature points of the lip region in images and applying a Bayesian learning model. Because the proposed system is based on Bayes' theorem, recognition accuracy can be improved by accumulating input data, whether speaker-independent or speaker-dependent, even with a small amount of training data. Experimental results show that Korean vowels can be distinguished effectively by probability-based Bayesian classification using only visual information such as mouth-shape features.
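
As a rough stand-in for the paper's Bayesian model, the sketch below classifies vowels from two hypothetical mouth-shape features with Gaussian naive Bayes, one standard Bayes'-theorem-based classifier:

```python
# Stand-in for the paper's Bayesian model: Gaussian naive Bayes (a standard
# Bayes'-theorem classifier) on two hypothetical mouth-shape features.
# Feature definitions and vowel means below are illustrative assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Features per frame: [lip width/height ratio, mouth-opening area].
X_a = rng.normal([1.2, 0.9], 0.1, (100, 2))  # ㅏ: wide open
X_i = rng.normal([1.8, 0.3], 0.1, (100, 2))  # ㅣ: spread, nearly closed
X_o = rng.normal([0.8, 0.6], 0.1, (100, 2))  # ㅗ: rounded
X = np.vstack([X_a, X_i, X_o])
y = ["ㅏ"] * 100 + ["ㅣ"] * 100 + ["ㅗ"] * 100

clf = GaussianNB().fit(X, y)
print(clf.predict([[1.15, 0.85]]))  # -> ['ㅏ']
# New labeled frames can simply be accumulated and the model refit,
# matching the paper's point about improving accuracy with more input data.
```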

Emotion Recognition of Low Resource (Sindhi) Language Using Machine Learning

  • Ahmed, Tanveer;Memon, Sajjad Ali;Hussain, Saqib;Tanwani, Amer;Sadat, Ahmed
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.369-376 / 2021
  • One of the most active areas of research in affective computing and signal processing is emotion recognition. This paper proposes emotion recognition for a low-resource (Sindhi) language. The uniqueness of this work is that it examines the emotions of a language for which there is currently no publicly accessible dataset. The proposed effort provides a dataset named MAVDESS (Mehran Audio-Visual Database of Emotional Speech in Sindhi) for the academic community of the Sindhi language, which is mainly spoken in Pakistan but for which almost no data is accessible for machine learning. Furthermore, the various emotions in MAVDESS have been analysed and annotated using features such as pitch, volume, and base, with toolkits such as openSMILE and scikit-learn and classification schemes such as LR, SVC, DT, and KNN, computed in Python to train the machine. The dataset can be accessed via https://doi.org/10.5281/zenodo.5213073.
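
A sketch of the classifier comparison named in the abstract (LR, SVC, DT, KNN) with scikit-learn; synthetic stand-in data replaces the openSMILE feature vectors and emotion labels the real pipeline assumes:

```python
# Sketch of the classifier comparison named in the abstract (LR, SVC, DT,
# KNN) with scikit-learn. Synthetic stand-in data replaces the openSMILE
# feature vectors and emotion labels the real pipeline assumes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)  # 4 "emotions"
models = {"LR": LogisticRegression(max_iter=1000), "SVC": SVC(),
          "DT": DecisionTreeClassifier(random_state=0),
          "KNN": KNeighborsClassifier()}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```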