• Title/Summary/Keyword: 자음인식 (consonant recognition)

Search results: 106

Multi-Emotion Recognition Model with Text and Speech Ensemble (텍스트와 음성의 앙상블을 통한 다중 감정인식 모델)

  • Yi, Moung Ho;Lim, Myoung Jin;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.11 no.8
    • /
    • pp.65-72
    • /
    • 2022
  • Due to COVID-19, face-to-face counseling has increasingly shifted to non-face-to-face counseling, and its importance keeps growing. Non-face-to-face counseling can be provided online anytime and anywhere and is safe from COVID-19, but it is difficult to read the client's mind because non-verbal expressions are largely unavailable. Accurately analyzing text and voice to recognize emotion is therefore important for understanding the client during non-face-to-face counseling. In this paper, text data is vectorized using FastText after consonant (jamo) separation, and voice data is vectorized by extracting Log Mel Spectrogram and MFCC features. We propose a multi-emotion recognition model that feeds the vectorized data to an LSTM to recognize five emotions, and evaluate multi-emotion recognition using RMSE. In the experiments, the proposed model achieved an RMSE of 0.2174, the lowest error compared with models using only text or only voice data.
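
A minimal sketch of the two preprocessing steps this abstract describes, under the assumption that "separating consonants" means jamo-level decomposition of the Hangul text and that the audio features are frame-wise log-mel and MFCC matrices; the function names, frame parameters, and the use of librosa are illustrative, and the FastText/LSTM training itself is not shown.

```python
import numpy as np
import librosa

CHO = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")                              # 19 initial consonants
JUNG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")                           # 21 vowels
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")             # 27 finals + none

def to_jamo(text: str) -> str:
    """Decompose precomposed Hangul syllables into space-separated jamo."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                     # modern Hangul syllable block
            out.append(CHO[code // 588])
            out.append(JUNG[(code % 588) // 28])
            if code % 28:
                out.append(JONG[code % 28])
        else:
            out.append(ch)
    return " ".join(out)

def audio_features(path: str, sr: int = 16000, n_mfcc: int = 40) -> np.ndarray:
    """Frame-wise log-mel spectrogram and MFCC features, stacked for an LSTM."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([log_mel, mfcc], axis=0).T   # shape (time, 64 + n_mfcc)
```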

Finger-Touch based Hangul Input Interface for Usability Enhancement among Visually Impaired Individuals (시각 장애인의 입력 편의성 향상을 위한 손가락 터치 기반의 한글 입력 인터페이스)

  • Kang, Seung-Shik;Choi, Yoon-Seung
    • Journal of KIISE
    • /
    • v.43 no.11
    • /
    • pp.1307-1314
    • /
    • 2016
  • Virtual Hangul keyboards such as Chun-Ji-In, Narat-Gul, and QWERTY rely on eyesight, with input letter positions fixed on the smartphone screen. Such a fixed-position input method is not very convenient for visually impaired individuals. To resolve this inconvenience, we propose a new finger-touch based Hangul input paradigm that does not require visual recognition of input buttons. To make the touch-motion keyboard easy to learn, the finger touches are designed around the shapes and frequencies of Hangul vowels and consonants, together with finger preference. The base position is set by the first touch of the screen, and the finger-touch keyboard works the same way on any touch device, regardless of screen size and operating system. Unique finger-touch motions are assigned to Hangul letters, which significantly reduces input errors.
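
A hypothetical sketch of the core interaction idea: the first touch fixes a base position and later touches are interpreted by their offset relative to it, rather than by fixed key locations. The direction-to-jamo mapping below is invented for illustration; the paper defines its own gesture set based on letter shapes, frequencies, and finger preference.

```python
from typing import Optional, Tuple

# direction (sign of dx, sign of dy) -> jamo; purely illustrative assignments
GESTURE_MAP = {
    (0, -1): "ㅇ",    # upward stroke
    (0,  1): "ㅁ",    # downward stroke
    (-1, 0): "ㄱ",    # leftward stroke
    (1,  0): "ㄴ",    # rightward stroke
}

class RelativeTouchKeyboard:
    """Interpret touches by their offset from the first (base) touch."""

    def __init__(self) -> None:
        self.base: Optional[Tuple[float, float]] = None

    def on_touch(self, x: float, y: float) -> Optional[str]:
        if self.base is None:                 # first touch only sets the base position
            self.base = (x, y)
            return None
        dx, dy = x - self.base[0], y - self.base[1]
        key = ((dx > 0) - (dx < 0), (dy > 0) - (dy < 0))
        return GESTURE_MAP.get(key)           # None for directions without a letter
```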

A Study on Pitch Extraction Method using FIR-STREAK Digital Filter (FIR-STREAK 디지털 필터를 사용한 피치추출 방법에 관한 연구)

  • Lee, Si-U
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.1
    • /
    • pp.247-252
    • /
    • 1999
  • Pitch information is a useful parameter for realizing speech coding at low bit rates. When average pitch information is extracted from continuous speech, pitch errors appear in frames where a consonant and a vowel coexist, at the boundary between adjoining frames, and at the beginning or end of a sentence. In this paper, I propose an Individual Pitch (IP) extraction method using the residual signals of the FIR-STREAK digital filter in order to suppress these pitch extraction errors. The method does not average pitch intervals, so it can accommodate the changes in each individual pitch interval. As a result, with the IP extraction method using the FIR-STREAK digital filter, no pitch errors were found in frames where a consonant and a vowel coexist, at the boundary between adjoining frames, or at the beginning or end of a sentence. The method can be applied to many fields, such as speech coding, speech analysis, speech synthesis, and speech recognition.
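
A rough sketch of residual-based pitch extraction under stated assumptions: the paper's FIR-STREAK filter is not reproduced, so an ordinary LPC inverse filter stands in to produce a residual, and the pitch of each short frame is taken from the autocorrelation peak within a plausible lag range; frame length, LPC order, and the voicing threshold are illustrative.

```python
import numpy as np
import librosa
from scipy.signal import lfilter

def residual_pitch(y, sr, frame_len=0.04, lpc_order=12, fmin=60, fmax=400):
    """Frame-wise pitch (Hz) estimated from the LPC residual; 0.0 marks unvoiced."""
    hop = int(frame_len * sr)
    pitches = []
    for start in range(0, len(y) - hop, hop):
        frame = y[start:start + hop] * np.hamming(hop)
        a = librosa.lpc(frame, order=lpc_order)        # all-pole model coefficients
        resid = lfilter(a, [1.0], frame)               # inverse filtering -> residual
        ac = np.correlate(resid, resid, mode="full")[hop - 1:]
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + int(np.argmax(ac[lo:hi]))
        voiced = ac[lag] > 0.25 * ac[0]                # crude voicing decision
        pitches.append(sr / lag if voiced else 0.0)
    return pitches
```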

Analysis on Vowel and Consonant Sounds of Patient's Speech with Velopharyngeal Insufficiency (VPI) and Simulated Speech (구개인두부전증 환자와 모의 음성의 모음과 자음 분석)

  • Sung, Mee Young;Kim, Heejin;Kwon, Tack-Kyun;Sung, Myung-Whun;Kim, Wooil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.7
    • /
    • pp.1740-1748
    • /
    • 2014
  • This paper focuses on a listening test and acoustic analysis of speech from patients with velopharyngeal insufficiency (VPI) and simulated speech produced by normal speakers. A set consisting of 50 words, vowels, and single syllables was defined for constructing the speech database, and a web-based listening evaluation system was developed for a convenient, automated evaluation procedure. The analysis shows that the pattern of incorrect recognition for VPI speech and that for simulated speech are similar, and this similarity is also confirmed by comparing the formant locations of vowels and the spectra of consonant sounds. These results show that the simulation method is effective at generating speech signals similar to actual VPI patients' speech. The simulated speech data are expected to be useful in future work such as acoustic model adaptation.
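
A small sketch of the kind of acoustic comparison mentioned above: estimating vowel formant locations from LPC roots so that patient and simulated speech can be compared frame by frame. The LPC-root method and the parameter values are a generic stand-in, not the paper's exact procedure.

```python
import numpy as np
import librosa

def formants(frame: np.ndarray, sr: int, lpc_order: int = 12, n_formants: int = 3):
    """Rough formant frequencies (Hz) from the angles of the LPC polynomial roots."""
    a = librosa.lpc(frame * np.hamming(len(frame)), order=lpc_order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]     # keep one of each conjugate pair
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90.0][:n_formants]     # drop near-DC artefacts
```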

The Study on Automatic Speech Recognizer Utilizing Mobile Platform on Korean EFL Learners' Pronunciation Development (자동음성인식 기술을 이용한 모바일 기반 발음 교수법과 영어 학습자의 발음 향상에 관한 연구)

  • Park, A Young
    • Journal of Digital Contents Society
    • /
    • v.18 no.6
    • /
    • pp.1101-1107
    • /
    • 2017
  • This study explored the effect of ASR-based pronunciation instruction, using a mobile platform, on EFL learners' pronunciation development. In particular, this quasi-experimental study focused on whether using a mobile ASR, which provides voice-to-text feedback, can enhance Korean EFL learners' perception and production of target English consonant minimal pairs (V-B, R-L, and G-Z). Three intact classes of 117 Korean university students were assigned to three groups: a) ASR Group: ASR-based pronunciation instruction providing textual feedback from the mobile ASR; b) Conventional Group: conventional face-to-face pronunciation instruction providing individual oral feedback from the instructor; and c) Hybrid Group: ASR-based pronunciation instruction plus conventional pronunciation instruction. The ANCOVA results showed that the adjusted mean score on the pronunciation production post-test for the Hybrid group (M=82.71, SD=3.3) was significantly higher than for the Conventional group (M=62.6, SD=4.05) (p<.05).
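
A brief sketch of the reported statistical analysis: an ANCOVA on the production post-test with group as the factor and the pre-test as covariate, using statsmodels. The file and column names (`group`, `pre`, `post`) are assumed; the study's data are not available here.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# assumed layout: one row per learner with columns `group`, `pre`, `post`
df = pd.read_csv("pronunciation_scores.csv")

model = ols("post ~ C(group) + pre", data=df).fit()   # post-test adjusted for pre-test
print(sm.stats.anova_lm(model, typ=2))                # F-test for the group effect
```

Adjusted group means can then be read off the fitted model, for example by predicting on a grid that holds `pre` at its overall mean.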

A Study on Korean Phoneme Classification using Recursive Least-Square Algorithm (Recursive Least-Square 알고리즘을 이용한 한국어 음소분류에 관한 연구)

  • Kim, Hoe-Rin;Lee, Hwang-Su;Un, Jong-Gwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.6 no.3
    • /
    • pp.60-67
    • /
    • 1987
  • In this paper, a phoneme classification method for Korean speech recognition is proposed and its performance is studied. Phoneme classification is based on phonemic features extracted by the prewindowed recursive least-squares (PRLS) algorithm, a kind of adaptive filtering algorithm. Applying the PRLS algorithm to the input speech signal allows precise detection of phoneme boundaries. Reference patterns of Korean phonemes were generated by ordinary vector quantization (VQ) of feature vectors obtained manually from prototype regions of each phoneme. To evaluate the proposed method, it was tested on spoken names of seven Korean cities containing eleven different consonants and eight different vowels. In speaker-dependent phoneme classification, the accuracy is about 85% when simple phonemic rules of Korean are considered, while the accuracy in the speaker-independent case is far lower.
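
A compact sketch of the recursive least-squares recursion that underlies the PRLS analysis named above; this is the standard exponentially weighted RLS update, not the paper's exact prewindowed formulation, and the filter order and forgetting factor are illustrative.

```python
import numpy as np

def rls(x, d, order=10, lam=0.99, delta=100.0):
    """Exponentially weighted RLS: adapt w so that w @ x[n-order:n] tracks d[n]."""
    w = np.zeros(order)
    P = np.eye(order) * delta                 # inverse correlation matrix estimate
    errors = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]              # most recent samples first
        k = P @ u / (lam + u @ P @ u)         # gain vector
        e = d[n] - w @ u                      # a-priori prediction error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
        errors[n] = e
    return w, errors

# For LPC-style analysis of a speech frame, the same signal can serve as both
# input and target, e.g. rls(frame, frame, order=12), so that each sample is
# predicted from its own past; the error sequence then acts as a feature.
```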

Classification of nasal places of articulation based on the spectra of adjacent vowels (모음 스펙트럼에 기반한 전후 비자음 조음위치 판별)

  • Jihyeon Yun;Cheoljae Seong
    • Phonetics and Speech Sciences
    • /
    • v.15 no.1
    • /
    • pp.25-34
    • /
    • 2023
  • This study examined the utility of the acoustic features of vowels as cues for the place of articulation of Korean nasal consonants. In the acoustic analysis, spectral and temporal parameters were measured at the 25%, 50%, and 75% time points in the vowels neighboring nasal consonants in samples extracted from a spontaneous Korean speech corpus. Using these measurements, linear discriminant analyses were performed and classification accuracies for the nasal place of articulation were estimated. The analyses were applied separately to vowels following and preceding a nasal consonant to compare the effects of progressive and regressive coarticulation with respect to place of articulation. The classification accuracies ranged between approximately 50% and 60%, implying that acoustic measurements of vowel intervals alone are not sufficient to predict or classify the place of articulation of adjacent nasal consonants. However, given that these results were obtained for measurements at the temporal midpoint of vowels, where they are expected to be the least influenced by coarticulation, the present results also suggest the potential of utilizing acoustic measurements of vowels to improve the recognition accuracy of nasal place. Moreover, the classification accuracy for nasal place was higher for vowels preceding the nasal sounds, suggesting that anticipatory coarticulation may more strongly reflect the nasal place of articulation.
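
A minimal sketch of the analysis described above: linear discriminant analysis over vowel-interval measurements with cross-validated accuracy, using scikit-learn. The feature and label files are placeholders for the corpus measurements (e.g., spectral and temporal values at the 25/50/75% time points and bilabial/alveolar/velar labels).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X = np.load("vowel_features.npy")   # (n_tokens, n_measures), placeholder file name
y = np.load("nasal_place.npy")      # labels such as bilabial / alveolar / velar

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("mean classification accuracy: %.1f%%" % (100 * scores.mean()))
```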

The Virtual Robot Arm Control Method by EMG Pattern Recognition using the Hybrid Neural Network System (혼합형 신경회로망을 이용한 근전도 패턴 분류에 의한 가상 로봇팔 제어 방식)

  • Jung, Kyung-Kwon;Kim, Joo-Woong;Eom, Ki-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.10
    • /
    • pp.1779-1785
    • /
    • 2006
  • This paper presents a method of virtual robot arm control by EMG pattern recognition using the proposed hybrid system. The hybrid system is composed of an LVQ and an SOFM, where the SOFM serves as the preprocessor for the LVQ and converts the high-dimensional EMG signals into 2-dimensional data. The EMG measurement system uses three surface electrodes to acquire the EMG signal from the operator. Six hand gestures can be classified sufficiently well by the proposed hybrid system. Experimental results demonstrate the effectiveness of virtual robot arm control using the proposed hybrid classifier to recognize hand gestures from EMG signal patterns.
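
A rough sketch of the LVQ half of the hybrid classifier, assuming the SOFM preprocessing (mapping high-dimensional EMG frames to 2-D map coordinates) has already been applied and is not shown; the LVQ1 update used here is the textbook rule, and prototype counts, learning rate, and epochs are illustrative.

```python
import numpy as np

def train_lvq1(X, y, protos_per_class=3, lr=0.05, epochs=30, seed=0):
    """LVQ1: move the nearest prototype toward same-class samples, away otherwise."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):                          # prototypes seeded from class samples
        idx = rng.choice(np.where(y == c)[0], protos_per_class, replace=False)
        protos.append(X[idx].astype(float).copy())
        labels.extend([c] * protos_per_class)
    protos, labels = np.vstack(protos), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.linalg.norm(protos - X[i], axis=1)))
            step = lr if labels[j] == y[i] else -lr
            protos[j] += step * (X[i] - protos[j])
    return protos, labels

def predict_lvq(protos, labels, X):
    """Assign each sample the label of its nearest prototype."""
    d = np.linalg.norm(protos[None, :, :] - X[:, None, :], axis=2)
    return labels[np.argmin(d, axis=1)]
```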

An Acoustic Analysis on the Plosives of Korean and Japanese

  • Lee Seungmie
    • MALSORI
    • /
    • no.21_24
    • /
    • pp.111-122
    • /
    • 1992
  • This paper compares the temporal characteristics of the three types of Korean plosives and the two types of Japanese plosives in word-initial and intervocalic positions. Since all three types of Korean plosives are realized as voiceless in word-initial position, they cannot be categorized by a voicing contrast; it is more appropriate to classify them, by articulatory force and the presence of aspiration, as lax, unaspirated tense, and aspirated tense. In contrast, Japanese plosives contrast in two types: voiced lax and voiceless tense. Voice onset time (VOT), the interval from the release of the plosive to the onset of vocal-fold vibration, and the duration of aspiration are the variables that distinguish voiced from voiceless and aspirated from unaspirated sounds, while the length of the preceding vowel, the closure duration, and the ratio Vl/(Vl+CL) provide useful information for distinguishing tense from lax sounds. Comparing the VOT of word-initial plosives in the two languages, Japanese voiced plosives have negative VOT, Korean unaspirated tense plosives show a short VOT of about 10 msec, and VOT then increases in the order of Korean lax, Japanese voiceless, and Korean aspirated tense plosives. The ratio of preceding-vowel length to (preceding-vowel length + closure duration) also reflects language-specific characteristics: for Korean, the ratio was 0.63 for lax, 0.30 for unaspirated tense, and 0.35 for aspirated tense plosives, and for Japanese it was 0.69 for voiced and 0.45 for voiceless plosives. A listening experiment on how Koreans perceive these consonants showed that Korean speakers, who do not use the presence of vocal-fold vibration distinctively, tend to perceive Japanese voiced plosives as lax and Japanese voiceless plosives as tense.
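
The two temporal measures discussed in this abstract reduce to simple arithmetic on annotated time points; the sketch below is illustrative, with all argument names assumed rather than taken from the paper.

```python
def vot(release_time: float, voicing_onset_time: float) -> float:
    """Voice onset time in seconds; negative when voicing starts before the release."""
    return voicing_onset_time - release_time

def vowel_closure_ratio(preceding_vowel_dur: float, closure_dur: float) -> float:
    """The ratio Vl/(Vl+CL): preceding-vowel length over vowel length plus closure."""
    return preceding_vowel_dur / (preceding_vowel_dur + closure_dur)

# For example, the reported Korean lax-plosive ratio of about 0.63 corresponds to a
# preceding vowel roughly 1.7 times as long as the following closure.
```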

Mathematical Analysis of the Structure of Korean Characters (한글문자의 인식에 관한 연구(IV))

  • 최주근
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.9 no.4
    • /
    • pp.25-32
    • /
    • 1972
  • This paper: a) discusses the structure of Korean characters from a unified point of view, describing the forming process of vowels, consonants, and combined characters in the same way; b) makes clear that vowels and consonants are the unique determinants of combined characters according to speech sound; and c) describes the way in which 10 vowels and 14 consonants are arranged systematically by a matrix equation, which forms 14,364 kinds of combined characters.
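
For comparison, modern precomposed Hangul can be generated deterministically from jamo indices with a single arithmetic formula over the Unicode block; note that the Unicode inventory (19 initial consonants x 21 vowels x 28 finals, 11,172 syllables) differs from the 10-vowel, 14-consonant enumeration used in this 1972 paper.

```python
def compose(cho: int, jung: int, jong: int = 0) -> str:
    """Indices: cho 0-18 (initial), jung 0-20 (vowel), jong 0-27 (0 = no final)."""
    return chr(0xAC00 + (cho * 21 + jung) * 28 + jong)

# compose(0, 0) -> '가' (ㄱ + ㅏ); compose(11, 0, 4) -> '안' (ㅇ + ㅏ + ㄴ)
```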
