• Title/Summary/Keyword: Speech Learning Model


Development of Age Classification Deep Learning Algorithm Using Korean Speech (한국어 음성을 이용한 연령 분류 딥러닝 알고리즘 기술 개발)

  • So, Soonwon;You, Sung Min;Kim, Joo Young;An, Hyun Jun;Cho, Baek Hwan;Yook, Sunhyun;Kim, In Young
    • Journal of Biomedical Engineering Research / v.39 no.2 / pp.63-68 / 2018
  • In modern society, speech recognition technology is emerging as an important identification technology in electronic commerce, forensics, law enforcement, and other systems. In this study, we aim to develop an age classification algorithm that extracts only the MFCCs (Mel Frequency Cepstral Coefficients) expressing the characteristics of Korean speech and applies them to deep learning. The algorithm extracts 13th-order MFCCs from Korean speech data to construct a data set, and a deep artificial neural network classifies males into their 20s, 30s, and 50s, and females into their 20s, 40s, and 50s. Our model achieved classification accuracies of 78.6% and 71.9% for males and females, respectively.
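A minimal sketch of the pipeline this abstract describes, assuming the librosa and scikit-learn packages; the file names, labels, and network size are hypothetical stand-ins, not the authors' setup.

```python
import librosa
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_mfcc(path, n_mfcc=13):
    """Load a speech file and summarize it as a mean 13th-order MFCC vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (13, frames)
    return mfcc.mean(axis=1)

# Hypothetical file names and age-group labels.
paths = ["m20_001.wav", "m30_001.wav", "m50_001.wav"]
labels = ["20s", "30s", "50s"]

X = np.stack([extract_mfcc(p) for p in paths])
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)  # deep ANN stand-in
clf.fit(X, labels)
print(clf.predict(X[:1]))
```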

A Domain Action Classification Model Using Conditional Random Fields (Conditional Random Fields를 이용한 영역 행위 분류 모델)

  • Kim, Hark-Soo
    • Korean Journal of Cognitive Science / v.18 no.1 / pp.1-14 / 2007
  • In a goal-oriented dialogue, speakers' intentions can be represented by domain actions that consist of pairs of a speech act and a concept sequence. Therefore, to implement an intelligent dialogue system, it is very important to correctly infer domain actions from surface utterances. In this paper, we propose a statistical model that determines speech acts and concept sequences simultaneously using conditional random fields. To avoid biased learning, the proposed model uses low-level linguistic features such as lexical items and parts of speech, and filters out uninformative features using the chi-square statistic. In experiments in a schedule arrangement domain, the proposed system showed good performance (93.0% precision on speech act classification and 90.2% precision on concept sequence classification).
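A minimal sketch of joint sequence labeling with conditional random fields, assuming the sklearn-crfsuite package; the utterance, features, and domain-action tags are hypothetical, and the paper's chi-square feature filtering is omitted.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Low-level lexical and part-of-speech features, as in the paper."""
    word, pos = sent[i]
    feats = {"word": word.lower(), "pos": pos}
    if i > 0:
        feats["prev_word"] = sent[i - 1][0].lower()
    return feats

# One hypothetical utterance from a schedule arrangement domain.
train_sents = [[("book", "VB"), ("a", "DT"), ("meeting", "NN")]]
train_tags = [["act=request", "O", "concept=schedule"]]

X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, train_tags)
print(crf.predict(X))
```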


Adaptive Speech Emotion Recognition Framework Using Prompted Labeling Technique (프롬프트 레이블링을 이용한 적응형 음성기반 감정인식 프레임워크)

  • Bang, Jae Hun;Lee, Sungyoung
    • KIISE Transactions on Computing Practices / v.21 no.2 / pp.160-165 / 2015
  • Traditional speech emotion recognition techniques recognize emotions using a general model trained on the voices of many people. Such techniques cannot accurately capture personalized speech characteristics, so recognition results vary widely from person to person. This paper proposes an adaptive speech emotion recognition framework that uses a prompted labeling technique to collect the user's immediate feedback, building a personalized recognition model and applying it to each user in a mobile device environment. In three comparative experiments, the proposed framework performed better than traditional techniques. It can be applied to healthcare, emotion monitoring, and personalized services.
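A minimal sketch of the prompted-labeling idea under stated assumptions: scikit-learn's incremental SGDClassifier stands in for the recognition model, and the features and emotion labels are hypothetical.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

EMOTIONS = ["happy", "sad", "angry", "neutral"]
rng = np.random.default_rng(0)

# Seed a general model (random stand-in features; 13 dims, e.g. MFCC means).
model = SGDClassifier(loss="log_loss")
X0, y0 = rng.normal(size=(40, 13)), rng.choice(EMOTIONS, size=40)
model.partial_fit(X0, y0, classes=EMOTIONS)

def prompt_and_adapt(features):
    """Predict, prompt the user to confirm or correct, then adapt the model."""
    guess = model.predict(features.reshape(1, -1))[0]
    answer = input(f"Detected '{guess}'. Correct label? ") or guess
    model.partial_fit(features.reshape(1, -1), [answer])  # personalization step
    return answer
```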

CHMM Modeling using LMS Algorithm for Continuous Speech Recognition Improvement (연속 음성 인식 향상을 위해 LMS 알고리즘을 이용한 CHMM 모델링)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of Digital Convergence / v.10 no.11 / pp.377-382 / 2012
  • This paper proposes an echo-noise-robust CHMM learning model that uses an average-estimator LMS algorithm for echo cancellation, so that the model can adapt to changing echo noise. To improve continuous speech recognition performance, CHMM models were constructed on speech cleaned by the LMS echo cancellation algorithm. As a result, the SNR of speech obtained by removing the changing environmental noise improved by 1.93 dB on average, and the recognition rate improved by 2.1%.
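A minimal sketch of LMS echo cancellation as a speech-recognition front end, assuming NumPy; the filter length and step size are hypothetical, and this plain LMS is not the paper's average-estimator variant.

```python
import numpy as np

def lms_echo_cancel(mic, ref, taps=32, mu=0.01):
    """Subtract the echo estimated from the far-end reference via LMS."""
    w = np.zeros(taps)                 # adaptive filter weights
    out = np.zeros_like(mic)
    for n in range(taps, len(mic)):
        x = ref[n - taps:n][::-1]      # most recent reference samples
        e = mic[n] - w @ x             # error = cleaned speech sample
        w += 2 * mu * e * x            # LMS weight update
        out[n] = e
    return out                         # cleaned speech for CHMM training
```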

Comparison of Korean Real-time Text-to-Speech Technology Based on Deep Learning (딥러닝 기반 한국어 실시간 TTS 기술 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.7 no.1 / pp.640-645 / 2021
  • A deep-learning-based end-to-end TTS system consists of a Text2Mel module that generates a spectrogram from text and a vocoder module that synthesizes the speech signal from the spectrogram. By applying deep learning technology to TTS, the intelligibility and naturalness of synthesized speech have recently improved to the level of human vocalization. However, such systems have the disadvantage that inference for synthesizing speech is very slow compared to conventional methods. Inference speed can be improved by applying non-autoregressive methods, which generate speech samples in parallel, independently of previously generated samples. In this paper, we introduce FastSpeech, FastSpeech 2, and FastPitch as Text2Mel technologies, and Parallel WaveGAN, Multi-band MelGAN, and WaveGlow as non-autoregressive vocoder technologies, and implement them to verify whether they can run in real time. The measured real-time factors (RTF) show that all the presented methods are fully capable of real-time processing. Moreover, except for WaveGlow, the trained models are only tens to hundreds of megabytes in size, so they can be applied in embedded environments with limited memory.
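A minimal sketch of the real-time-factor measurement referred to above; `synthesize` is a hypothetical stand-in for any Text2Mel-plus-vocoder pipeline that returns an array of audio samples.

```python
import time

def real_time_factor(synthesize, text, sample_rate=22050):
    """RTF = inference time / audio duration; RTF < 1 means real-time capable."""
    start = time.perf_counter()
    waveform = synthesize(text)        # hypothetical TTS call returning samples
    elapsed = time.perf_counter() - start
    return elapsed / (len(waveform) / sample_rate)
```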

A Comparative Study of Second Language Acquisition Models: Focusing on Vowel Acquisition by Chinese Learners of Korean (중국인 학습자의 한국어 모음 습득에 대한 제2언어 습득 모델 비교 연구)

  • Kim, Jooyeon
    • Phonetics and Speech Sciences / v.6 no.4 / pp.27-36 / 2014
  • This study provided a longitudinal examination of Chinese learners' acquisition of Korean vowels. Specifically, I examined the Korean monophthongs /i, e, ɨ, ʌ, a, u, o/ produced by Chinese learners at one month and at twelve months of learning, and tried to verify empirically how learners acquire Korean vowels in relation to their mother tongue, in terms of the Perceptual Assimilation Model (henceforth PAM) of Best (Best, 1993; 1994; Best & Tyler, 2007) and the Speech Learning Model (henceforth SLM) of Flege (Flege, 1987; Bohn & Flege, 1992; Flege, 1995). Most of the present results are explained similarly by the PAM and the SLM; the only discrepancy between the two models is found in the 'similar' category of sounds between the learners' native language and the target language. Specifically, the acquisition pattern of Korean /u/ and /o/ is well accounted for by the PAM but not by the SLM. The SLM does not explain why the Chinese learners had difficulty acquiring the Korean vowel /u/, because according to the SLM the Chinese (native language) vowel /u/ is matched to either /u/ or /o/ in Korean (the target language); that is, there is only a one-to-one matching between the native and target languages. In contrast, the learners' difficulty with Korean /u/ is well accounted for by the PAM, in that the Chinese vowel /u/ is matched to the Korean vowel pair /o, u/ rather than to a single vowel.
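A minimal sketch of the contrast the abstract draws, with simplified, illustrative category mappings (not the study's data): the SLM posits a one-to-one match between native and target categories, while the PAM lets one native vowel assimilate to a pair of target vowels.

```python
# SLM: each L1 category maps to a single closest L2 category.
SLM_MAP = {"zh /u/": "ko /u/"}
# PAM: one L1 category may assimilate to a pair of L2 categories.
PAM_MAP = {"zh /u/": {"ko /u/", "ko /o/"}}

def pam_predicts_difficulty(l1_vowel):
    # Confusion is predicted when one native category covers two target vowels.
    return len(PAM_MAP.get(l1_vowel, set())) > 1

print(pam_predicts_difficulty("zh /u/"))  # True: /u/-/o/ remains hard to learn
```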

Japanese Adults' Perceptual Categorization of Korean Three-way Distinction (한국어 3중 대립 음소에 대한 일본인의 지각적 범주화)

  • Kim, Jee-Hyun;Kim, Jung-Oh
    • Proceedings of the Korean Society for Cognitive Science Conference / 2005.05a / pp.163-167 / 2005
  • Current theories of cross-language speech perception claim that patterns of perceptual assimilation of non-native segments to native categories predict relative difficulties in learning to perceive (and produce) non-native phones. Perceptual assimilation patterns by Japanese listeners of the three-way voicing distinction in Korean syllable-initial obstruent consonants were assessed directly. According to the Speech Learning Model (SLM) and the Perceptual Assimilation Model (PAM), the resulting assimilation pattern predicts relative difficulty in discriminating between lenis and aspirated consonants, and relative ease in discriminating fortis. This study compared the effects of two different training conditions on Japanese adults' perceptual categorization of the Korean three-way distinction. In one condition, participants were trained to discriminate the lenis and aspirated consonants predicted to be problematic, whereas in the other condition participants were trained on all three classes of consonants. The resulting 'learnability' did not seem to depend lawfully on the perceived cross-language similarity of Korean and Japanese consonants.


A Study on Speech Recognition Using Auditory Model and Recurrent Network (청각모델과 회귀회로망을 이용한 음성인식에 관한 연구)

  • 김동준;이재혁
    • Journal of Biomedical Engineering Research / v.11 no.1 / pp.157-162 / 1990
  • In this study, a peripheral auditory model is used as a frequency feature extractor, and a recurrent network with recurrent links on its input nodes is constructed; its reliability as a recognizer is shown by recognition tests on four Korean place names and syllables. When the general learning rule is used, the weights diverge for long sequences because of the characteristics of the node function in the hidden and output layers, so a refined weight compensation method is proposed that improves system operation and makes long data usable. The recognition results are considerably good, even though time warping and endpoint detection are omitted and the learning and test patterns are made from data of average length. The recurrent network used in this study reflects the time information of the temporal speech signal well.
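A minimal sketch of a recurrent network consuming frame-wise auditory features, assuming NumPy; the dimensions and random weights are hypothetical stand-ins, and the paper's refined weight compensation method is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_classes = 16, 32, 4            # e.g., four Korean place names
W_in = rng.normal(scale=0.1, size=(n_hidden, n_in + n_hidden))
W_out = rng.normal(scale=0.1, size=(n_classes, n_hidden))

def recognize(frames):
    """Feed auditory features frame by frame; the recurrent state carries time."""
    h = np.zeros(n_hidden)
    for x in frames:                             # one feature vector per frame
        h = np.tanh(W_in @ np.concatenate([x, h]))
    return int(np.argmax(W_out @ h))             # index of the recognized word

print(recognize(rng.normal(size=(50, n_in))))    # 50 random stand-in frames
```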


A Design and Implementation of The Deep Learning-Based Senior Care Service Application Using AI Speaker

  • Mun Seop Yun;Sang Hyuk Yoon;Ki Won Lee;Se Hoon Kim;Min Woo Lee;Ho-Young Kwak;Won Joo Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.4 / pp.23-30 / 2024
  • In this paper, we propose a deep-learning-based personalized senior care service application. For user convenience, the application uses speech-to-text technology to convert the user's speech into text, which is then fed to AutoGen, Microsoft's multi-agent large language model framework. AutoGen uses data from previous conversations between the senior and the chatbot to understand the other user's intent and generate responses, and then uses a back-end agent to create a wish list, a shared calendar, and a greeting message in the other user's voice through a deep learning model for voice cloning. The application can also perform home IoT services with SKT's AI speaker (NUGU). The proposed application is expected to contribute to future AI-based senior care technology.
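A minimal sketch of the speech-to-text front end, assuming the SpeechRecognition package; `respond_to_senior` is a hypothetical placeholder for the AutoGen multi-agent back end described in the abstract.

```python
import speech_recognition as sr

def respond_to_senior(text: str) -> str:
    # Placeholder for the multi-agent back end (wish list, calendar, cloning).
    return f"I heard: {text}"

recognizer = sr.Recognizer()
with sr.Microphone() as source:                  # requires a working microphone
    audio = recognizer.listen(source)
text = recognizer.recognize_google(audio, language="ko-KR")  # Korean STT
print(respond_to_senior(text))
```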

Text-to-speech with linear spectrogram prediction for quality and speed improvement (음질 및 속도 향상을 위한 선형 스펙트로그램 활용 Text-to-speech)

  • Yoon, Hyebin
    • Phonetics and Speech Sciences / v.13 no.3 / pp.71-78 / 2021
  • Most neural-network-based speech synthesis models use neural vocoders to convert mel-scaled spectrograms into high-quality, human-like voices. However, neural vocoders combined with mel-scaled spectrogram prediction models demand considerable memory and time during training and suffer from slow inference in environments where a GPU is not used. This problem does not arise in linear spectrogram prediction models, which need no neural vocoder, but those models have suffered from low voice quality. As a solution, this paper proposes a Tacotron 2 and Transformer-based linear spectrogram prediction model that produces high-quality speech without a neural vocoder. Experiments suggest that this model can serve as the foundation of a high-quality text-to-speech system with fast inference speed.
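A minimal sketch of why a linear spectrogram avoids the neural vocoder: its waveform can be recovered with classical Griffin-Lim phase reconstruction, shown here with librosa on a ground-truth spectrogram rather than a model prediction.

```python
import librosa

y, sr = librosa.load(librosa.example("trumpet"))   # stand-in audio clip
S = abs(librosa.stft(y, n_fft=1024))               # magnitude linear spectrogram
y_hat = librosa.griffinlim(S, n_iter=32)           # waveform back, no vocoder
```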