• Title/Summary/Keyword: Continuous Speech Recognition (연속음성인식)


Improvement of Semicontinuous Hidden Markov Models and One-Pass Algorithm for Recognition of Keywords in Korean Continuous Speech (한국어 연속음성중 키워드 인식을 위한 반연속 은닉 마코브 모델과 One-Pass 알고리즘의 개선방안)

  • 최관선
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06c
    • /
    • pp.358-363
    • /
    • 1994
  • This paper presents improvements to the SCHMM using discrete VQ and to the One-Pass algorithm for keyword recognition in Korean continuous speech. The SCHMM using discrete VQ is a simple model composed of a variable-mixture Gaussian probability density function with a dynamic number of mixtures. The One-Pass algorithm is improved such that recognition rates are enhanced by fathoming any undesirable semisyllable with low likelihood and a high duration penalty, and computation time is reduced by testing only frames that are dissimilar to the previously tested frame. In recognition experiments for the speaker-dependent case, the improved One-Pass algorithm showed recognition rates as high as 99.7% and reduced computation time by about 30% compared with the currently available one-pass algorithm.

  • PDF
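
The One-Pass pruning idea in the entry above (discarding semisyllable hypotheses with low likelihood or a high duration penalty, and testing only frames that differ from the previously tested one) can be sketched roughly as follows. The `log_score` and `duration_penalty` methods are hypothetical stand-ins for the paper's SCHMM scoring, and all constants are illustrative, not the authors' values.

```python
import numpy as np

def one_pass_keyword_search(frames, models, prune_margin=40.0, max_dur_penalty=8.0,
                            frame_change_threshold=0.05):
    """Minimal sketch of a frame-synchronous One-Pass search with likelihood and
    duration-penalty pruning.  `models` maps a semisyllable name to an object with
    hypothetical log_score(frame) and duration_penalty(length) methods; the real
    system decodes SCHMM state sequences, which are omitted here."""
    hyps = [{"name": name, "score": 0.0, "length": 0} for name in models]
    prev = None
    for x in frames:
        # Test only frames that differ from the previously tested frame
        if prev is not None and np.linalg.norm(x - prev) < frame_change_threshold:
            continue
        prev = x
        for h in hyps:
            h["score"] += models[h["name"]].log_score(x)
            h["length"] += 1
        best_hyp = max(hyps, key=lambda h: h["score"])
        # Fathom (discard) hypotheses with low likelihood or a high duration penalty
        pruned = [h for h in hyps
                  if h["score"] > best_hyp["score"] - prune_margin
                  and models[h["name"]].duration_penalty(h["length"]) < max_dur_penalty]
        hyps = pruned or [best_hyp]   # never prune away every hypothesis
    return max(hyps, key=lambda h: h["score"])
```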

A Study on Speech Recognition-Based Interactive Media Art: Focusing on the Audio-Visual Interactive Installation "Water Music" (음성인식 기반 인터렉티브 미디어아트의 연구 - 소리-시각 인터렉티브 설치미술 "Water Music" 을 중심으로-)

  • Lee, Myung-Hak;Jiang, Cheng-Ri;Kim, Bong-Hwa;Kim, Kyu-Jung
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2008.02a
    • /
    • pp.354-359
    • /
    • 2008
  • This audio-visual interactive installation is composed of a video projection and digital interface technology combined with the viewer's voice recognition. The viewer can interact with the computer-generated moving images growing on the screen by blowing his/her breath or making sounds. This symbiotic audio and visual installation environment allows viewers to experience an illusionistic space physically as well as psychologically. The main programming technologies used to generate the moving water waves that interact with the viewer in this installation are Visual C++ and the DirectX SDK. For making the water waves, full-3D rendering technology and a particle system were used.

  • PDF

Feature Compensation Method Based on Parallel Combined Mixture Model (병렬 결합된 혼합 모델 기반의 특징 보상 기술)

  • 김우일;이흥규;권오일;고한석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.7
    • /
    • pp.603-611
    • /
    • 2003
  • This paper proposes an effective feature compensation scheme based on a speech model for achieving robust speech recognition. The conventional model-based method requires off-line training with a noisy speech database and is not suitable for on-line adaptation. In the proposed scheme, we relax the need for off-line training with a noisy speech database by employing the parallel model combination technique for estimation of the correction factors. Applying the model combination process to the mixture model alone, as opposed to the entire HMM, makes on-line model combination possible. Exploiting the availability of a noise model from off-line sources, we accomplish on-line adaptation via MAP (Maximum A Posteriori) estimation. In addition, an on-line channel estimation procedure is introduced within the proposed framework. For more efficient implementation, we propose a selective model combination, which leads to a reduction of the computational complexity. Representative experimental results indicate that the suggested algorithm is effective in realizing robust speech recognition under the combined adverse conditions of additive background noise and channel distortion.
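
As a rough illustration of the parallel model combination step described in the entry above, the sketch below combines clean-speech mixture means with a noise mean in the linear spectral domain and maps the result back to the log domain. It ignores variances and omits the paper's MAP correction-factor estimation and selective combination; it is a simplified assumption, not the authors' implementation.

```python
import numpy as np

def combine_mixture_with_noise(clean_means_log, noise_mean_log, noise_scale=1.0):
    """Simplified parallel model combination: map log-spectral GMM means to the
    linear domain, add the noise mean, and map back.  Gaussians are treated as
    points (variances ignored) purely for illustration."""
    clean_lin = np.exp(clean_means_log)          # (M, D) mixture means, linear domain
    noise_lin = np.exp(noise_mean_log)           # (D,) noise mean, linear domain
    noisy_lin = clean_lin + noise_scale * noise_lin
    return np.log(noisy_lin)                     # (M, D) combined (noisy) means

# Hypothetical usage: a 4-component mixture over a 10-bin log spectrum
clean_means = np.random.randn(4, 10)
noise_mean = np.random.randn(10) - 2.0
noisy_means = combine_mixture_with_noise(clean_means, noise_mean)
```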

The Study on the Speaker Adaptation Using Speaker Characteristics of Phoneme (음소에 따른 화자특성을 이용한 화자적응방법에 관한 연구)

  • 채나영;황영수
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2003.06a
    • /
    • pp.6-9
    • /
    • 2003
  • In this paper, we studied the differences in speaker adaptation according to phoneme classification for Korean speech recognition. To study speaker adaptation with weights that differ by phoneme as the recognition unit, we used an SCHMM-based recognition system. The speaker adaptation methods used in this paper were MAPE (Maximum A Posteriori Probability Estimation) and linear spectral estimation. To evaluate the performance of these methods, we used 10 Korean isolated numbers as the experimental data. Both methods can be carried out with unsupervised learning and used in an on-line system. The first method showed a performance improvement over the second, and hybrid adaptation showed better recognition results than either method performed alone. Furthermore, speaker adaptation using weights that vary with the phoneme performed better than adaptation using a fixed weight.

  • PDF
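
The phoneme-dependent weighting of MAP adaptation mentioned in the entry above can be sketched as follows: each Gaussian mean is shifted toward the adaptation data, with a prior weight that varies per phoneme class. The phoneme names and weights here are illustrative assumptions, not the paper's values.

```python
import numpy as np

def map_adapt_mean(prior_mean, adapt_frames, tau):
    """MAP (maximum a posteriori) update of a Gaussian mean.
    tau controls how strongly the prior (speaker-independent) mean is trusted."""
    n = len(adapt_frames)
    if n == 0:
        return prior_mean
    sample_mean = np.mean(adapt_frames, axis=0)
    return (tau * prior_mean + n * sample_mean) / (tau + n)

# Illustrative phoneme-dependent prior weights: vowels adapt faster than stops
tau_by_phoneme = {"a": 5.0, "o": 5.0, "k": 20.0, "t": 20.0}

def adapt_model(means_by_phoneme, frames_by_phoneme):
    """Apply the MAP update to every phoneme's mean with its own weight."""
    return {ph: map_adapt_mean(means_by_phoneme[ph],
                               frames_by_phoneme.get(ph, []),
                               tau_by_phoneme.get(ph, 10.0))
            for ph in means_by_phoneme}
```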

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.267-286
    • /
    • 2023
  • Conversational agents such as AI speakers rely on voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first type is misrecognition errors, where the agent fails to recognize the user's speech entirely. The second type is misinterpretation errors, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Among these, misinterpretation errors require separate error detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each of these text separation methods, the similarity of consecutive utterance pairs was calculated using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. The research method involved utilizing real user utterance records to train and develop a detection model based on patterns of misinterpretation error causes. The most significant result was obtained through initial consonant extraction for detecting misinterpretation errors caused by the use of unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are not flagged at recognition time, the study proposed diverse text separation methods and found a novel method that improved performance remarkably. Second, if this is applied to conversational agents or voice recognition services requiring neologism detection, patterns of errors occurring from the voice recognition stage can be specified. The study proposed and verified that, even for interactions not categorized as errors, services can be provided according to the results users actually want.
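
One of the separation methods reported in the entry above, initial-consonant (chosung) extraction, follows directly from Unicode arithmetic on precomposed Hangul syllables. The similarity measure below is a plain character-level Jaccard overlap standing in for the embedding-based similarity used in the paper, so it is illustrative only.

```python
CHOSUNG = ['ㄱ','ㄲ','ㄴ','ㄷ','ㄸ','ㄹ','ㅁ','ㅂ','ㅃ','ㅅ','ㅆ',
           'ㅇ','ㅈ','ㅉ','ㅊ','ㅋ','ㅌ','ㅍ','ㅎ']

def initial_consonants(text):
    """Extract the initial consonant (chosung) of every Hangul syllable."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:                 # precomposed Hangul syllable block
            out.append(CHOSUNG[(code - 0xAC00) // (21 * 28)])
        else:
            out.append(ch)
    return ''.join(out)

def jaccard_similarity(a, b):
    """Character-set Jaccard overlap of two chosung strings (illustrative only)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

# e.g. comparing a user's word with what the recognizer substituted for it
print(initial_consonants("연속음성인식"))   # -> 'ㅇㅅㅇㅅㅇㅅ'
print(jaccard_similarity(initial_consonants("연속음성"), initial_consonants("연속음정")))
```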

EEG based Vowel Feature Extraction for Speech Recognition System using International Phonetic Alphabet (EEG기반 언어 인식 시스템을 위한 국제음성기호를 이용한 모음 특징 추출 연구)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.90-95
    • /
    • 2014
  • Research on the brain-computer interface, a new interface system that connects humans to machines, has been conducted to implement user-assistance devices for controlling wheelchairs or inputting characters. In recent studies, there have been several attempts to implement speech recognition systems based on brain waves and to achieve silent communication. In this paper, we studied how to extract vowel features based on the International Phonetic Alphabet (IPA) as a foundational step toward implementing a speech recognition system based on electroencephalography (EEG). We conducted two-step experiments with three healthy male subjects: the first step was speech imagery with a single vowel and the second was imagery with two successive vowels. We selected 32 of the 64 acquired channels, covering the frontal lobe (related to thinking) and the temporal lobe (related to speech function). Eigenvalues of the signal were used as the feature vector, and a support vector machine (SVM) was used for classification. The results of the first step showed that a feature vector of order higher than 10 is needed to analyze the EEG signal of speech; with an 11th-order feature vector, the highest average classification rate was 95.63% (between /a/ and /o/) and the lowest was 86.85% (between /a/ and /u/). In the second step, we studied the difference between speech imagery signals for single vowels and for two successive vowels.
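
A possible reading of the eigenvalue feature described above is the set of largest eigenvalues of each EEG trial's channel covariance matrix, fed to an SVM. The 11th-order cutoff follows the abstract, but the preprocessing and the exact feature definition are assumptions; this is a sketch, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def eigen_features(trial, order=11):
    """trial: (channels, samples) EEG segment.  Returns the `order` largest
    eigenvalues of the channel covariance matrix as a feature vector."""
    cov = np.cov(trial)                          # (channels, channels)
    eigvals = np.linalg.eigvalsh(cov)            # ascending order, real-valued
    return eigvals[::-1][:order]                 # keep the largest `order` values

def train_vowel_classifier(trials_a, trials_b, order=11):
    """Binary classifier between two imagined vowels, e.g. /a/ vs /o/."""
    X = np.array([eigen_features(t, order) for t in trials_a + trials_b])
    y = np.array([0] * len(trials_a) + [1] * len(trials_b))
    clf = SVC(kernel="rbf")
    return clf.fit(X, y)
```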

A Study on the Speaker Adaptation of a Continuous Speech Recognition using HMM (HMM을 이용한 연속 음성 인식의 화자적응화에 관한 연구)

  • Kim, Sang-Bum;Lee, Young-Jae;Koh, Si-Young;Hur, Kang-In
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4
    • /
    • pp.5-11
    • /
    • 1996
  • In this study, a method of speaker adaptation for uttered sentences using syllable-unit HMMs is proposed. Segmentation of the sentence into syllable units is performed automatically by concatenating the syllable-unit HMMs and applying Viterbi segmentation. Speaker adaptation is performed using MAPE (Maximum A Posteriori Probability Estimation), which can adapt with any small amount of adaptation speech data and incorporate more data sequentially. For newspaper-editorial continuous speech, the recognition rate of the adapted HMM was 71.8%, approximately a 37% improvement over that of the unadapted HMM.

  • PDF
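
The automatic syllable segmentation via concatenated HMMs and Viterbi segmentation mentioned in the entry above amounts to forced alignment. A compact dynamic-programming sketch is shown below; `state_loglik` is a hypothetical per-state log-likelihood callback, and the left-to-right chain of states stands in for the concatenated syllable HMMs.

```python
import numpy as np

def viterbi_segment(n_frames, n_states, state_loglik):
    """Forced alignment of n_frames observation frames against a left-to-right
    chain of n_states states.  state_loglik(t, s) returns the log-likelihood of
    frame t in state s.  Returns the frame index at which each occupied state
    starts, i.e. the segment boundaries."""
    NEG = -1e30
    delta = np.full((n_frames, n_states), NEG)
    back = np.zeros((n_frames, n_states), dtype=int)
    delta[0, 0] = state_loglik(0, 0)
    for t in range(1, n_frames):
        for s in range(n_states):
            stay = delta[t - 1, s]
            move = delta[t - 1, s - 1] if s > 0 else NEG
            if stay >= move:
                delta[t, s], back[t, s] = stay, s
            else:
                delta[t, s], back[t, s] = move, s - 1
            delta[t, s] += state_loglik(t, s)
    # Trace back the best state sequence and collect each state's start frame
    path = [n_states - 1]
    for t in range(n_frames - 1, 0, -1):
        path.append(back[t, path[-1]])
    path.reverse()
    return [0] + [t for t in range(1, n_frames) if path[t] != path[t - 1]]
```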

The Error Pattern Analysis of the HMM-Based Automatic Phoneme Segmentation (HMM기반 자동음소분할기의 음소분할 오류 유형 분석)

  • Kim Min-Je;Lee Jung-Chul;Kim Jong-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.5
    • /
    • pp.213-221
    • /
    • 2006
  • Phone segmentation of the speech waveform is especially important for concatenative text-to-speech synthesis, which uses segmented corpora for the construction of synthesis units, because the quality of synthesized speech depends critically on the accuracy of the segmentation. Initially, phone segmentation was performed manually, but this required huge effort and introduced long delays. HMM-based approaches adopted from automatic speech recognition are most widely used for automatic segmentation in speech synthesis, providing a consistent and accurate phone labeling scheme. Even though the HMM-based approach has been successful, it may locate a phone boundary at a different position than expected. In this paper, we categorized adjacent phoneme pairs and analyzed the mismatches between hand-labeled transcriptions and HMM-based labels, and we describe the dominant error patterns that must be improved for speech synthesis. For the experiment, the hand-labeled standard Korean speech DB from ETRI was used as the reference DB. A time difference larger than 20 ms between the hand-labeled phoneme boundary and the auto-aligned boundary was treated as an automatic segmentation error. Our experimental results for the female speaker revealed that plosive-vowel, affricate-vowel, and vowel-liquid pairs showed high accuracies of 99%, 99.5%, and 99%, respectively, but stop-nasal, stop-liquid, and nasal-liquid pairs showed very low accuracies of 45%, 50%, and 55%. The results for the male speaker revealed a similar tendency.
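
The 20 ms criterion and per-pair accuracies reported above reduce to a straightforward comparison of reference and automatic boundary times. A small sketch follows; the phoneme-pair labels are hypothetical stand-ins for the paper's categories.

```python
from collections import defaultdict

def boundary_accuracy(ref_boundaries, auto_boundaries, pair_labels, tol=0.020):
    """ref_boundaries, auto_boundaries: boundary times in seconds, aligned
    one-to-one; pair_labels: the adjacent phoneme-pair class at each boundary,
    e.g. 'plosive-vowel'.  A boundary counts as correct when the automatic
    boundary lies within `tol` (20 ms) of the hand label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ref, auto, pair in zip(ref_boundaries, auto_boundaries, pair_labels):
        total[pair] += 1
        if abs(ref - auto) <= tol:
            correct[pair] += 1
    return {pair: correct[pair] / total[pair] for pair in total}

# Hypothetical usage with three boundaries
acc = boundary_accuracy([0.12, 0.45, 0.78], [0.125, 0.47, 0.79],
                        ['plosive-vowel', 'stop-nasal', 'vowel-liquid'])
```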

Improvement of Recognition Speed for Real-time Address Speech Recognition (실시간 주소 음성인식을 위한 인식 시스템의 인식속도 개선)

  • Hwang Cheol-Jun;Oh Se-Jin;Kim Bum-Koog;Jung Ho-Youl;Chung Hyun-Yeol
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.74-77
    • /
    • 1999
  • In this paper, we propose a method that applies a new variable pruning threshold to improve the recognition speed of the address speech recognition system developed in our laboratory, and we confirm its effectiveness through experiments. The conventional variable pruning threshold repeatedly decreases a fixed threshold value once a fixed number of frames has elapsed, so unnecessary search space is still explored. In the newly proposed method, the threshold is decreased by a fixed amount during the first interval, but in subsequent frames the threshold is varied according to the candidates that remain to be searched, so the search space can be reduced effectively. To verify the effectiveness of the proposed method, it was applied to the Korean address input system developed in our laboratory. This system uses 48 continuous-HMM phoneme-like units (PLUs) as the basic recognition units, employs Maximum A Posteriori Probability Estimation (MAP) to minimize the degradation of recognition performance caused by changes in the usage environment, and uses the One-Pass Dynamic Programming (OPDP) algorithm for recognition. Recognition experiments with 75 connected address names uttered by three male speakers gave an average recognition rate of 96.0% and a recognition time of 5.26 s with a fixed pruning threshold, and 96.0% and 5.1 s with the conventional variable pruning threshold, whereas the new variable pruning threshold reduced the recognition time to 4.34 s (reductions of 0.92 s and 0.76 s, respectively) with no loss of recognition rate, confirming the effectiveness of the proposed method.

  • PDF
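
The variable pruning threshold described in the entry above (a fixed decrement during the first interval, then a threshold driven by the surviving candidates) can be sketched as a beam-width adaptation rule. The constants and the specific adaptation formula are illustrative assumptions, not the system's actual values.

```python
def prune_with_adaptive_beam(scores, frame_idx, base_beam=200.0, decrement=5.0,
                             warmup_frames=30, target_candidates=50):
    """Prune hypothesis scores (higher is better) with a variable beam.

    For the first `warmup_frames`, the beam shrinks by a fixed decrement per
    frame; afterwards it is tightened or widened so that roughly
    `target_candidates` hypotheses survive."""
    best = max(scores)
    if frame_idx < warmup_frames:
        beam = base_beam - decrement * frame_idx
    else:
        # Count candidates inside the nominal beam, then rescale the beam width
        survivors = sum(1 for s in scores if s > best - base_beam)
        beam = base_beam * min(1.0, target_candidates / max(survivors, 1))
    threshold = best - beam
    return [s for s in scores if s > threshold], threshold
```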

Noise Reduction using Spectral Subtraction in the Discrete Wavelet Transform Domain (이산 웨이브렛 변환영역에서의 스펙트럼 차감법을 이용한 잡음제거)

  • 김현기;이상운;홍재근
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.4
    • /
    • pp.306-315
    • /
    • 2001
  • In noise reduction from noisy speech for speech recognition in noisy environments, the conventional spectral subtraction method has the disadvantage that distinguishing noise from speech is difficult and the characteristics of the noise cannot be estimated accurately. Noise reduction in the wavelet transform domain also has the disadvantage that signal loss occurs in the high-frequency region. To compensate for these disadvantages, this paper proposes a spectral subtraction method in the wavelet transform domain in which speech and non-speech intervals are distinguished by the standard deviation of the wavelet coefficients and the signal is divided into three sub-bands at different scales. The proposed method accurately extracts the characteristics of the noise for spectral subtraction through endpoint detection and band division. Experiments show that the proposed method performs better than noise reduction using conventional spectral subtraction or the wavelet transform in terms of signal-to-noise ratio and Itakura-Saito distance.

  • PDF
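
A minimal sketch of the scheme described in the entry above: decompose the signal with a discrete wavelet transform (PyWavelets), flag low-activity frames by the standard deviation of the coefficients in each sub-band, estimate the noise magnitude spectrum from those frames, and subtract it per sub-band. The wavelet family, decomposition level, frame length, and speech/non-speech decision rule are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_spectral_subtract(signal, wavelet="db4", level=3, frame_len=256,
                              oversubtract=1.0):
    """Sketch: spectral subtraction applied separately inside each DWT sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    cleaned = []
    for band in coeffs:
        n_frames = max(len(band) // frame_len, 1)
        frames = [band[i * frame_len:(i + 1) * frame_len] for i in range(n_frames)]
        stds = np.array([f.std() for f in frames])
        is_noise = stds < np.median(stds)            # crude speech/non-speech decision
        spectra = [np.fft.rfft(f, frame_len) for f in frames]
        noise_mags = [np.abs(s) for s, flag in zip(spectra, is_noise) if flag]
        noise_mag = np.mean(noise_mags or [np.abs(spectra[0])], axis=0)
        out = band.copy()
        for i, spec in enumerate(spectra):
            # Subtract the noise magnitude, keep the phase, and resynthesize
            mag = np.maximum(np.abs(spec) - oversubtract * noise_mag, 0.0)
            frame_clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame_len)
            out[i * frame_len:(i + 1) * frame_len] = frame_clean[:len(frames[i])]
        cleaned.append(out)
    return pywt.waverec(cleaned, wavelet)[:len(signal)]
```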