• Title/Summary/Keyword: speaker dependent

139 search results

Fast Algorithm for Recognition of Korean Isolated Words (한국어 고립단어인식을 위한 고속 알고리즘)

  • 남명우;박규홍;정상국;노승용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.1
    • /
    • pp.50-55
    • /
    • 2001
  • This paper presents a Korean isolated-word recognition algorithm that uses a new endpoint detection method, an auditory model, the 2D-DCT, and a new distance measure. The advantages of the proposed algorithm are a simpler hardware construction and faster recognition than conventional algorithms. For comparison with a conventional algorithm, we used the DTW method. As a result, we obtained a similar recognition rate for speaker-dependent Korean isolated words and a better rate for speaker-independent Korean isolated words, and the recognition time of the proposed algorithm was 200 times faster than that of the DTW algorithm. The proposed algorithm also performed well in noisy environments.

  • PDF
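
The 2D-DCT feature-compression step mentioned above can be sketched as follows; this is a minimal naive illustration of the transform (the function name and the idea of keeping only low-order coefficients are our assumptions, not the paper's code):

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a small 2-D array (list of equal-length rows).

    In a recognizer along these lines, the auditory-model output would be
    transformed this way and only low-order coefficients kept as features.
    """
    N, M = len(block), len(block[0])
    out = [[0.0] * M for _ in range(N)]
    for u in range(N):
        for v in range(M):
            out[u][v] = sum(
                block[i][j]
                * math.cos(math.pi * (2 * i + 1) * u / (2 * N))
                * math.cos(math.pi * (2 * j + 1) * v / (2 * M))
                for i in range(N) for j in range(M))
    return out
```

For a constant block all energy lands in the (0, 0) coefficient, which is why truncating to a few low-order terms compresses the feature image well.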

An Implementation of a Mouse Pointing System Using a 3-Axis Accelerometer and Sound-Recognition Module (3축 가속도센서 및 음성인식 모듈을 이용한 마우스 포인팅 시스템의 구현)

  • Lee, Seung-Joon;Shin, Dong-Hwan;Kasno, Mohamad Afif B.;Kim, Joo-Woong;Park, Jin-Woo;Eom, Ki-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.934-937
    • /
    • 2010
  • In this paper, we pursued the implementation of a mouse pointing system that helps handicapped people, and people unfamiliar with electronics, use electronic devices easily. Using speech recognition and a 3-axis acceleration sensor in conjunction with a headset, a new mouse pointing system is constructed. We used a speaker-dependent recognition module, which generates BCD codes from recognized voice commands, because it offers a higher recognition rate than a speaker-independent system. The headset mouse system consists of a 3-axis accelerometer, a sound-recognition module, and a TMS320F2812 processor. The main controller, the TMS320F2812 DSP, communicates with the host computer via SCI. The PC-side software is implemented in Visual Basic.

  • PDF

Normalized gestural overlap measures and spatial properties of lingual movements in Korean non-assimilating contexts

  • Son, Minjung
    • Phonetics and Speech Sciences
    • /
    • v.11 no.3
    • /
    • pp.31-38
    • /
    • 2019
  • The current electromagnetic articulography study analyzes several articulatory measures and examines whether, and if so, how they are interconnected, with a focus on cluster types and an additional consideration of speech rates and morphosyntactic contexts. Using articulatory data on non-assimilating contexts from three Seoul-Korean speakers, we examine how speaker-dependent gestural overlap between C1 and C2 in a low vowel context (/a/-to-/a/) and their resulting intergestural coordination are realized. Examining three C1C2 sequences (/k(#)t/, /k(#)p/, and /p(#)t/), we found that three normalized gestural overlap measures (movement onset lag, constriction onset lag, and constriction plateau lag) were correlated with one another for all speakers. Limiting the scope of analysis to C1 velar stop (/k(#)t/ and /k(#)p/), the results are recapitulated as follows. First, for two speakers (K1 and K3), i) longer normalized constriction plateau lags (i.e., less gestural overlap) were observed in the pre-/t/ context, compared to the pre-/p/ (/k(#)t/>/k(#)p/), ii) the tongue dorsum at the constriction offset of C1 in the pre-/t/ contexts was more anterior, and iii) these two variables are correlated. Second, the three speakers consistently showed greater horizontal distance between the vertical tongue dorsum and the vertical tongue tip position in /k(#)t/ sequences when it was measured at the time of constriction onset of C2 (/k(#)t/>/k(#)p/): the tongue tip completed its constriction onset by extending further forward in the pre-/t/ contexts than the uncontrolled tongue tip articulator in the pre-/p/ contexts (/k(#)t/>/k(#)p/). Finally, most speakers demonstrated less variability in the horizontal distance of the lingual-lingual sequences, which were taken as the active articulators (/k(#)t/=/k(#)p/ for K1; /k(#)t/

A Study on Phoneme Likely Units to Improve the Performance of Context-dependent Acoustic Models in Speech Recognition (음성인식에서 문맥의존 음향모델의 성능향상을 위한 유사음소단위에 관한 연구)

  • 임영춘;오세진;김광동;노덕규;송민규;정현열
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.5
    • /
    • pp.388-402
    • /
    • 2003
  • In this paper, we carried out word, 4-continuous-digit, continuous-speech, and task-independent word recognition experiments to verify the effectiveness of the re-defined phoneme likely units (PLUs) for phonetic-decision-tree-based HM-Net (Hidden Markov Network) context-dependent (CD) acoustic modeling in Korean. In the 48-PLU set, the phonemes /ㅂ/, /ㄷ/, and /ㄱ/ are separated into initial sound, medial vowel, and final consonant, and the consonants /ㄹ/, /ㅈ/, and /ㅎ/ are separated into initial sound and final consonant, according to their position in the syllable, word, and sentence. Therefore, in this paper we re-define 39 PLUs by unifying each phoneme across the separated initial, medial, and final positions of the 48-PLU set, in order to construct the CD acoustic models effectively. In word recognition experiments with context-independent (CI) acoustic models, the 48-PLU set showed an average of 7.06% higher recognition accuracy than the 39-PLU set. However, in speaker-independent word recognition experiments with the CD acoustic models, the 39-PLU set achieved an average of 0.61% better recognition accuracy. In the 4-continuous-digit recognition experiments involving liaison phenomena, the 39-PLU set was also higher by an average of 6.55%, and in continuous speech recognition experiments it was better by an average of 15.08%. Finally, in the task-independent word recognition experiments, although both sets showed lower recognition accuracy because of unknown contextual factors, the 39-PLU set was still higher by an average of 1.17%. Through these experiments, we verified the effectiveness of the re-defined 39 PLUs compared to the 48 PLUs for constructing CD acoustic models.
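
The unification of position-dependent units into single labels can be sketched as a simple mapping; the romanized labels below are hypothetical placeholders, not the paper's actual 48-unit inventory:

```python
# Hypothetical merge table: position-dependent variants of a phoneme
# (initial / final) collapse to one phoneme likely unit (PLU).
MERGE = {"k_init": "k", "k_final": "k", "t_init": "t", "t_final": "t"}

def unify(plu_seq):
    """Map a position-dependent PLU sequence to the unified inventory."""
    return [MERGE.get(p, p) for p in plu_seq]
```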

A Study on the Computational Model of Word Sense Disambiguation, based on Corpora and Experiments on Native Speaker's Intuition (직관 실험 및 코퍼스를 바탕으로 한 의미 중의성 해소 계산 모형 연구)

  • Kim, Dong-Sung;Choe, Jae-Woong
    • Korean Journal of Cognitive Science
    • /
    • v.17 no.4
    • /
    • pp.303-321
    • /
    • 2006
  • According to Harris's (1966) distributional hypothesis, understanding the meaning of a word is thought to depend on its context. Under this hypothesis about human language ability, this paper proposes a computational model of the native speaker's language-processing mechanism for word sense disambiguation, based on two sets of experiments. Among the three computational models discussed in this paper, namely the logic model, the probabilistic model, and the probabilistic inference model, the experiments show that the logic model is applied first for semantic disambiguation of the key word; if the logic model fails to apply, then the probabilistic model becomes most relevant. The three models were also compared against the test results in terms of the Pearson correlation coefficient. It turns out that the logic model best explains human decision behaviour on ambiguous words, and the probabilistic inference model comes next. The experiment consists of two parts: one involves 30 sentences extracted from a 1-million-graphic-word corpus, where the agreement rate among native speakers on word sense disambiguation is 98%; the other part, designed to exclude the logic-model effect, is composed of 50 cleft sentences.

  • PDF
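
The two-stage decision procedure (logic model first, probabilistic fallback) can be sketched as follows; the rules, senses, and counts are invented examples, not the paper's data:

```python
# Hypothetical rule table ("logic model") and sense frequencies
# ("probabilistic model"); all entries invented for illustration.
RULES = {("bank", "river"): "shore", ("bank", "money"): "institution"}
FREQ = {"bank": {"institution": 30, "shore": 5}}

def disambiguate(word, context_words):
    """Try the logic model; fall back to the most frequent sense."""
    for c in context_words:
        sense = RULES.get((word, c))
        if sense:                      # a contextual rule fires
            return sense
    senses = FREQ.get(word, {})        # probabilistic fallback
    return max(senses, key=senses.get) if senses else None
```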

Design and Implementation of a Real-Time Lipreading System Using PCA & HMM (PCA와 HMM을 이용한 실시간 립리딩 시스템의 설계 및 구현)

  • Lee, Chi-geun;Lee, Eun-suk;Jung, Sung-tae;Lee, Sang-seol
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.11
    • /
    • pp.1597-1609
    • /
    • 2004
  • Many lipreading systems have been proposed to compensate for the drop in speech recognition rates in noisy environments. Previous lipreading systems work only under specific conditions, such as artificial lighting and a predefined background color. In this paper, we propose a real-time lipreading system that allows speaker motion and relaxes the restrictions on color and lighting conditions. The proposed system extracts the face and lip regions and the essential visual information from an input video sequence captured with a common PC camera, and recognizes uttered words from this visual information in real time. It uses a hue histogram model to extract the face and lip regions, the mean-shift algorithm to track the face of a moving speaker, PCA (Principal Component Analysis) to extract the visual features for training and testing, and an HMM (Hidden Markov Model) as the recognition algorithm. The experimental results show that our system achieves a recognition rate of 90% for speaker-dependent lipreading and raises the speech recognition rate to 40~85%, depending on the noise level, when combined with audio speech recognition.

  • PDF
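
The PCA feature-extraction step can be sketched as follows; this is a toy power-iteration version that computes only the first principal component (a real lipreading front end would keep many components over much larger pixel vectors):

```python
def first_component(X, iters=100):
    """First principal direction of the rows of X, via power iteration."""
    n, d = len(X), len(X[0])
    mu = [sum(x[i] for x in X) / n for i in range(d)]
    C = [[x[i] - mu[i] for i in range(d)] for x in X]   # centered data
    v = [1.0] * d
    for _ in range(iters):
        proj = [sum(row[i] * v[i] for i in range(d)) for row in C]       # C v
        v = [sum(proj[k] * C[k][i] for k in range(n)) for i in range(d)] # C^T (C v)
        norm = sum(x * x for x in v) ** 0.5 or 1.0
        v = [x / norm for x in v]
    return mu, v

def project(x, mu, v):
    """1-D PCA feature for one lip-image vector."""
    return sum((x[i] - mu[i]) * v[i] for i in range(len(v)))
```

The projected coefficients, computed frame by frame, would then form the observation sequence for the HMM recognizer.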

The Low Cost Implementation of Speech Recognition System for the Web (웹에서의 저가 음성인식 시스템의 구현)

  • Park, Yong-Beom;Park, Jong-Il
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.4
    • /
    • pp.1129-1135
    • /
    • 1999
  • Isolated word recognition using the Dynamic Time Warping (DTW) algorithm has shown good recognition rates in speaker-dependent environments. In practice, however, the search time of the DTW algorithm increases rapidly as the search data grows, so it is hard to deploy. In a context-dependent short-query system, such as an educational children's workbook on the Web, the number of responses to a specific question is limited; therefore, the search space for the answers can be reduced according to the question. In this paper, a low-cost implementation method using DTW for the Web is proposed. To cover the weakness of DTW, the search space is reduced by the context: the search space for a specific question is restricted to its plausible candidate answers. In a real implementation, the proposed method showed better performance in both time and recognition rate.

  • PDF
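
The template-matching core described above can be sketched with a minimal DTW distance; this is an illustration under our own simplifications (1-D feature sequences with invented values), not the paper's implementation:

```python
def dtw_distance(a, b):
    """Alignment cost between two 1-D feature sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def recognize(query, templates):
    """Pick the template word with the smallest DTW distance.

    Context-dependent pruning amounts to passing only the templates
    plausible for the current question in `templates`.
    """
    return min(templates, key=lambda w: dtw_distance(query, templates[w]))
```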

Pitch Accent Realization in North Kyungsang Korean: Tonal Alignment as a Function of Nasal Position in Syllables

  • Sohn, Hyang-Sook
    • Phonetics and Speech Sciences
    • /
    • v.3 no.2
    • /
    • pp.37-52
    • /
    • 2011
  • This study investigates patterns of the alignment of the accentual peaks in bisyllabic words of the CVNCV, CVNV, and CVNNV structures in North Kyungsang Korean. Based on the tonal alignment, patterns of the F0 pitch excursion are discussed relative to one another. Issues are addressed concerning how the tonal targets are aligned, and how the tonal specifications of nasals in postvocalic, intervocalic, and prevocalic environments are supplied in the LH, HL, and HH classes. Tonal specification of nasals in various environments is accounted for by extension of the L target, displacement of the pitch peak, and interpolation between two tonal targets, depending on the tonal class. The results in this study provide preliminary evidence that the categorical alignment of the tonal targets is implemented by simply checking the presence or absence of a nasal before or after the nucleus vowel on the segmental string, without reference to the constituency of the nasal in the syllable structure. However, the prosodic structure has a key role to play in explaining speaker-dependent variations in the tonal alignment. Sensitivity to tautosyllabicity has an effect on the shape of the F0 contour, and disparity in the patterns of the pitch excursion is represented as a function of syllable structure correlated with segmental composition of the nasal.

  • PDF

A Study on Duration Length and Place of Feature Extraction for Phoneme Recognition (음소 인식을 위한 특징 추출의 위치와 지속 시간 길이에 관한 연구)

  • Kim, Bum-Koog;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.4
    • /
    • pp.32-39
    • /
    • 1994
  • As basic research toward a Korean speech recognition system, phoneme recognition was carried out to find: 1) the place that best represents each phoneme's characteristics, and 2) the duration length required to obtain the best recognition rates. For the recognition experiments, multi-speaker-dependent recognition with a Bayesian decision rule, using 21st-order cepstral coefficients as feature parameters, was adopted. It turned out that the best regions of feature extraction for the highest recognition rates were 10~50 ms for vowels, 40~100 ms for fricatives and affricates, 10~50 ms for nasals and liquids, and 10~50 ms for plosives. A duration of about 70 ms was sufficient for recognizing all 35 phonemes.

  • PDF
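
The Bayesian decision rule over per-phoneme models can be sketched as follows; this toy version assumes diagonal-covariance Gaussians and equal priors, which is our simplification rather than the paper's exact model:

```python
import math

def fit(frames):
    """Per-dimension mean and variance for one phoneme's training frames."""
    n = len(frames)
    mu = [sum(c) / n for c in zip(*frames)]
    var = [max(sum((x - m) ** 2 for x in c) / n, 1e-6)   # floor the variance
           for c, m in zip(zip(*frames), mu)]
    return mu, var

def log_lik(x, model):
    """Diagonal-Gaussian log-likelihood of one feature vector."""
    mu, var = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mu, var))

def classify(x, models):
    """Bayes decision with equal priors: the maximum-likelihood phoneme."""
    return max(models, key=lambda p: log_lik(x, models[p]))
```

In the paper's setting, `x` would be a 21st-order cepstral vector extracted at the durations and positions found best for each phoneme class.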

User Adaptive Post-Processing in Speech Recognition for Mobile Devices (모바일 기기를 위한 음성인식의 사용자 적응형 후처리)

  • Kim, Young-Jin;Kim, Eun-Ju;Kim, Myung-Won
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.338-342
    • /
    • 2007
  • In this paper we propose a user-adaptive post-processing method to improve the accuracy of speaker-dependent, isolated-word speech recognition, particularly for mobile devices. Our method treats the recognition result of the basic recognizer simply as a high-level speech feature and processes it further to obtain the correct recognition result. It learns the correlation between the output of the basic recognizer and the correct final result, and uses it to correct erroneous outputs of the basic recognizer. A multi-layer perceptron model is built for each frequently misrecognized word. As a result of the experiments, we achieved a significant improvement of 41% in recognition accuracy (a 41% error-correction rate).
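
The paper trains a multi-layer perceptron per frequently misrecognized word; as a much simpler stand-in for the same learn-to-correct idea, the sketch below builds a confusion-count correction table from (recognizer output, correct word) adaptation pairs. All names and data are hypothetical.

```python
from collections import Counter, defaultdict

def learn_corrections(pairs, min_count=2):
    """pairs: (recognizer_output, correct_word) from user adaptation data."""
    counts = defaultdict(Counter)
    for out, truth in pairs:
        counts[out][truth] += 1
    table = {}
    for out, c in counts.items():
        best, n = c.most_common(1)[0]
        if best != out and n >= min_count:
            table[out] = best   # this output is usually wrong for this user
    return table

def correct(output, table):
    """Post-process one recognizer output; unknown outputs pass through."""
    return table.get(output, output)
```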