• Title/Summary/Keyword: word boundary detection

A Modified Viterbi Algorithm for Word Boundary Detection Error Compensation (단어 경계 검출 오류 보정을 위한 수정된 비터비 알고리즘)

  • Chung, Hoon;Chung, Ik-Joo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.1E
    • /
    • pp.21-26
    • /
    • 2007
  • In this paper, we propose a modified Viterbi algorithm to compensate for endpoint detection errors during the decoding phase of an isolated word recognition task. Since the conventional Viterbi algorithm explores only the search space whose boundaries are fixed to the endpoints produced by the endpoint detector, recognition performance depends heavily on the accuracy of endpoint detection, and inaccurately segmented word boundaries lead directly to recognition errors. To mitigate the degradation of recognition accuracy caused by endpoint detection errors, we describe an unconstrained search over word boundaries and present an algorithm that explores this enlarged search space efficiently. The proposed algorithm was evaluated on an isolated word recognition task under a variety of simulated endpoint detection error conditions. It reduced the Word Error Rate (WER) considerably, from 84.4% to 10.6%, while requiring only slightly more computation.
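
To make the boundary-relaxation idea concrete, here is a minimal sketch, not the paper's exact dynamic-programming formulation: instead of decoding only the frame range returned by the endpoint detector, the decoder also scores nearby candidate start and end frames and keeps the best length-normalized path. The slack windows, the forced entry/exit states, and the per-frame normalization are illustrative assumptions.

```python
import numpy as np

def viterbi_log(log_trans, log_emit):
    """Best log score of a path that enters the first state at the first
    frame and leaves from the last state at the last frame."""
    T, S = log_emit.shape
    delta = np.full(S, -np.inf)
    delta[0] = log_emit[0, 0]
    for t in range(1, T):
        delta = np.max(delta[:, None] + log_trans, axis=0) + log_emit[t]
    return delta[-1]

def relaxed_boundary_score(log_trans, log_emit, start_slack=5, end_slack=5):
    """Score every candidate (start, end) frame pair inside the slack
    windows, rather than trusting the detector's fixed endpoints, and
    return the best per-frame normalized score."""
    T = log_emit.shape[0]
    best = -np.inf
    for s in range(0, min(start_slack, T - 2) + 1):
        for e in range(max(T - end_slack, s + 2), T + 1):
            best = max(best, viterbi_log(log_trans, log_emit[s:e]) / (e - s))
    return best
```

In an isolated word recognizer, this score would be computed for each word model and the highest-scoring word chosen; the paper's contribution lies in exploring this enlarged search space efficiently rather than by brute force as above.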

Word Boundary Detection of Voice Signal Using Recurrent Fuzzy Associative Memory (순환 퍼지연상기억장치를 이용한 음성경계 추출)

  • Ma Chang-Su;Kim Gye-Young
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.9
    • /
    • pp.1171-1179
    • /
    • 2004
  • We describe word boundary detection, which extracts the boundary between speech and non-speech. The proposed method uses two features. One is the normalized root mean square (RMS) of the speech signal, which is insensitive to white noise and captures temporal information. The other is the normalized mel-frequency band energy of the signal, which captures frequency information. Our method detects word boundaries using a recurrent fuzzy associative memory (RFAM) that extends FAM by adding recurrent nodes. Hebbian learning is employed to establish the degree of association between inputs and outputs, and an error back-propagation algorithm is used to learn the weights between the consequent layer and the recurrent layer. To confirm its effectiveness, we applied the proposed system to voice data obtained from KAIST.
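
A rough sketch of the two features described above: a per-frame normalized RMS (temporal cue) and normalized mel-spaced band energies (spectral cue). The frame length, hop size, band count, and the simple rectangular mel bands are assumptions for illustration; the paper's exact normalization and the RFAM classifier itself are not reproduced here.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def frame_features(signal, sr, frame_len=400, hop=160, n_bands=8):
    """Per-frame normalized RMS and normalized mel-band energies,
    roughly in the spirit of the RMS/MFBE feature pair."""
    frames = np.array([signal[i:i + frame_len]
                       for i in range(0, len(signal) - frame_len + 1, hop)],
                      dtype=float)

    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    rms_norm = rms / (rms.max() + 1e-12)                  # temporal cue

    spec = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1)) ** 2
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    edges = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2.0), n_bands + 1))
    band_energy = np.stack([spec[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                            for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    band_norm = band_energy / (band_energy.sum(axis=1, keepdims=True) + 1e-12)
    return rms_norm, band_norm                            # spectral cue
```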

The Role of Post-lexical Intonational Patterns in Korean Word Segmentation

  • Kim, Sa-Hyang
    • Speech Sciences
    • /
    • v.14 no.1
    • /
    • pp.37-62
    • /
    • 2007
  • The current study examines the role of the post-lexical tonal patterns of a prosodic phrase in word segmentation. In a word spotting experiment, native Korean listeners were asked to spot a disyllabic or trisyllabic word in a twelve-syllable speech stream composed of three Accentual Phrases (APs). Words occurred with various post-lexical intonation patterns. The results showed that listeners spotted more words in phrase-initial than in phrase-medial position, suggesting that the AP-final H tone of the preceding AP helped listeners segment the phrase-initial word of the target AP. Results also showed that listeners' error rates were significantly lower when words occurred with an initial rising tonal pattern, the most frequent intonational pattern imposed on multisyllabic words in Korean, than with non-rising patterns. This result was observed both in AP-initial and in AP-medial positions, regardless of the frequency and legality of the overall AP tonal patterns. Tonal cues other than the initial rising tone did not positively influence the error rate. These results not only indicate that a rising tone in AP-initial and AP-final position is a reliable cue to word boundaries for Korean listeners, but further suggest that phrasal intonation contours can serve as a word boundary cue in languages without lexical prominence.

The Role of Prosodic Boundary Cues in Word Segmentation in Korean

  • Kim, Sa-Hyang
    • Speech Sciences
    • /
    • v.13 no.1
    • /
    • pp.29-41
    • /
    • 2006
  • This study investigates the degree to which various prosodic cues at the boundaries of prosodic phrases in Korean contribute to word segmentation. Since most phonological words in Korean are produced as one Accentual Phrase (AP), it was hypothesized that the detection of acoustic cues at AP boundaries would facilitate word segmentation. The prosodic characteristics of Korean APs include initial strengthening at the beginning of the phrase and pitch rise and final lengthening at the end. A perception experiment utilizing an artificial language learning paradigm revealed that cues conforming to the aforementioned prosodic characteristics of Korean facilitated listeners' word segmentation. Results also indicated that duration and amplitude cues were more helpful in segmentation than pitch. Nevertheless, results did show that a pitch cue that did not conform to the Korean AP interfered with segmentation.

Endpoint Detection of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 끝점검출)

  • 석종원;배건성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.6
    • /
    • pp.57-64
    • /
    • 1999
  • In this paper, we investigate a robust endpoint detection algorithm for noisy environments. A new feature parameter based on the discrete wavelet transform is proposed for word boundary detection of isolated utterances: the sum of the standard deviation of the wavelet coefficients in the third coarse (approximation) scale and a weighted standard deviation in the first detail scale. We then develop a new and robust endpoint detection algorithm using this wavelet-domain feature. For performance evaluation, we measured detection accuracy and the average recognition error rate attributable to endpoint detection in an HMM-based recognition system across several signal-to-noise ratios and noise conditions.
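
A minimal sketch of a feature in this spirit, using the PyWavelets package: for each frame, the standard deviation of the level-3 approximation coefficients plus a weighted standard deviation of the level-1 detail coefficients. The wavelet family and the weight are illustrative choices, not the paper's settings, and the thresholding logic that turns the feature into endpoint decisions is omitted.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_endpoint_feature(frame, wavelet="db4", weight=0.5):
    """Sum of the std of the coarse (level-3 approximation) coefficients
    and a weighted std of the first detail coefficients of one frame."""
    cA3, cD3, cD2, cD1 = pywt.wavedec(np.asarray(frame, dtype=float),
                                      wavelet, level=3)
    return np.std(cA3) + weight * np.std(cD1)
```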

Automatic Synthesis Method Using Prosody-Rich Database (대용량 운율 음성데이타를 이용한 자동합성방식)

  • 김상훈
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.87-92
    • /
    • 1998
  • In general, synthesis unit databases have been constructed by recording isolated words. In that case, each word boundary carries a typical prosodic pattern such as falling intonation or pre-boundary lengthening. To obtain natural synthetic speech from such a database, the original speech must be artificially distorted, but this artificial processing tends to produce unnatural, unintelligible synthetic speech because of the excessive prosodic modification of the speech signal. To overcome these problems, we collected thousands of sentences for the synthesis database. To build phone-level synthesis units, we trained a speech recognizer on the recorded speech and then segmented phone boundaries automatically; a laryngograph was used for epoch detection. From the automatically generated synthesis database, we choose the best phone and concatenate it directly, without any prosody processing. To select the best phone among multiple candidates, we use prosodic information such as the break strength of word boundaries, phonetic context, cepstrum, pitch, energy, and phone duration. A pilot test yielded some positive results.
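
The selection step described above can be sketched as a weighted nearest-candidate search over prosodic and contextual features. The feature names, weights, and the absence of a join cost between adjacent units are simplifications for illustration only, not the system's actual cost function.

```python
import numpy as np

def select_units(targets, candidates, weights):
    """For each target phone specification (a dict of numeric features such
    as break strength, pitch, energy, duration), pick the candidate unit of
    the same phone that is closest under a weighted L1 distance.
    Real unit selection also scores the join between consecutive units."""
    chosen = []
    for tgt in targets:
        pool = candidates[tgt["phone"]]   # candidate units for this phone
        costs = [sum(w * abs(tgt[k] - cand[k]) for k, w in weights.items())
                 for cand in pool]
        chosen.append(pool[int(np.argmin(costs))])
    return chosen
```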

Noise-Robust Speech Detection Using The Coefficient of Variation of Spectrum (스펙트럼의 변동계수를 이용한 잡음에 강인한 음성 구간 검출)

  • Kim Youngmin;Hahn Minsoo
    • MALSORI
    • /
    • no.48
    • /
    • pp.107-116
    • /
    • 2003
  • This paper deals with a new parameter for voice detection, a component used in many areas of speech engineering such as speech synthesis, speech recognition, and speech coding. The coefficient of variation (CV) of the speech spectrum, together with other feature parameters, is used to detect speech; the CV is calculated only over a specific range of the spectrum. Average magnitude and spectral magnitude are also employed to improve the detector's performance. Experimental results show that the proposed voice detector outperforms a conventional energy-based detector in terms of error measurements.
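
The central quantity, the coefficient of variation of the spectrum, can be computed per frame roughly as below. The analysis band is an illustrative assumption; the paper restricts the CV to a specific spectral range whose exact limits are not reproduced here, and the combination with the magnitude cues is omitted.

```python
import numpy as np

def spectral_cv(frame, sr, band=(300.0, 3500.0)):
    """Coefficient of variation (std / mean) of the magnitude spectrum,
    computed only inside a band where speech energy concentrates."""
    frame = np.asarray(frame, dtype=float)
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    sel = spec[(freqs >= band[0]) & (freqs <= band[1])]
    return np.std(sel) / (np.mean(sel) + 1e-12)
```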

A Study on Korean Isolated Word Speech Detection and Recognition using Wavelet Feature Parameter (Wavelet 특징 파라미터를 이용한 한국어 고립 단어 음성 검출 및 인식에 관한 연구)

  • Lee, Jun-Hwan;Lee, Sang-Beom
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.7
    • /
    • pp.2238-2245
    • /
    • 2000
  • In this paper, feature parameters extracted with the wavelet transform from Korean isolated-word speech are used for speech detection and recognition. For speech detection, the wavelet-based feature yields more accurate word boundaries than the conventional method based on energy and zero-crossing rate. For recognition, an MFCC-style feature parameter derived from the wavelet transform gives results equal to those of the conventional FFT-based MFCC feature. These results confirm the usefulness of wavelet-transform feature parameters for speech analysis and recognition.
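
For reference, the conventional cues that the wavelet feature is compared against, short-time energy and zero-crossing rate, can be computed per frame as in this sketch; the thresholds that turn them into boundary decisions are omitted.

```python
import numpy as np

def energy_zcr(frame):
    """Baseline endpoint-detection cues for one frame: short-time energy
    and zero-crossing rate."""
    frame = np.asarray(frame, dtype=float)
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy, zcr
```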

Word Boundary Detection of Voice Signal Using Recurrent Fuzzy Associative Memory (순환 퍼지연상기억장치를 이용한 음성경계 추출)

  • 마창수;김계영;최형일
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.235-237
    • /
    • 2003
  • In this paper, we describe word boundary detection, a preprocessing step for speech recognition that locates the portion of the signal to be recognized. As feature vectors for word boundary detection, we use RMS, a temporal feature, and MFBE, a frequency-domain feature. To incorporate the temporal information of speech into a fuzzy associative memory, which generates rules through learning, we propose a new form of recurrent fuzzy associative memory that adds recurrent nodes.

Improved Sentence Boundary Detection Method for Web Documents (웹 문서를 위한 개선된 문장경계인식 방법)

  • Lee, Chung-Hee;Jang, Myung-Gil;Seo, Young-Hoon
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.6
    • /
    • pp.455-463
    • /
    • 2010
  • In this paper, we present an approach to sentence boundary detection for web documents that builds on statistical methods and uses rule-based correction. The proposed system uses a classification model learned offline from a training set of human-labeled web documents. Web documents contain many word-spacing errors and frequently lack the punctuation marks that indicate sentence boundaries, so the proposed method considers every ending eomi (sentence-final ending) as well as punctuation marks as sentence boundary candidates. We optimize engine performance by selecting the best features, the best training data, and the best classification algorithm. For evaluation, we built two test sets: Set1, consisting of articles and blog documents, and Set2, consisting of web community documents. We use the F-measure to compare results. Detecting only periods as sentence boundaries, our baseline engine scored 96.5% on Set1 but only 56.7% on Set2. We then improved the baseline engine by adapting the features and the boundary search algorithm. In the final evaluation on Set2, the adapted engine improved on the baseline by 39.6%, demonstrating the effectiveness of the proposed method for sentence boundary detection.
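
The candidate-generation step described above can be sketched as follows: every punctuation mark and every token ending in a sentence-final eomi becomes a boundary candidate, and a classifier, stubbed out here, accepts or rejects each one from its local context. The eomi list and the context window are illustrative assumptions, not the paper's learned feature set.

```python
import re

# Illustrative sentence-final eomi endings; the actual candidate set and
# features are learned from the annotated web documents.
ENDING_EOMIS = ("다", "요", "죠", "까")
PUNCT = (".", "!", "?")

def boundary_candidates(text):
    """Yield (end_index, token) for every token that could end a sentence:
    tokens ending in a punctuation mark or in an ending eomi."""
    for m in re.finditer(r"\S+", text):
        tok = m.group()
        if tok.endswith(PUNCT) or tok.endswith(ENDING_EOMIS):
            yield m.end(), tok

def split_sentences(text, classify=lambda context: True):
    """Split at every candidate the classifier accepts; `classify` stands in
    for the trained statistical model plus rule-based correction."""
    sentences, prev = [], 0
    for idx, tok in boundary_candidates(text):
        if classify(text[max(0, idx - 20):idx + 20]):
            sentences.append(text[prev:idx].strip())
            prev = idx
    if text[prev:].strip():
        sentences.append(text[prev:].strip())
    return sentences
```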