• Title/Summary/Keyword: spoken word recognition

A Study on Recognition of Korean Postpositions and Suffixes in Continuous Speech (한국어 연속음성에서의 조사 및 어미 인식에 관한 연구)

  • Song, Min-Suck; Lee, Ki-Young
    • Speech Sciences / v.6 / pp.181-195 / 1999
  • This study proposes a method of recognizing postpositions and suffixes in spoken Korean using prosodic information. We first detect grammatical boundaries automatically from the prosodic information of the accentual phrase, and then recognize grammatical function words by backward tracking from those boundaries. The experiment uses 300 spoken sentences of standard Korean from 10 male and 5 female speakers, containing 1,080 accentual phrases and 11 postpositions and suffixes. Recognition rates for postpositions are reported for two cases: when only correctly detected boundaries are counted, the recognition rate is 97.5%; when all detected boundaries are counted, it is 74.8%.

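Only the backward-tracking step lends itself to a compact illustration. The sketch below assumes the accentual-phrase boundaries have already been detected from prosody and simply matches a small, hypothetical lexicon of postpositions and suffixes backwards from the right edge of a phrase; the lexicon and the example phrase are placeholders, not the paper's data.

```python
# Illustrative lexicon of Korean postpositions and suffixes (placeholder).
FUNCTION_WORDS = {"은", "는", "이", "가", "을", "를", "에서", "으로", "습니다", "었다"}

def backward_track(syllables, max_len=3):
    """Return the longest function word that ends at the detected boundary,
    i.e., at the right edge of the syllable sequence of an accentual phrase."""
    for n in range(min(max_len, len(syllables)), 0, -1):
        candidate = "".join(syllables[-n:])
        if candidate in FUNCTION_WORDS:
            return candidate
    return None

# One accentual phrase whose right boundary was detected from prosody.
print(backward_track(["학", "교", "에", "서"]))   # -> 에서
```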

Retrieval of Player Event in Golf Videos Using Spoken Content Analysis (음성정보 내용분석을 통한 골프 동영상에서의 선수별 이벤트 구간 검색)

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.28 no.7 / pp.674-679 / 2009
  • This paper proposes a method of player event retrieval that combines two functions: detection of player names in the speech information and detection of sound events in the audio information of golf videos. The system consists of an indexing module and a retrieval module. At indexing time, audio segmentation and noise reduction are applied to the audio stream demultiplexed from the golf videos. The noise-reduced speech is then fed into a speech recognizer, which outputs spoken descriptors. The player names and sound events are indexed by these spoken descriptors. At search time, the text query is converted into phoneme sequences, and the lists for each query term are retrieved through a description matcher that identifies full and partial phrase hits. For the retrieval of player names, the paper compares the results of word-based, phoneme-based, and hybrid approaches.
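
As a rough illustration of the phoneme-level matching described above, the sketch below slides the query's phoneme sequence over an indexed spoken descriptor and reports full and partial phrase hits. The phone transcriptions, the 0.7 partial-match threshold, and the descriptor contents are illustrative assumptions, not the paper's actual matcher.

```python
def phoneme_hits(query_phones, descriptor_phones, partial_ratio=0.7):
    """Slide the query over the descriptor and report full and partial hits."""
    q, hits = len(query_phones), []
    for start in range(len(descriptor_phones) - q + 1):
        window = descriptor_phones[start:start + q]
        matched = sum(a == b for a, b in zip(query_phones, window))
        ratio = matched / q
        if ratio == 1.0:
            hits.append(("full", start))
        elif ratio >= partial_ratio:
            hits.append(("partial", start))
    return hits

# Query "Tiger Woods" rendered as an illustrative phoneme sequence.
query = ["t", "ai", "g", "er", "w", "u", "d", "z"]
descriptor = ["n", "au", "t", "ai", "g", "er", "w", "u", "d", "s", "t", "ii"]
print(phoneme_hits(query, descriptor))   # partial hit at offset 2 (7/8 phones match)
```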

Speech Recognition in the Car Noise Environment (자동차 소음 환경에서 음성 인식)

  • 김완구; 차일환; 윤대희
    • Journal of the Korean Institute of Telematics and Electronics B / v.30B no.2 / pp.51-58 / 1993
  • This paper describes the development of a speaker-dependent isolated word recognizer applied to voice dialing in a car noise environment. For this purpose, several methods to improve performance under such conditions are evaluated using a database collected in a small car moving at 100 km/h. The main features of the recognizer are as follows. Endpoint detection errors are reduced by using the magnitude of the signal inverse-filtered by the AR model of the background noise, and remaining errors are compensated by variants of the DTW algorithm. To remove the noise, an autocorrelation subtraction method is used under the constraint that the residual energy obtained by linear predictive analysis must remain positive. A noise-robust distance measure minimizes the distortion of the feature vectors. The speech recognizer is implemented on the Motorola DSP56001 (a 24-bit general-purpose digital signal processor). The recognition database is composed of 50 Korean names spoken by 3 male speakers. The recognition error rate of the system is reduced to 4.3% using a single reference pattern per word and to 1.5% using 2 reference patterns per word.

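The noise-reduction step above can be sketched as follows: subtract a scaled noise autocorrelation from the noisy-speech autocorrelation, and back off the subtraction factor whenever the residual energy from the Levinson-Durbin recursion would become non-positive. The LPC order, back-off schedule, and synthetic signals below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def autocorr(x, order):
    """Biased autocorrelation r[0..order] of a 1-D signal."""
    return np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])

def lpc_residual_energy(r):
    """Final prediction-error energy of the Levinson-Durbin recursion."""
    order = len(r) - 1
    err, a = r[0], np.zeros(order)
    for i in range(1, order + 1):
        acc = r[i] - np.dot(a[:i - 1], r[i - 1:0:-1])
        k = acc / err
        if i > 1:
            a[:i - 1] = a[:i - 1] - k * a[i - 2::-1]
        a[i - 1] = k
        err *= (1.0 - k * k)
    return err

def subtract_noise_autocorr(r_noisy, r_noise, alpha=1.0):
    """Subtract the noise autocorrelation, relaxing alpha until the
    LPC residual energy stays positive."""
    while alpha > 0.0:
        r_clean = r_noisy - alpha * r_noise
        if r_clean[0] > 0.0 and lpc_residual_energy(r_clean) > 0.0:
            return r_clean
        alpha -= 0.1                      # relax the subtraction factor
    return r_noisy                        # fall back to the noisy estimate

# Toy usage with synthetic signals standing in for noisy speech and noise.
rng = np.random.default_rng(0)
noise = rng.normal(size=4000)
noisy_speech = np.sin(0.1 * np.arange(4000)) + noise
r_hat = subtract_noise_autocorr(autocorr(noisy_speech, 12), autocorr(noise, 12))
```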

A Study on the Recognition of Korean 4 Connected Digits Considering Co-articulation (조음결합을 고려한 4연 숫자음 인식에 관한 연구)

  • 이종진; 이광석; 허강인; 김명기; 고시영
    • The Journal of Korean Institute of Communications and Information Sciences / v.17 no.1 / pp.20-28 / 1992
  • Co-articulation is one of the major factors that make connected word recognition difficult. This study considers the fact that, at a connection point, the head part of the following word is changed by the preceding word, and addresses it by applying a co-articulation model and adjusting the following word accordingly. We choose a critically damped second-order linear system as the co-articulation model, combine a one-stage DP matching recognition algorithm with this model, and investigate the effects. The recognition experiment is carried out on 35 Korean 4-connected-digit strings spoken by 5 male speakers, and the recognition rate is improved by 4.7 percent.

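A minimal sketch of the head adjustment, assuming it can be expressed as blending the first frames of the following word's template away from the preceding word's last frame along the step response of a critically damped second-order system. The time constant, number of adjusted frames, and feature layout are illustrative guesses rather than the paper's parameters.

```python
import numpy as np

def critically_damped_step(n_frames, tau=3.0):
    """Step response of a critically damped second-order system (0 -> 1)."""
    t = np.arange(n_frames)
    return 1.0 - (1.0 + t / tau) * np.exp(-t / tau)

def adjust_word_head(prev_last_frame, next_word, n_head=6, tau=3.0):
    """Blend the first n_head frames of `next_word` (frames x features) from
    the preceding word's last frame toward the original template."""
    adjusted = np.asarray(next_word, dtype=float).copy()
    w = critically_damped_step(min(n_head, len(adjusted)), tau)[:, None]
    adjusted[:len(w)] = (1.0 - w) * prev_last_frame + w * adjusted[:len(w)]
    return adjusted

# Toy usage: the head of an all-zero template eases away from the preceding
# word's last frame instead of jumping to the template values immediately.
prev_tail = np.array([1.0, 1.0, 1.0])
print(adjust_word_head(prev_tail, np.zeros((10, 3)))[:3])
```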

Study on the Recognition of Spoken Korean Continuous Digits Using Phone Network (음성망을 이용한 한국어 연속 숫자음 인식에 관한 연구)

  • Lee, G.S.; Lee, H.J.; Byun, Y.G.; Kim, S.H.
    • Proceedings of the KIEE Conference / 1988.07a / pp.624-627 / 1988
  • This paper describes the implementation of speaker-dependent recognition of Korean spoken continuous digits. The recognition system can be divided into two parts: an acoustic-phonetic processor and a lexical decoder. The acoustic-phonetic processor calculates feature vectors from the input speech signal and then performs frame labelling and phone labelling. Frame labelling is done by a Bayesian classification method, and phone labelling uses the labelled frames and a posteriori probabilities. The lexical decoder accepts segments (phones) from the acoustic-phonetic processor and decodes their lexical structure through a phone network constructed from the phonetic representations of the ten digits. The experiment was carried out with two sets of 4-digit continuous strings, each set composed of 35 patterns. An evaluation of the system yielded a pattern accuracy of about 80 percent, resulting from a word accuracy of about 95 percent.

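The lexical decoding through a phone network can be pictured with the toy sketch below: the ten digit pronunciations form a trie-like network, and a continuous phone string is decoded greedily by always taking the longest word match. The romanized phone transcriptions are placeholders, and the greedy decoder is a simplification of the probabilistic decoding described in the paper.

```python
# Illustrative romanized phone transcriptions of the ten Korean digits.
DIGIT_PHONES = {
    "il": ["i", "l"], "i": ["i"], "sam": ["s", "a", "m"], "sa": ["s", "a"],
    "o": ["o"], "yuk": ["y", "u", "k"], "chil": ["ch", "i", "l"],
    "pal": ["p", "a", "l"], "gu": ["g", "u"], "gong": ["g", "o", "ng"],
}

def build_network(lexicon):
    """Build a trie-like phone network; word labels sit on terminal nodes."""
    root = {}
    for word, phones in lexicon.items():
        node = root
        for ph in phones:
            node = node.setdefault(ph, {})
        node["#word"] = word
    return root

def decode(phones, network):
    """Greedily decode a continuous phone string into a digit sequence,
    preferring the longest match at each position."""
    out, i = [], 0
    while i < len(phones):
        node, match, j = network, None, i
        while j < len(phones) and phones[j] in node:
            node = node[phones[j]]
            j += 1
            if "#word" in node:
                match = (node["#word"], j)
        if match is None:
            i += 1                      # skip an undecodable phone
        else:
            out.append(match[0])
            i = match[1]
    return out

print(decode(["s", "a", "m", "g", "u", "i", "l"], build_network(DIGIT_PHONES)))
# -> ['sam', 'gu', 'il']
```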

Korean Speech Recognition using Dynamic Multisection Model (DMS 모델을 이용한 한국어 음성 인식)

  • 안태옥; 변용규; 김순협
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.12 / pp.1933-1939 / 1990
  • In this paper, we propose a DMS (Dynamic Multisection) algorithm for speaker-independent Korean speech recognition, in which word patterns spoken over the continuous time domain are modelled by feature vectors and time information representing their similar features, and a backtracking method is used to obtain the time information. Each state of the model is arranged in time sequence and holds time information and a feature vector. A representative feature vector is chosen for each state so as to minimize the distance between word patterns. DDD area names are selected as the recognition vocabulary, and 12th-order LPC cepstrum coefficients are used as the feature parameters. The model uses 8 sections, and a weight of 0.2 is applied to the time information. Experimental results show a recognition rate of 94.8% for the DMS model, which is better than the 89.3% obtained with the MSVQ (Multisection Vector Quantization) method.

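A rough sketch of DMS-style matching, assuming each word model holds eight states with a representative feature vector and a normalized time position, and that each input frame is scored against its best state with the time term weighted by 0.2 as in the paper. The equal-length section split and the toy vocabulary below are simplifications of the training and backtracking procedure.

```python
import numpy as np

N_SECTIONS, TIME_WEIGHT = 8, 0.2      # 8 sections, 0.2 weight on time information

def train_dms_model(pattern):
    """Build a DMS model from one training pattern (frames x features):
    each of the 8 states gets a mean feature vector and a time position."""
    chunks = np.array_split(np.asarray(pattern, dtype=float), N_SECTIONS)
    feats = np.stack([c.mean(axis=0) for c in chunks])
    times = np.array([(i + 0.5) / N_SECTIONS for i in range(N_SECTIONS)])
    return feats, times

def dms_distance(pattern, model):
    """Charge every input frame to its best state, combining the feature
    distance with the time-position mismatch weighted by 0.2."""
    frames = np.asarray(pattern, dtype=float)
    m_feats, m_times = model
    total = 0.0
    for i, frame in enumerate(frames):
        t = (i + 0.5) / len(frames)                   # normalized frame time
        d = np.linalg.norm(m_feats - frame, axis=1) + TIME_WEIGHT * np.abs(m_times - t)
        total += d.min()
    return total / len(frames)

def recognize(pattern, models):
    """Pick the vocabulary word whose DMS model is nearest to the input."""
    return min(models, key=lambda word: dms_distance(pattern, models[word]))

# Toy usage: random patterns of 12th-order cepstra stand in for area names.
rng = np.random.default_rng(1)
models = {"서울": train_dms_model(rng.normal(size=(40, 12))),
          "부산": train_dms_model(rng.normal(size=(36, 12)))}
print(recognize(rng.normal(size=(38, 12)), models))
```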

Using Utterance and Semantic Level Confidence for Interactive Spoken Dialog Clarification

  • Jung, Sang-Keun; Lee, Cheong-Jae; Lee, Gary Geunbae
    • Journal of Computing Science and Engineering / v.2 no.1 / pp.1-25 / 2008
  • Spoken dialog tasks incur many errors, including speech recognition errors, understanding errors, and even dialog management errors. These errors create a large gap between the user's intention and the system's understanding, which eventually results in misinterpretation. To fill this gap, people in human-to-human dialogs try to clarify the major causes of the misunderstanding and selectively correct them. This paper presents a method for applying such clarification techniques to human-to-machine spoken dialog systems. We view the clarification dialog as a two-step problem: belief confirmation and clarification strategy establishment. To confirm the belief, we organize the clarification process into three systematic phases. In the belief confirmation phase, we consider the overall dialog system's processes, including speech recognition, language understanding, and semantic slot-value pairs, for clarification dialog management. A clarification expert is developed for establishing the clarification dialog strategy. In addition, we propose a new design for plugging the clarification dialog module into a given expert-based dialog system. The experimental results demonstrate that the error verifiers effectively catch word- and utterance-level semantic errors and that the clarification experts actually increase the dialog success rate and the dialog efficiency.
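
A hypothetical sketch of a confidence-driven clarification decision in the spirit of the entry above: an utterance-level score gates whole-utterance re-asking, while slot-level (semantic) scores trigger targeted confirmation of the least reliable slot. The thresholds, slot names, and strategy labels are illustrative assumptions, not the paper's experts or settings.

```python
from dataclasses import dataclass

@dataclass
class SlotHypothesis:
    name: str          # semantic slot, e.g. "city" (illustrative)
    value: str         # recognized value, e.g. "Daejeon"
    confidence: float  # semantic-level confidence in [0, 1]

def clarification_strategy(utterance_conf, slots,
                           utt_threshold=0.4, slot_threshold=0.6):
    """Return a (strategy, payload) pair for the dialog manager."""
    if utterance_conf < utt_threshold:
        return ("reask_utterance", None)            # too unreliable overall
    doubtful = [s for s in slots if s.confidence < slot_threshold]
    if not doubtful:
        return ("accept", {s.name: s.value for s in slots})
    worst = min(doubtful, key=lambda s: s.confidence)
    return ("confirm_slot", worst)                  # e.g. "Did you say Daejeon?"

slots = [SlotHypothesis("city", "Daejeon", 0.55), SlotHypothesis("date", "Friday", 0.9)]
print(clarification_strategy(0.8, slots))           # -> ('confirm_slot', city hypothesis)
```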

Lexico-semantic interactions during the visual and spoken recognition of homonymous Korean Eojeols (한국어 시·청각 동음동철이의 어절 재인에 나타나는 어휘-의미 상호작용)

  • Kim, Joonwoo; Kang, Kathleen Gwi-Young; Yoo, Doyoung; Jeon, Inseo; Kim, Hyun Kyung; Nam, Hyeomin; Shin, Jiyoung; Nam, Kichun
    • Phonetics and Speech Sciences / v.13 no.1 / pp.1-15 / 2021
  • The present study investigated the mental representation and processing of ambiguous words in the bimodal processing system by manipulating the lexical ambiguity of visually or auditorily presented words. Homonyms (e.g., '물었다') with two or more meanings and control words (e.g., '고통을') with a single meaning were used in the experiments. The lemma frequency of the words was manipulated while the relative frequency of the multiple meanings of each homonym was balanced. In both experiments, which used the lexical decision task, a robust frequency effect and a critical interaction of word type by frequency were found. In Experiment 1, spoken homonyms yielded faster latencies relative to control words (i.e., an ambiguity advantage) in the low-frequency condition, while an ambiguity disadvantage was found in the high-frequency condition. A similar interactive pattern was found for visually presented homonyms in the subsequent Experiment 2. Taken together, the first key finding is that interdependent lexico-semantic processing can be found in both the visual and the auditory processing systems, which in turn suggests that semantic processing is not modality dependent but rather takes place on the basis of general lexical knowledge. The second is that multiple semantic candidates provide facilitative feedback only when the lemma frequency of the word is relatively low.

A Study on the Spoken Korean-Digit Recognition Using the Neural Network (神經網을 利用한 韓國語 數字音 認識에 관한 硏究)

  • Park, Hyun-Hwa; Gahang, Hae Dong; Bae, Keun Sung
    • The Journal of the Acoustical Society of Korea / v.11 no.3 / pp.5-13 / 1992
  • Taking advantage of the property that each Korean digit is a monosyllabic word, we propose a spoken Korean-digit recognition scheme using the multi-layer perceptron. Each spoken Korean digit is divided into three segments (initial sound, medial vowel, and final consonant) based on the voice starting/ending points and a peak point in the middle of the vowel sound. Feature vectors such as cepstrum, reflection coefficients, Δcepstrum, and Δenergy are extracted from each segment. It is shown that the cepstrum, used as an input vector to the neural network, gives a higher recognition rate than the reflection coefficients. Regression coefficients of the cepstrum did not affect the recognition rate as much as expected, which we believe is because features were extracted from selected stationary segments of the input speech signal. With 150 cepstral coefficients obtained from each spoken digit, we achieved a correct recognition rate of 97.8%.

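A minimal sketch of the classifier stage, assuming the 150 cepstral coefficients extracted per digit are fed to a single-hidden-layer perceptron with a softmax over the ten digit classes. The hidden size and random weights are placeholders, and training by backpropagation is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUT, N_HIDDEN, N_DIGITS = 150, 64, 10

# Randomly initialized weights; in practice these are trained with backpropagation.
W1 = rng.normal(scale=0.1, size=(N_INPUT, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_DIGITS))
b2 = np.zeros(N_DIGITS)

def mlp_forward(cepstra):
    """cepstra: vector of 150 cepstral coefficients -> digit class probabilities."""
    h = np.tanh(cepstra @ W1 + b1)              # hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())
    return p / p.sum()                          # softmax over the ten digits

features = rng.normal(size=N_INPUT)             # stand-in for extracted cepstra
print(int(np.argmax(mlp_forward(features))))    # predicted digit class
```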