• Title/Summary/Keyword: Part of speech


A Study of Fundamental Frequency for Focused Word Spotting in Spoken Korean (한국어 발화음성에서 중점단어 탐색을 위한 기본주파수에 대한 연구)

  • Kwon, Soon-Il;Park, Ji-Hyung;Park, Neung-Soo
    • The KIPS Transactions:PartB
    • /
    • v.15B no.6
    • /
    • pp.595-602
    • /
    • 2008
  • The focused word of each sentence aids in recognizing and understanding spoken Korean. To find a method for spotting the focused word in a speech signal, we analyzed the average and variance of the fundamental frequency (F0) and the average energy extracted from the focused word and the other words in a sentence, in experiments on speech data from 100 spoken sentences. The results showed that focused words have either a higher relative average F0 or a higher relative F0 variance than the other words. Our findings contribute to characterizing the prosody of spoken Korean and to keyword extraction based on natural language processing.
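The per-word F0 statistics described in the abstract can be sketched as follows. This is a minimal illustration, assuming per-word F0 contours have already been extracted by a pitch tracker; the function name and the exact scoring rule are illustrative, not taken from the paper.

```python
import numpy as np

def focus_candidate(word_f0):
    """Pick a focused-word candidate from per-word F0 contours.

    word_f0: dict mapping each word to an array of its voiced-frame F0
    values (Hz). Following the abstract's finding, the word with the
    highest relative average F0 or relative F0 variance is selected.
    """
    stats = {w: (np.mean(f0), np.var(f0)) for w, f0 in word_f0.items()}
    mean_all = np.mean([m for m, _ in stats.values()])
    var_all = np.mean([v for _, v in stats.values()])
    # Relative measure: each word's statistic divided by the sentence-level mean.
    score = {w: max(m / mean_all, v / var_all) for w, (m, v) in stats.items()}
    return max(score, key=score.get)
```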

The Acoustic Study on the Voices of Korean Normal Adults (한국 성인의 정상 음성에 관한 기본 음성 측정치 연구)

  • Pyo, H.Y.;Sim, H.S.;Song, Y.K.;Yoon, Y.S.;Lee, E.K.;Lim, S.E.;Hah, H.R.;Choi, H.S.
    • Speech Sciences
    • /
    • v.9 no.2
    • /
    • pp.179-192
    • /
    • 2002
  • The present study was performed to investigate acoustically the voices of normal Korean adults, with a large enough number of subjects to be reliable. 120 normal Korean adults (60 males and 60 females) aged 20 to 39 years produced three sustained vowels, /a/, /i/ and /u/, and read a part of the 'Taking a Walk' paragraph. By analyzing the recordings acoustically with the MDVP of CSL, we obtained the fundamental frequency ($F_{0}$), jitter, shimmer and NHR of the sustained vowels, and the speaking fundamental frequency ($SF_{0}$), highest speaking frequency (SFhi) and lowest speaking frequency (SFlo) of continuous speech. On average, male voices showed 118.1$\sim$122.6 Hz in $F_{0}$, 0.467$\sim$0.659% in jitter, 1.538$\sim$2.674% in shimmer, 0.117$\sim$0.114 in NHR, 120.8 Hz in $SF_{0}$, 183.2 Hz in SFhi and 82.6 Hz in SFlo. Female voices showed 211.6$\sim$220.3 Hz in $F_{0}$, 0.678$\sim$0.935% in jitter, 1.478$\sim$2.582% in shimmer, 0.098$\sim$0.114 in NHR, 217.1 Hz in $SF_{0}$, 340.9 Hz in SFhi and 136.0 Hz in SFlo. Among the seven parameters, every parameter except shimmer showed a significant difference between male and female voices. When the three vowels were compared, they differed significantly from one another in shimmer and NHR for both genders, but not in $F_{0}$ for males or in jitter for females.
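The jitter and shimmer percentages reported above can be computed, in their textbook "local" form, as the mean cycle-to-cycle variation of pitch periods and peak amplitudes. This is a sketch of those standard definitions, not of MDVP's exact implementation; inputs are assumed to come from a prior pitch-period extraction step.

```python
import numpy as np

def jitter_shimmer(periods, amplitudes):
    """Local jitter (%) and local shimmer (%).

    periods: successive pitch-period lengths (seconds).
    amplitudes: peak amplitude of each glottal cycle.
    Both are expressed as the mean absolute cycle-to-cycle difference
    divided by the mean value, times 100.
    """
    periods = np.asarray(periods, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100
    shimmer = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes) * 100
    return jitter, shimmer
```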


Phoneme-Boundary-Detection and Phoneme Recognition Research using Neural Network (음소경계검출과 신경망을 이용한 음소인식 연구)

  • 임유두;강민구;최영호
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.11a
    • /
    • pp.224-229
    • /
    • 1999
  • In the field of speech recognition, research can be classified into two categories: the development of phoneme-level recognition systems and the efficiency of word-level recognition systems. A reasonable phoneme-level recognition system should detect phonemic boundaries appropriately and thereby improve recognition ability. Traditional LPC methods detect phoneme boundaries using the Itakura-Saito method, which measures the distance between the LPC of standard phoneme data and that of the target speech frame. MFCC methods, which treat spectral transitions as phonemic boundaries, show a lack of adaptability. In this paper, we present a new speech recognition system which uses an auto-correlation method in the phonemic boundary detection process and a multi-layered feed-forward neural network in the recognition process. The proposed system outperforms the traditional methods in adaptability, and a further advantage is that the feature-extraction part is independent of the recognition process. The results show that a frame-unit phonemic recognition system can feasibly be implemented.
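The auto-correlation idea behind the boundary detection above can be sketched as follows: within a phoneme, adjacent frames stay similar, while at a boundary their similarity drops. The frame length, threshold, and framing scheme here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def boundary_frames(signal, frame_len=160, threshold=0.5):
    """Hypothesize phoneme boundaries at frame indices where the
    normalized correlation between adjacent frames falls below a
    threshold. A simplified sketch of correlation-based boundary
    detection, not the paper's exact algorithm."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    boundaries = []
    for i in range(1, n):
        a, b = frames[i - 1], frames[i]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sim = float(a @ b) / denom if denom > 0 else 0.0
        if sim < threshold:  # low similarity => likely phonemic boundary
            boundaries.append(i)
    return boundaries
```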


Concept-based Translation System in the Korean Spoken Language Translation System (한국어 대화체 음성언어 번역시스템에서의 개념기반 번역시스템)

  • Choi, Un-Cheon;Han, Nam-Yong;Kim, Jae-Hoon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.8
    • /
    • pp.2025-2037
    • /
    • 1997
  • The concept-based translation system, which is a part of the Korean spoken language translation system, translates spoken utterances from a Korean speech recognizer into English, Japanese or Korean in a travel planning task. Our system relies on semantic rather than syntactic categories in order to process spontaneous speech, which tends to be ungrammatical and subject to recognition errors. Utterances are parsed into concept structures, and the generation module produces a sentence in the specified target language. For Korean processing, we have developed a token separator using base words and an automatic grammar corrector. We have also developed postprocessors for each target language to improve the readability of the generated results.


Development and Evaluation of an Address Input System Employing Speech Recognition (음성인식 기능을 가진 주소입력 시스템의 개발과 평가)

  • 김득수;황철준;정현열
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.2
    • /
    • pp.3-10
    • /
    • 1999
  • This paper describes the development and evaluation of a Korean address input system employing automatic speech recognition as the user interface for entering Korean addresses, which consist of cities, provinces and counties. The system works in a Windows 95 environment on a personal computer with a built-in sound card. In the speech recognition part, continuous-density Hidden Markov Models (CHMM) are used to build phoneme-like units (PLUs), and the One Pass Dynamic Programming (OPDP) algorithm is used for recognition. For address recognition, a Finite State Automaton (FSA) suitable for the Korean address structure is constructed. To achieve acceptable performance against variation in speakers, microphones and environmental noise, maximum a posteriori (MAP) estimation is used for adaptation. To improve recognition speed, a fast search method using a variable pruning threshold is newly proposed. In evaluation tests on 100 connected words uttered by 3 male speakers, the system showed an average recognition accuracy of 96.0% for connected words after adaptation, with recognition completed within 2 seconds, demonstrating the effectiveness of the system.
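An FSA constraining recognized word sequences to the address structure described above can be sketched in a few lines. The states, tag names, and the linear province-to-city-to-county path are illustrative assumptions; the paper's grammar is certainly richer.

```python
# Toy transition table: (current state, input tag) -> next state.
# Any sequence not covered by the table is rejected.
FSA = {
    ("start", "province"): "p",
    ("p", "city"): "c",
    ("c", "county"): "end",
}

def accepts(tags):
    """Return True iff the tag sequence traces a path from the start
    state to the accepting state of the address FSA."""
    state = "start"
    for tag in tags:
        key = (state, tag)
        if key not in FSA:
            return False
        state = FSA[key]
    return state == "end"
```

In a recognizer, such an automaton prunes the search space: the decoder only extends hypotheses whose word sequence remains accepted by the FSA.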


A Study of Keyword Spotting System Based on the Weight of Non-Keyword Model (비핵심어 모델의 가중치 기반 핵심어 검출 성능 향상에 관한 연구)

  • Kim, Hack-Jin;Kim, Soon-Hyub
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.381-388
    • /
    • 2003
  • This paper presents a method of assigning weights to garbage-class clustering and the filler model to improve the performance of a keyword spotting system, together with a time-saving method for the dialogue speech processing system that calculates keyword transition probabilities from an analysis of task-domain users' speech. The key idea is to group phonemes by phonetic similarity, which is more effective for detecting similar phoneme groups than handling individual phonemes; the paper proposes five phoneme groups obtained from the analysis of speech sentences in Korean morphology and in a stock-trading speech processing system. In addition, task-specific filler model weights are added to the phoneme groups, and the keyword transition probability in consecutive speech sentences is calculated and applied to the system to save processing time. To evaluate the proposed system, a corpus of 4,970 sentences was built for the task domains, and a test was conducted with five subjects in their twenties and thirties. The FOM with weights on the proposed five phoneme groups reached 85%, compared with 88.5% for the seven phoneme groups of Yapanel [1] and 89.8% for LVCSR. In computation time, the proposed method took 0.70 seconds versus 0.72 seconds for the seven phoneme groups. Lastly, a timing test confirmed that 0.04 to 0.07 seconds are saved when the keyword transition probability is applied.

Implementation of Augmentative and Alternative Communication System Using Image Dictionary and Verbal based Sentence Generation Rule (이미지 사전과 동사기반 문장 생성 규칙을 활용한 보완대체 의사소통 시스템 구현)

  • Ryu, Je;Han, Kwang-Rok
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.569-578
    • /
    • 2006
  • The present study implemented an AAC (Augmentative and Alternative Communication) system using images that speech-impaired people can easily understand. In particular, the implementation focused on the portability and mobility of the AAC system, as well as a more flexible form of communication. For mobility and portability, we implemented a system that runs on mobile devices such as PDAs, so that speech-impaired people can communicate as well as ordinary people at any place. Moreover, to overcome the limited storage space for a large volume of image data, we implemented the AAC system in a client/server structure in a mobile environment. For more flexible communication, we built an image dictionary by taking verbs as the base and sub-categorizing nouns according to their corresponding verbs, and we regularized the types of sentences generated according to verb type, centering on verbs, which play the most important role in composing a sentence.
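The verb-based dictionary and generation rule described above can be illustrated with a toy example: nouns are sub-categorized under the verbs they can combine with, and each verb selects a sentence template. The vocabulary, templates, and telegraphic English output here are invented for illustration; the actual system works on Korean.

```python
# Hypothetical image dictionary: each verb entry lists the nouns it
# licenses and the sentence template its verb type generates.
IMAGE_DICT = {
    "eat": {"nouns": ["apple", "bread"], "template": "I eat {noun}."},
    "go": {"nouns": ["school", "park"], "template": "I go to the {noun}."},
}

def generate(verb, noun):
    """Generate a sentence only if the noun is licensed by the verb,
    mirroring the verb-based sub-categorization rule."""
    entry = IMAGE_DICT[verb]
    if noun not in entry["nouns"]:
        raise ValueError("noun not licensed by this verb")
    return entry["template"].format(noun=noun)
```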

A New Temporal Filtering Method for Improved Automatic Lipreading (향상된 자동 독순을 위한 새로운 시간영역 필터링 기법)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.15B no.2
    • /
    • pp.123-130
    • /
    • 2008
  • Automatic lipreading is to recognize speech by observing the movement of a speaker's lips. It has recently received attention as a method of compensating for the performance degradation of acoustic speech recognition in acoustically noisy environments. One of the important issues in automatic lipreading is to define and extract salient features from the recorded images. In this paper, we propose a feature extraction method using a new filtering technique to obtain improved recognition performance. The proposed method eliminates frequency components which are too slow or too fast compared to the relevant speech information by applying a band-pass filter to the temporal trajectory of each pixel in the images containing the lip region; features are then extracted by principal component analysis. We show that the proposed method produces improved performance in both clean and visually noisy conditions via speaker-independent recognition experiments.
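The temporal band-pass step above can be sketched for a single pixel trajectory using an FFT-domain filter. The passband values are illustrative normalized frequencies, not the paper's actual cutoffs, and the paper's filter design may differ from this ideal (brick-wall) filter.

```python
import numpy as np

def temporal_bandpass(pixel_traj, low=0.05, high=0.4):
    """Band-pass filter the temporal trajectory of one pixel.

    low/high are normalized frequencies (fractions of the video frame
    rate). Components slower than `low` (e.g. lighting drift) or faster
    than `high` (e.g. frame noise) are zeroed in the FFT domain.
    """
    spec = np.fft.rfft(pixel_traj)
    freqs = np.fft.rfftfreq(len(pixel_traj))
    spec[(freqs < low) | (freqs > high)] = 0  # ideal band-pass
    return np.fft.irfft(spec, n=len(pixel_traj))
```

In the full method this filter would be applied to every pixel of the lip-region image sequence before PCA-based feature extraction.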

Phonetic Acoustic Knowledge and Divide And Conquer Based Segmentation Algorithm (음성학적 지식과 DAC 기반 분할 알고리즘)

  • Koo, Chan-Mo;Wang, Gi-Nam
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.215-222
    • /
    • 2002
  • This paper presents a reliable, fully automatic labeling system well suited to languages with well-developed syllable structure, such as Korean. The ASL system utilizes a segmentation algorithm based on DAC (Divide and Conquer), a control mechanism, to use phonetic and acoustic information with greater efficiency. The segmentation algorithm divides speech signals into speechlets, which are localized pieces of the speech signal, and then segments each speechlet for speech boundaries. Whereas the HMM method has uniform and definite efficiency, the suggested method provides a framework for steadily developing and improving specific acoustic knowledge as a component. Without using a statistical method such as HMM, the new method uses only phonetic-acoustic information; it is therefore fast, remains consistent as the specific acoustic knowledge components are extended, and can be applied efficiently. Experimental results are shown at the end to verify the suggested method.
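The divide step of the DAC scheme above can be illustrated by splitting a frame-energy contour into "speechlets": contiguous runs of frames above a silence threshold. This is a simplified assumption about how the division might work; the paper's actual criteria and the per-speechlet (conquer) boundary rules are richer.

```python
def speechlets(energy, silence_thresh=0.1):
    """Divide a frame-energy contour into speechlets.

    Returns (start, end) frame index pairs for each contiguous run of
    frames whose energy exceeds silence_thresh. Each run would then be
    segmented independently with phonetic-acoustic rules (the conquer
    step).
    """
    runs, start = [], None
    for i, e in enumerate(energy):
        if e > silence_thresh and start is None:
            start = i                     # a speechlet begins
        elif e <= silence_thresh and start is not None:
            runs.append((start, i))       # a speechlet ends at silence
            start = None
    if start is not None:
        runs.append((start, len(energy)))  # contour ends mid-speechlet
    return runs
```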

Development of Neck-Type Electrolarynx Blueton and Acoustic Characteristic Analysis (경부형 전기인공후두 Blueton의 개발과 음향학적 성능 분석)

  • Choi, Seong-Hee;Park, Young-Jae;Park, Young-Kwan;Kim, Tae-Jung;Nam, Do-Hyun;Lim, Sung-Eun;Lee, Sung-Eun;Kim, Han-Soo;Choi, Hong-Shik;Kim, Kwang-Moon
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.15 no.1
    • /
    • pp.37-42
    • /
    • 2004
  • The electrolarynx (EL), a battery-operated vibrator held against the neck and operated by an on-off button, has been widely used as a verbal communication method among post-laryngectomy patients. EL speech can be produced easily, without any additional surgery or special training, and can be used together with other methods. This institute developed a neck-type EL named "Blueton" in cooperation with the EL company Linkus; it consists of three parts: a vibrator part, a control part and a battery part. In this study we evaluated the acoustic characteristics of voices produced with the Blueton compared with the Servox-inton, using MDVP. Three EL users (2 full-time users, 1 part-time user) participated. The results revealed that NHR was higher with the Servox than with the Blueton, and intensity was higher with the Blueton than with the Servox. The spectra of vowels produced by EL speakers are mixed signals combining the talker's vocal output and electrolarynx noise, and the spectral pattern is similar for the two ELs. The high SPI index and the vowel spectra from MDVP demonstrated the noise-related characteristics of both electrolarynxes. These findings suggest that the Blueton provides a useful rehabilitation option for post-laryngectomy patients.
