• Title/Summary/Keyword: Speech Corpus


A Parser of Definitions in Korean Dictionary based on Probabilistic Grammar Rules (확률적 문법규칙에 기반한 국어사전의 뜻풀이말 구문분석기)

  • Lee, Su Gwang;Ok, Cheol Yeong
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.5
    • /
    • pp.448-448
    • /
    • 2001
  • The definitions in a Korean dictionary not only describe the meanings of headwords but also contain various semantic information such as hypernymy/hyponymy, meronymy/holonymy, polysemy, homonymy, synonymy, antonymy, and semantic features. This paper aims to implement a parser as a basic tool for automatically acquiring such semantic information from the definitions in a Korean dictionary. For this purpose, we first constructed a part-of-speech-tagged corpus and a tree-tagged corpus from the dictionary definitions. We then automatically extracted from these corpora the frequencies of words with ambiguous part-of-speech tags, together with grammar rules and their probabilities, using statistical methods. The parser is a probabilistic chart parser that uses the extracted data: the word frequencies and rule probabilities resolve structural ambiguity in noun phrases during parsing. The parser uses grammar factoring, best-first search, and Viterbi search in order to reduce the number of nodes generated during parsing and to increase performance. We experimented with grammar-rule probabilities, left-to-right parsing, and left-first search. The experiments show that when the parser uses rule probabilities and left-first search simultaneously, parsing is most accurate, with a recall of 51.74% and a precision of 87.47% on the raw corpus.
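The parser described above combines extracted rule probabilities with chart parsing and Viterbi search. As a rough illustration of that idea only (not the paper's system), here is a minimal Viterbi-style probabilistic chart parser over a toy English PCFG; the grammar, lexicon, and probabilities are invented for the sketch:

```python
import math

# Toy PCFG in Chomsky normal form: (lhs, rhs) -> probability.
# These rules are illustrative, not the paper's rules extracted
# from Korean dictionary definitions.
RULES = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("Det", "N")): 0.6,
    ("NP", ("N",)): 0.4,
    ("VP", ("V", "NP")): 1.0,
}
LEXICON = {
    "the": [("Det", 1.0)],
    "dog": [("N", 0.5)],
    "cat": [("N", 0.5)],
    "saw": [("V", 1.0)],
}

def viterbi_parse(words):
    """Return the log-probability of the best S parse, or None."""
    n = len(words)
    # chart[i][j] maps nonterminal -> best log-prob covering words[i:j]
    chart = [[dict() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for tag, p in LEXICON.get(w, []):
            chart[i][i + 1][tag] = math.log(p)
        # apply unary rules over a single preterminal
        for (lhs, rhs), p in RULES.items():
            if len(rhs) == 1 and rhs[0] in chart[i][i + 1]:
                chart[i][i + 1][lhs] = chart[i][i + 1][rhs[0]] + math.log(p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for (lhs, rhs), p in RULES.items():
                    if len(rhs) == 2 and rhs[0] in chart[i][k] and rhs[1] in chart[k][j]:
                        score = chart[i][k][rhs[0]] + chart[k][j][rhs[1]] + math.log(p)
                        if score > chart[i][j].get(lhs, float("-inf")):
                            chart[i][j][lhs] = score  # Viterbi max
    return chart[0][n].get("S")
```

The Viterbi max at each cell keeps only the best-scoring analysis per nonterminal, which is how rule probabilities prune away competing noun-phrase attachments.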

Vocabulary Analyzer Based on CEFR-J Wordlist for Self-Reflection (VACSR) Version 2

  • Yukiko Ohashi;Noriaki Katagiri;Takao Oshikiri
    • Asia Pacific Journal of Corpus Research
    • /
    • v.4 no.2
    • /
    • pp.75-87
    • /
    • 2023
  • This paper presents a revised version of the Vocabulary Analyzer for Self-Reflection (VACSR), called VACSR v.2.0. The initial version automatically analyzes the occurrences and vocabulary levels of items in transcribed texts, reporting their frequencies, unused vocabulary items, and items not belonging to either scale. However, it overlooked words with multiple parts of speech because their headword representations are identical, and its result tables comparing different corpora lacked clarity. VACSR v.2.0 overcomes both limitations. First, unlike VACSR v.1, it distinguishes words with different parts of speech by syntactic parsing with Stanza, an open-source Python library, enabling the categorization of identical lexical items with multiple parts of speech. Second, it provides clearer result output tables: the updated software compares the occurrence of vocabulary items in classroom corpora against each level of the Common European Framework of Reference-Japan (CEFR-J) wordlist. A pilot study with VACSR v.2.0 showed that, after converting two English classes taught by a preservice English teacher into corpora, the headwords used mostly corresponded to CEFR-J level A1. In practice, VACSR v.2.0 will promote users' reflection on their vocabulary use and can be applied to teacher training.
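The key fix in v.2.0 is keeping same-spelling headwords with different parts of speech apart before counting them against the wordlist. A minimal sketch of that counting step, with a hypothetical (lemma, POS)-keyed lookup standing in for the real CEFR-J wordlist, and pre-tagged pairs standing in for Stanza output:

```python
from collections import Counter

# Hypothetical CEFR-J-style lookup keyed by (lemma, part of speech);
# the real wordlist and level assignments differ.
CEFRJ_LEVELS = {
    ("book", "NOUN"): "A1",
    ("book", "VERB"): "B1",
    ("run", "VERB"): "A1",
}

def tally_by_level(tagged_words):
    """Count occurrences per CEFR-J level, keeping same-spelling words
    with different parts of speech apart. `tagged_words` is a list of
    (lemma, upos) pairs, e.g. as a Stanza pipeline would produce:
        nlp = stanza.Pipeline("en")
        [(w.lemma, w.upos) for s in nlp(text).sentences for w in s.words]
    """
    levels = Counter()    # level -> occurrence count
    unknown = Counter()   # items outside the wordlist
    for lemma, upos in tagged_words:
        level = CEFRJ_LEVELS.get((lemma.lower(), upos))
        if level:
            levels[level] += 1
        else:
            unknown[(lemma, upos)] += 1
    return levels, unknown
```

Because the key is the (lemma, POS) pair, noun "book" and verb "book" land at different levels instead of collapsing into one headword row.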

Study of Emotion in Speech (감정변화에 따른 음성정보 분석에 관한 연구)

  • 장인창;박미경;김태수;박면웅
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2004.10a
    • /
    • pp.1123-1126
    • /
    • 2004
  • Recognizing emotion in speech requires large spoken-language corpora, covering not only different emotional states but also individual languages. In this paper, we focus on how speech signals change across emotions. We compared features of the speech signal, such as formants and pitch, across four emotions (normal, happiness, sadness, anger). In Korean, pitch data on monophthongs changed with each emotion. We therefore suggest suitable analysis techniques using these features to recognize emotions in Korean.
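Comparing a feature such as pitch across emotion categories starts with grouping measurements by label and summarizing each group. A minimal sketch of that step (the numbers in the usage note below are invented, not the paper's data):

```python
from statistics import mean

def pitch_profile(samples):
    """Average F0 per emotion from (emotion, f0_hz) measurements.

    `samples` is a list of (emotion_label, f0_in_hz) pairs, e.g. one
    pitch reading per monophthong token; returns {emotion: mean F0}.
    """
    by_emotion = {}
    for emotion, f0 in samples:
        by_emotion.setdefault(emotion, []).append(f0)
    return {e: mean(v) for e, v in by_emotion.items()}
```

The same grouping works unchanged for formant values (F1, F2) by feeding in (emotion, formant_hz) pairs instead.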


A Model of English Part-Of-Speech Determination for English-Korean Machine Translation (영한 기계번역에서의 영어 품사결정 모델)

  • Kim, Sung-Dong;Park, Sung-Hoon
    • Journal of Intelligence and Information Systems
    • /
    • v.15 no.3
    • /
    • pp.53-65
    • /
    • 2009
  • Part-of-speech determination is necessary for resolving part-of-speech ambiguity in English-Korean machine translation. This ambiguity increases parsing complexity and makes accurate translation difficult. To solve the problem, part-of-speech ambiguity must be resolved after lexical analysis and before parsing. This paper proposes the CatAmRes model, which resolves part-of-speech ambiguity, and compares its performance with that of other part-of-speech tagging methods. CatAmRes determines the part of speech using a probability distribution obtained from Bayesian network training together with statistical information, both based on the Penn Treebank corpus. The model consists of a Calculator and a POSDeterminer: the Calculator computes the degree of appropriateness of each part of speech, and the POSDeterminer determines the part of speech of a word based on the calculated values. In our experiments, we measure performance using sentences from the WSJ, Brown, and IBM corpora.
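The Calculator/POSDeterminer split described above can be sketched as a scorer plus an argmax over candidate tags. This is only a caricature of the idea: the probability tables below are invented, whereas the paper derives its distributions from Penn Treebank statistics and a trained Bayesian network.

```python
# Hypothetical corpus statistics; the paper's model estimates these
# from the Penn Treebank via Bayesian network training.
P_TAG_GIVEN_WORD = {"saw": {"VBD": 0.7, "NN": 0.3}}
P_TAG_GIVEN_PREV = {("PRP", "VBD"): 0.5, ("PRP", "NN"): 0.1}

def appropriateness(word, tag, prev_tag):
    """Calculator: degree of appropriateness of `tag` for `word`,
    combining a lexical probability with a context probability."""
    lex = P_TAG_GIVEN_WORD.get(word, {}).get(tag, 0.0)
    ctx = P_TAG_GIVEN_PREV.get((prev_tag, tag), 0.0)
    return lex * ctx

def determine_pos(word, candidates, prev_tag):
    """POSDeterminer: pick the candidate tag with the highest score."""
    return max(candidates, key=lambda t: appropriateness(word, t, prev_tag))
```

Running this before parsing means the parser sees one tag per word ("I saw ..." resolves "saw" to a past-tense verb) instead of exploring every tag combination.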


A study on the change of prosodic units by speech rate and frequency of turn-taking (발화 속도와 말차례 교체 빈도에 따른 운율 단위 변화에 관한 연구)

  • Won, Yugwon
    • Phonetics and Speech Sciences
    • /
    • v.14 no.2
    • /
    • pp.29-38
    • /
    • 2022
  • This study analyzed speech from the National Institute of Korean Language's Daily Conversation Speech Corpus (2020) to reveal how speech rate and the frequency of turn-taking affect changes in prosodic units. The analysis showed a positive correlation between speech rate and intonation-phrase frequency, word-phrase frequency, and speaking duration; however, the correlation was low, and the fit of the regression model on speech rate was 3%-11%, giving weak explanatory power. There was a significant difference in mean speech rate according to turn-taking frequency: the speech rate decreased as turn-taking frequency increased. In addition, as turn-taking frequency increased, the frequency of intonation phrases, the frequency of word phrases, and the speaking duration all decreased, with a high negative correlation. The fit of the regression model on turn-taking frequency was 27%-32%. Turn-taking frequency thus functions as a factor changing speech rate and prosodic units, presumably influenced by the disfluency of dialogue, the characteristics of turn-taking, and active interaction between the speakers.
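The "fit of the regression model" figures above (3%-11%, 27%-32%) are coefficients of determination from simple regressions of a prosodic measure on a predictor. A self-contained sketch of that computation, ordinary least squares with R², on illustrative data rather than the corpus measurements:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b, r_squared).

    r_squared is the share of variance in y explained by x, i.e. the
    'suitability of the regression model' figure quoted in such studies.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot
```

For example, regressing intonation-phrase frequency on turn-taking frequency per speaker would yield the negative slope and the 27%-32% R² range the study reports.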

The Formant Frequency Differences of English Vowels as a Function of Stress and its Applications on Vowel Pronunciation Training (강세에 따른 영어 모음의 포먼트 변이와 모음 발음 교육에의 응용)

  • Kim, Ji-Eun;Yoon, Kyuchul
    • Phonetics and Speech Sciences
    • /
    • v.5 no.2
    • /
    • pp.53-58
    • /
    • 2013
  • The purpose of this study is to compare the first two formants of stressed and unstressed English vowels produced by ten younger males (in their twenties and thirties) and ten older males (in their forties or fifties) from the Buckeye Corpus of Conversational Speech. The results indicate that the stressed and unstressed vowels, /i/ and /æ/ in particular, differ in formant frequencies between the two groups. In addition, the vowel space of the unstressed vowels is somewhat smaller than that of the stressed vowels: specifically, the range of the second formant of the unstressed vowels and the range of the first formant of the unstressed front vowels were compressed. The findings can be applied to pronunciation training for Korean learners of English vowels. We propose that teachers of English pay attention to the stress patterns of English vowels as well as their formant frequencies.
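A common way to quantify the "smaller vowel space" claim is to treat each vowel's mean (F1, F2) as a vertex and compare polygon areas between conditions. A minimal sketch using the shoelace formula (the vowel set and ordering are up to the analyst; corner vowels in sequence are assumed here):

```python
def vowel_space_area(formants):
    """Area of the polygon spanned by vowels in (F1, F2) space.

    `formants` lists the corner vowels' (F1, F2) means in order around
    the polygon; the shoelace formula gives the enclosed area, so a
    compressed unstressed vowel space yields a smaller number than the
    stressed one.
    """
    n = len(formants)
    s = 0.0
    for i in range(n):
        x1, y1 = formants[i]
        x2, y2 = formants[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2
```

Computing this once for stressed tokens and once for unstressed tokens of the same speakers makes the reported compression directly comparable as a ratio of areas.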

Growth curve modeling of nucleus F0 on Korean accentual phrase

  • Yoon, Tae-Jin
    • Phonetics and Speech Sciences
    • /
    • v.9 no.3
    • /
    • pp.17-23
    • /
    • 2017
  • The present study investigates the effect of the Accentual Phrase on F0 using a subset of a large-scale corpus of Seoul Korean. Four-syllable words neither preceded nor followed by silent pauses were presumed to be canonical exemplars of Accentual Phrases in Korean; these words were extracted from female speakers' speech samples. Growth curve analyses, a combination of regression and polynomial curve fitting, were applied to the words, divided into four groups by the category of the initial segment: voiceless obstruents, voiced obstruents, sonorants, and vowels. The results indicate that the initial segment type affects F0 (in semitones) in the nucleus of the initial syllable, and the cubic polynomial term revealed that some of the medial low tones in the four-syllable words may be guided by the principle of contrast maximization, while others may be governed by the principle of ease of articulation.
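At its core, a growth curve analysis models an F0 contour over normalized time with polynomial terms, so the cubic coefficient can be inspected directly. The sketch below is a plain cubic least-squares fit, a simplification of the paper's method (which embeds orthogonal polynomial terms in a regression model across groups):

```python
import numpy as np

def growth_curve(times, f0_semitones, degree=3):
    """Fit a polynomial curve to an F0 contour over normalized time.

    `times` should span [0, 1] across the syllable nuclei and
    `f0_semitones` holds the corresponding F0 values; returns the
    fitted coefficients, highest degree first (numpy.polyfit order).
    The cubic term's size indicates how strongly the contour bends,
    which is the term the study interprets for the medial low tones.
    """
    return np.polyfit(times, f0_semitones, degree)
```

Fitting one curve per initial-segment group (voiceless obstruent, voiced obstruent, sonorant, vowel) and comparing coefficients mirrors the study's group contrasts in a reduced form.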

Phonological processes of vowels in pronounced phrasal words of the Seoul Corpus by gender and age groups (서울코퍼스의 성별·연령 집단별 말 어절 모음에 나타난 음운변동)

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.9 no.2
    • /
    • pp.23-29
    • /
    • 2017
  • This paper investigated the phonological processes of monophthongs and diphthongs in pronounced phrasal words of the Seoul Corpus by gender and age groups, in order to give linguists and phoneticians a clearer understanding of spoken Korean. Both orthographic and pronounced phrasal words were extracted from the transcribed label scripts of the corpus using Praat. Phonological processes of monophthongs and diphthongs were then tabulated with an R script after syllabifying the phrasal words into separate components. Results revealed that 97% of the orthographic and pronounced phrasal words had the same number of syllables, while 65.8% differed in syllable structure; 90.5% of the vowels in the orthographic phrasal words were realized in the pronounced phrasal words. A chi-square test of independence showed a significant dependence, with a very strong correlation, in the distribution of phonological process types between the male and female groups: the female group changed the diphthong yo into yv at the end of pronounced phrasal words more often than the male group did. Age groups also showed a significant dependence in the distribution of phonological process types, with a very strong correlation. Females in their 40s produced the diphthong yv and raised vowels at the end of pronounced phrasal words most often among the gender and age groups. From these results, the paper concludes that analyzing phonological processes in light of syllable structure can contribute greatly to the understanding of spoken Korean.
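The syllabification step that precedes the tabulation can be done purely arithmetically, because precomposed Hangul syllables in the U+AC00 block encode onset, vowel, and coda by formula. A minimal sketch (the paper's own R pipeline is not shown; this reproduces only the decomposition idea):

```python
# Jamo inventories in Unicode composition order for the U+AC00 block.
ONSETS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
CODAS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def syllabify(word):
    """Decompose each precomposed Hangul syllable into (onset, vowel,
    coda) via the Unicode formula: code = 0xAC00 + 588*onset +
    28*vowel + coda. An empty string marks a missing coda."""
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if not 0 <= code < 11172:
            raise ValueError(f"not a precomposed Hangul syllable: {ch!r}")
        out.append((ONSETS[code // 588],
                    VOWELS[(code % 588) // 28],
                    CODAS[code % 28]))
    return out
```

Syllabifying an orthographic phrasal word and its pronounced counterpart this way lets the vowel columns be aligned and compared, which is exactly the comparison the study tabulates.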

Automatic pronunciation assessment of English produced by Korean learners using articulatory features (조음자질을 이용한 한국인 학습자의 영어 발화 자동 발음 평가)

  • Ryu, Hyuksu;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.8 no.4
    • /
    • pp.103-113
    • /
    • 2016
  • This paper proposes articulatory features as novel predictors for the automatic pronunciation assessment of English produced by Korean learners. Based on distinctive feature theory, in which phonemes are represented as sets of articulatory/phonetic properties, we propose articulatory Goodness-Of-Pronunciation (aGOP) features for the corresponding articulatory attributes, such as nasal, sonorant, and anterior. An English speech corpus spoken by Korean learners is used for assessment modeling. In our system, learners' speech is force-aligned and recognized using acoustic and pronunciation models derived from the WSJ corpus (native North American speech) and the CMU Pronouncing Dictionary, respectively. To compute the aGOP features, articulatory models are trained for the corresponding articulatory attributes. In addition to the proposed features, various baseline features in four categories (RATE, SEGMENT, SILENCE, and GOP) are applied. To enhance assessment performance and investigate the weights of the salient features, relevant features are extracted with Best Subset Selection (BSS). The results show that the proposed model using aGOP features outperforms the baseline, and analysis of the features selected by BSS reveals that the chosen aGOP features capture the salient variations of Korean learners of English. The results are expected to be effective for automatic pronunciation error detection as well.
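A standard Goodness-Of-Pronunciation score compares the acoustic log-likelihood of the canonical phone against the best-scoring competitor over the aligned segment. The sketch below shows that simplified phone-level form; the paper's aGOP features apply the same idea per articulatory attribute (nasal, sonorant, anterior, ...) rather than per phone.

```python
def gop(log_likelihoods, target_phone, n_frames=1):
    """Simplified Goodness-Of-Pronunciation for one aligned segment.

    `log_likelihoods` maps each candidate phone to the segment's
    acoustic log-likelihood under that phone's model. The score is the
    canonical phone's log-likelihood minus the best competitor's,
    normalized by frame count: 0 when the canonical phone wins,
    increasingly negative as a competitor fits better.
    """
    target = log_likelihoods[target_phone]
    best = max(log_likelihoods.values())
    return (target - best) / n_frames
```

A learner realizing /s/ closer to /sh/, for instance, would show a clearly negative GOP for /s/ on that segment, which is what flags it for error detection.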

Correlation analysis of antipsychotic dose and speech characteristics according to extrapyramidal symptoms (추체외로 증상에 따른 항정신병 약물 복용량과 음성 특성의 상관관계 분석)

  • Lee, Subin;Kim, Seoyoung;Kim, Hye Yoon;Kim, Euitae;Yu, Kyung-Sang;Lee, Ho-Young;Lee, Kyogu
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.3
    • /
    • pp.367-374
    • /
    • 2022
  • In this paper, a correlation analysis between speech characteristics and the dose of antipsychotic drugs was performed. To investigate the speech-characteristic patterns of ExtraPyramidal Symptoms (EPS), a common side effect of antipsychotic drugs associated with voice change, a Korean extrapyramidal-symptom speech corpus was constructed through sentence development. With this corpus, the speech patterns of the EPS and non-EPS groups were investigated; in particular, a strong correlation among speech features was found in the EPS group. In addition, it was confirmed that the type of speech sentence affects the speech-feature pattern. These results suggest the possibility of early detection of antipsychotic-induced EPS based on speech features.
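The correlation analysis above reduces to computing Pearson's r between dose values and a speech feature per group. A self-contained sketch (the feature name and any values used with it are illustrative; the study's measurements come from its EPS corpus):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired series,
    e.g. antipsychotic dose (`xs`) and a speech feature such as
    jitter (`ys`) for one group of speakers. Returns a value in
    [-1, 1]; magnitudes near 1 indicate the strong feature
    correlations reported for the EPS group.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Running this separately for the EPS and non-EPS groups, and per sentence type, reproduces the shape of the comparison the abstract describes.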