• Title/Abstract/Keyword: lexical segmentation

Search results: 11 (processing time: 0.021 s)

The Effect of Acoustic Correlates of Domain-initial Strengthening in Lexical Segmentation of English by Native Korean Listeners

  • Kim, Sa-Hyang;Cho, Tae-Hong
    • 말소리와 음성과학
    • /
    • Vol. 2, No. 3
    • /
    • pp.115-124
    • /
    • 2010
  • The current study investigated the role of acoustic correlates of domain-initial strengthening in lexical segmentation of a non-native language. In a series of cross-modal identity-priming experiments, native Korean listeners heard English auditory stimuli and made lexical decisions to visual targets (i.e., written words). The auditory stimuli contained critical two-word sequences that created temporary lexical ambiguity (e.g., 'mill#company', with the competitor 'milk'). There was either an IP boundary or a word boundary between the two words in the critical sequences. The initial CV of the second word (e.g., [kʌ] in 'company') was spliced from another token of the sequence in IP-initial or Wd-initial position. The prime words were postboundary words (e.g., company) in Experiment 1 and preboundary words (e.g., mill) in Experiment 2. In both experiments, Korean listeners showed priming effects only in IP contexts, indicating that they can make use of IP boundary cues of English in lexical segmentation of English. The acoustic correlates of domain-initial strengthening were also exploited by Korean listeners, but significant effects were found only for the segmentation of postboundary words. The results therefore indicate that L2 listeners can make use of prosodically driven phonetic detail in lexical segmentation of an L2, as long as the direction of those cues is similar in their L1 and L2. The exact use of the cues by Korean listeners was, however, different from that found with native English listeners by Cho, McQueen, and Cox (2007). The differential use of prosodically driven phonetic cues by native and non-native listeners is thus discussed.

  • PDF

DHMM과 어휘해석을 이용한 Voice dialing 시스템 (The Voice Dialing System Using Dynamic Hidden Markov Models and Lexical Analysis)

  • 최성호;이강성;김순협
    • 전자공학회논문지B
    • /
    • Vol. 28B, No. 7
    • /
    • pp.548-556
    • /
    • 1991
  • In this paper, continuously spoken Korean digits are recognized using DHMMs (Dynamic Hidden Markov Models) and lexical analysis, as a basis for developing a voice dialing system. Speech is segmented into phoneme units and then recognized. The system consists of a segmentation section, a standard-speech design section, a recognition section, and a lexical analysis section. In the segmentation section, speech is segmented using the zero-crossing rate (ZCR), the 0th-order LPC cepstrum, and the time-varying parameter Ai for voiced-speech detection. In the standard-speech design section, 19 phonemes or syllables are trained with DHMMs and used as reference models. In the recognition section, phoneme strings are recognized with the Viterbi algorithm. In the lexical decoder section, the finally recognized continuous digits are output. An experiment on 21 classes of 7-digit strings, covering all combinations of occurrence and spoken 7 times each by 10 male speakers, showed a recognition rate of 85.1%.

  • PDF
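The recognition section described above rests on standard Viterbi decoding over an HMM. As a rough illustration only (not the paper's DHMM, whose topology and parameters are not given here), a minimal log-space Viterbi decoder over a toy two-state phoneme model might look like:

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely state sequence for an observation sequence (log-space)."""
    # V[t][s]: best log-probability of any path ending in state s at time t
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + log_trans[p][s])
            V[t][s] = V[t - 1][prev] + log_trans[prev][s] + log_emit[s][obs[t]]
            back[t][s] = prev
    # Trace the best path backwards from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = back[t][last]
        path.append(last)
    return path[::-1]

# Toy model: a stop-like state 'k' and a vowel-like state 'a'
lp = math.log
path = viterbi(
    ["burst", "voiced", "voiced"], ["k", "a"],
    log_start={"k": lp(0.9), "a": lp(0.1)},
    log_trans={"k": {"k": lp(0.2), "a": lp(0.8)},
               "a": {"k": lp(0.5), "a": lp(0.5)}},
    log_emit={"k": {"burst": lp(0.9), "voiced": lp(0.1)},
              "a": {"burst": lp(0.1), "voiced": lp(0.9)}},
)
# path == ["k", "a", "a"]
```

Real systems decode over phoneme models with many states and continuous acoustic features; the dynamic-programming recursion is the same.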

The Role of Post-lexical Intonational Patterns in Korean Word Segmentation

  • Kim, Sa-Hyang
    • 음성과학
    • /
    • Vol. 14, No. 1
    • /
    • pp.37-62
    • /
    • 2007
  • The current study examines the role of post-lexical tonal patterns of a prosodic phrase in word segmentation. In a word-spotting experiment, native Korean listeners were asked to spot a disyllabic or trisyllabic word in a twelve-syllable speech stream composed of three Accentual Phrases (APs). Words occurred with various post-lexical intonation patterns. The results showed that listeners spotted more words in phrase-initial than in phrase-medial position, suggesting that the AP-final H tone of the preceding AP helped listeners to segment the phrase-initial word of the target AP. Results also showed that listeners' error rates were significantly lower when words occurred with an initial rising tonal pattern, the most frequent intonational pattern imposed on multisyllabic words in Korean, than with non-rising patterns. This result was observed both in AP-initial and in AP-medial positions, regardless of the frequency and legality of the overall AP tonal patterns. Tonal cues other than the initial rising tone did not positively influence the error rate. These results not only indicate that rising tones in AP-initial and AP-final positions are a reliable cue for word boundary detection for Korean listeners, but further suggest that phrasal intonation contours serve as a possible word boundary cue in languages without lexical prominence.

  • PDF

Research on Keyword-Overlap Similarity Algorithm Optimization in Short English Text Based on Lexical Chunk Theory

  • Na Li;Cheng Li;Honglie Zhang
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 5
    • /
    • pp.631-640
    • /
    • 2023
  • Short-text similarity calculation is one of the hot issues in natural language processing research. Conventional keyword-overlap similarity algorithms consider only lexical item information and neglect the effect of word order, and some optimized variants incorporate word order but their weights are hard to determine. In this paper, starting from the keyword-overlap similarity algorithm, a short English text similarity algorithm based on lexical chunk theory (LC-SETSA) is proposed, which introduces lexical chunk theory from cognitive psychology into short English text similarity calculation for the first time. Lexical chunks are used to segment short English texts, and the segmentation results preserve the semantic connotation and fixed word order of the lexical chunks; the overlap similarity of the lexical chunks is then calculated accordingly. Finally, comparative experiments are carried out, and the results show that the proposed algorithm is feasible, stable, and effective to a large extent.
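The chunk-based overlap idea can be sketched in miniature. The snippet below is an assumption-laden illustration rather than the paper's LC-SETSA algorithm: it greedily segments text against a small hand-made chunk inventory and scores overlap with a Dice-style ratio (the paper's exact formula is not reproduced in the abstract).

```python
def chunk_segment(text, inventory):
    """Greedy longest-match segmentation into lexical chunks (max 4 tokens)."""
    tokens = text.lower().split()
    chunks, i = [], 0
    while i < len(tokens):
        for n in range(min(4, len(tokens) - i), 0, -1):
            cand = " ".join(tokens[i:i + n])
            if n == 1 or cand in inventory:   # single tokens always match
                chunks.append(cand)
                i += n
                break
    return chunks

def overlap_similarity(a, b, inventory):
    """Dice-style overlap of the two texts' chunk sets."""
    ca, cb = chunk_segment(a, inventory), chunk_segment(b, inventory)
    return 2 * len(set(ca) & set(cb)) / (len(ca) + len(cb))

inventory = {"in terms of", "as a result"}        # hypothetical chunk inventory
sim = overlap_similarity("The result matters in terms of cost",
                         "Cost matters in terms of time", inventory)
# sim == 2/3: shared chunks are 'matters', 'in terms of', 'cost'
```

Because "in terms of" is matched as one chunk, its internal word order is fixed, which is the property the lexical-chunk approach exploits over plain keyword overlap.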

영어의 강음절(강세 음절)과 한국어 화자의 단어 분절 (Strong (stressed) syllables in English and lexical segmentation by Koreans)

  • 김선미;남기춘
    • 말소리와 음성과학
    • /
    • Vol. 3, No. 1
    • /
    • pp.3-14
    • /
    • 2011
  • It has been posited that in English, native listeners use the Metrical Segmentation Strategy (MSS) for the segmentation of continuous speech. Strong syllables tend to be perceived as potential word onsets by English native speakers, owing to the high proportion of word-initial strong syllables in the English vocabulary. This study investigates whether Koreans employ the same strategy when segmenting speech input in English. Word-spotting experiments were conducted using vowel-initial (Experiment 1) and consonant-initial (Experiment 2) bisyllabic targets embedded in nonsense trisyllables. The effect of the strong syllable was significant in the reaction-time (RT) analysis but not in the error analysis. In both experiments, Korean listeners detected words more slowly when the word-initial syllable was strong (stressed) than when it was weak (unstressed). However, the error analysis showed no effect of initial stress in Experiment 1 or in the item (F2) analysis of Experiment 2; only the subject (F1) analysis of Experiment 2 showed that participants made more errors when a word started with a strong syllable. These findings suggest that Korean listeners do not use the Metrical Segmentation Strategy for segmenting English speech. They do not treat strong syllables as word beginnings, but rather have difficulty recognizing words that start with a strong syllable. These results are discussed in terms of the intonational properties of Korean prosodic phrases, which have been found to serve as lexical segmentation cues in Korean.

  • PDF

강음절이 한국어 화자의 영어 연속 음성의 어휘 분절에 미치는 영향 (The Effect of Strong Syllables on Lexical Segmentation in English Continuous Speech by Korean Speakers)

  • 김선미;남기춘
    • 말소리와 음성과학
    • /
    • Vol. 5, No. 2
    • /
    • pp.43-51
    • /
    • 2013
  • English native listeners tend to treat strong syllables in a speech stream as potential initial syllables of new words, since the majority of lexical words in English have word-initial stress. The current study investigates whether Korean (L1) - English (L2) late bilinguals perceive strong syllables in English continuous speech as word onsets, as English native listeners do. In Experiment 1, word-spotting was slower when the word-initial syllable was strong, indicating that Korean listeners do not perceive strong syllables as word onsets. Experiment 2 was conducted to rule out the possibility that the results of Experiment 1 arose because the strong-initial targets themselves were slower to recognize than the weak-initial targets. We employed the gating paradigm in Experiment 2 and measured the Isolation Point (IP, the point at which participants correctly identify a word without subsequently changing their minds) and the Recognition Point (RP, the point at which participants correctly identify the target with 85% or greater confidence) for the targets excised from the non-words in the two conditions of Experiment 1. Both the mean IPs and the mean RPs were significantly earlier for the strong-initial targets, which means that the results of Experiment 1 reflect the difficulty of segmentation when the initial syllable of a word is strong. These results are consistent with Kim & Nam (2011), indicating that strong syllables are not perceived as word onsets by Korean listeners and interfere with lexical segmentation of English running speech.

단어 빈도와 음절 이웃 크기가 한국어 명사의 음성 분절에 미치는 영향 (The Effect of Word Frequency and Neighborhood Density on Spoken Word Segmentation in Korean)

  • 송진영;남기춘;구민모
    • 말소리와 음성과학
    • /
    • Vol. 4, No. 2
    • /
    • pp.3-20
    • /
    • 2012
  • The purpose of this study was to investigate whether the segmentation unit for Korean nouns is the 'syllable' and whether the process of segmenting spoken words occurs at the lexical level. A syllable monitoring task was administered, requiring participants to detect an auditorily presented target within visually presented words. In Experiment 1, the syllable neighborhood density of high-frequency words that can be segmented as either CV-CVC or CVC-VC was controlled. The syllable effect and the neighborhood density effect were significant, and the syllable effect emerged differently depending on syllable neighborhood density. Similar results were obtained in Experiment 2, which used low-frequency words. The significance of the word frequency effect on the syllable effect was also examined. The results of Experiments 1 and 2 indicate that the segmentation unit for Korean nouns is indeed the 'syllable', and that this process can occur at the lexical level.

효율적인 영어 구문 분석을 위한 최대 엔트로피 모델에 의한 문장 분할 (Intra-Sentence Segmentation using Maximum Entropy Model for Efficient Parsing of English Sentences)

  • 김성동
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 32, No. 5
    • /
    • pp.385-395
    • /
    • 2005
  • Parsing long sentences is a very hard problem in machine translation because of its high analysis complexity. Intra-sentence segmentation has been proposed to reduce parsing complexity, and this paper presents a segmentation method based on a maximum entropy probability model that improves both the coverage and the accuracy of segmentation. Rules for selecting candidate segmentation positions are acquired automatically through learning, by extracting lexical contextual features of segmentation positions, and a probability model is built that assigns a segmentation probability to each candidate position. The lexical contexts are extracted from a corpus annotated with segmentation positions and combined into the probability model under the maximum entropy principle. A corpus for generating training data was built from Wall Street Journal sentences, and segmentation experiments were run on sentences drawn from four different domains. The experiments showed a segmentation accuracy of about 88% and a coverage of about 98%. The contribution of segmentation to efficient parsing was also measured: parsing efficiency improved by a factor of about 4.8 in analysis time and about 3.6 in space.
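For a single yes/no decision over binary features, a maximum-entropy model reduces to a logistic score over weighted features. The sketch below illustrates the idea with hypothetical feature templates and hand-set weights standing in for parameters that would be estimated from a split-annotated corpus; it is not the paper's actual feature set or training procedure.

```python
import math

def features(tokens, i):
    """Lexical-context features for a candidate split before tokens[i]."""
    nxt = tokens[i].lower()
    return {
        "prev=" + tokens[i - 1].lower(),
        "next=" + nxt,
        "next_is_conj=" + str(nxt in {"and", "but", "which", "that"}),
    }

def split_probability(tokens, i, weights, bias=0.0):
    """Logistic (maximum-entropy) probability that position i is a split point."""
    score = bias + sum(weights.get(f, 0.0) for f in features(tokens, i))
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical weights, as if estimated (e.g. by GIS or L-BFGS) from
# a corpus annotated with segmentation positions
weights = {"next_is_conj=True": 2.0, "prev=,": 1.5}
tokens = "The parser failed , and the system retried".split()
p = split_probability(tokens, 4, weights)   # candidate split before "and"
# p is high (~0.97); positions without such cues stay near 0.5
```

Each candidate position gets such a probability, and the parser can then split the sentence at high-probability positions before analyzing the pieces independently.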

통계 정보와 유전자 학습에 의한 최적의 문장 분할 위치 결정 (Determination of an Optimal Sentence Segmentation Position using Statistical Information and Genetic Learning)

  • 김성동;김영택
    • 전자공학회논문지C
    • /
    • Vol. 35C, No. 10
    • /
    • pp.38-47
    • /
    • 1998
  • Parsing for a practical machine translation system must handle long sentences, which is very difficult because of the high complexity of analysis. This paper proposes a sentence segmentation method for efficient analysis of long sentences and introduces a way to determine optimal segmentation positions using statistical information and genetic learning. Determining segmentation positions consists of two parts: determining the possible segmentation positions of an input sentence using lexical contextual constraints obtained from training data tagged with segmentation positions, and selecting through learning, among the multiple possible positions, the optimal ones that guarantee safe segmentation and yield the greatest improvement in parsing efficiency. Experiments show that the proposed method performs safe segmentation and improves the efficiency of sentence analysis.

  • PDF
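The genetic-learning step can be illustrated schematically. The sketch below evolves a bit-mask over candidate split positions under a made-up fitness function (prefer a short longest segment, with a small penalty per split); the paper's real objective is safe segmentation and parsing efficiency, which cannot be reproduced here.

```python
import random

def fitness(mask, candidates, sent_len):
    """Made-up fitness: prefer a short longest segment, penalize each split."""
    cuts = [c for c, on in zip(candidates, mask) if on]
    bounds = [0] + cuts + [sent_len]
    longest = max(b - a for a, b in zip(bounds, bounds[1:]))
    return -longest - 0.5 * len(cuts)

def evolve(candidates, sent_len, pop_size=20, generations=40, seed=0):
    """Evolve a bit-mask selecting an optimal subset of candidate splits."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in candidates] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, candidates, sent_len), reverse=True)
        parents = pop[:pop_size // 2]              # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(candidates)) if len(candidates) > 1 else 0
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.1:                 # point mutation
                j = rng.randrange(len(candidates))
                child[j] = not child[j]
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda m: fitness(m, candidates, sent_len))
    return [c for c, on in zip(candidates, best) if on]

# Candidate split positions (token indices) in a hypothetical 25-token sentence
cuts = evolve([5, 10, 15, 20], 25)
```

The returned positions are a subset of the candidates; swapping in a fitness function based on measured parsing time would recover the spirit of the paper's method.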

A Hybrid Approach for the Morpho-Lexical Disambiguation of Arabic

  • Bousmaha, Kheira Zineb;Rahmouni, Mustapha Kamel;Kouninef, Belkacem;Hadrich, Lamia Belguith
    • Journal of Information Processing Systems
    • /
    • Vol. 12, No. 3
    • /
    • pp.358-380
    • /
    • 2016
  • In order to considerably reduce the ambiguity rate, this article proposes a disambiguation approach based on selecting the right diacritics at different levels of analysis. This hybrid approach combines a linguistic approach with a multi-criteria decision one and can be considered an alternative for solving the morpho-lexical ambiguity problem regardless of the diacritics rate of the processed text. For evaluation, the disambiguation was applied to the online Alkhalil morphological analyzer (the proposed approach can be used with any morphological analyzer of the Arabic language), with encouraging results: an F-measure of more than 80%.