• Title/Summary/Keyword: Lexical Analysis

Analysis of Lexical Effect on Spoken Word Recognition Test (한국어 단음절 낱말 인식에 미치는 어휘적 특성의 영향)

  • Yoon, Mi-Sun;Yi, Bong-Won
    • MALSORI
    • /
    • no.54
    • /
    • pp.15-26
    • /
    • 2005
  • The aim of this paper was to analyze lexical effects on spoken word recognition of Korean monosyllabic words. The lexical factors chosen in this paper were the frequency, density, and lexical familiarity of words. The analysis showed that frequency was a significant factor for predicting the spoken word recognition score of monosyllabic words; the other factors were not significant. This result suggests that word frequency should be considered in speech perception tests.

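To make the kind of analysis this abstract describes concrete, here is a minimal sketch of a regression predicting a recognition score from frequency, density, and familiarity, then inspecting which predictors are significant. The data values, variable names, and the use of statsmodels are illustrative assumptions, not the authors' actual materials or tools.

```python
# Hypothetical sketch: regress recognition scores on lexical factors
# (frequency, density, familiarity) and check which are significant.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50  # made-up number of monosyllabic test words

# Made-up predictor values standing in for real lexical norms.
frequency = rng.normal(size=n)    # log word frequency (z-scored)
density = rng.normal(size=n)      # neighborhood density (z-scored)
familiarity = rng.normal(size=n)  # familiarity rating (z-scored)
score = 0.6 * frequency + rng.normal(scale=0.5, size=n)  # recognition score

X = sm.add_constant(np.column_stack([frequency, density, familiarity]))
model = sm.OLS(score, X).fit()
print(model.summary())  # p-values show which factors predict the score
```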

A Study on Lexical Ambiguity Resolution of Korean Morphological Analyzer (형태소 분석기의 어휘적 중의성 해결에 관한 연구)

  • Park, Yong-Uk
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.7 no.4
    • /
    • pp.783-787
    • /
    • 2012
  • It is not easy to detect syntactic errors in a Korean spelling checker, because the spelling checker generally corrects each phrase in isolation and cannot catch errors caused by contextually ill-matched words; it checks errors word by word. Resolving lexical ambiguity is important in natural language processing, since the output of morphological analysis feeds syntactic analysis. For accurate analysis of a sentence, a syntactic analysis system must resolve the ambiguity of the morphemes in each word. In this paper, we suggest several rules to resolve the ambiguities of morphemes in a word; a sketch of this style of rule-based pruning follows below. Using these rules, we can reduce many lexical ambiguities in Korean.
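
The rule-based filtering described here can be pictured as follows: each word comes with several candidate morpheme analyses, and hand-written rules discard candidates that are impossible in context. The candidate analyses, tag names, and the single rule below are invented for illustration; they are not the paper's actual rules.

```python
# Hypothetical sketch of rule-based lexical disambiguation: each word
# (eojeol) has several candidate morpheme analyses; rules prune them.

# Invented candidate analyses for the word "나는" (pronoun "I" + topic
# marker, or the verb stem "날-" (fly) + adnominal ending).
candidates = [
    [("나", "NP"), ("는", "JX")],   # pronoun + particle
    [("날", "VV"), ("는", "ETM")],  # verb stem + adnominal ending
]

def rule_no_adnominal_at_end(analysis, next_word):
    """An adnominal ending must be followed by a nominal, not sentence end."""
    return not (analysis[-1][1] == "ETM" and next_word is None)

RULES = [rule_no_adnominal_at_end]

def disambiguate(candidates, next_word=None):
    # Keep only analyses that every rule accepts.
    return [a for a in candidates if all(r(a, next_word) for r in RULES)]

print(disambiguate(candidates, next_word=None))
# -> only the pronoun reading survives at the end of a sentence
```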

Analysis of Lexical Effect on Spoken Word Recognition Test (낱말 인식 검사에 대한 어휘적 특성의 영향 분석)

  • Yoon, Mi-Sun;Yi, Bong-Won
    • Proceedings of the KSPS conference
    • /
    • 2005.04a
    • /
    • pp.77-80
    • /
    • 2005
  • The aim of this paper was to analyze lexical effects on spoken word recognition of Korean monosyllabic words. The lexical factors chosen in this paper were the frequency, density, and lexical familiarity of words. The analysis showed that frequency was a significant factor for predicting the spoken word recognition score of monosyllabic words; the other factors were not significant. This result suggests that word frequency should be considered in speech perception tests.


Understanding of Mathematics Terms with Lexical Ambiguity

  • Hwang, Jihyun
    • Research in Mathematical Education
    • /
    • v.24 no.2
    • /
    • pp.69-82
    • /
    • 2021
  • The purpose of this study is to explore how mathematics educators understand terms with lexical ambiguity. Five such terms, leave, times, high, continuous, and convergent, were selected based on a literature review and the recommendations of college calculus instructors. The participants were four mathematics educators at a large Midwestern university. The qualitative data were collected from open-ended items in a survey. As a result of the analysis, I present participants' sentences containing the five terms, which show their understanding of each term. The data analysis revealed that the mathematics educators were not able to separate the meanings of words such as leave and high when these words are frequently used in daily life and their meanings in mathematical contexts are similar to those in daily contexts. The lexical ambiguity shown by mathematics educators can help mathematics teachers understand terms with lexical ambiguity and improve their instruction when those terms appear in students' conversations.

Lexical Sophistication Features to Distinguish the English Proficiency Level Using a Discriminant Function Analysis (판별분석을 통해 살펴본 영어 능력 수준을 구별하는 어휘의 정교화 특성)

  • Lee, Young-Ju
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.5
    • /
    • pp.691-696
    • /
    • 2022
  • This study explored lexical sophistication features that distinguish levels of English proficiency, using an automatic lexical sophistication analysis program. A total of 600 essays written by 300 Korean college students were extracted from the ICNALE (International Corpus Network of Asian Learners of English) corpus, and a discriminant function analysis was performed with SPSS. Results showed that the lexical features distinguishing the three proficiency groups are SUBTLEXUS frequency for content words, age of acquisition for content words, lexical decision mean reaction time for function words, and hypernymy for verbs. High-level Korean students used frequent content words from the SUBTLEXUS corpus to a lesser degree and produced more sophisticated words, ones learned at a later age and with longer reaction times in a lexical decision task, as well as more concrete verbs.
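
As a rough illustration of the method, the sketch below runs a linear discriminant analysis over the four lexical sophistication features the abstract names, using scikit-learn instead of SPSS. The feature values and group labels are randomly generated placeholders, not ICNALE data.

```python
# Hypothetical sketch: discriminant function analysis over four lexical
# sophistication features, with made-up data in place of ICNALE essays.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 600  # the study used 600 essays; values here are random placeholders

# Columns: SUBTLEXUS frequency (content words), age of acquisition
# (content words), lexical decision RT (function words), hypernymy (verbs).
X = rng.normal(size=(n, 4))
y = rng.integers(0, 3, size=n)  # three proficiency groups

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.score(X, y))  # classification accuracy on the training data
print(lda.coef_)        # feature weights per discriminant function
```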

A study of flaps in American English based on the Buckeye Corpus (Buckeye corpus에 나타난 탄설음화 현상 분석)

  • Hwang, Byeonghoo;Kang, Seokhan
    • Phonetics and Speech Sciences
    • /
    • v.10 no.3
    • /
    • pp.9-18
    • /
    • 2018
  • This paper presents an acoustic and phonological study of alveolar flaps in American English. Based on the Buckeye Corpus, flap tokens produced by twenty men are analyzed at both the lexical and post-lexical levels. The data, analyzed with the Praat speech analysis software, include duration, F2, and F3 of voicing during the flap, as well as duration, F1, F2, F3, and f0 of the adjacent vowels. The results provide evidence on two issues: (1) the different ways in which voiced and voiceless alveolar stops give rise to neutralized flaps at the lexical and post-lexical levels, and (2) the extent to which vowel features (height, frontness, and tenseness) affect flapping. The results show that flaps are affected by pre-consonantal vowel features at the lexical as well as the post-lexical level. Unlike previous studies, this study uses Praat to distinguish flapped from unflapped tokens in the Buckeye Corpus and examines connections between the lexical and post-lexical levels.
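
Measurements like those in the abstract (segment duration, formants, and f0 around a flap) can be scripted against Praat. The sketch below uses the parselmouth library, a Python interface to Praat; the file name and time stamps are invented, and this is not the authors' actual script.

```python
# Hypothetical sketch: extract duration, formants, and f0 around a flap
# using parselmouth (a Python wrapper around Praat's analyses).
import parselmouth

snd = parselmouth.Sound("utterance.wav")  # made-up file name

flap_start, flap_end = 0.512, 0.547  # invented segment boundaries (s)
vowel_mid = 0.480                    # midpoint of the preceding vowel

formants = snd.to_formant_burg()
pitch = snd.to_pitch()

duration = flap_end - flap_start
t = (flap_start + flap_end) / 2
f2_flap = formants.get_value_at_time(2, t)      # F2 during the flap
f3_flap = formants.get_value_at_time(3, t)      # F3 during the flap
f1_vowel = formants.get_value_at_time(1, vowel_mid)
f0_vowel = pitch.get_value_at_time(vowel_mid)   # f0 in the adjacent vowel

print(duration, f2_flap, f3_flap, f1_vowel, f0_vowel)
```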

An Effective Estimation method for Lexical Probabilities in Korean Lexical Disambiguation (한국어 어휘 중의성 해소에서 어휘 확률에 대한 효과적인 평가 방법)

  • Lee, Ha-Gyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1588-1597
    • /
    • 1996
  • This paper describes an estimation method for lexical probabilities in Korean lexical disambiguation. In the stochastic approach to lexical disambiguation, lexical probabilities and contextual probabilities are generally estimated from statistical data extracted from corpora. For Korean, it is desirable to compute lexical probabilities over word phrases, because sentences are spaced in word-phrase units. However, Korean word phrases are so varied that lexical probabilities often cannot be estimated directly for a given word phrase, even when fairly large corpora are used. To overcome this problem, this research defines a similarity measure for word phrases from the lexical analysis point of view and proposes an estimation method for Korean lexical probabilities based on that similarity. In this method, when the lexical probability of a word phrase cannot be estimated directly, it is estimated indirectly through word phrases similar to the given one. Experimental results show that the proposed approach is effective for Korean lexical disambiguation.

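The indirect estimation the abstract describes amounts to a similarity backoff: use corpus counts when the word phrase (eojeol) was seen, and otherwise borrow the distribution of the most similar seen phrase. The sketch below is a toy version with an invented similarity function (longest common suffix, since Korean word phrases share inflectional endings); the paper defines its own similarity from lexical analysis, and the counts here are fabricated for illustration.

```python
# Hypothetical sketch: estimate lexical probabilities P(tags | word phrase)
# from counts, backing off to the most similar seen phrase when unseen.
from collections import Counter, defaultdict

# Invented training counts: word phrase -> POS-tag sequence -> count.
counts = defaultdict(Counter)
counts["먹었다"]["VV+EP+EF"] = 9
counts["갔다"]["VV+EP+EF"] = 6
counts["나는"]["NP+JX"] = 7
counts["나는"]["VV+ETM"] = 2

def suffix_similarity(a, b):
    """Toy similarity: length of the longest common suffix."""
    k = 0
    while k < min(len(a), len(b)) and a[-1 - k] == b[-1 - k]:
        k += 1
    return k

def lexical_prob(phrase, tags):
    if phrase not in counts:  # unseen phrase: borrow from the most similar
        phrase = max(counts, key=lambda seen: suffix_similarity(seen, phrase))
    total = sum(counts[phrase].values())
    return counts[phrase][tags] / total

# "입었다" is unseen; it shares the suffix "었다" with "먹었다".
print(lexical_prob("입었다", "VV+EP+EF"))  # -> 1.0
```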

A Corpus-based Lexical Analysis of the Speech Texts: A Collocational Approach

  • Kim, Nahk-Bohk
    • English Language & Literature Teaching
    • /
    • v.15 no.3
    • /
    • pp.151-170
    • /
    • 2009
  • Recently, speech texts have been increasingly used in English education because of their various advantages as language teaching and learning materials. The purpose of this paper is to analyze speech texts with a corpus-based lexical approach and to suggest some productive methods that use English speaking or writing as the main resource for a course, along with actual classroom adaptations. First, the study shows that a speech corpus has some unique features, such as different selections of pronouns, nouns, and lexical chunks, in comparison with a general corpus. Next, from a collocational perspective, the study demonstrates that the speech corpus contains a wide variety of the collocations and lexical chunks that a number of linguists describe (Lewis, 1997; McCarthy, 1990; Willis, 1990). In other words, speech texts not only have considerable lexical potential that could be exploited to facilitate chunk learning, but learners are also unlikely to unlock this potential autonomously. Based on this result, teachers can develop a learners' corpus and use it by chunking the speech text. This approach of adapting speech samples as materials for college students' speaking and writing should be implemented as shown in the samplers. Finally, to foster learners' productive skills more communicatively, a few practical suggestions are made, such as chunking and windowing chunks of speech and presentation, and the pedagogical implications are discussed.

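A collocational analysis of a speech corpus like the one described can be sketched with NLTK's collocation finders. The text below is a tiny stand-in for real transcripts, and the association measure and frequency threshold are illustrative choices, not the paper's procedure.

```python
# Hypothetical sketch: extract collocations from a (tiny) speech corpus
# using NLTK. Requires: nltk.download("punkt") for the tokenizer.
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

speech_text = (
    "thank you very much thank you so much "
    "ladies and gentlemen thank you very much"
)  # stand-in for real speech transcripts

tokens = nltk.word_tokenize(speech_text.lower())
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)  # keep bigrams seen at least twice

# Rank candidate collocations by log-likelihood ratio.
for bigram in finder.nbest(BigramAssocMeasures.likelihood_ratio, 5):
    print(bigram)
```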

The Voice Dialing System Using Dynamic Hidden Markov Models and Lexical Analysis (DHMM과 어휘해석을 이용한 Voice dialing 시스템)

  • 최성호;이강성;김순협
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.7
    • /
    • pp.548-556
    • /
    • 1991
  • In this paper, Korean spoken continuous digits are recognized using DHMM (Dynamic Hidden Markov Model) and lexical analysis to provide a basis for developing a voice dialing system. Speech is first segmented into phoneme units and then recognized. The system consists of a segmentation section, a standard speech design section, a recognition section, and a lexical analysis section. In the segmentation section, speech is segmented using time-varying parameters for voiced speech detection: the zero-crossing rate (ZCR), the 0th-order LPC cepstrum, and Ai. In the standard speech design section, 19 phonemes or syllables are trained by DHMM and used as standard speech models. In the recognition section, phoneme streams are recognized by the Viterbi algorithm. In the lexical decoder section, the finally recognized continuous digits are output. The experiment showed a recognition rate of 85.1% on 21 classes of 7-digit continuous strings covering all digit combinations, each spoken 7 times by 10 male speakers.

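The recognition step rests on the Viterbi algorithm, which finds the most probable hidden state (here, phoneme model) sequence for an observation sequence. The sketch below is a generic toy Viterbi over a two-state HMM; the states, probabilities, and observations are invented, and this is not the paper's DHMM or its 19 phoneme/syllable models.

```python
# Hypothetical sketch: generic Viterbi decoding for a toy 2-state HMM,
# illustrating the recognition step (not the paper's DHMM models).
import math

states = ["A", "B"]  # stand-ins for phoneme models
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.5, "y": 0.5}, "B": {"x": 0.1, "y": 0.9}}

def viterbi(obs):
    # delta[s] = best log-probability of any path ending in state s
    delta = {s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}
    back = []
    for o in obs[1:]:
        prev, delta = delta, {}
        pointers = {}
        for s in states:
            best = max(states, key=lambda p: prev[p] + math.log(trans[p][s]))
            delta[s] = prev[best] + math.log(trans[best][s]) + math.log(emit[s][o])
            pointers[s] = best
        back.append(pointers)
    # Trace back the best path from the best final state.
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))

print(viterbi(["x", "y", "y"]))  # most likely state sequence
```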

Lexical Discovery and Consolidation Strategies of Proficient and Less Proficient EFL Vocational High School Learners

  • Chon, Yuah Vicky;Kim, You-Hee
    • English Language & Literature Teaching
    • /
    • v.17 no.3
    • /
    • pp.27-56
    • /
    • 2011
  • The analysis of lexical discovery and consolidation strategies, researched within the area of vocabulary learning strategies (VLS), has not sufficiently drawn the interest of EFL practitioners with regard to vocational high school learners, although the results have implications for the design of vocabulary tasks and instructional materials for EFL learners. The present study investigates EFL vocational high school learners' use of lexical discovery and consolidation strategies with questionnaires; the learners' use of lexical discovery strategies was further validated with the think-aloud methodology by asking samples of proficient and less proficient learners to report on their reading process while reading L2 texts they had not previously seen. The results indicated significant differences between the two groups of learners in the employment of 11 of the strategies, in the categories of determination, social, memory, and metacognitive strategies, but not cognitive strategies. The pattern of strategies indicated that different lexical discovery and consolidation strategies were employed relatively more by one proficiency group than the other. The study suggests some implications for how strategy-based instruction can be implemented in EFL classrooms.
