• Title/Abstract/Keywords: Lexical analysis based on corpus


A Corpus-based Lexical Analysis of the Speech Texts: A Collocational Approach

  • Kim, Nahk-Bohk
    • 영어어문교육 / Vol. 15, No. 3 / pp. 151-170 / 2009
  • Recently, speech texts have been increasingly used in English education because of their various advantages as language teaching and learning materials. The purpose of this paper is to analyze speech texts through a corpus-based lexical approach and to suggest some productive methods that utilize them as the main resource for English speaking or writing courses, along with introducing actual classroom adaptations. First, the study shows that a speech corpus has some unique features, such as different selections of pronouns, nouns, and lexical chunks, in comparison to a general corpus. Next, from a collocational perspective, the study demonstrates that the speech corpus consists of a wide variety of the collocations and lexical chunks described by a number of linguists (Lewis, 1997; McCarthy, 1990; Willis, 1990). In other words, speech texts not only have considerable lexical potential that could be exploited to facilitate chunk learning, but learners are also unlikely to unlock this potential autonomously. Based on this result, teachers can develop a learners' corpus and use it by chunking the speech text. This new approach of adapting speech samples as core materials for college students' speaking and writing should be implemented as shown in the samples. Finally, to foster learners' productive skills more communicatively, a few practical suggestions are made, such as chunking and windowing chunks of speech and presentation, and the pedagogical implications are discussed.

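The collocation and chunk analysis described above can be approximated with off-the-shelf corpus tools. Below is a minimal sketch using NLTK (an assumption, not the authors' actual pipeline); `speech.txt` is a placeholder for any plain-text speech transcript.

```python
# A minimal sketch of collocation extraction with NLTK (pip install nltk).
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# 'speech.txt' is a placeholder for any plain-text speech transcript;
# naive whitespace tokenization keeps the sketch dependency-light.
with open("speech.txt", encoding="utf-8") as f:
    tokens = f.read().lower().split()

finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(3)  # keep only chunks that recur (3+ times)

# Rank candidate collocations by log-likelihood ratio, a standard
# association measure in corpus linguistics.
measures = BigramAssocMeasures()
for bigram in finder.nbest(measures.likelihood_ratio, 20):
    print(" ".join(bigram))
```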

Buckeye corpus에 나타난 탄설음화 현상 분석 (A study of flaps in American English based on the Buckeye Corpus)

  • 황병후;강석한
    • 말소리와 음성과학 / Vol. 10, No. 3 / pp. 9-18 / 2018
  • This paper presents an acoustic and phonological study of alveolar flaps in American English. Based on the Buckeye Corpus, the flapping tokens produced by twenty men are analyzed at both the lexical and post-lexical levels. The data, analyzed with the Praat speech-analysis software, include the duration, F2, and F3 of voicing during the flap, as well as the duration, F1, F2, F3, and f0 of the adjacent vowels. The results provide evidence on two issues: (1) the different ways in which voiced and voiceless alveolar stops give rise to neutralized flapped stops at the lexical and post-lexical levels, and (2) the extent to which vowel features (height, frontness, and tenseness) affect flapped sounds. The results show that flaps are affected by pre-consonantal vowel features at the lexical as well as the post-lexical level. Unlike previous studies, this study uses Praat to distinguish flapped from unflapped tokens in the Buckeye Corpus and examines connections between the lexical and post-lexical levels.
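
Measurements of this kind can be scripted against Praat. Here is a minimal sketch assuming the parselmouth Python bindings (praat-parselmouth); the file name and flap interval are hypothetical, not taken from the study.

```python
# A minimal sketch of the kind of measurement used above, scripted through
# the parselmouth bindings to Praat (pip install praat-parselmouth). The
# file name and flap interval are hypothetical, not taken from the study.
import parselmouth

snd = parselmouth.Sound("token.wav")
t1, t2 = 0.152, 0.181  # hypothetical start/end of the flap closure (s)

duration_ms = (t2 - t1) * 1000.0
mid = (t1 + t2) / 2.0

formants = snd.to_formant_burg()  # Burg formant tracking, Praat's default
pitch = snd.to_pitch()

f2 = formants.get_value_at_time(2, mid)  # F2 (Hz) at the flap midpoint
f3 = formants.get_value_at_time(3, mid)  # F3 (Hz)
f0 = pitch.get_value_at_time(mid)        # f0 (Hz); NaN if unvoiced

print(f"duration={duration_ms:.1f} ms, F2={f2:.0f} Hz, "
      f"F3={f3:.0f} Hz, f0={f0}")
```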

코퍼스를 기반으로 한 어휘 과제가 고등학생의 영어 어휘 학습과 태도에 미치는 영향 (The effects of corpus-based vocabulary tasks on high school students' English vocabulary learning and attitude)

  • 이현진;이은주
    • 영어어문교육 / Vol. 16, No. 4 / pp. 239-265 / 2010
  • This study investigates the effects of corpus-based vocabulary tasks on the acquisition of English vocabulary in an attempt to explore the influence of corpus use on EFL pedagogy. To this end, a total of 40 Korean high school students participated in the study over a four-week period. An experimental group used a set of corpus-based tasks for vocabulary learning, whereas a control group carried out a traditional task (L1-L2 translation). To assess learning gains, the students completed pre- and post-treatment tests measuring the form, meaning, and use of the target lexical items. The results indicate that in the experimental group the corpus-based vocabulary tasks were beneficial for learning word forms and use; the benefits of corpus use were greatest for low-proficiency EFL learners' collocational use of vocabulary. In the control group, by contrast, the traditional vocabulary tasks benefited the meaning aspects of the target vocabulary items the most. In addition, survey results revealed that most students were positive about the corpus-based learning experience, although some expressed reservations about the heavy cognitive load and the time-consuming nature of analyzing corpus data, primarily due to their limited language proficiency.

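Learning gains in a pre-/post-test design like this are typically checked with a paired test. A minimal sketch with SciPy follows; the scores are made up for illustration, not the study's data.

```python
# A minimal sketch of the pre-/post-test comparison behind such a design
# (pip install scipy). The scores below are made up, not the study's data.
from scipy import stats

pre = [42, 55, 38, 61, 47, 52, 44, 58]   # hypothetical pre-test scores
post = [51, 60, 45, 66, 50, 61, 49, 63]  # hypothetical post-test scores

gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
t, p = stats.ttest_rel(post, pre)  # paired t-test on the same learners
print(f"mean gain = {gain:.1f} points, t = {t:.2f}, p = {p:.4f}")
```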

Predicting CEFR Levels in L2 Oral Speech, Based on Lexical and Syntactic Complexity

  • Hu, Xiaolin
    • 아시아태평양코퍼스연구 / Vol. 2, No. 1 / pp. 35-45 / 2021
  • With the wide spread of the Common European Framework of Reference (CEFR) scales, many studies attempt to apply them in routine teaching and rater training, while more evidence regarding criterial features at different CEFR levels is still urgently needed. The current study aims to explore complexity features that distinguish and predict CEFR proficiency levels in oral performance. Using a quantitative, corpus-based approach, the research analyzed lexical and syntactic complexity features over 80 transcriptions (covering the A1, A2, and B1 CEFR levels as well as native speakers) from an interview test, the Standard Speaking Test (SST). ANOVA and correlation analyses were conducted to exclude insignificant complexity indices before the discriminant analysis. Distinctive differences in complexity between CEFR speaking levels were observed, and with a combination of six major complexity features as predictors, 78.8% of the oral transcriptions were classified into the appropriate CEFR proficiency levels. This further confirms the possibility of predicting the CEFR level of L2 learners from objective linguistic features. The study can serve as an empirical reference in language pedagogy, especially for L2 learners' self-assessment and teachers' prediction of students' proficiency levels, and it offers implications for the validation of rating criteria and the improvement of rating systems.
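
A discriminant analysis over complexity indices, as used above, can be sketched with scikit-learn. The two features and twelve observations below are invented placeholders; the study itself used six complexity predictors over 80 transcriptions.

```python
# A minimal sketch of CEFR-level classification from complexity indices
# with linear discriminant analysis (pip install scikit-learn). All
# feature values are invented placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Rows: speakers; columns: hypothetical complexity indices,
# e.g. lexical diversity and mean length of clause.
X = np.array([
    [0.40, 6.0], [0.42, 6.3], [0.45, 6.1],     # A1
    [0.54, 7.9], [0.56, 8.2], [0.58, 8.4],     # A2
    [0.65, 9.7], [0.68, 10.1], [0.70, 10.4],   # B1
    [0.74, 11.3], [0.77, 11.6], [0.79, 12.0],  # native speakers
])
y = np.array(["A1"] * 3 + ["A2"] * 3 + ["B1"] * 3 + ["NS"] * 3)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=3)  # 3-fold CV on the toy sample
print(f"mean CV accuracy: {scores.mean():.2%}")
```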

A Study on the Diachronic Evolution of Ancient Chinese Vocabulary Based on a Large-Scale Rough Annotated Corpus

  • Yuan, Yiguo;Li, Bin
    • 아시아태평양코퍼스연구 / Vol. 2, No. 2 / pp. 31-41 / 2021
  • This paper presents a quantitative analysis of the diachronic evolution of ancient Chinese vocabulary by constructing and counting a large-scale, roughly annotated corpus. The texts from Si Ku Quan Shu (a collection of ancient Chinese books) are automatically segmented to obtain ancient Chinese vocabulary with time information, which is then used to compute statistics on word frequency, standardized type/token ratio, and the proportions of monosyllabic and disyllabic words. Through data analysis, this study makes the following four findings. First, the high-frequency words in ancient Chinese are stable to a certain extent. Second, there is no obvious trend toward disyllabification in ancient Chinese vocabulary. Moreover, the Northern and Southern Dynasties (420-589 AD) and the Yuan Dynasty (1271-1368 AD) are probably the two periods with the most abundant vocabulary in ancient Chinese. Finally, the unique high-frequency words of each dynasty are mainly official titles with real power. These findings break away from the qualitative methods used in traditional research on the history of Chinese and instead draw macroscopic conclusions from a large-scale corpus using quantitative methods.
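
Two of the statistics named above, standardized type/token ratio and syllable-count proportions, are easy to make concrete. A minimal sketch follows, where the toy word list stands in for one dynasty's automatically segmented tokens.

```python
# A minimal sketch of two statistics used above: standardized type/token
# ratio (STTR) and the proportions of monosyllabic and disyllabic words.
# The toy word list stands in for one dynasty's segmented tokens.

def sttr(words, window=1000):
    """Mean type/token ratio over consecutive fixed-size windows."""
    ratios = [len(set(words[i:i + window])) / window
              for i in range(0, len(words) - window + 1, window)]
    return sum(ratios) / len(ratios) if ratios else 0.0

def syllable_proportions(words):
    """Share of one-character and two-character words."""
    mono = sum(1 for w in words if len(w) == 1)
    di = sum(1 for w in words if len(w) == 2)
    return mono / len(words), di / len(words)

words = ["天", "下", "大同", "之", "君子", "曰", "天下", "之", "民"]
print(sttr(words, window=4))        # small window for the toy sample
print(syllable_proportions(words))
```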

한국어 어휘 중의성 해소에서 어휘 확률에 대한 효과적인 평가 방법 (An Effective Estimation Method for Lexical Probabilities in Korean Lexical Disambiguation)

  • 이하규
    • 한국정보처리학회논문지 / Vol. 3, No. 6 / pp. 1588-1597 / 1996
  • This paper describes a method for estimating lexical probabilities in Korean lexical disambiguation. In statistical approaches to lexical disambiguation, lexical and contextual probabilities are generally estimated from statistics extracted from a corpus. Since Korean text is spaced at the eojeol (word-phrase) level, it is desirable to apply lexical probabilities per eojeol. Korean eojeols are so diverse, however, that even with a fairly large corpus there are many cases where the lexical probability of an eojeol cannot be estimated directly. To overcome this problem, this study defines a similarity between eojeols in terms of their lexical analyses and proposes a Korean lexical probability estimation method based on it: when the lexical probability of an eojeol cannot be estimated directly, it is estimated indirectly through eojeols with similar lexical analyses. Experimental results show that the proposed approach is effective for Korean lexical disambiguation.

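The backoff idea in this abstract, estimating an unseen eojeol's probability from similar eojeols, can be sketched as follows. The similarity used here (a shared two-syllable ending) is a crude stand-in for the paper's lexical-analysis-based similarity.

```python
# A minimal sketch of the backoff idea above: when an eojeol is unseen,
# estimate its lexical probability from similar eojeols. The similarity
# here (a shared two-syllable ending) is a crude stand-in for the paper's
# lexical-analysis-based similarity.
from collections import Counter, defaultdict

corpus = ["먹었다", "갔다", "보았다", "먹는다", "간다"]  # toy eojeol corpus
counts = Counter(corpus)
total = sum(counts.values())

by_ending = defaultdict(list)
for eojeol, c in counts.items():
    by_ending[eojeol[-2:]].append(c)  # group counts by final two syllables

def lexical_prob(eojeol):
    if eojeol in counts:                      # seen: direct estimate
        return counts[eojeol] / total
    similar = by_ending.get(eojeol[-2:], [])  # unseen: back off to the
    if similar:                               # counts of similar eojeols
        return (sum(similar) / len(similar)) / total
    return 1.0 / (total + 1)                  # last-resort floor

print(lexical_prob("먹었다"))  # seen eojeol
print(lexical_prob("잡았다"))  # unseen; backs off via '보았다' ('-았다')
```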

통계정보에 기반을 둔 한국어 어휘중의성해소 (Korean Lexical Disambiguation Based on Statistical Information)

  • 박하규;김영택
    • 한국통신학회논문지 / Vol. 19, No. 2 / pp. 265-275 / 1994
  • Lexical disambiguation is one of the most fundamental tasks in natural language processing, underlying speech recognition and synthesis, information retrieval, corpus tagging, and more. This paper describes a Korean lexical disambiguation technique that uses statistical information extracted from corpora. For more precise disambiguation, the technique uses token tags corresponding to morphological analysis results instead of part-of-speech tags. The proposed lexical selection function achieves considerably high accuracy because it reflects lexical characteristics of Korean such as the agreement between endings and particles. Two disambiguation modes, unique selection and multiple selection, are supported so that the technique can be fitted to different applications.

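The disambiguation scheme described above can be illustrated with a toy scoring function that combines a lexical probability with a tag-bigram contextual probability. The tags follow common Korean morphological conventions, but every number (and the example itself) is invented for illustration.

```python
# A toy illustration of statistics-based lexical disambiguation: combine a
# lexical probability with a tag-bigram contextual probability and pick
# the best analysis. All probabilities are made up.

# Candidate token-tag analyses for the ambiguous eojeol "나는":
# pronoun 나 + topic particle 는, or verb stem 날- + adnominal ending -는.
candidates = {
    "나/NP+는/JX": 0.7,   # hypothetical lexical probability
    "날/VV+는/ETM": 0.3,
}
# Hypothetical contextual probability P(tag | previous tag), with the
# previous tag assumed to be sentence-final punctuation (SF).
contextual = {("SF", "NP"): 0.4, ("SF", "VV"): 0.1}

def first_tag(analysis):
    return analysis.split("+")[0].split("/")[1]

# Unique-selection mode: return only the single top-scoring analysis.
best = max(candidates, key=lambda a: candidates[a]
           * contextual.get(("SF", first_tag(a)), 1e-6))
print(best)
```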

한국어 연결어미 '-면서'와 중국어 대응표현의 대조연구 -한·중 병렬 말뭉치를 기반으로 (A Comparative Study on Korean Connective Morpheme '-myenseo' to the Chinese expression - based on Korean-Chinese parallel corpus)

  • YI, CHAO
    • 비교문화연구 / Vol. 37 / pp. 309-334 / 2014
  • This study contrasts the Korean connective morpheme '-myenseo' with its Chinese counterparts on the basis of a Korean-Chinese parallel corpus. Korean learners often struggle with Korean connective morphemes, especially when there is a lexical gap with their mother tongue. '-myenseo' is one of the most frequently used Korean connective morphemes and is usually contrasted with Chinese coordinating conjunctions; according to the corpus, however, the Chinese expressions corresponding to '-myenseo' go beyond coordinating conjunctions. This study can therefore help Chinese learners of Korean study '-myenseo' more easily, because a variety of related Chinese expressions are found in the parallel corpus. The study first discusses the semantic features of '-myenseo', most significantly 'simultaneity' and 'conflict', using usage examples to analyze its specific uses. It then analyzes the syntactic characteristics of '-myenseo' through subject, predicate, temporal, mood, and negation constraints, and summarizes them in a table. The most important part of the study is Chapter 4, which contrasts '-myenseo' with its Chinese expressions by analyzing the Korean-Chinese parallel corpus; the frequency of each corresponding Chinese expression is summarized in a table. The table shows that the most common Chinese counterpart of '-myenseo' is the non-marker pattern: whereas Korean connects clauses with an explicit connective morpheme as a clarifying linguistic marker, Chinese often connects clauses through their intrinsic logical relationships. The conclusion of the chapter is that '-myenseo' corresponds to Chinese conjunctions, other expressions, non-marker patterns, and free-translation patterns, a wider range than the conjunctions identified in previous work. The final chapter summarizes the study and discusses its limitations and directions for future research.
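
The Chapter 4 tally can be pictured with a small script: for each sentence pair whose Korean side contains '-면서', record which candidate Chinese marker, if any, appears on the Chinese side. The pairs and the marker list below are illustrative placeholders, not the study's corpus.

```python
# A toy version of the parallel-corpus tally described above; the pairs
# and candidate markers are illustrative placeholders.
from collections import Counter

pairs = [
    ("음악을 들으면서 공부한다", "一边听音乐一边学习"),
    ("웃으면서 말했다", "笑着说"),
    ("알면서 모르는 척했다", "明明知道却装不知道"),
]
markers = ["一边", "着", "却"]  # candidate Chinese counterparts

tally = Counter()
for ko, zh in pairs:
    if "면서" not in ko:
        continue
    hit = next((m for m in markers if m in zh), None)
    tally[hit or "non-marker"] += 1  # no marker found: non-marker pattern

print(tally)
```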

A Machine Learning Approach to Korean Language Stemming

  • Cho, Se-hyeong
    • 한국지능시스템학회논문지 / Vol. 11, No. 6 / pp. 549-557 / 2001
  • Morphological analysis and POS tagging require a dictionary for the language at hand, so a language cannot be analyzed in this fashion without one; we also run into difficulty when a significant portion of the vocabulary is new or unknown. This paper explores the possibility of learning the morphology of an agglutinative language, in particular Korean, without any prior lexical knowledge of the language. We use unsupervised learning in that there is no instructor to guide the outcome of the learner, nor any tagged corpus. The main characteristics of the approach are as follows. First, we use only a raw corpus without any tags attached and without any dictionary. Second, unlike many heuristics that are theoretically ungrounded, the method is based on widely accepted statistical methods. The method is currently applied only to Korean, but since it is essentially language-neutral it can easily be adapted to other agglutinative languages.

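The unsupervised setting described above, learning morphology from a raw corpus alone, can be sketched with a simple frequency heuristic. This particular split criterion is an illustration, not the paper's statistical model.

```python
# A minimal sketch of dictionary-free stem/suffix learning from a raw
# corpus, in the spirit of the paper; the split heuristic is illustrative.
from collections import Counter

corpus = ["먹었다", "먹는다", "먹자", "갔다", "간다", "가자", "보았다", "본다"]

# Count every possible prefix (stem candidate) and suffix in the corpus.
stems, suffixes = Counter(), Counter()
for w in corpus:
    for i in range(1, len(w)):
        stems[w[:i]] += 1
        suffixes[w[i:]] += 1

def best_split(word):
    """Prefer the split whose stem and suffix both recur in the corpus."""
    return max(((word[:i], word[i:]) for i in range(1, len(word))),
               key=lambda s: stems[s[0]] * suffixes[s[1]])

for w in corpus:
    print(w, "->", best_split(w))
```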

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System / Vol. 5, No. 3 / pp. 147-154 / 2018
  • Natural language processing (NLP) is an emerging research area in which we study how machines can be used to perceive and alter text written in natural languages. We can perform different tasks on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of a particular natural language. The focus of this work is part-of-speech (POS) tagging for Hindi. POS tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text; these categories can be noun, verb, time, date, number, and so on. Hindi is the official and most widely used language of India, and among the top five most spoken languages in the world. For English and other languages a diverse range of POS taggers is available, but these cannot be applied directly to Hindi, which is one of the most morphologically rich languages and differs significantly in structure from them. Thus, this paper presents a POS tagger for Hindi based on a hybrid approach that combines probability-based and rule-based methods. A unigram probability model is used to tag known words, whereas various lexical and contextual features are used to tag unknown words. Finite-state automata are constructed to represent the rules, which are then implemented as regular expressions. A tagset of 29 standard part-of-speech tags is prepared for the task, including two unique tags, a date tag and a time tag, which support all common formats; regular expressions implement all pattern-based tags such as time, date, number, and special symbols. The aim of the approach is to increase the correctness of automatic Hindi POS tagging while limiting the need for a large hand-annotated corpus: the probability-based model increases automatic tagging coverage, and the rule-based model bounds the requirement for an already-tagged training corpus. Trained on a very small labeled set (around 9,000 words), the approach yields a best precision of 96.54%, an average precision of 95.08%, a best accuracy of 91.39%, and an average accuracy of 88.15%.
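
The hybrid scheme described above, a unigram model for known words plus regular-expression rules for pattern-based tags, can be sketched as follows. The tag names, rules, and tiny training sample are placeholders, not the paper's tagset or data.

```python
# A minimal sketch of a hybrid "probability-based + rule-based" tagger in
# the spirit of the paper; tags, rules, and training data are placeholders.
import re
from collections import Counter, defaultdict

# Unigram model: most frequent tag per known word, from a labeled sample.
train = [("राम", "NOUN"), ("खाता", "VERB"), ("राम", "NOUN"), ("है", "VERB")]
freq = defaultdict(Counter)
for word, tag_ in train:
    freq[word][tag_] += 1

# Rule-based fallback: regular expressions for pattern-based tags.
rules = [
    (re.compile(r"^\d{1,2}[:.]\d{2}$"), "TIME"),          # e.g. 10:30
    (re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}$"), "DATE"),   # e.g. 15/8/2018
    (re.compile(r"^\d+$"), "NUM"),
]

def tag(word):
    if word in freq:                  # known word: unigram probability
        return freq[word].most_common(1)[0][0]
    for pattern, t in rules:          # unknown word: pattern-based rules
        if pattern.match(word):
            return t
    return "UNK"                      # the paper would fall back on
                                      # lexical/contextual features here

for w in ["राम", "खाता", "है", "10:30", "15/8/2018", "42"]:
    print(w, tag(w))
```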