• Title/Summary/Keyword: Text corpus

Search results: 243

Usage analysis of vocabulary in Korean high school English textbooks using multiple corpora (코퍼스를 통한 고등학교 영어교과서의 어휘 분석)

  • Kim, Young-Mi; Suh, Jin-Hee / English Language & Literature Teaching / v.12 no.4 / pp.139-157 / 2006
  • As the Communicative Approach has become the norm in foreign language teaching, the objectives of teaching English in Korean schools have changed radically. The focus of high school English textbooks has shifted from mere mastery of structures to communicative proficiency. This paper studies five polysemous words that appear in twelve high school English textbooks used in Korea. The twelve textbooks are incorporated into a single corpus and analyzed to classify the usage of the selected words. The usage of each word is then compared with that of three other corpus-based sources: the BNC (British National Corpus) Sampler, ICE Singapore (International Corpus of English for Singapore), and the Collins COBUILD learner's dictionary, which is based on the corpus "The Bank of English". The comparisons carried out as part of this study demonstrate that Korean textbooks do not always supply the full range of meanings of polysemous words.
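The paper's comparison, checking whether a textbook corpus exercises the same usages of a polysemous word as reference corpora do, can be approximated by contrasting the word's collocational contexts across corpora. A minimal sketch (the toy corpora and the target word are invented for illustration):

```python
from collections import Counter

def usage_contexts(tokens, target, window=2):
    """Collect windowed context words around each occurrence of `target`,
    a rough proxy for distinguishing its senses/usages."""
    contexts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            contexts.update(left + right)
    return contexts

# Toy "textbook corpus" vs. "reference corpus" for the polysemous word "run"
textbook = "i run fast and you run fast".split()
reference = "they run a company and rivers run dry".split()

tb = usage_contexts(textbook, "run")
ref = usage_contexts(reference, "run")
# Context words attested in the reference corpus but never in the textbooks,
# hinting at senses the textbooks fail to supply
missing = set(ref) - set(tb)
```

On real data the same contrast would be run over lemmatized concordance lines from each corpus rather than raw windowed tokens.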


Quantifying L2ers' phraseological competence and text quality in L2 English writing (L2 영어 학습자들의 연어 사용 능숙도와 텍스트 질 사이의 수치화)

  • Kwon, Junhyeok; Kim, Jaejun; Kim, Yoolae; Park, Myung-Kwan; Song, Sanghoun / Annual Conference on Human and Language Technology / 2017.10a / pp.281-284 / 2017
  • Building on studies of multi-word combinations, that is, the field of phraseology, this study examines the relationship between text quality and phraseological competence in L2 English writing, following Yves Bestgen et al. (2014). Using two association measures that capture phraseological competence in opposite ways, the t-score (favoring frequent combinations) and Mutual Information (MI, favoring infrequent ones), bigrams from L2 writers' texts were scored against a reference corpus, GloWbE (Corpus of Global Web-based English). Taking a cross-sectional approach, we show that essay quality and the mean MI score of the bigrams extracted from YELC, the Yonsei English Learner Corpus, are correlated. Negatively scored bigrams, those absent from the reference corpus and thus mostly ungrammatical, also correlate with essay quality: an increase in their proportion lowers the quality of an essay. We conclude by relating essay quality to MI and t-score under this cross-sectional approach and by discussing applications to teaching methods and the assessment of second language writing proficiency.
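The two association scores have standard definitions: with O the observed bigram count, f1 and f2 the unigram counts, and N the corpus size, the expected count is E = f1·f2/N, MI = log2(O/E), and t = (O − E)/√O. A small sketch (the counts are invented) showing why the two measures pull in opposite directions:

```python
import math

def bigram_scores(o11, f1, f2, n):
    """Association scores for a bigram (w1, w2).
    o11: observed bigram count; f1, f2: unigram counts; n: corpus size."""
    expected = f1 * f2 / n
    mi = math.log2(o11 / expected)          # Mutual Information
    t = (o11 - expected) / math.sqrt(o11)   # t-score
    return mi, t

# A very frequent but weakly associated bigram (e.g. "of the") ...
mi_common, t_common = bigram_scores(o11=5000, f1=30000, f2=60000, n=1_000_000)
# ... versus a rare but strongly associated one (e.g. "carbon dioxide")
mi_rare, t_rare = bigram_scores(o11=40, f1=50, f2=60, n=1_000_000)
```

MI rewards the rare, tightly bound pair while the t-score rewards the high-frequency pair, which is exactly why the study uses both to profile learners' phraseology.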



Effects of Torilis Fructus Extract on the Relaxation of Corpus Cavernosum (음경해면체 이완작용에 미치는 사상자(蛇床子)의 효과)

  • Kim, Ho Hyun; Ahn, Sang Hyun; Park, Sun Young / Journal of Physiology & Pathology in Korean Medicine / v.32 no.1 / pp.24-29 / 2018
  • In order to define the effect of Torilis Fructus (TF) extract, which has been used for the treatment of erectile dysfunction, experiments were carried out using an organ bath study together with histochemical and immunohistochemical methods. First, in the organ bath study, when TF extract was administered to corpus cavernosum maximally contracted by phenylephrine (PE, 10⁻⁶ M), a significant relaxation effect was observed at concentrations of 1 and 3 mg/mL. Compared with the absence of L-NNA pretreatment, pretreatment with L-NNA inhibited the relaxation of the penile corpus cavernosum. In the immunohistochemical study, the eNOS-positive reaction was significantly increased and the PDE5-positive reaction significantly decreased by administration of TF extract. These results show that TF enhances the production of eNOS and NO, inhibits PDE5, which blocks the action of increased cGMP, and thereby relaxes the corpus cavernosum. TF can therefore serve as a safer treatment for erectile dysfunction.

Text Classification on Social Network Platforms Based on Deep Learning Models

  • YA, Chen; Tan, Juan; Hoekyung, Jung / Journal of information and communication convergence engineering / v.21 no.1 / pp.9-16 / 2023
  • Natural language on social network platforms has a certain front-to-back dependency in structure, and directly converting Chinese text into a vector makes the dimensionality very high, resulting in the low accuracy of existing text classification methods. To this end, this study establishes a deep learning model that combines a big-data ultra-deep convolutional neural network (UDCNN) with a long short-term memory network (LSTM). The deep structure of the UDCNN is used to extract features for text vector classification. The LSTM stores historical information to capture the context dependency of long texts, and word embedding is introduced to convert the text into low-dimensional vectors. Experiments are conducted on two social-network text corpora: the Sogou corpus and the HowNet Chinese corpus. The results show that, compared with CNN + rand, LSTM, and other models, the hybrid deep learning model effectively improves the accuracy of text classification.
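The UDCNN and LSTM components need a deep learning framework, but the embedding step the abstract highlights, turning text into low-dimensional vectors instead of huge sparse ones, can be sketched with a toy lookup table. The dimension and the pooling scheme here are illustrative assumptions, not the paper's settings:

```python
import random

random.seed(0)
EMB_DIM = 8    # illustrative embedding size, far below a one-hot vocabulary
_table = {}

def embed(word):
    """Lazily assign each word a random low-dimensional vector,
    standing in for trained embedding weights."""
    if word not in _table:
        _table[word] = [random.uniform(-1.0, 1.0) for _ in range(EMB_DIM)]
    return _table[word]

def text_vector(text):
    """Mean-pool word vectors into one fixed-size representation,
    the kind of dense input a CNN/LSTM classifier consumes."""
    vecs = [embed(w) for w in text.split()]
    return [sum(component) / len(vecs) for component in zip(*vecs)]

vec = text_vector("deep learning improves text classification")
```

In the actual model the embedding table is learned jointly with the classifier, and the per-word vectors are fed as a sequence rather than mean-pooled.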

Generative probabilistic model with Dirichlet prior distribution for similarity analysis of research topic

  • Milyahilu, John; Kim, Jong Nam / Journal of Korea Multimedia Society / v.23 no.4 / pp.595-602 / 2020
  • We propose a generative probabilistic model with a Dirichlet prior distribution for topic modeling and text similarity analysis. It assigns a topic to each document and calculates text correlation between documents within a corpus, providing posterior probabilities for each topic of a document based on the prior distribution over the corpus. We then present a Gibbs sampling algorithm for inference about the posterior distribution and compute text correlation among 50 abstracts from papers published by IEEE. We also conduct supervised learning to set a benchmark against which the performance of LDA (Latent Dirichlet Allocation) can be judged. The experiments show that the accuracy of topic assignment to a given document is 76% for LDA. The supervised-learning results show an accuracy of 61%, a precision of 93%, and an f1-score of 96%. The discussion of the experimental results provides a thorough justification based on probabilities, distributions, evaluation metrics, and correlation coefficients with respect to topic assignment.
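The Gibbs sampling step the abstract mentions can be illustrated with a minimal collapsed Gibbs sampler for LDA. This is a generic textbook sketch; the hyperparameters, iteration count, and toy corpus are assumptions, not the authors' settings:

```python
import random
from collections import defaultdict

def gibbs_lda(docs, n_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA: repeatedly resample each token's
    topic from its full conditional, then return per-document topic
    proportions (the posterior probabilities assigned to each topic)."""
    rng = random.Random(seed)
    V = len({w for doc in docs for w in doc})
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # topic totals
    z = []                                             # topic of every token
    for d, doc in enumerate(docs):
        zd = [rng.randrange(n_topics) for _ in doc]
        for w, k in zip(doc, zd):
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # Full conditional p(z_i = t | z_-i, w), up to a constant
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    # Smoothed per-document topic proportions
    return [[(c + alpha) / (len(doc) + n_topics * alpha) for c in row]
            for row, doc in zip(ndk, docs)]

docs = [["corpus", "text", "corpus", "text"], ["gene", "cell", "gene", "cell"]]
theta = gibbs_lda(docs, n_topics=2)
```

Each row of `theta` is a distribution over topics; cosine or correlation between such rows gives the document-similarity scores the paper computes over its 50 IEEE abstracts.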

Corpus Annotation for the Linguistic Analysis of Reference Relations between Event and Spatial Expressions in Text (텍스트 내 사건-공간 표현 간 참조 관계 분석을 위한 말뭉치 주석)

  • Chung, Jin-Woo; Lee, Hee-Jin; Park, Jong C. / Language and Information / v.18 no.2 / pp.141-168 / 2014
  • Recognizing spatial information associated with events expressed in natural language text is essential not only for the interpretation of such events but also for the understanding of the relations among them. However, spatial information is rarely mentioned compared to events, and the association between event and spatial expressions is highly implicit in a text. This makes it difficult to automate the extraction of spatial information associated with events from text. In this paper, we give a linguistic analysis of how spatial expressions are associated with event expressions in a text. We first present issues in annotating narrative texts with reference relations between event and spatial expressions, and then discuss surface-level linguistic characteristics of such relations based on the annotated corpus, offering a helpful insight into developing an automated recognition method.
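A reference relation between an event expression and a spatial expression is typically recorded as stand-off annotation over character spans. The schema below is a simplified illustration of the idea, not the paper's actual annotation format:

```python
# One sentence annotated with an event expression, a spatial expression,
# and a reference relation linking them (spans are character offsets).
annotation = {
    "text": "The meeting was held in the main conference room.",
    "events":  [{"id": "e1", "span": (4, 11)}],   # "meeting"
    "spatial": [{"id": "s1", "span": (24, 48)}],  # "the main conference room"
    "relations": [{"event": "e1", "space": "s1", "type": "located_in"}],
}

def spatial_mentions_for(ann, event_id):
    """Resolve which spatial expressions a given event refers to."""
    ids = {r["space"] for r in ann["relations"] if r["event"] == event_id}
    return [ann["text"][s["span"][0]:s["span"][1]]
            for s in ann["spatial"] if s["id"] in ids]
```

Stand-off spans keep the source text untouched, which lets multiple annotators layer competing relation sets over the same corpus for agreement studies.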


Sentence-Chain Based Seq2seq Model for Corpus Expansion

  • Chung, Euisok; Park, Jeon Gue / ETRI Journal / v.39 no.4 / pp.455-466 / 2017
  • This study focuses on a method of sequential data augmentation to alleviate data sparseness problems. Specifically, we present corpus expansion techniques for enhancing the coverage of a language model. Recent recurrent neural network studies show that a seq2seq model can be applied to language generation; it can generate new sentences from given input sentences. We present a method of corpus expansion using a sentence-chain-based seq2seq model. For training, sentence chains are used as triples: the first two sentences in a triple feed the encoder of the seq2seq model, while the last sentence becomes the target sequence for the decoder. Using only internal resources, evaluation shows an improvement of approximately 7.6% in relative perplexity over a baseline language model of Korean text. Additionally, compared with a previous study, the sentence-chain approach reduces the size of the training data by 38.4% while generating 1.4 times the number of n-grams, with superior performance for English text.
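The triple construction described above can be sketched as follows. Here a chain is simply three consecutive sentences, which is an assumption for illustration; the paper's sentence chains may be selected by other criteria:

```python
def sentence_chain_triples(sentences):
    """Slide a window of three consecutive sentences: the first two form
    the encoder input, the third becomes the decoder's target sequence."""
    triples = []
    for i in range(len(sentences) - 2):
        encoder_input = sentences[i] + " " + sentences[i + 1]
        target = sentences[i + 2]
        triples.append((encoder_input, target))
    return triples

corpus = ["He opened the door.", "The room was dark.",
          "He switched on the light.", "Nothing happened."]
pairs = sentence_chain_triples(corpus)
```

Each `(encoder_input, target)` pair is one training example; after training, feeding new two-sentence contexts to the encoder yields generated sentences that expand the corpus.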

Vocabulary Coverage Improvement for Embedded Continuous Speech Recognition Using Part-of-Speech Tagged Corpus (품사 부착 말뭉치를 이용한 임베디드용 연속음성인식의 어휘 적용률 개선)

  • Lim, Min-Kyu; Kim, Kwang-Ho; Kim, Ji-Hwan / MALSORI / no.67 / pp.181-193 / 2008
  • In this paper, we propose a vocabulary coverage improvement method for embedded continuous speech recognition (CSR) using a part-of-speech (POS) tagged corpus. We investigate the 152 POS tags defined in the Lancaster-Oslo-Bergen (LOB) corpus together with word-POS tag pairs, and derive a new vocabulary through word addition. Words paired with some POS tags must be included in vocabularies of any size, whereas the inclusion of words paired with other POS tags varies with the target vocabulary size. The 152 POS tags are therefore categorized according to whether word addition depends on the vocabulary size. Using expert knowledge, we classify the POS tags first and then apply different ways of word addition based on the POS tags paired with the words. The performance of the proposed method is measured in terms of coverage and compared with vocabularies of the same size (5,000 words) derived from frequency lists. The proposed method achieves 95.18% coverage on a test short message service (SMS) text corpus, while the conventional vocabularies cover only 93.19% and 91.82% of the words appearing in the same corpus.
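The two-stage idea, unconditional inclusion for words carrying certain POS tags and frequency-based filling for the rest, plus the coverage measure, can be sketched as below. The tags, counts, and which tags are "always keep" are toy assumptions, not the paper's classification:

```python
from collections import Counter

def build_vocab(tagged_corpus, always_keep_tags, size):
    """Words whose POS tag is in `always_keep_tags` enter the vocabulary
    unconditionally; remaining slots are filled by word frequency."""
    counts = Counter(w for w, _ in tagged_corpus)
    vocab = {w for w, t in tagged_corpus if t in always_keep_tags}
    for w, _ in counts.most_common():
        if len(vocab) >= size:
            break
        vocab.add(w)
    return vocab

def coverage(vocab, test_tokens):
    """Fraction of test tokens that the vocabulary covers."""
    return sum(1 for w in test_tokens if w in vocab) / len(test_tokens)

tagged = [("send", "VB"), ("me", "PPO"), ("a", "AT"), ("message", "NN"),
          ("send", "VB"), ("now", "RB")]
vocab = build_vocab(tagged, always_keep_tags={"PPO", "AT"}, size=4)
cov = coverage(vocab, ["send", "me", "a", "text"])
```

Coverage computed this way on held-out SMS text is exactly the metric the paper reports (95.18% versus 93.19%/91.82% for purely frequency-derived vocabularies).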


A Study on the Use of Stopword Corpus for Cleansing Unstructured Text Data (비정형 텍스트 데이터 정제를 위한 불용어 코퍼스의 활용에 관한 연구)

  • Lee, Won-Jo / The Journal of the Convergence on Culture Technology / v.8 no.6 / pp.891-897 / 2022
  • In big data analysis, raw text data mostly exists in various unstructured forms, so it becomes analyzable structured data only after heuristic pre-processing and computerized post-processing cleansing. In this study, unnecessary elements are therefore removed by pre-processing the collected raw data so that the wordcloud function of the R program, one of the text data analysis techniques, can be applied, and stopwords are removed in the post-processing step. A case study of wordcloud analysis is then conducted, in which word frequencies are calculated and high-frequency words are presented as key issues. To improve on the existing stopword-handling method, the "nested stopword source code" approach used with R's word cloud technique, we propose the use of a "general stopword corpus" and a "user-defined stopword corpus" and conduct a case analysis. The advantages and disadvantages of the proposed "unstructured data cleansing process model" are comparatively verified and presented, and a practical application of word cloud visualization analysis using the proposed external-corpus cleansing technique is demonstrated.
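Although the study works in R's wordcloud ecosystem, the two-corpus stopword cleansing idea, a general stopword list merged with a user-defined one before frequency counting, can be sketched in Python. The stopword lists and sample text here are illustrative assumptions:

```python
import re
from collections import Counter

# A "general stopword corpus" plus a "user-defined stopword corpus",
# merged before cleansing -- the two-list scheme the study proposes.
GENERAL_STOPWORDS = {"the", "a", "an", "is", "of", "and", "to"}
USER_STOPWORDS = {"data"}  # domain words the analyst chooses to drop

def cleanse(raw_text, extra_stopwords=frozenset()):
    """Lowercase, strip non-word characters, and remove stopwords."""
    tokens = re.findall(r"[a-z']+", raw_text.lower())
    stop = GENERAL_STOPWORDS | USER_STOPWORDS | set(extra_stopwords)
    return [t for t in tokens if t not in stop]

freq = Counter(cleanse("The analysis of big data and the cleansing of data"))
```

The resulting frequency table is what a word cloud renders; keeping the user-defined list external to the code is the point of the proposal, since analysts can then swap stopword corpora without editing the cleansing script.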