• Title/Abstract/Keywords: text corpus

Search results: 244 items (processing time: 0.021 seconds)

코퍼스를 통한 고등학교 영어교과서의 어휘 분석 (Usage analysis of vocabulary in Korean high school English textbooks using multiple corpora)

  • 김영미;서진희
    • 영어어문교육
    • /
    • Vol. 12, No. 4
    • /
    • pp.139-157
    • /
    • 2006
  • As the Communicative Approach has become the norm in foreign language teaching, the objectives of teaching English in Korean schools have changed radically. The focus of high school English textbooks has shifted from mere mastery of structures to communicative proficiency. This paper studies five polysemous words that appear in twelve high school English textbooks used in Korea. The twelve textbooks are incorporated into a single corpus and analyzed to classify the usage of the selected words. The usage of each word is then compared with that of three other corpus-based sources: the BNC (British National Corpus) Sampler, ICE Singapore (International Corpus of English for Singapore), and the Collins COBUILD learner's dictionary, which is based on the corpus "The Bank of English". The comparisons carried out in this study demonstrate that Korean textbooks do not always supply the full range of meanings of polysemous words.

  • PDF
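
A minimal sketch of the kind of textbook-corpus inspection described above, assuming the twelve textbooks are available as plain-text files; the directory name and the target word "run" are illustrative assumptions, and NLTK's Text class is used only for its concordance view:

```python
# Sketch: pool textbook files into one corpus and inspect a polysemous word.
# The directory, file layout, and target word are illustrative assumptions.
import re
from pathlib import Path

from nltk.text import Text

textbook_dir = Path("textbooks")  # hypothetical directory holding the twelve textbook files
tokens = []
for path in sorted(textbook_dir.glob("*.txt")):
    # A simple regex tokenizer keeps the sketch free of extra NLTK downloads.
    tokens.extend(re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower()))

corpus = Text(tokens)

# Show concordance lines for one polysemous word; each occurrence can then be
# classified into a sense, as the study does manually.
corpus.concordance("run", width=80, lines=25)

# Relative frequency of the word, for comparison against reference corpora
# such as the BNC Sampler or ICE Singapore.
print("relative frequency:", corpus.count("run") / max(len(tokens), 1))
```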

L2 영어 학습자들의 연어 사용 능숙도와 텍스트 질 사이의 수치화 (Quantifying L2ers' phraseological competence and text quality in L2 English writing)

  • 권준혁;김재준;김유래;박명관;송상헌
    • 한국정보과학회 언어공학연구회:학술대회논문집(한글 및 한국어 정보처리)
    • /
    • 한국정보과학회언어공학연구회 2017년도 제29회 한글 및 한국어 정보처리 학술대회
    • /
    • pp.281-284
    • /
    • 2017
  • Building on studies of multi-word combinations, that is, the field of phraseology, this study examines the relationship between text quality and phraseological competence in L2 English writing, following Bestgen et al. (2014). Two association scores, the t-score and Mutual Information (MI), which capture phraseological competence in opposite ways (frequency versus infrequency), are used to score bigrams from L2 writers' texts against a reference corpus, GloWbE (Corpus of Global Web-based English). Taking a cross-sectional approach, we show that essay quality and the mean MI score of the bigrams extracted from YELC (Yonsei English Learner Corpus) are correlated with each other. Negatively scored bigrams, which are absent from the reference corpus and thus mostly ungrammatical, are also correlated with essay quality: an increase in their proportion lowers the quality of an essay. We conclude by relating essay quality to MI and t-scores in this cross-sectional setting and by discussing applications to teaching methods and the assessment of second language writing proficiency.

  • PDF
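
As a rough illustration of the two association measures mentioned in the abstract above, the sketch below computes MI and t-scores for bigrams; in the study the reference counts come from GloWbE, whereas here, purely to keep the example self-contained, they are estimated from a toy text:

```python
# Sketch: score bigrams from a learner text with Mutual Information (MI) and t-score.
# In the study the expected counts come from a reference corpus (GloWbE); here,
# only to keep the example self-contained, they are estimated from the text itself.
import math
from collections import Counter

text = ("the learner wrote a short essay about the weather and the essay "
        "described the weather in the city in great detail")
tokens = text.split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def mi_score(w1, w2):
    # Pointwise mutual information: log2(P(w1, w2) / (P(w1) * P(w2)))
    p_joint = bigrams[(w1, w2)] / n
    p_indep = (unigrams[w1] / n) * (unigrams[w2] / n)
    return math.log2(p_joint / p_indep)

def t_score(w1, w2):
    # t-score: (observed - expected) / sqrt(observed)
    observed = bigrams[(w1, w2)]
    expected = unigrams[w1] * unigrams[w2] / n
    return (observed - expected) / math.sqrt(observed)

for (w1, w2), freq in bigrams.most_common(5):
    print(f"{w1} {w2}: MI={mi_score(w1, w2):.2f}, t={t_score(w1, w2):.2f}")
```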

음경해면체 이완작용에 미치는 사상자(蛇床子)의 효과 (Effects of Torilis Fructus Extract on the Relaxation of Corpus Cavernosum)

  • 김호현;안상현;박선영
    • 동의생리병리학회지
    • /
    • Vol. 32, No. 1
    • /
    • pp.24-29
    • /
    • 2018
  • In order to define the effect of Torilis Fructus (TF) extract, which has been used for the treatment of erectile dysfunction, experiments were carried out using an organ bath study and histochemical and immunohistochemical methods. First, in the organ bath study, when TF extract was administered to corpus cavernosum maximally contracted by PE (10⁻⁶ M), a significant relaxation effect was observed at concentrations of 1 and 3 mg/mL. Compared with the absence of L-NNA pretreatment, pretreatment with L-NNA inhibited the relaxation effect on the penile corpus cavernosum. In the immunohistochemical study, the eNOS-positive reaction was significantly increased and the PDE5-positive reaction was significantly decreased by the administration of TF extract. These results show that TF enhances the production of eNOS and NO, inhibits PDE5, which blocks the action of increased cGMP, and thereby relaxes the corpus cavernosum. TF therefore relaxes the corpus cavernosum and may be used as a safer treatment for erectile dysfunction.

Text Classification on Social Network Platforms Based on Deep Learning Models

  • YA, Chen;Tan, Juan;Hoekyung, Jung
    • Journal of information and communication convergence engineering
    • /
    • Vol. 21, No. 1
    • /
    • pp.9-16
    • /
    • 2023
  • Natural language on social network platforms exhibits sequential front-to-back dependencies in structure, and directly converting Chinese text into vectors makes the dimensionality very high, resulting in the low accuracy of existing text classification methods. To address this, this study establishes a big-data deep learning model that combines an ultra-deep convolutional neural network (UDCNN) with a long short-term memory network (LSTM). The deep structure of the UDCNN is used to extract features for text vector classification, the LSTM stores historical information to capture the context dependencies of long texts, and word embedding is introduced to convert the text into low-dimensional vectors. Experiments are conducted on two social-network-platform datasets, the Sogou corpus and the university HowNet Chinese corpus. The results show that, compared with CNN + rand, LSTM, and other models, the hybrid deep learning model effectively improves the accuracy of text classification.
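
A condensed sketch of a CNN-plus-LSTM hybrid text classifier of the general kind described above, written with Keras; the layer sizes, vocabulary size, and number of classes are placeholder assumptions, and the paper's UDCNN is substantially deeper than the two convolutional layers shown:

```python
# Sketch of a CNN + LSTM hybrid text classifier; all hyperparameters are illustrative.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 200        # assumed maximum sequence length (token ids per document)
NUM_CLASSES = 5      # assumed number of document classes

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    # Word embedding converts token ids into low-dimensional dense vectors.
    layers.Embedding(VOCAB_SIZE, 128),
    # Stacked convolutions extract local n-gram features (the paper's UDCNN is much deeper).
    layers.Conv1D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    # The LSTM captures longer-range, front-to-back dependencies in the text.
    layers.LSTM(128),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```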

Generative probabilistic model with Dirichlet prior distribution for similarity analysis of research topic

  • Milyahilu, John;Kim, Jong Nam
    • 한국멀티미디어학회논문지
    • /
    • Vol. 23, No. 4
    • /
    • pp.595-602
    • /
    • 2020
  • We propose a generative probabilistic model with a Dirichlet prior distribution for topic modeling and text similarity analysis. The model assigns a topic to each document and calculates text correlation between documents within a corpus, and it provides posterior probabilities over the topics of a document based on the prior distribution over the corpus. We then present a Gibbs sampling algorithm for inference about the posterior distribution and compute text correlation among 50 abstracts from papers published by IEEE. We also conduct supervised learning to set a benchmark that justifies the performance of LDA (Latent Dirichlet Allocation). The experiments show that the accuracy of topic assignment to a given document is 76% for LDA, while supervised learning yields an accuracy of 61%, a precision of 93%, and an F1-score of 96%. The discussion of the experimental results provides a thorough justification based on probabilities, distributions, evaluation metrics, and correlation coefficients with respect to topic assignment.
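
The sketch below illustrates the overall pipeline assumed above, fitting an LDA topic model with Dirichlet priors to a handful of documents and then comparing documents by the cosine similarity of their posterior topic distributions; it uses gensim's LdaModel, which relies on variational inference rather than the paper's Gibbs sampler, and the toy abstracts stand in for the 50 IEEE abstracts:

```python
# Sketch: LDA topic model over a few documents, then pairwise topic-distribution similarity.
# gensim's LdaModel uses variational Bayes, not the Gibbs sampler of the paper;
# the documents below are placeholders for the 50 IEEE abstracts.
from gensim import corpora, models
import numpy as np

docs = [
    "deep learning for image recognition and convolutional networks",
    "topic models and dirichlet priors for text analysis",
    "speech recognition with recurrent neural networks",
]
tokenized = [d.lower().split() for d in docs]

dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized]

lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      alpha="auto", passes=50, random_state=0)

def topic_vector(bow):
    # Dense posterior topic distribution for one document.
    dense = np.zeros(lda.num_topics)
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        dense[topic_id] = prob
    return dense

vectors = [topic_vector(bow) for bow in bow_corpus]
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        sim = np.dot(vectors[i], vectors[j]) / (
            np.linalg.norm(vectors[i]) * np.linalg.norm(vectors[j]))
        print(f"similarity(doc{i}, doc{j}) = {sim:.3f}")
```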

텍스트 내 사건-공간 표현 간 참조 관계 분석을 위한 말뭉치 주석 (Corpus Annotation for the Linguistic Analysis of Reference Relations between Event and Spatial Expressions in Text)

  • 정진우;이희진;박종철
    • 한국언어정보학회지:언어와정보
    • /
    • Vol. 18, No. 2
    • /
    • pp.141-168
    • /
    • 2014
  • Recognizing spatial information associated with events expressed in natural language text is essential not only for the interpretation of such events but also for the understanding of the relations among them. However, spatial information is rarely mentioned compared to events, and the association between event and spatial expressions is highly implicit in a text, which makes it difficult to automate the extraction of spatial information associated with events. In this paper, we give a linguistic analysis of how spatial expressions are associated with event expressions in a text. We first present issues in annotating narrative texts with reference relations between event and spatial expressions, and then discuss surface-level linguistic characteristics of such relations based on the annotated corpus, to give helpful insight into developing an automated recognition method.

  • PDF
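
As a simplified illustration of what such an annotation might look like, the record below links one event expression to one spatial expression in a sentence; the field names and the example are hypothetical and do not reproduce the paper's actual annotation scheme:

```python
# Hypothetical annotation record linking an event expression to a spatial expression.
# Field names and the example sentence are illustrative, not the paper's scheme.
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    start: int   # character offset in the sentence (inclusive)
    end: int     # character offset in the sentence (exclusive)

@dataclass
class EventSpatialLink:
    sentence: str
    event: Span
    spatial: Span
    explicit: bool   # whether the association is stated explicitly or only implied

example = EventSpatialLink(
    sentence="They met at the station before the train left.",
    event=Span("met", 5, 8),
    spatial=Span("the station", 12, 23),
    explicit=True,
)
print(example)
```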

Sentence-Chain Based Seq2seq Model for Corpus Expansion

  • Chung, Euisok;Park, Jeon Gue
    • ETRI Journal
    • /
    • Vol. 39, No. 4
    • /
    • pp.455-466
    • /
    • 2017
  • This study focuses on a method for sequential data augmentation in order to alleviate data sparseness problems. Specifically, we present corpus expansion techniques for enhancing the coverage of a language model. Recent recurrent neural network studies show that a seq2seq model can be applied to language generation issues; it has the ability to generate new sentences from given input sentences. We present a method of corpus expansion using a sentence-chain based seq2seq model. For training the seq2seq model, sentence chains are used as triples: the first two sentences in a triple are used for the encoder of the seq2seq model, while the last sentence becomes the target sequence for the decoder. Using only internal resources, the evaluation shows an improvement of approximately 7.6% in relative perplexity over a baseline language model of Korean text. Additionally, in comparison with a previous study, the sentence-chain approach reduces the size of the training data by 38.4% while generating 1.4 times the number of n-grams, with superior performance for English text.
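
A small sketch of how the sentence-chain triples described above could be assembled from raw text: consecutive sentences are grouped into triples, with the first two sentences forming the encoder input and the third the decoder target (the toy sentences are placeholders, and model training is omitted):

```python
# Sketch: build sentence-chain triples for a seq2seq corpus-expansion model.
# The first two sentences of each triple are the encoder input; the third is the
# decoder target. The toy sentences are placeholders for a real corpus.
sentences = [
    "He entered the station.",
    "The train was already waiting.",
    "He ran toward the platform.",
    "The doors closed behind him.",
]

def sentence_chain_triples(sents):
    """Yield (encoder_input, target) pairs from consecutive sentence triples."""
    for i in range(len(sents) - 2):
        encoder_input = sents[i] + " " + sents[i + 1]
        target = sents[i + 2]
        yield encoder_input, target

for src, tgt in sentence_chain_triples(sentences):
    print("ENCODER:", src)
    print("DECODER:", tgt)
    print()
```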

품사 부착 말뭉치를 이용한 임베디드용 연속음성인식의 어휘 적용률 개선 (Vocabulary Coverage Improvement for Embedded Continuous Speech Recognition Using Part-of-Speech Tagged Corpus)

  • 임민규;김광호;김지환
    • 대한음성학회지:말소리
    • /
    • No. 67
    • /
    • pp.181-193
    • /
    • 2008
  • In this paper, we propose a vocabulary coverage improvement method for embedded continuous speech recognition (CSR) using a part-of-speech (POS) tagged corpus. We investigate the 152 POS tags defined in the Lancaster-Oslo-Bergen (LOB) corpus together with word-POS tag pairs, and derive a new vocabulary through word addition. Words paired with some POS tags have to be included in vocabularies of any size, whereas the inclusion of words paired with other POS tags varies with the target vocabulary size. The 152 POS tags are therefore categorized according to whether word addition depends on vocabulary size. Using expert knowledge, we classify the POS tags first and then apply different ways of word addition based on the POS tags paired with the words. The performance of the proposed method is measured in terms of coverage and compared with that of vocabularies of the same size (5,000 words) derived from frequency lists. The proposed method achieves 95.18% coverage on a test short message service (SMS) text corpus, while the conventional vocabularies cover only 93.19% and 91.82% of the words appearing in the same corpus.

  • PDF
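
Coverage of the kind reported above can be computed as the proportion of running words in a test corpus that appear in the vocabulary; a minimal sketch with placeholder data:

```python
# Sketch: coverage of a vocabulary over a test corpus (proportion of in-vocabulary tokens).
# The vocabulary and SMS-style test sentences below are placeholders.
vocabulary = {"i", "will", "be", "there", "at", "see", "you", "later", "call", "me"}

test_corpus = [
    "i will be there at seven",
    "call me when you arrive",
    "see you later tonight",
]

tokens = [tok for line in test_corpus for tok in line.split()]
covered = sum(1 for tok in tokens if tok in vocabulary)
coverage = covered / len(tokens)

print(f"coverage = {coverage:.2%}  ({covered}/{len(tokens)} tokens in vocabulary)")
```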

비정형 텍스트 데이터 정제를 위한 불용어 코퍼스의 활용에 관한 연구 (A Study on the Use of Stopword Corpus for Cleansing Unstructured Text Data)

  • 이원조
    • 문화기술의 융합
    • /
    • Vol. 8, No. 6
    • /
    • pp.891-897
    • /
    • 2022
  • In big data analysis, raw text data mostly exists in various unstructured forms, so it becomes structured data suitable for analysis only after heuristic preprocessing and computer-based post-processing. Accordingly, in this study, to apply the word cloud of the R program, one of the text data analysis techniques, unnecessary elements are cleansed by preprocessing the collected raw data and stopwords are removed in the post-processing stage. We then present a case study of word cloud analysis in which word frequencies are computed and high-frequency words are presented as key issues. To address the problems of the existing stopword-handling approach for R word clouds, the "embedded stopword source code" method, this study proposes the use of a "general stopword corpus" and a "user-defined stopword corpus", compares and verifies the strengths and weaknesses of the proposed "unstructured data cleansing process model" through case analysis, and demonstrates the practical utility of word cloud visualization analysis using the proposed external-corpus cleansing technique.
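
The cleansing idea described above, combining a general stopword corpus with a user-defined one before counting word frequencies, is sketched below in Python for illustration (the study itself uses R's word cloud); the word lists and input text are placeholders:

```python
# Sketch: stopword removal with a general plus a user-defined stopword corpus,
# followed by the frequency counting that feeds a word cloud.
# The stopword lists and the raw text below are placeholders.
import re
from collections import Counter

general_stopwords = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are"}
user_stopwords = {"data", "study"}          # domain-specific, user-defined corpus
stopwords = general_stopwords | user_stopwords

raw_text = """
Big data analysis of unstructured text requires cleansing of the raw data
and removal of stopwords before the frequency of the remaining words is counted.
"""

# Pre-processing: lowercase and keep only alphabetic tokens.
tokens = re.findall(r"[a-z]+", raw_text.lower())

# Post-processing: drop stopwords, then count frequencies for the word cloud.
frequencies = Counter(tok for tok in tokens if tok not in stopwords)

for word, count in frequencies.most_common(10):
    print(word, count)
```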