• Title/Abstract/Keyword: Corpus-based Study


A Corpus Analysis of Temporal Adverbs and Verb Tenses Cooccurrence in Spanish, English, and Chinese

  • Cheng, An Chung; Lu, Hui-Chuan
    • 아시아태평양코퍼스연구 / Vol. 3, No. 2 / pp. 1-16 / 2022
  • This study investigates the cooccurrence of temporal adverbs and grammatical tenses in Spanish and contrasts temporal specifications across Spanish, English, and Chinese. Based on a monolingual Spanish corpus and a trilingual parallel corpus, the study identified the ten most frequent single-word temporal adverbs collocating with grammatical tenses in Spanish and contrasted the cooccurrence of temporal adverbs and verb tenses across the three languages. The results show that aún 'still', hoy 'today', and ahora 'now' collocate with the present tense more than 80% of the time. Ayer 'yesterday' and finalmente 'finally' cooccur with the simple past tense at 84% and 69%, respectively, while mientras 'meanwhile' collocates with the past imperfect at 55%, its strongest tendency. Mañana 'tomorrow' cooccurs with the future and present tenses at 34%. The remaining adverbs, ya 'already', siempre 'always', and nuevamente 'again', show no strong cooccurrence tendency with any single tense. The contrastive analysis of the trilingual parallel corpus gives a comprehensive view of temporal specifications in the three languages; however, no clear one-to-one mapping of the cooccurrence patterns across the three languages emerges. These findings offer helpful insights for second language instruction grounded in natural language data rather than intuition. Future research with larger corpora is needed.
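
The tabulation behind these percentages is straightforward to reproduce. A minimal sketch, assuming the corpus has already been reduced to (adverb, tense) pairs; the pair list and tense labels below are toy stand-ins, not the authors' data:

```python
# Tally how often each temporal adverb cooccurs with each tense and report
# percentages; the pairs are invented examples, not corpus output.
from collections import Counter, defaultdict

pairs = [
    ("hoy", "present"), ("hoy", "present"), ("ayer", "simple_past"),
    ("mientras", "past_imperfect"), ("ahora", "present"),
]

by_adverb = defaultdict(Counter)
for adverb, tense in pairs:
    by_adverb[adverb][tense] += 1

for adverb, tenses in by_adverb.items():
    total = sum(tenses.values())
    for tense, n in tenses.most_common():
        print(f"{adverb}\t{tense}\t{n / total:.0%}")
```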

코퍼스를 통한 고등학교 영어교과서의 어휘 분석 (Usage analysis of vocabulary in Korean high school English textbooks using multiple corpora)

  • 김영미; 서진희
    • 영어어문교육 / Vol. 12, No. 4 / pp. 139-157 / 2006
  • As the Communicative Approach has become the norm in foreign language teaching, the objectives of teaching English in Korean schools have changed radically. The focus of high school English textbooks has shifted from mere mastery of structures to communicative proficiency. This paper studies five polysemous words that appear in twelve high school English textbooks used in Korea. The twelve textbooks are incorporated into a single corpus and analyzed to classify the usage of the selected words. The usage of each word is then compared with that of three other corpus-based sources: the BNC (British National Corpus) Sampler, ICE Singapore (International Corpus of English for Singapore), and the Collins COBUILD learner's dictionary, which is based on the corpus 'The Bank of English'. The comparisons carried out in this study demonstrate that Korean textbooks do not always supply the full range of meanings of polysemous words.
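
Classifying the usage of a polysemous word in a corpus typically starts from keyword-in-context (KWIC) lines. A rough sketch under that assumption; the file name `kr_textbooks.txt`, the target word, and the window size are illustrative only:

```python
# Pull every occurrence of a target word into a KWIC line so its senses can
# be classified by hand; tokenization here is deliberately naive.
def kwic(text, word, window=5):
    tokens = text.split()
    for i, tok in enumerate(tokens):
        if tok.lower().strip('.,;:!?"') == word:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            yield f"{left:>40} [{tok}] {right}"

# hypothetical single-file corpus built from the twelve textbooks
text = open("kr_textbooks.txt", encoding="utf-8").read()
for line in kwic(text, "run"):
    print(line)
```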


성대특성 보간에 의한 합성음의 음질향상 - 음성코퍼스 내 개구간 비 보간을 위한 기초연구 - (Synthetic Speech Quality Improvement By Glottal parameter Interpolation - Preliminary study on open quotient interpolation in the speech corpus -)

  • 배재현; 오영환
    • 대한음성학회 학술대회논문집 (Conference Proceedings) / 2005 Fall Conference / pp. 63-66 / 2005
  • For large-corpus-based TTS, the consistency of the speech corpus is very important, because inconsistency in speech quality within the corpus may result in distortion at concatenation points; because of this inconsistency, a large corpus must be tuned repeatedly. One reason for such inconsistency is that glottal characteristics differ across the recorded sentences in the corpus. In this paper, we adjust the glottal characteristics of the speech in the corpus to prevent this distortion, and the experimental results are presented.
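
As a rough illustration of the adjustment idea (not the authors' actual algorithm), per-sentence open quotient estimates can be interpolated toward a corpus-wide reference value; the numbers and the weight `alpha` below are invented:

```python
# Pull per-sentence open quotient (OQ) estimates toward the corpus mean so
# glottal characteristics vary less across concatenation points.
import numpy as np

oq = np.array([0.52, 0.61, 0.48, 0.70, 0.55])  # toy per-sentence OQ estimates
target = oq.mean()                             # corpus-wide reference OQ
alpha = 0.5                                    # interpolation weight (assumed)

oq_adjusted = (1 - alpha) * oq + alpha * target
print(oq_adjusted)
```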


『동의보감사전』 편찬을 위한 표제어 추출에 관한 연구 - 코퍼스 분석방법을 바탕으로 - (Study on Extraction of Headwords for Compilation of 「Donguibogam Dictionary」 - Based on Corpus-based Analysis -)

  • 정지훈; 김도훈; 김동율
    • 한국의사학회지 / Vol. 29, No. 1 / pp. 47-54 / 2016
  • This article extracts headwords for the compilation of the "Donguibogam Dictionary" using corpus-based analysis. The computerized original text of Donguibogam was converted into a text file with the editor 'EM Editor'. Chinese characters with a high frequency of occurrence in Donguibogam were then extracted with the corpus analysis program 'AntConc'. Two-, three-, four-, and five-syllable words containing each high-frequency character were extracted with the Clusters/N-Grams function of AntConc. Lastly, the output that is meaningful as words was sorted. As a result, words that appear frequently in Donguibogam could be sorted; names of books, medicinal herbs, disease symptoms, and prescriptions appear especially often. This corpus-based method of extracting headwords can suggest a better headword list for the "Donguibogam Dictionary" in the future.
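
The extraction pipeline can be approximated outside AntConc as well. A minimal sketch, assuming a plain-text Donguibogam file; the path and cut-offs are placeholders:

```python
# Step 1: rank Chinese characters by frequency. Step 2: collect 2- to
# 5-character strings that contain a high-frequency character, mirroring
# AntConc's Clusters/N-Grams function.
from collections import Counter

text = open("donguibogam.txt", encoding="utf-8").read()  # hypothetical path
chars = [c for c in text if not c.isspace()]
top_chars = {c for c, _ in Counter(chars).most_common(50)}

clusters = Counter()
for n in (2, 3, 4, 5):
    for i in range(len(chars) - n + 1):
        gram = "".join(chars[i:i + n])
        if top_chars & set(gram):
            clusters[gram] += 1

for gram, freq in clusters.most_common(20):
    print(gram, freq)
```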

Buckeye corpus에 나타난 탄설음화 현상 분석 (A study of flaps in American English based on the Buckeye Corpus)

  • 황병후; 강석한
    • 말소리와 음성과학 / Vol. 10, No. 3 / pp. 9-18 / 2018
  • This paper presents an acoustic and phonological study of alveolar flaps in American English. Based on the Buckeye Corpus, flapping tokens produced by twenty men are analyzed at both the lexical and post-lexical levels. The data, analyzed with Praat speech analysis, include duration, F2, and F3 during the voiced portion of the flap, as well as duration, F1, F2, F3, and f0 in the adjacent vowels. The results provide evidence on two issues: (1) the different ways in which voiced and voiceless alveolar stops give rise to neutralized flaps at the lexical and post-lexical levels, and (2) the extent to which vowel features (height, frontness, and tenseness) affect flapping. The results show that flaps are affected by pre-consonantal vowel features at the lexical as well as the post-lexical level. Unlike previous studies, this study uses Praat to distinguish flapped from unflapped tokens in the Buckeye Corpus and examines connections between the lexical and post-lexical levels.
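
The Praat measurements named above can be scripted from Python with parselmouth; a sketch under the assumption that flap tokens have already been located in the Buckeye annotations, with the file name and segment times as placeholders:

```python
# Measure flap duration, F2/F3 at the flap midpoint, and f0, as in the
# acoustic analysis described above; times and file are hypothetical.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("s01_token.wav")   # hypothetical extracted token
t_start, t_end = 0.112, 0.141              # hypothetical flap boundaries
t_mid = (t_start + t_end) / 2

duration_ms = (t_end - t_start) * 1000
formants = snd.to_formant_burg()
f2 = formants.get_value_at_time(2, t_mid)  # F2 at flap midpoint (Hz)
f3 = formants.get_value_at_time(3, t_mid)  # F3 at flap midpoint (Hz)

pitch = snd.to_pitch()
f0 = call(pitch, "Get value at time", t_mid, "Hertz", "Linear")

print(f"dur={duration_ms:.1f} ms, F2={f2:.0f} Hz, F3={f3:.0f} Hz, f0={f0}")
```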

GNI Corpus Version 1.0: Annotated Full-Text Corpus of Genomics & Informatics to Support Biomedical Information Extraction

  • Oh, So-Yeon; Kim, Ji-Hyeon; Kim, Seo-Jin; Nam, Hee-Jo; Park, Hyun-Seok
    • Genomics & Informatics / Vol. 16, No. 3 / pp. 75-77 / 2018
  • Genomics & Informatics (NLM title abbreviation: Genomics Inform) is the official journal of the Korea Genome Organization. A text corpus of this journal annotated with various levels of linguistic information would be a valuable resource, as information extraction requires syntactic, semantic, and higher levels of natural language processing. In this study, we publish our new corpus, GNI Corpus version 1.0, extracted and annotated from the full texts of Genomics & Informatics with an NLTK (Natural Language Toolkit)-based text mining script. This preliminary version of the corpus can be used as a training and test set for systems serving a variety of future biomedical text mining functions.
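
The annotation layers described here map naturally onto NLTK's standard pipeline. A minimal sketch of such a script (the sentence is a stand-in for the journal's full text; the actual GNI scripts may differ):

```python
# Sentence-split, tokenize, and POS-tag a text with NLTK, the kind of
# linguistic annotation the corpus description refers to.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

article = "GNI Corpus version 1.0 was extracted from full texts of Genomics & Informatics."
for sent in nltk.sent_tokenize(article):
    print(nltk.pos_tag(nltk.word_tokenize(sent)))
```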

An Attempt to Measure the Familiarity of Specialized Japanese in the Nursing Care Field

  • Haihong Huang; Hiroyuki Muto; Toshiyuki Kanamaru
    • 아시아태평양코퍼스연구 / Vol. 4, No. 2 / pp. 57-74 / 2023
  • Having a firm grasp of technical terms is essential for learners of Japanese for Specific Purposes (JSP). This research analyzes Japanese nursing care vocabulary based on objective corpus-based frequency and subjectively rated word familiarity. For this purpose, we constructed a text corpus centered on the National Examination for Certified Care Workers and extracted nursing care keywords from it. The log-likelihood ratio (LLR) was used as the statistical criterion for keyword identification, yielding a list of 300 keywords as target words for a word recognition survey. The survey involved 115 participants, of whom 51 were certified care workers (CW group) and 64 were members of the general public (GP group); participants rated the familiarity of the target keywords through crowdsourcing. Given the limited sample size, Bayesian linear mixed models were used to estimate word familiarity ratings. A comparative analysis of word familiarity between the CW and GP groups revealed key terms that are crucial for professionals but potentially unfamiliar to the general public. By focusing on these terms, instructors can bridge the knowledge gap more efficiently.
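
The LLR keyword statistic referred to here is the standard Dunning-style log-likelihood comparison of a target corpus against a reference corpus; the counts in this sketch are invented:

```python
# Log-likelihood ratio (G2) for keyword extraction: compare a word's
# frequency in the exam corpus against a reference corpus.
import math

def llr(a, b, c, d):
    """a, b: word frequency in target/reference; c, d: corpus sizes."""
    e1 = c * (a + b) / (c + d)   # expected frequency in target
    e2 = d * (a + b) / (c + d)   # expected frequency in reference
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e1)
    if b > 0:
        g2 += b * math.log(b / e2)
    return 2 * g2

# e.g. 120 hits in a 500k-token exam corpus vs 30 in a 5M-token reference
print(f"G2 = {llr(120, 30, 500_000, 5_000_000):.1f}")
```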

사전과 말뭉치를 이용한 한국어 단어 중의성 해소 (Korean Word Sense Disambiguation using Dictionary and Corpus)

  • 정한조; 박병화
    • 지능정보연구 / Vol. 21, No. 1 / pp. 1-13 / 2015
  • With the rise of big data and opinion mining, information retrieval and extraction, particularly from unstructured data, is becoming ever more important. In information retrieval, various studies aim to improve search engine performance so that results match the user's intent. Natural language processing is a core technology for analyzing unstructured data in this field, and word sense disambiguation, the problem that a single word can have several ambiguous meanings, is one of the issues that must be solved first to improve NLP performance. This study introduces a method of automatically building a corpus for word sense disambiguation from dictionary examples, instead of the manual approach that demands much time and effort. Specifically, we introduce a word sense disambiguation method that uses a combined corpus in which examples from the Standard Korean Language Dictionary are automatically sense-tagged and merged with the manually sense-tagged Sejong Corpus. Among all nouns in the Standard Korean Language Dictionary (265,655), the ambiguous nouns subject to disambiguation number 29,868 with 93,522 senses; the proverbs and example sentences (56,914) associated with each sense were added to the combined corpus. Experiments were conducted by cross-validation on about 790,000 POS- and sense-tagged sentences from the Sejong Corpus and about 57,000 sentences from the Standard Korean Language Dictionary, used separately and merged. The results show improved precision and recall when the combined corpus is used. The results of this study are expected to help improve the performance of Internet search engines and to clarify sentence content in natural language analysis and processing for opinion mining and text mining.
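
As a rough sketch of the experimental setup (a generic sense classifier with cross-validation, not necessarily the classifier the authors used), with toy sentences standing in for the combined Sejong/dictionary corpus:

```python
# Train a per-word sense classifier on sense-tagged sentences and evaluate
# with cross-validation; '배' (boat/belly/pear) is the ambiguous noun here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "강둑을 따라 배 한 척이 지나간다",  # '배' = boat
    "점심을 많이 먹어 배가 부르다",      # '배' = belly
    "가을에 배가 달고 맛있다",           # '배' = pear
] * 10  # repeated toy data so 5-fold cross-validation has enough samples
senses = ["boat", "belly", "pear"] * 10

model = make_pipeline(CountVectorizer(), MultinomialNB())
scores = cross_val_score(model, sentences, senses, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```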

A Corpus-Based Longitudinal Study of Diction in Chinese and British News Reports on Chang'e Project

  • Lu, Rong; Xie, Xue; Qi, Jiashuang; Ali, Afida Mohamad; Zhao, Jie
    • 아시아태평양코퍼스연구 / Vol. 3, No. 1 / pp. 1-20 / 2022
  • As a milestone in China's space exploration history, the Chang'e Project has attracted a great deal of media attention since its first launch. This study examines and compares how the Chinese and British media use nouns, verbs, and adjectives to report on the Chang'e Project. After categorising the documents by project phase, we created two diachronic corpora to explore linguistic shifts and the similarities and differences in diction employed by the Chinese and British media in covering the Chang'e Project. This longitudinal study was performed with #LancsBox and the CLAWS web tagger, with critical discourse analysis as the theoretical framework. The findings show that Chang'e Project coverage in both media increased year by year, especially after 2019. In contrast to the objectivity and positivity of the Chinese media, the British media seemed more subjective, using more appraisal adjectives in news reports. Nonetheless, both countries' media strove to be objective and formal in their choice of nouns and verbs. Ideology-wise, Chinese news reports portrayed domestic circumstances more positively, while their British counterparts were typically more critical. Notably, the study's outcomes could catalyse future research on the Chang'e Project and help inform diplomatic policy.
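
The study's own pipeline used #LancsBox and the CLAWS tagger; purely as an illustration of the kind of diachronic diction comparison reported, a sketch that tallies POS-category proportions per year and outlet over invented rows:

```python
# Tally POS-tag proportions per (year, outlet), the shape of comparison
# behind the diction findings; the rows are made up.
from collections import Counter, defaultdict

rows = [  # (year, outlet, POS tag) as if exported from a tagged corpus
    (2019, "CN", "NN"), (2019, "CN", "VV"), (2019, "UK", "JJ"),
    (2020, "UK", "JJ"), (2020, "CN", "NN"), (2020, "UK", "NN"),
]

counts = defaultdict(Counter)
for year, outlet, tag in rows:
    counts[(year, outlet)][tag] += 1

for (year, outlet), tags in sorted(counts.items()):
    total = sum(tags.values())
    profile = ", ".join(f"{t}={n / total:.0%}" for t, n in tags.most_common())
    print(year, outlet, profile)
```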

Sentence-Chain Based Seq2seq Model for Corpus Expansion

  • Chung, Euisok; Park, Jeon Gue
    • ETRI Journal / Vol. 39, No. 4 / pp. 455-466 / 2017
  • This study focuses on a method of sequential data augmentation to alleviate data sparseness problems. Specifically, we present corpus expansion techniques for enhancing the coverage of a language model. Recent recurrent neural network studies show that a seq2seq model can be applied to language generation: it can generate new sentences from given input sentences. We present a method of corpus expansion using a sentence-chain based seq2seq model. For training, sentence chains are used as triples: the first two sentences in a triple feed the encoder of the seq2seq model, while the last sentence becomes the target sequence for the decoder. Using only internal resources, evaluation results show a relative perplexity improvement of approximately 7.6% over a baseline language model of Korean text. Additionally, compared with a previous study, the sentence-chain approach reduces the size of the training data by 38.4% while generating 1.4 times as many n-grams, with superior performance on English text.
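
The triple construction is easy to picture in code. A minimal sketch, with placeholder sentences standing in for the real corpus:

```python
# Build sentence-chain triples: two consecutive sentences become the encoder
# input, the third becomes the decoder target.
docs = [
    ["A1.", "A2.", "A3.", "A4."],  # placeholder sentences of one document
    ["B1.", "B2.", "B3."],
]

triples = []
for sents in docs:
    for i in range(len(sents) - 2):
        source = sents[i] + " " + sents[i + 1]  # first two sentences
        target = sents[i + 2]                   # following sentence
        triples.append((source, target))

for src, tgt in triples:
    print(f"SRC: {src} -> TGT: {tgt}")
```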