• Title/Abstract/Keyword: Corpus-based Study


대화형 코퍼스의 설계 및 구조적 문서화에 관한 연구 (A Study in Design and Construction of Structured Documents for Dialogue Corpus)

  • 강창규;남명우;양옥렬
    • 한국콘텐츠학회논문지 / Vol. 4, No. 4 / pp.1-10 / 2004
  • Research on speech recognition is moving from read speech to conversational speech, which calls for large dialogue corpora. However, dialogue corpora of sufficient size have not yet been built, and their annotation is expressed in complex and heterogeneous ways, making efficient use difficult. This paper therefore sets the dialogue domain to telebanking, builds a dialogue corpus based on TEI, defines a DTD (Document Type Definition) so that the annotation of the constructed corpus can be standardized in XML (eXtensible Markup Language), and designs a storage system for it.

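
The TEI-based markup described above can be pictured with a minimal sketch: one telebanking turn wrapped in a TEI-style `<u>` (utterance) element and serialized with Python's `xml.etree.ElementTree`. The element and attribute names here are illustrative assumptions, not the paper's actual DTD.

```python
# A minimal sketch (not the paper's DTD) of TEI-style dialogue annotation:
# each dialogue turn becomes a <u> element with speaker and turn-number
# attributes, collected under a <body> element.
import xml.etree.ElementTree as ET

def build_utterance(speaker: str, text: str, turn: int) -> ET.Element:
    """Wrap one dialogue turn in a TEI-like <u> (utterance) element."""
    u = ET.Element("u", attrib={"who": speaker, "n": str(turn)})
    u.text = text
    return u

body = ET.Element("body")
body.append(build_utterance("caller", "계좌 잔액을 확인하고 싶습니다.", 1))
body.append(build_utterance("agent", "네, 계좌 번호를 말씀해 주세요.", 2))

xml_string = ET.tostring(body, encoding="unicode")
print(xml_string)
```

A real corpus would validate such documents against the paper's DTD before storage.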

Text Classification on Social Network Platforms Based on Deep Learning Models

  • YA, Chen;Tan, Juan;Hoekyung, Jung
    • Journal of information and communication convergence engineering / Vol. 21, No. 1 / pp.9-16 / 2023
  • The natural language on social network platforms has a certain front-to-back dependency in structure, and the direct conversion of Chinese text into a vector makes the dimensionality very high, thereby resulting in the low accuracy of existing text classification methods. To this end, this study establishes a deep learning model that combines a big data ultra-deep convolutional neural network (UDCNN) and long short-term memory network (LSTM). The deep structure of UDCNN is used to extract the features of text vector classification. The LSTM stores historical information to extract the context dependency of long texts, and word embedding is introduced to convert the text into low-dimensional vectors. Experiments are conducted on the social network platforms Sogou corpus and the University HowNet Chinese corpus. The research results show that compared with CNN + rand, LSTM, and other models, the neural network deep learning hybrid model can effectively improve the accuracy of text classification.
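
The dimensionality problem the abstract points to can be seen in a toy sketch: a one-hot representation grows with the vocabulary, while an embedding table maps every word to a fixed low-dimensional vector. The vocabulary, dimension, and random initialization below are made-up illustrations, not the paper's model.

```python
# Toy contrast between one-hot vectors (dimension = vocabulary size) and
# low-dimensional word embeddings (fixed small dimension).
import random

vocab = ["social", "network", "text", "class", "model"]
EMB_DIM = 3  # hypothetical low embedding dimension

def one_hot(word):
    """One-hot vector: one dimension per vocabulary entry."""
    return [1.0 if w == word else 0.0 for w in vocab]

# A randomly initialized embedding table standing in for learned word
# embeddings; a real model trains these jointly with the classifier.
random.seed(0)
embedding = {w: [random.uniform(-1.0, 1.0) for _ in range(EMB_DIM)] for w in vocab}

print(len(one_hot("text")), len(embedding["text"]))
```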

영어 완화 표지와 한국어 종결어미 비교 - 영어권 학습자를 위한 문법 설명 - (English Hedge Expressions and Korean Endings: Grammar Explanation for English-Speaking Learners of Korean)

  • 김영아
    • 한국어교육 / Vol. 25, No. 1 / pp.1-27 / 2014
  • This study investigates how common English hedge expressions such as 'I think' and 'I guess' appear in Korean, with the aim of providing explicit explanations for English-speaking learners of Korean. Based on a contrastive analysis of spoken English and Korean corpora, this study argues three points. Firstly, 'I guess' appears with a wider variety of modalities in Korean than 'I think'. Secondly, Korean textbooks contain inappropriate registers in the English translations of '-geot -gat-': although these markers are used in spoken Korean, they were translated into written English. This study therefore suggests that '-geot -gat-' be translated as 'I think' in spoken English, and as 'it seems' in written English and narratives. Lastly, the contrastive analysis has shown that when 'I think' is used with deontic modalities such as 'I think I have to', Korean uses '-a-ya-get-': the hedge marker 'I think' combined with 'I have to', which expresses obligation or the speaker's volition, turns the deontic modality into an expression of the speaker's opinion.

Using Corpora for the Study of Word-Formation: A Case Study in English Negative Prefixation

  • Kwon, Heok-Seung
    • 한국영어학회지:영어학 / Vol. 1, No. 3 / pp.369-386 / 2001
  • This paper will show that traditional approaches to the derivation of different negative words have been of an essentially hypothetical nature, based on either linguists' intuitions or rather scant evidence, and that native-speaker dictionary entries show meaning potentials (rather than meanings) which are in fact linguistic and cognitive prototypes. The purpose of this paper is to demonstrate that using a large corpus of natural language can provide better answers to questions about word-formation (i.e., with particular reference to negative prefixation) than any other source of information.
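
A corpus query of the kind the paper advocates can be sketched in a few lines: scan running text for candidate negatively prefixed forms and count them. The prefix list, length filter, and sample sentence are illustrative assumptions; a real study would also filter morphologically spurious matches (e.g. 'interest' beginning with 'in-').

```python
# Count candidate negatively-prefixed word forms in running text.
# NEG_PREFIXES and the minimum-length filter are rough heuristics.
import re
from collections import Counter

NEG_PREFIXES = ("un", "in", "non", "dis")

def negative_prefixed(tokens):
    """Yield lowercase tokens that begin with a negative prefix."""
    for tok in tokens:
        if tok.startswith(NEG_PREFIXES) and len(tok) > 4:
            yield tok

text = "Corpus data make it unclear whether unhappy speakers dislike nonstandard or informal usage."
tokens = re.findall(r"[a-z]+", text.lower())
counts = Counter(negative_prefixed(tokens))
print(counts.most_common())
```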


Radiologic Determination of Corpus Callosum Injury in Patients with Mild Traumatic Brain Injury and Associated Clinical Characteristics

  • Kim, Dong Shin;Choi, Hyuk Jai;Yang, Jin Seo;Cho, Yong Jun;Kang, Suk Hyung
    • Journal of Korean Neurosurgical Society / Vol. 58, No. 2 / pp.131-136 / 2015
  • Objective : To investigate the incidence of corpus callosum injury (CCI) in patients with mild traumatic brain injury (TBI) using brain MRI, and to review the clinical characteristics associated with this injury. Methods : A total of 356 patients were diagnosed with TBI, with 94 classified as having mild TBI. Patients with mild TBI were included for further evaluation if they had normal findings on brain computed tomography (CT) scans and also underwent brain MRI in the acute phase following trauma. On brain MRI, CCI was defined as a high-signal lesion on T2 sagittal images with a corresponding low-signal lesion on axial gradient echo (GRE) imaging. Based on these criteria, patients were divided into two groups for further analysis : Group I (TBI patients with CCI) and Group II (TBI patients without CCI). Results : A total of 56 patients were enrolled (16 in Group I and 40 in Group II). Analysis of clinical symptoms revealed a significant difference in headache severity between the groups. Over 50% of patients in Group I experienced prolonged neurological symptoms, including dizziness and gait disturbance, which were more common in Group I than in Group II (dizziness : 37% and 12% in Groups I and II, respectively; gait disturbance : 12% and 0%, respectively). Conclusion : The incidence of CCI in patients with mild TBI was approximately 29%. We suggest that brain MRI is a useful method to reveal the cause of persistent symptoms and predict clinical prognosis.

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System / Vol. 5, No. 3 / pp.147-154 / 2018
  • Natural language processing (NLP) is an emerging research area that studies how machines can be used to perceive and manipulate text written in natural languages. Different tasks can be performed on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of the particular natural language. The focus of this work is part-of-speech (POS) tagging for Hindi. POS tagging, also known as grammatical tagging, is the process of assigning a grammatical category, such as noun, verb, time, date, or number, to each word of a given text. Hindi is the most widely used and official language of India, and is among the top five most spoken languages of the world. A diverse range of POS taggers is available for English and other languages, but these cannot be applied to Hindi, which is one of the most morphologically rich languages; furthermore, there are significant differences between the morphological structures of these languages. This work therefore presents a POS tagger system for Hindi, using a hybrid approach that combines probability-based and rule-based methods. For tagging known words, a unigram probability model is used, whereas unknown words are tagged using various lexical and contextual features. Finite state automata are constructed to represent the rules, which are then implemented with regular expressions. A tagset of 29 standard part-of-speech tags was also prepared for this task, including two unique tags, a date tag and a time tag, which support all possible formats. Regular expressions implement all pattern-based tags such as time, date, number, and special symbols. The aim of the presented approach is to increase the correctness of automatic Hindi POS tagging while bounding the requirement for a large human-made corpus: the probability-based model increases automatic tagging coverage, and the rule-based model bounds the need for an already-trained corpus. Trained on a very small labeled set (around 9,000 words), the approach yields a best precision of 96.54% (average 95.08%) and a best accuracy of 91.39% (average 88.15%).
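
The hybrid scheme described above can be condensed into a sketch: a unigram (most-frequent-tag) lookup for known words, with regular-expression rules as the fallback for pattern-based tags such as date, time, and number. The tiny lexicon (romanized Hindi) and the tag names are hypothetical, not the paper's actual tagset.

```python
# Hybrid tagging sketch: unigram lookup for known words, regex rules
# for unknown pattern-based words, and UNK when nothing fires.
import re

UNIGRAM = {"raam": "NOUN", "khaata": "VERB", "vah": "PRON"}  # word -> most frequent tag

RULES = [
    (re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}$"), "DATE"),
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
    (re.compile(r"^\d+$"), "NUM"),
]

def tag(word: str) -> str:
    if word in UNIGRAM:           # known word: probability-based model
        return UNIGRAM[word]
    for pattern, t in RULES:      # unknown word: rule-based fallback
        if pattern.match(word):
            return t
    return "UNK"                  # no rule fired

print([tag(w) for w in ["raam", "12/03/2018", "10:30", "42", "xyz"]])
```

In the paper the unigram component carries probabilities rather than a single stored tag, and the rule set is far richer; this sketch only shows the division of labor.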

빅데이터 기반 어휘연결망분석을 활용한 '창업'과 '기업가정신'의 의미변화연구 (The Study on the Meaning Change of 'Startup' and 'Entrepreneurship' using the Bigdata-based Corpus Network Analysis)

  • 김연종;박상혁
    • 디지털산업정보학회논문지 / Vol. 16, No. 4 / pp.75-93 / 2020
  • The purpose of this study is to extract keywords for 'startup' and 'entrepreneurship' from Korean Naver news articles since 1990 and from foreign Google News articles, and to trace how the meaning of the two terms has changed in each era. In summary, first, in terms of keyword frequency, the venture-germination period was marked by government-led, founder-centered entrepreneurship, with various investments in technology and company formation and training for the development of business items; in the venture re-leap period, youth-centered entrepreneurship and innovation through the development of various educational programs were emphasized. Second, in the vocabulary network analysis, the connectivity and centrality of keywords tended to be stronger in the leap period than in the germination period, but the re-leap period tended to return to the germination-period level. Third, in the topic analysis, the Naver keyword topics were mostly business-related content concerning support, policy, and education, whereas the topics from Google News consisted of major keywords more directly applicable to practical work.
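
The vocabulary network behind such an analysis can be sketched as a keyword co-occurrence graph: keywords appearing in the same article are linked, and centrality is read off the edge counts. The article keyword sets below are invented for illustration, and degree centrality stands in for the study's (unspecified) centrality measures.

```python
# Build a keyword co-occurrence network from per-article keyword sets
# and rank keywords by degree (number of incident edges).
from itertools import combinations
from collections import Counter

articles = [
    {"startup", "policy", "support"},
    {"startup", "education", "youth"},
    {"entrepreneurship", "innovation", "youth"},
]

degree = Counter()
for keywords in articles:
    for a, b in combinations(sorted(keywords), 2):
        degree[a] += 1
        degree[b] += 1

print(degree.most_common(3))
```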

원자력과학공학 학술 논문에 나타난 기능적 어휘다발 분석 (Functional Lexical Bundles in Nuclear Science and Engineering Research Articles)

  • 남대현
    • 한국콘텐츠학회논문지 / Vol. 21, No. 11 / pp.426-435 / 2021
  • The purpose of this study is to classify the lexical bundles found in nuclear science and engineering (NSE) research articles written in English by discourse function, and to analyze how the classified bundles differ from those found in general academic articles. Functional lexical bundles were extracted from an approximately one-million-word corpus built from the texts of NSE articles, and their distribution and frequency were compared, using chi-square tests and standardized residuals, with the bundles in a 750,000-word corpus of general academic articles. The results show that, compared with general academic writing, NSE articles mainly use bundles expressing writer stance, and that bundle use lacks diversity, with the same bundle types being 'reused'. Based on these findings, pedagogical implications for English for academic purposes in nuclear science and engineering and directions for follow-up research are suggested.
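
The chi-square comparison with standardized residuals mentioned above can be reproduced for a single bundle as a 2x2 contingency computation. The counts below are invented and do not come from the study's corpora; a positive residual in the first cell would indicate overuse in the NSE corpus.

```python
# 2x2 chi-square statistic and standardized residuals for one bundle's
# frequency in two corpora (made-up counts).
import math

# observed counts: [bundle tokens, other tokens] per corpus
nse = [120, 999_880]   # ~1M-word NSE corpus (hypothetical)
gen = [40, 749_960]    # ~750K-word general corpus (hypothetical)

def chi_square_2x2(row1, row2):
    rows = [row1, row2]
    row_tot = [sum(r) for r in rows]
    col_tot = [row1[0] + row2[0], row1[1] + row2[1]]
    grand = sum(row_tot)
    chi2, residuals = 0.0, []
    for i, r in enumerate(rows):
        for j, obs in enumerate(r):
            exp = row_tot[i] * col_tot[j] / grand
            chi2 += (obs - exp) ** 2 / exp
            residuals.append((obs - exp) / math.sqrt(exp))  # standardized residual
    return chi2, residuals

chi2, res = chi_square_2x2(nse, gen)
print(round(chi2, 2), [round(r, 2) for r in res])
```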

한국어 생의학 개체명 인식 성능 비교와 오류 분석 (Performance Comparison and Error Analysis of Korean Bio-medical Named Entity Recognition)

  • 이재홍
    • 한국전자통신학회논문지 / Vol. 19, No. 4 / pp.701-708 / 2024
  • The advent of the Transformer architecture in deep learning has brought dramatic progress to natural language processing research. Named entity recognition (NER), a subfield of NLP, is an important research area for tasks such as information retrieval. Its importance is also emphasized in the biomedical domain, but the lack of Korean biomedical training corpora constrains the development of AI-based Korean clinical research. In this study, a new biomedical corpus was built for Korean biomedical NER, and language models pre-trained on large Korean corpora were selected and adapted via transfer learning. The NER performance of the selected language models was compared by F1-score, per-tag recognition rates were compared, and an error analysis was performed. KlueRoBERTa showed relatively good recognition performance. Error analysis of the tagging process showed that recognition of Disease entities was excellent, while Body and Treatment were relatively low. This was due to over-segmentation and under-segmentation that failed to classify entities properly from context; to remedy these mis-taggings, a more precise morphological analyzer and a richer lexical dictionary must first be built.
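
The per-tag comparison described above can be sketched with a token-level scorer over BIO sequences. The tag names follow the abstract (Disease, Body, Treatment), but the gold and predicted sequences are toy data, and the study's actual scoring may be entity-level rather than token-level.

```python
# Token-level precision/recall/F1 per BIO tag, comparing a predicted
# sequence against a gold sequence.
from collections import Counter

gold = ["B-Disease", "O", "B-Body", "I-Body", "O", "B-Treatment"]
pred = ["B-Disease", "O", "B-Body", "O",      "O", "O"]

def f1_per_tag(gold, pred):
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if p != "O":
            (tp if g == p else fp)[p] += 1   # predicted tag correct or spurious
        if g != "O" and g != p:
            fn[g] += 1                       # gold tag missed
    scores = {}
    for t in set(tp) | set(fp) | set(fn):
        prec = tp[t] / (tp[t] + fp[t]) if tp[t] + fp[t] else 0.0
        rec = tp[t] / (tp[t] + fn[t]) if tp[t] + fn[t] else 0.0
        scores[t] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

print(f1_per_tag(gold, pred))
```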

기분석 어절 사전과 음절 단위의 확률 모델을 이용한 한국어 형태소 분석기 복제 (Cloning of Korean Morphological Analyzers using Pre-analyzed Eojeol Dictionary and Syllable-based Probabilistic Model)

  • 심광섭
    • 정보과학회 컴퓨팅의 실제 논문지 / Vol. 22, No. 3 / pp.119-126 / 2016
  • This paper examines whether a Korean morphological analyzer that uses an eojeol-level pre-analyzed dictionary and a syllable-level probabilistic model is practical. To this end, the existing Korean morphological analyzers MACH and KLT2000 were cloned, and experiments evaluated, in terms of precision and recall, how similar the clones' analyses are to those of MACH and KLT2000. The experiments used 10-fold cross-validation on the 10-million-eojeol Sejong corpus divided into ten sets. Taking MACH's analyses as the gold standard, the MACH clone achieved a precision of 97.16% and a recall of 98.31%; the KLT2000 clone achieved 96.80% and 99.03%, respectively. Analysis speed was 308,000 eojeols per second for the MACH clone and 436,000 eojeols per second for the KLT2000 clone. These results show that a Korean morphological analyzer built from an eojeol-level pre-analyzed dictionary and a syllable-level probabilistic model performs well enough to be used in real applications.
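
The evaluation described above can be sketched by treating the reference analyzer's morphemes as the gold set for each eojeol and scoring the clone's output with micro-averaged precision and recall. The example analyses below are made up for illustration.

```python
# Micro-averaged precision/recall over per-eojeol morpheme sets,
# comparing a cloned analyzer's output against a reference analyzer's.
def precision_recall(reference, clone):
    """Count true/false positives and misses across all eojeols."""
    tp = fp = fn = 0
    for ref_set, clone_set in zip(reference, clone):
        tp += len(ref_set & clone_set)   # morphemes both analyzers produced
        fp += len(clone_set - ref_set)   # extra morphemes from the clone
        fn += len(ref_set - clone_set)   # reference morphemes the clone missed
    return tp / (tp + fp), tp / (tp + fn)

reference = [{"먹/VV", "었/EP", "다/EF"}, {"학교/NNG", "에/JKB"}]
clone     = [{"먹/VV", "었/EP", "다/EF"}, {"학교/NNG", "에서/JKB"}]

p, r = precision_recall(reference, clone)
print(round(p, 3), round(r, 3))
```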