• Title/Summary/Keyword: 사전학습 언어모형 (pre-trained language model)

Search Results: 10

Domain-agnostic Pre-trained Language Model for Tabular Data (도메인 변화에 강건한 사전학습 표 언어모형)

  • Cho, Sanghyun; Choi, Jae-Hoon; Kwon, Hyuk-Chul
    • Annual Conference on Human and Language Technology / 2021.10a / pp.346-349 / 2021
  • In table machine reading comprehension, the knowledge required by the language model and the structural form of tables change with the domain, so performance drops more sharply than it does for text data. This paper proposes a pre-training data construction method based on selecting meaningful table data, together with an adversarial training method, for building a pre-trained table language model that is robust to such domain changes. To detect tables that carry no structural information and are used only to decorate web documents, heuristic rules were defined to identify HEAD data and select table data; an adversarial training method was also applied between ordinary tables, which carry structural information, and infobox data, which carries knowledge about entities. Compared with training on the unrefined data, refining the data increased F1 by 3.45 and EM by 4.14 on the KorQuAD table data, and F1 by 19.38 and EM by 4.22 on the Spec table question-answering data.

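The heuristic selection of meaningful tables is described only at a high level in the abstract above. The Python sketch below is a minimal illustration of how decorative, layout-only tables might be filtered out by checking for header (HEAD) cells and a minimum amount of structure; the table representation and all thresholds are assumptions for illustration, not the authors' actual rules.

```python
# Minimal sketch (assumed representation and thresholds, not the authors' rules):
# a table is a list of rows, each row a list of (tag, text) cells, e.g. ("th", "Model").

def looks_decorative(table, min_rows=2, min_cols=2, max_empty_ratio=0.5):
    """Heuristically flag tables that are likely used only for page layout."""
    if len(table) < min_rows:
        return True
    if max(len(row) for row in table) < min_cols:
        return True
    # No header-like (th / HEAD) cells anywhere -> likely decorative.
    if not any(tag.lower() == "th" for row in table for tag, _ in row):
        return True
    # Mostly empty cells -> likely decorative.
    cells = [text for row in table for _, text in row]
    return sum(1 for t in cells if not t.strip()) / len(cells) > max_empty_ratio

def select_pretraining_tables(tables):
    """Keep only tables that appear to carry structural information."""
    return [t for t in tables if not looks_decorative(t)]
```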

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata / v.7 no.2 / pp.11-29 / 2022
  • Pre-trained language models are the most important and widely used tools in natural language processing. Because they have been pre-trained on a large corpus, high performance can be expected even when they are fine-tuned with a small amount of data. Since the elements needed for implementation, such as a pre-trained tokenizer and a deep learning model with pre-trained weights, are distributed together, the cost and time of natural language processing have been greatly reduced. Transformer variants are the most representative pre-trained language models offering these advantages, and they are also being actively applied in other fields such as computer vision and audio. To make it easier for researchers to understand pre-trained language models and apply them to natural language processing tasks, this paper defines the language model and the pre-trained language model, and discusses the development of pre-trained language models, in particular the representative Transformer variants.
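As the survey emphasizes, a pre-trained tokenizer and a model with pre-trained weights are distributed together, so a downstream task needs only a small amount of labeled data for fine-tuning. The sketch below illustrates this workflow in Python with the Hugging Face transformers library; the checkpoint name and the two-sentence toy dataset are assumptions made for illustration, not something taken from the survey.

```python
# Minimal fine-tuning sketch with Hugging Face transformers (PyTorch backend).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-multilingual-cased"             # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)   # pre-trained tokenizer
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

texts = ["이 영화 정말 좋았다", "완전히 별로였다"]            # tiny toy dataset (assumed)
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                                       # a few fine-tuning steps
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```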

Test Dataset for validating the meaning of Table Machine Reading Language Model (표 기계독해 언어 모형의 의미 검증을 위한 테스트 데이터셋)

  • YU, Jae-Min; Cho, Sanghyun; Kwon, Hyuk-Chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.164-167 / 2022
  • In table machine reading comprehension, the knowledge required by the language model and the structural form of tables change depending on the domain, causing a larger performance drop than with text data. In this paper, we propose a pre-training data construction method based on selecting meaningful table data, together with an adversarial learning method, for building a pre-trained table language model that is robust to such domain changes. To detect table data that lacks structural information and is used only to decorate web documents, heuristic rules were defined to identify head data and select table data; an adversarial learning method was then applied between ordinary table data and infobox data, which carries knowledge about entities. Compared with training on the existing unrefined data, refining the data increased F1 by 3.45 and EM by 4.14 on the KorQuAD table data, and F1 by 19.38 and EM by 4.22 on the Spec table QA data.

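The abstract mentions adversarial learning between ordinary tables and infobox data without detailing the mechanism. One common way to realize such an objective is domain-adversarial training with a gradient-reversal layer, sketched below in PyTorch; this is a generic illustration of that technique under assumed shapes, not the authors' implementation.

```python
# Gradient-reversal sketch: a discriminator tries to tell ordinary tables from
# infobox tables, while reversed gradients push the encoder toward
# source-invariant representations.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None   # reverse gradients flowing to the encoder

class SourceDiscriminator(nn.Module):
    """Predicts whether an encoded table came from a plain table or an infobox."""
    def __init__(self, hidden=768, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.clf = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, encoded):                # encoded: (batch, hidden)
        return self.clf(GradReverse.apply(encoded, self.lamb))

# Usage idea: total_loss = lm_loss + cross_entropy(discriminator(encoder_output), source_labels)
```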

Table Question Answering based on Pre-trained Language Model using TAPAS (TAPAS를 이용한 사전학습 언어 모델 기반의 표 질의응답)

  • Cho, Sanghyun; Kim, Minho; Kwon, Hyuk-Chul
    • Annual Conference on Human and Language Technology / 2020.10a / pp.87-90 / 2020
  • Table question answering is the task of finding the answer to a question in semi-structured table data. This study proposes a language-model pre-training method based on TAPAS, suited to table data, for Korean table question answering, and a table question-answering model that predicts the cell containing the answer and then the exact answer span within the selected cell. About 100,000 tables were used for table pre-training, and the best performance was obtained by pre-training TAPAS starting from a BERT model already pre-trained on text data. When a machine reading comprehension model was applied, it achieved EM 46.8% and F1 63.8%, an improvement of EM 6.7% and F1 12.9% over fine-tuning a model pre-trained only on text. For the table question-answering model, row and column embeddings were extracted from the embeddings produced by TAPAS, and applying the machine reading comprehension model to the combination of the TAPAS embeddings with the row and column embeddings achieved EM 63.6% and F1 76.0%.

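The paper pre-trains its own Korean TAPAS model on roughly 100,000 tables. As a rough illustration of how a TAPAS-style table question-answering model is queried, the sketch below uses the TAPAS classes from the Hugging Face transformers library with a public English checkpoint; the checkpoint and the example table are assumptions for illustration only.

```python
# Querying a TAPAS table-QA model via Hugging Face transformers (illustrative only).
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

name = "google/tapas-base-finetuned-wtq"        # public English checkpoint (assumed here)
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForQuestionAnswering.from_pretrained(name)

table = pd.DataFrame({"Model": ["text-BERT", "TAPAS"], "F1": ["63.8", "76.0"]})
queries = ["Which model has the higher F1?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)

coords, _ = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print([table.iat[row, col] for row, col in coords[0]])   # cells predicted as the answer
```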

Bayesian Model based Korean Semantic Role Induction (베이지안 모형 기반 한국어 의미역 유도)

  • Won, Yousung; Lee, Woochul; Kim, Hyungjun; Lee, Yeonsoo
    • Annual Conference on Human and Language Technology / 2016.10a / pp.111-116 / 2016
  • A semantic role describes the role that an argument plays with respect to the predicate of a natural-language sentence; semantic role labeling proceeds by identifying the arguments of a given predicate (argument identification) and then classifying them (argument labeling). The dominant approaches rely on a case-frame dictionary or on supervised learning over annotated corpora, both of which require building a case-frame dictionary or a corpus annotated with semantic roles. To minimize this effort, this paper performs unsupervised learning that infers the possible semantic roles of a predicate based on a nonparametric Bayesian model.

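The abstract describes unsupervised role induction with a nonparametric Bayesian model but gives no algorithmic detail. The sketch below only illustrates the basic nonparametric building block such models often rely on, a Chinese Restaurant Process in which the number of role clusters grows with the data; it is a generic illustration, not the authors' model, and it ignores the argument features a real model would condition on.

```python
# Chinese Restaurant Process sketch: items (argument instances) are assigned to
# an unbounded number of clusters (candidate semantic roles).
import random

def crp_assignments(n_items, alpha=1.0, seed=0):
    """Sample cluster assignments for n_items from a CRP with concentration alpha."""
    random.seed(seed)
    assignments, counts = [], []              # counts[k] = items already in cluster k
    for i in range(n_items):
        weights = counts + [alpha]            # existing clusters vs. a brand-new cluster
        r, cum, k = random.uniform(0, i + alpha), 0.0, 0
        for k, w in enumerate(weights):
            cum += w
            if r <= cum:
                break
        if k == len(counts):
            counts.append(1)                  # open a new role cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

print(crp_assignments(10))                    # cluster index sampled for each item
```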

Probing Sentence Embeddings in L2 Learners' LSTM Neural Language Models Using Adaptation Learning

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.13-23 / 2022
  • In this study we leveraged a probing method to evaluate how a pre-trained L2 LSTM language model represents sentences with relative and coordinate clauses. The probing experiment employed adapted models based on the pre-trained L2 language models to trace the syntactic properties of sentence embedding representations. The probing dataset was automatically generated from several templates covering different sentence structures. To classify the syntactic properties of sentences in each probing task, we measured the adaptation effects of the language models using syntactic priming. We performed linear mixed-effects model analyses to statistically relate the adaptation effects and to reveal how the L2 language models represent the syntactic features of English sentences. When the L2 language models were compared with the baseline L1 Gulordava language models, analogous results were found for each probing task. In addition, the L2 language models were confirmed to encode the syntactic features of relative and coordinate clauses hierarchically in their sentence embedding representations.
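The adaptation effects used for probing are measured through syntactic priming: the model is briefly updated on prime sentences and the change in how well it predicts a related target sentence is recorded. The PyTorch sketch below illustrates that general idea with a tiny stand-in LSTM language model; the architecture, tokenization, and data are placeholders, not the paper's actual L2 models or probing templates.

```python
# Adaptation-effect sketch: loss on a target sentence before vs. after a brief
# update on same-structure prime sentences (model and data are placeholders).
import copy
import torch
from torch import nn

class TinyLSTMLM(nn.Module):
    def __init__(self, vocab=1000, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def loss(self, ids):                        # ids: (batch, seq_len) token indices
        h, _ = self.lstm(self.emb(ids[:, :-1]))
        logits = self.out(h)
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1)
        )

def adaptation_effect(model, primes, target, lr=1e-3, steps=1):
    """Loss drop on `target` after adapting a copy of `model` on `primes`."""
    before = model.loss(target).item()
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        adapted.loss(primes).backward()
        opt.step()
    return before - adapted.loss(target).item()

model = TinyLSTMLM()
primes = torch.randint(0, 1000, (4, 12))        # placeholder prime sentences
target = torch.randint(0, 1000, (1, 12))        # placeholder target sentence
print(adaptation_effect(model, primes, target))
```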

Design and Implementation of Korean Lexical Acquisition Model using Computational Model (계산주의적 모델을 이용한 한국어 어휘습득 모델 설계 및 구현)

  • Yu, Won-Hee; Park, Ki-Nam; Lyu, Ki-Gon; Lim, Heui-Seok
    • Proceedings of the KAIS Fall Conference / 2007.05a / pp.230-232 / 2007
  • This paper implements and tests a computational lexical-processing model of Korean that applies the early stage of human lexical acquisition to Korean as a hybrid of the Full-List model and the Decomposition model. In the experiments, the model was able to simulate the human process of acquiring vocabulary from linguistic input through learning, and it provided a theoretical basis for the order in which particular grammatical categories are acquired. In addition, the Full-List and Decomposition dictionaries automatically generated by the model offered evidence from which the form of mental representation in the human brain can be inferred.

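The hybrid of the Full-List and Decomposition models amounts to two routes for lexical access: whole-form lookup and stem-plus-ending analysis. The short Python sketch below illustrates that two-route idea with toy dictionary entries; the entries and the fallback order are assumptions for illustration, not the model built in the paper.

```python
# Two-route lexical access sketch: full-list lookup first, then decomposition.
# Dictionary entries are toy assumptions.
FULL_LIST = {"갔다": ("가다", "past")}            # whole-form (Full-List) entries
STEMS = {"먹": "먹다", "가": "가다"}               # stem lexicon
ENDINGS = {"었다": "past", "는다": "present"}      # ending lexicon

def access(word):
    """Return (lemma, analysis, route) via full-list lookup, else decomposition."""
    if word in FULL_LIST:
        lemma, analysis = FULL_LIST[word]
        return lemma, analysis, "full-list"
    for i in range(1, len(word)):
        stem, ending = word[:i], word[i:]
        if stem in STEMS and ending in ENDINGS:
            return STEMS[stem], ENDINGS[ending], "decomposition"
    return word, "unknown", "none"

print(access("갔다"))     # ('가다', 'past', 'full-list')
print(access("먹었다"))   # ('먹다', 'past', 'decomposition')
```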

A Discussion Class Model to Improve English Oral Proficiency for Intermediate Low Learners (중급 하 수준을 위한 영어말하기 능력향상 토론수업모형)

  • Ko, Mi-Sook
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.3 / pp.537-543 / 2016
  • This paper suggests a class model to improve the English oral proficiency of intermediate-low English-speaking learners. Utilizing the four English skills (reading, writing, listening, and speaking), the class model focuses on the learners' schema and discussion strategies. To enhance the learners' motivation and match their cognitive capacity, 10 discussion topics were prepared by surveying the learners. A pilot experiment was conducted to investigate the teaching effects of the discussion class model with 26 college students majoring in English in Seoul. The participants' oral proficiency was measured both before and after the instruction using OPIc (Oral Proficiency Interview-computer). As a result, the percentage of participants whose oral proficiency was lower than intermediate-mid decreased from 82% to 47%, and the percentage of participants with proficiency above intermediate-low increased dramatically from 18% to 53%, supporting the claim that discussion pushes learners to express diverse and creative ideas in formal and intelligible language. These findings suggest that a discussion class is feasible even for learners with a low level of oral proficiency.

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn; Chung, Yeojin; Lee, Jaejoon; Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that makes up Korean text. We constructed language models with three or four LSTM layers and trained each model with the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was carried out on Old Testament texts using the deep learning package Keras on top of Theano. After pre-processing, the texts contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly better and were even worse under some conditions. On the other hand, when the automatically generated sentences were compared, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model.
Although there were slight differences between the models in the completeness of the generated sentences, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which form the basis of artificial intelligence systems.
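The abstract fixes the core setup: 74 unique characters, input windows of 20 characters predicting the 21st, three or four stacked LSTM layers, and optimizers such as Adam. The sketch below reconstructs that setup approximately in Python with the modern Keras API rather than the original Keras-on-Theano stack; hidden sizes and other hyperparameters not stated in the abstract are assumptions.

```python
# Approximate reconstruction of the phoneme-level LSTM language model described above.
# 74 symbols, 20-character windows predicting the 21st character; hidden sizes assumed.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, WINDOW = 74, 20

model = keras.Sequential([
    layers.Input(shape=(WINDOW, VOCAB)),        # one-hot encoded character window
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),                           # 3 stacked LSTM layers (the paper also tries 4)
    layers.Dense(VOCAB, activation="softmax"),  # distribution over the next character
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Toy stand-in for the (20-character window, next character) pairs built from the corpus.
x = keras.utils.to_categorical(np.random.randint(0, VOCAB, (32, WINDOW)), num_classes=VOCAB)
y = keras.utils.to_categorical(np.random.randint(0, VOCAB, 32), num_classes=VOCAB)
model.fit(x, y, epochs=1, verbose=0)
```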

The Effect of CPS-based Scratch EPL on Problem Solving Ability and Programming Attitude (CPS에 기반한 스크래치 EPL이 문제해결력과 프로그래밍 태도에 미치는 효과)

  • Cho, Seong-Hwan; Song, Jeong-Beom; Kim, Seong-Sik; Lee, Kyung-Hwa
    • Journal of The Korean Association of Information Education / v.12 no.1 / pp.77-88 / 2008
  • Programming education has a favorable influence on students' creative and logical thinking and problem-solving ability. However, students typically have to spend too much effort learning the basic grammar and usage of programming languages, which negatively affects their eagerness to learn. In this respect, we proposed to apply Scratch with the Creative Problem Solving (CPS) teaching model; Scratch is an easy-to-learn and intuitive Educational Programming Language (EPL) that helps improve the problem-solving ability of the class. We then verified the effect of Scratch EPL through a pretest-posttest design for a subject group. In summary, the CPS-based Scratch EPL was shown to significantly improve students' problem-solving ability and to help them develop a favorable attitude toward programming.
