• Title/Summary/Keyword: Language model


Similar Contents Recommendation Model Based On Contents Meta Data Using Language Model (언어모델을 활용한 콘텐츠 메타 데이터 기반 유사 콘텐츠 추천 모델)

  • Donghwan Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.27-40
    • /
    • 2023
  • With the increased spread of smart devices and the impact of COVID-19, the consumption of media content through smart devices has grown significantly. Along with this trend, the amount of media content viewed through OTT platforms is increasing, which makes content recommendation on these platforms more important. Previous content-based recommendation research has mostly utilized metadata describing the characteristics of the content, while research that utilizes the content's own descriptive text metadata remains scarce. In this paper, various text data describing the content, including titles and synopses, were used to recommend similar content. KLUE-RoBERTa-large, a Korean language model with excellent performance, was used to train the model on the text data. A dataset of over 20,000 content metadata records, including titles, synopses, composite genres, directors, actors, and hashtag information, was used as training data. To feed the various text features into the language model, the features were concatenated using special tokens that mark each feature. The test set was designed to make the model's similarity judgments relative and objective by using a three-way content comparison method and applying multiple rounds of inspection when labeling. Genre classification and hashtag classification prediction tasks were used to fine-tune the embeddings of the content meta-text data. As a result, the hashtag classification model achieved an accuracy of over 90% on the similarity test set, more than 9 percentage points better than the baseline language model. Hashtag classification training improved the language model's ability to identify similar content, demonstrating the value of using a language model for content-based filtering.
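
To illustrate the feature-concatenation step described above, the following is a minimal sketch, not the authors' code: metadata fields are joined behind feature-marker special tokens and encoded with KLUE-RoBERTa-large. The token names [TITLE], [SYNOPSIS], and [GENRE] are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of concatenating content metadata
# behind feature-marker special tokens before encoding with KLUE-RoBERTa.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("klue/roberta-large")
model = AutoModel.from_pretrained("klue/roberta-large")

# The marker tokens below are assumptions for illustration.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[TITLE]", "[SYNOPSIS]", "[GENRE]"]}
)
model.resize_token_embeddings(len(tokenizer))  # account for the new tokens

def encode_content(title, synopsis, genres):
    """Join metadata fields behind their marker tokens and embed the result."""
    text = f"[TITLE] {title} [SYNOPSIS] {synopsis} [GENRE] {' '.join(genres)}"
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0]  # embedding at the [CLS] position

emb = encode_content("오징어 게임", "빚에 쫓기는 참가자들이 생존 게임에 뛰어든다.", ["드라마", "스릴러"])
```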

A BERT-Based Automatic Scoring Model of Korean Language Learners' Essay

  • Lee, Jung Hee;Park, Ji Su;Shon, Jin Gon
    • Journal of Information Processing Systems
    • /
    • v.18 no.2
    • /
    • pp.282-291
    • /
    • 2022
  • This research applies a pre-trained bidirectional encoder representations from transformers (BERT) model to predict the writing scores of foreign learners of Korean. A corpus of 586 midterm and final exam answers written by foreign learners at the Intermediate 1 level was acquired and used for training, yielding consistent performance even with a small dataset. The test data were pre-processed, the model was fine-tuned, and the results were produced as score predictions; the difference between each predicted and actual score was then calculated. An accuracy of 95.8% was demonstrated, indicating strong overall prediction results; hence, the tool is suitable for automatically scoring Korean written test answers, including those with grammatical errors, written by foreigners. These results are particularly meaningful in that the data consist of written text produced by foreign learners, not native speakers.
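
As a rough illustration of how BERT-based score prediction can be set up, here is a hedged sketch treating scoring as single-output regression; the multilingual checkpoint and settings are assumptions, since the abstract does not give the paper's exact configuration.

```python
# Hedged sketch of BERT-based essay scoring as single-output regression;
# checkpoint and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=1,               # one scalar output = predicted score
    problem_type="regression",  # the Trainer would then use MSE loss
)

essay = "저는 주말에 친구와 같이 영화를 봤습니다."  # sample learner sentence
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.item()  # meaningless before fine-tuning
```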

A Language Model Approach to "The Vegetarian" (채식주의자: 랭귀지 모델 접근)

  • Kim, Jaejun;Kwon, Junhyeok;Kim, Yoolae;Park, Myung-Kwan;Song, Sanghoun
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.260-263
    • /
    • 2017
  • This paper broadens the possible spectrum of analysis of the Korean-written novel "The Vegetarian" by using computational-linguistic tools. By applying a language model, of the kind commonly used for bi-gram analysis in corpus linguistics, to the Man Booker International Prize-winning novel, the characteristics of "The Vegetarian" are investigated through comparison with the English-written novel "A Little Life".
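
The bigram profiling the abstract refers to can be sketched generically as follows; the file names are placeholders, and this is standard corpus-linguistics counting rather than the paper's actual pipeline.

```python
# Generic bigram profiling of two novels for corpus-linguistic comparison;
# the file names are placeholder assumptions.
from collections import Counter

def bigram_profile(tokens):
    """Count adjacent token pairs in a token sequence."""
    return Counter(zip(tokens, tokens[1:]))

vegetarian = open("the_vegetarian.txt", encoding="utf-8").read().split()
little_life = open("a_little_life.txt", encoding="utf-8").read().split()

# Compare the most characteristic bigrams of each text.
print(bigram_profile(vegetarian).most_common(10))
print(bigram_profile(little_life).most_common(10))
```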


Reducing Toxic Response Generation in Conversational Models using Plug and Play Language Model (Plug and Play Language Model을 활용한 대화 모델의 독성 응답 생성 감소)

  • Kim, Byeong-Joo;Lee, Geun-Bae
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.433-438
    • /
    • 2021
  • Dialogue systems are broadly divided into those in which the user and the system converse toward a specific goal and those covering free topics. As research on open-domain dialogue systems has recently intensified, controlling toxic utterance generation in free-topic counseling and everyday-conversation systems has become increasingly important. In this paper, to control toxic response generation in a dialogue model, we apply the Plug-and-Play Language Model (PPLM) method to a BART model trained on an everyday-conversation dataset. A toxic-response classifier trained on a public toxic-dialogue classification dataset is used as the PPLM attribute model to reduce the dialogue model's toxic response generation, and the difference is compared quantitatively through experiments. The experimental results confirm that toxic response generation decreased in every experiment that used the attribute model.
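
For readers unfamiliar with PPLM, the following is a heavily simplified sketch of the mechanism, not the paper's implementation: a small attribute classifier over hidden states provides a gradient that nudges decoding away from the toxic class. GPT-2 serves as a stand-in for the paper's Korean BART dialogue model, and the untrained linear probe stands in for the trained toxicity classifier.

```python
# Heavily simplified PPLM-style step: perturb the last hidden state along the
# gradient of a "non-toxic" attribute loss, then re-score the vocabulary.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

# Stand-in attribute model: a linear probe mapping hidden states to two
# classes (0 = non-toxic, 1 = toxic); in the paper this is a trained classifier.
attribute_head = torch.nn.Linear(model.config.hidden_size, 2)

def pplm_step(input_ids, step_size=0.02):
    """One decoding step nudged toward the non-toxic class."""
    outputs = model(input_ids)
    hidden = outputs.hidden_states[-1][:, -1, :].detach().requires_grad_(True)
    loss = F.cross_entropy(attribute_head(hidden), torch.tensor([0]))
    loss.backward()
    perturbed = hidden - step_size * hidden.grad  # move against toxicity
    logits = model.lm_head(perturbed)             # re-score the vocabulary
    return int(torch.argmax(logits, dim=-1))

ids = tokenizer("The conversation turned", return_tensors="pt").input_ids
print(tokenizer.decode([pplm_step(ids)]))
```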


Instruction Tuning for Controlled Text Generation in Korean Language Model (Instruction Tuning을 통한 한국어 언어 모델 문장 생성 제어)

  • Jinhee Jang;Daeryong Seo;Donghyeon Jeon;Inho Kang;Seung-Hoon Na
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.289-294
    • /
    • 2023
  • Large language models have achieved high performance in contextual understanding on the strength of vast data and parameters, but controlling sentence generation for human alignment remains an open challenge. In this paper, we conduct controlled text generation experiments via instruction tuning. Using natural language processing tools, we automatically build an instruction dataset containing single or multiple constraints, fine-tune the Korean language model Polyglot-Ko on it, and verify whether the model's generations satisfy the constraints. The experiments show an average accuracy of 0.88 across four constraint types, confirming that effective controlled sentence generation is possible.
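
A hypothetical sketch of how constraint-bearing instruction samples might be assembled automatically; the constraint types and templates below are illustrative assumptions, not the authors' actual ones, and the resulting pairs would then be used to fine-tune Polyglot-Ko.

```python
# Hypothetical construction of single- and multi-constraint instruction
# samples; constraint types and wording are assumptions for illustration.
import json

CONSTRAINTS = {
    "length": lambda n: f"Write the answer in exactly {n} sentences.",
    "ending": lambda w: f"End the answer with the word '{w}'.",
    "keyword": lambda w: f"Include the keyword '{w}' in the answer.",
}

def build_sample(question, answer, picks):
    """Attach one or more generation constraints to a QA pair."""
    instructions = [CONSTRAINTS[name](arg) for name, arg in picks]
    return {"instruction": question + " " + " ".join(instructions),
            "output": answer}

sample = build_sample(
    "Explain what instruction tuning is.",
    "Instruction tuning fine-tunes a model on instruction-response pairs.",
    [("keyword", "fine-tuning"), ("length", 2)],  # a multi-constraint case
)
print(json.dumps(sample, ensure_ascii=False, indent=2))
```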


Conversation Dataset Generation and Improve Search Performance via Large Language Model (Large Language Model을 통한 대화 데이터셋 자동 생성 및 검색 성능 향상)

  • Hyeongjun Choi;Beomseok Hong;Wonseok Choi;Youngsub Han;Byoung-Ki Jeon;Seung-Hoon Na
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.295-300
    • /
    • 2023
  • Because data such as dialogue must be written by hand, dataset construction incurs substantial time and cost. Large language models, which are currently attracting attention, have the advantage of generating more natural dialogue. In this study, we fine-tune an LLM on a small amount of human-written data to generate a dataset from Wikipedia documents and use it to improve the performance of a document retrieval model. As a result, we observed MRR gains of 3.7%p on the same document collection as the training data and 4.5%p on the whole of Wikipedia.
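
The paper fine-tunes an LLM on a small human-written seed set; as a simpler stand-in, this sketch prompts a hosted model to turn a Wikipedia passage into a query-passage training pair. The API, model name, and prompt are assumptions, not the paper's setup.

```python
# Sketch of generating a retrieval training pair from a Wikipedia passage
# with an LLM; API, model, and prompt are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def make_query(passage: str) -> str:
    """Ask the LLM to write one question answerable from the passage."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Write one natural question that the following "
                       f"passage answers:\n\n{passage}",
        }],
    )
    return response.choices[0].message.content

passage = "Seoul is the capital and largest city of South Korea."
pair = {"query": make_query(passage), "positive_passage": passage}
```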


Keyword Based Conversation Generation using Large Language Model (Large Language Model을 활용한 키워드 기반 대화 생성)

  • Juhwan Lee;Tak-Sung Heo;Jisu Kim;Minsu Jeong;Kyounguk Lee;Kyungsun Kim
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.19-24
    • /
    • 2023
  • The importance of data in natural language processing is increasingly emphasized, and data augmentation is drawing particular attention as a way to overcome data scarcity in low-resource domains. This study proposes a keyword-based data augmentation method that uses a large language model (LLM). Specifically, an LLM specialized for Korean is used to generate conversations on a given topic from supplied keywords, and the generated data are shown to improve the performance of a model that classifies conversation topics. The results demonstrate the value of LLM-based data augmentation and suggest a way to exploit it even in low-resource settings.
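
A hedged sketch of keyword-conditioned dialogue generation for augmentation; the prompt template and hosted model are stand-ins for the Korean-specialized LLM the study used, and the attached topic label is what makes each sample usable for classifier training.

```python
# Keyword-conditioned dialogue generation for data augmentation; the model
# and prompt are stand-in assumptions, not the study's Korean LLM.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def generate_dialogue(topic: str, keywords: list[str]) -> dict:
    """Generate a labeled two-speaker dialogue from a topic and keywords."""
    prompt = (
        f"Write a short two-speaker Korean dialogue about '{topic}' that "
        f"naturally uses these keywords: {', '.join(keywords)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # The topic label makes the sample usable as classifier training data.
    return {"topic": topic, "dialogue": response.choices[0].message.content}

augmented = [generate_dialogue("여행", ["제주도", "비행기", "호텔"])]
```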


Alzheimer's disease recognition from spontaneous speech using large language models

  • Jeong-Uk Bang;Seung-Hoon Han;Byung-Ok Kang
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.96-105
    • /
    • 2024
  • We propose a method to automatically predict Alzheimer's disease from speech data using the ChatGPT large language model. Alzheimer's disease patients often exhibit distinctive characteristics when describing images, such as difficulty recalling words, grammatical errors, repetitive language, and incoherent narratives. For prediction, we first employ a speech recognition system to transcribe participants' speech into text. We then gather opinions by inputting the transcribed text into ChatGPT along with a prompt designed to solicit fluency evaluations. Next, we extract embeddings from the speech, text, and opinions using pretrained models. Finally, we use a classifier consisting of transformer blocks and linear layers to identify participants with this type of dementia. Experiments were conducted on the widely used ADReSSo dataset. The results yield a maximum accuracy of 87.3% when speech, text, and opinions are used in conjunction. This finding suggests the potential of leveraging evaluation feedback from language models to address challenges in Alzheimer's disease recognition.
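
The abstract's classifier of "transformer blocks and linear layers" over three embedding streams might look roughly like the following; all dimensions and the three-token fusion scheme are illustrative assumptions.

```python
# Hedged sketch of a fusion classifier over speech, text, and LLM-opinion
# embeddings; dimensions and fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dim=768, heads=8, layers=2):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Linear(dim, 2)  # AD vs. control

    def forward(self, speech_emb, text_emb, opinion_emb):
        # Treat the three modality embeddings as a three-token sequence.
        tokens = torch.stack([speech_emb, text_emb, opinion_emb], dim=1)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))  # pool, then classify

clf = FusionClassifier()
logits = clf(torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 768))
```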

An Experimental Study on the Usefulness of Structure Hints in the Leaf Node Language Model-Based XML Document Retrieval (단말노드 언어모델 기반의 XML문서검색에서 구조 제한의 유용성에 관한 실험적 연구)

  • Jung, Young-Mi
    • Journal of the Korean Society for Information Management
    • /
    • v.24 no.1 s.63
    • /
    • pp.209-226
    • /
    • 2007
  • The XML document format on the Web provides a mechanism for encoding both content and logical structure, and an XML processor accordingly provides access to both. The purpose of this study is to investigate the usefulness of structural hints in leaf-node language-model-based XML document retrieval. To this end, the experiment compared the performance of the leaf-node language-model-based XML retrieval system on queries containing only content constraints against queries containing both content and structure constraints. A newly designed and implemented leaf-node language-model-based XML retrieval system was used. We participated in the ad-hoc track of INEX 2005 and conducted the experiment using the large-scale XML test collection provided by INEX 2005.
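
Leaf-node language-model retrieval is typically formulated as query likelihood; the sketch below uses Jelinek-Mercer smoothing, a common choice in LM-based IR, though the abstract does not state the system's exact estimator.

```python
# Query-likelihood scoring of an XML leaf node with Jelinek-Mercer smoothing;
# the smoothing choice is a standard assumption, not necessarily the system's.
import math
from collections import Counter

def score_leaf(query_terms, leaf_terms, collection_counts, collection_len, lam=0.5):
    """log P(query | leaf node), interpolated with the collection model."""
    tf = Counter(leaf_terms)
    score = 0.0
    for term in query_terms:
        p_leaf = tf[term] / len(leaf_terms) if leaf_terms else 0.0
        p_coll = collection_counts[term] / collection_len
        # The epsilon guards against log(0) for terms unseen everywhere.
        score += math.log(lam * p_leaf + (1 - lam) * p_coll + 1e-12)
    return score

collection = ["xml", "retrieval", "structure", "xml", "query", "language"]
counts = Counter(collection)
print(score_leaf(["xml", "query"], ["xml", "retrieval"], counts, len(collection)))
```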

Range Detection of Wa/Kwa Parallel Noun Phrases by Alignment Method (정렬기법을 활용한 와/과 병렬명사구 범위 결정)

  • Choe, Yong-Seok;Sin, Ji-Ae;Choe, Gi-Seon;Kim, Gi-Tae;Lee, Sang-Tae
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2008.10a
    • /
    • pp.90-93
    • /
    • 2008
  • In natural language, repeated constituents in an expression are commonly omitted, and the omitted constituents must be recovered when analyzing the meaning of a sentence. This paper concerns recognizing the boundaries of parallel noun phrases by identifying the omitted constituents. Recognizing parallel noun phrases can greatly reduce complexity during sentence parsing; moreover, in natural-language information retrieval, recognizing nouns together with their modifiers can play an important role in building indexes. We propose an unsupervised probabilistic model that identifies parallel cores as well as the boundaries of parallel noun phrases conjoined by a conjunctive particle. It is based on the idea of swapping constituents, exploiting the symmetry (two or more identical constituents are repeated) and reversibility (the order of constituents is changeable) of parallel structures. Semantic features of the modifiers around the parallel noun phrase are also used in the probabilistic swapping model. The model is language-independent and is presented here for parallel noun phrases in Korean. Experiments show that our probabilistic model outperforms a symmetry-based model and supervised machine-learning approaches.
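
The swap idea can be illustrated with a toy scorer: a candidate span A-와/과-B is supported as a parallel structure when the corpus also attests the reversed order. The count source below is a stand-in assumption rather than the paper's model.

```python
# Toy illustration of the symmetry/reversibility swap test for parallel
# noun phrases; the n-gram count source is a stand-in assumption.
from collections import Counter

def swap_score(a_tokens, b_tokens, particle, ngram_counts):
    """Compare corpus support for A-particle-B against B-particle-A."""
    original = tuple(a_tokens + [particle] + b_tokens)
    swapped = tuple(b_tokens + [particle] + a_tokens)
    # High support for both orders suggests a true parallel structure.
    return min(ngram_counts[original], ngram_counts[swapped])

counts = Counter({
    ("사과", "와", "배"): 12,  # "apples and pears"
    ("배", "와", "사과"): 9,   # reversed order also attested
})
print(swap_score(["사과"], ["배"], "와", counts))  # -> 9
```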
