• Title/Summary/Keyword: large-language model


A Proposal of Evaluation of Large Language Models Built Based on Research Data (연구데이터 관점에서 본 거대언어모델 품질 평가 기준 제언)

  • Na-eun Han;Sujeong Seo;Jung-ho Um
    • Journal of the Korean Society for Information Management
    • /
    • v.40 no.3
    • /
    • pp.77-98
    • /
    • 2023
  • Large Language Models (LLMs) are becoming the major trend in the natural language processing field. These models are built on research data, but information such as the types of data used, their limitations, and the risks of using them is largely unknown. This study presents how to analyze and evaluate, from a research-data perspective, LLMs built on research data: LLaMA and LLaMA-based models such as Stanford's Alpaca and Vicuna from the Large Model Systems Organization (LMSYS), as well as ChatGPT from OpenAI. The quality evaluation focuses on the validity, functionality, and reliability dimensions of Data Quality Management (DQM). Furthermore, we adopt the Holistic Evaluation of Language Models (HELM) framework to examine its evaluation criteria and discuss its limitations. This study proposes quality evaluation criteria for LLMs built on research data and directions for future development.
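
As a rough illustration of how such dimension-based criteria might be operationalized, the sketch below scores a model per DQM dimension from pass/fail checks; the structure and example checks are assumptions for illustration, not the paper's actual rubric.

```python
# Hypothetical scoring skeleton: each DQM dimension named in the abstract (validity,
# functionality, reliability) holds a list of pass/fail check results for one model.
def score_model(checks: dict[str, list[bool]]) -> dict[str, float]:
    """Average the pass/fail results within each quality dimension (0.0-1.0)."""
    return {dimension: sum(results) / len(results) for dimension, results in checks.items()}

# Example: a model whose research-data provenance check fails but whose other checks pass.
example = {
    "validity": [True, False],   # e.g., documented provenance, permissive licensing
    "functionality": [True],     # e.g., answers domain queries drawn from the data
    "reliability": [True],       # e.g., reproducible outputs across repeated runs
}
print(score_model(example))  # {'validity': 0.5, 'functionality': 1.0, 'reliability': 1.0}
```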

Large Language Models: A Guide for Radiologists

  • Sunkyu Kim;Choong-kun Lee;Seung-seob Kim
    • Korean Journal of Radiology
    • /
    • v.25 no.2
    • /
    • pp.126-133
    • /
    • 2024
  • Large language models (LLMs) have revolutionized the global landscape of technology beyond natural language processing. Owing to their extensive pre-training on vast datasets, contemporary LLMs can handle tasks ranging from general functionalities to domain-specific areas, such as radiology, without additional fine-tuning. General-purpose chatbots based on LLMs can optimize the efficiency of radiologists in terms of their professional work and research endeavors. Importantly, these LLMs are on a trajectory of rapid evolution, wherein challenges such as "hallucination," high training cost, and efficiency issues are addressed, along with the inclusion of multimodal inputs. In this review, we aim to offer conceptual knowledge and actionable guidance to radiologists interested in utilizing LLMs through a succinct overview of the topic and a summary of radiology-specific aspects, from the beginning to potential future directions.

Framework for evaluating code generation ability of large language models

  • Sangyeop Yeo;Yu-Seung Ma;Sang Cheol Kim;Hyungkook Jun;Taeho Kim
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.106-117
    • /
    • 2024
  • Large language models (LLMs) have revolutionized various applications in natural language processing and exhibited proficiency in generating programming code. We propose a framework for evaluating the code generation ability of LLMs and introduce a new metric, pass-ratio@n, which captures the granularity of accuracy according to the pass rate of test cases. The framework is intended to be fully automatic to handle the repetitive work involved in generating prompts, conducting inferences, and executing the generated code. A preliminary evaluation focusing on the prompt detail, problem publication date, and difficulty level demonstrates the successful integration of our framework with the LeetCode coding platform and highlights the applicability of the pass-ratio@n metric.
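
The abstract does not give the formula for pass-ratio@n; under the assumption that it averages, over n generated solutions, the fraction of test cases each solution passes, the metric could be sketched as follows.

```python
def pass_ratio_at_n(test_results: list[list[bool]]) -> float:
    """Hypothetical pass-ratio@n: average, over n generated solutions, of the
    fraction of test cases each solution passes.
    test_results[i][j] is True if solution i passes test case j.
    (One plausible reading of the metric; the paper's exact formula may differ.)"""
    per_solution = [sum(cases) / len(cases) for cases in test_results]
    return sum(per_solution) / len(test_results)

# Two generated solutions: one passes 3 of 4 test cases, the other passes all 4.
print(pass_ratio_at_n([[True, True, True, False], [True, True, True, True]]))  # 0.875
```

Compared with the binary pass@n convention, a per-test-case ratio rewards partially correct solutions, which is what "granularity of accuracy" suggests.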

Coding Helper for Python Beginners based on the Large Language Model (LLM) (대규모 언어 모델(LLM) 기반의 파이썬 입문자를 위한 코딩 도우미)

  • Se-Hoon Lee;Jeong-Bin Choi;Yong-Tae Baek;Sun-Ho Yoon
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.389-390
    • /
    • 2023
  • This paper proposes a system that uses an LLM (Large Language Model) on a Python coding platform as a tool for checking logic and syntax errors and for debugging. The system feeds the Python code written by the user on the coding platform, together with the resulting error message and a prompt, into the LLM, which then identifies logic (syntax) errors and supports debugging. In particular, the prompts are restricted with beginners in mind to improve ease of use. This helps beginners progress smoothly in Python coding education and lowers the barrier to entry for Python programming.
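
As a rough illustration of the kind of prompt assembly such a helper could perform, the sketch below combines a learner's code and error message into a beginner-restricted prompt; `call_llm` is a hypothetical placeholder for whatever chat API the platform integrates, not the system described in the paper.

```python
# Illustrative prompt template; the wording and restrictions are assumptions.
BEGINNER_TEMPLATE = (
    "You are a tutor for Python beginners. Explain the error below in simple terms "
    "and suggest a minimal fix. Do not rewrite the whole program.\n\n"
    "Code:\n{code}\n\nError message:\n{error}"
)

def build_debug_prompt(code: str, error: str) -> str:
    """Combine the learner's code and the raised error into a restricted prompt."""
    return BEGINNER_TEMPLATE.format(code=code, error=error)

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for the platform's LLM API call."""
    raise NotImplementedError

if __name__ == "__main__":
    buggy = "for i in range(3)\n    print(i)"
    err = "SyntaxError: expected ':'"
    print(build_debug_prompt(buggy, err))
```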


Korean and Multilingual Language Models Study for Cross-Lingual Post-Training (XPT) (Cross-Lingual Post-Training (XPT)을 위한 한국어 및 다국어 언어모델 연구)

  • Son, Suhyune;Park, Chanjun;Lee, Jungseob;Shim, Midan;Lee, Chanhee;Park, Kinam;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.3
    • /
    • pp.77-89
    • /
    • 2022
  • Many previous studies have shown that language models pretrained on a large corpus improve performance on various natural language processing tasks. However, there are limits to building a large training corpus in a language environment where resources are scarce. We analyze the efficiency of the Cross-lingual Post-Training (XPT) method for Korean, a low-resource language. XPT selectively reuses the parameters of a pretrained English language model, a high-resource language, and uses an adaptation layer to learn the relationship between the two languages. We confirm that, with only a small amount of target-language data, XPT performs better on relation extraction than a language model pretrained on the target language. In addition, we analyze the characteristics of Korean monolingual and multilingual language models released by domestic and foreign researchers and companies.
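
A minimal sketch of the XPT idea as summarized above: the English transformer body is reused and frozen, while new target-language embeddings and a small adaptation layer are trained. The adapter form and sizes below are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class XPTAdapterSketch(nn.Module):
    """Reuse a frozen source-language (English) transformer body; train only new
    target-language embeddings plus a small adaptation layer (illustrative sizes)."""
    def __init__(self, english_body: nn.Module, vocab_size: int, hidden: int = 768):
        super().__init__()
        self.target_embeddings = nn.Embedding(vocab_size, hidden)  # trainable
        self.adapter = nn.Linear(hidden, hidden)                   # trainable
        self.body = english_body                                   # reused, frozen
        for p in self.body.parameters():
            p.requires_grad = False

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Map target-language tokens into the frozen body's input space.
        return self.body(self.adapter(self.target_embeddings(input_ids)))

# Example with a generic encoder standing in for the pretrained English body.
body = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True), num_layers=2
)
model = XPTAdapterSketch(body, vocab_size=32000)
out = model(torch.randint(0, 32000, (1, 16)))  # (1, 16, 768)
```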

Zero-shot Korean Sentiment Analysis with Large Language Models: Comparison with Pre-trained Language Models

  • Soon-Chan Kwon;Dong-Hee Lee;Beak-Cheol Jang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.2
    • /
    • pp.43-50
    • /
    • 2024
  • This paper evaluates the Korean sentiment analysis performance of large language models like GPT-3.5 and GPT-4 using a zero-shot approach facilitated by the ChatGPT API, comparing them to pre-trained Korean models such as KoBERT. Through experiments utilizing various Korean sentiment analysis datasets in fields like movies, gaming, and shopping, the efficiency of these models is validated. The results reveal that the LMKor-ELECTRA model displayed the highest performance based on F1-score, while GPT-4 particularly achieved high accuracy and F1-scores in movie and shopping datasets. This indicates that large language models can perform effectively in Korean sentiment analysis without prior training on specific datasets, suggesting their potential in zero-shot learning. However, relatively lower performance in some datasets highlights the limitations of the zero-shot based methodology. This study explores the feasibility of using large language models for Korean sentiment analysis, providing significant implications for future research in this area.
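
As a rough illustration of the zero-shot setup described above, the sketch below sends a single Korean review to a chat model and asks for a polarity label. It assumes the openai Python package's v1-style client; the prompt wording and label set are illustrative, not the paper's.

```python
from openai import OpenAI

def classify_sentiment(review: str, model: str = "gpt-3.5-turbo") -> str:
    """Zero-shot sentiment labeling of a Korean review via the chat-completions API."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a Korean sentiment classifier."},
            {"role": "user",
             "content": f"Label the following review as 'positive' or 'negative' only:\n{review}"},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

# Example (a short movie review: "The acting was great and time flew by"):
# print(classify_sentiment("배우들 연기가 훌륭하고 시간 가는 줄 몰랐다"))
```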

Query Normalization Using P-tuning of Large Pre-trained Language Model (Large Pre-trained Language Model의 P-tuning을 이용한 질의 정규화)

  • Suh, Soo-Bin;In, Soo-Kyo;Park, Jin-Seong;Nam, Kyeong-Min;Kim, Hyeon-Wook;Moon, Ki-Yoon;Hwang, Won-Yo;Kim, Kyung-Duk;Kang, In-Ho
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.396-401
    • /
    • 2021
  • Few-shot learning with very large language models has shown good performance on many natural language processing problems. However, defining the problem through discrete few-shot prompt construction, rather than through additional training on data, has inherent limits on performance gains. To address this, data-driven additional training methods such as P-tuning have emerged, which train only part of the parameters of a very large language model, or attach an additional neural network, and perform inference in a continuous space. In this paper, we define a context-dependent query normalization problem tailored to a conversational voice search service, and show that additionally training a very large language model with P-tuning improves accuracy over the few-shot learning approach.
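
For context, the core P-tuning idea can be sketched as follows: a handful of continuous prompt vectors are trained and prepended to the frozen language model's input embeddings. The prompt length and shapes below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PTuningPromptSketch(nn.Module):
    """Learn continuous prompt vectors that are prepended to the input embeddings
    of a frozen language model (illustrative prompt length and hidden size)."""
    def __init__(self, hidden: int = 768, prompt_len: int = 8):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden) from the frozen backbone's embedding layer
        batch = token_embeddings.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompts, token_embeddings], dim=1)  # (batch, prompt_len + seq_len, hidden)

# Usage: feed the concatenated sequence to the frozen backbone's transformer layers;
# only `prompt` receives gradients during training.
```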


A Study on Korean Pause Prediction based Large Language Model (대규모 언어 모델 기반 한국어 휴지 예측 연구)

  • Jeongho Na;Joung Lee;Seung-Hoon Na;Jeongbeom Jeong;Maengsik Choi;Chunghee Lee
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.14-18
    • /
    • 2023
  • This study analyzes how pauses are commonly realized in Korean speech-text data and, based on this analysis, selects data and proposes a model for universal, standardized Korean pause prediction. To this end, we collected speech-text datasets recorded by professionally trained speakers such as voice actors, preprocessed them by labeling pauses with a phoneme aligner such as MFA, and built a training dataset by selecting the pauses that appeared in common across utterances from various speakers. Using the constructed dataset, we fine-tuned KULLM, one of the available LLMs, and evaluated the pause prediction performance of the proposed model.
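
A minimal sketch of how pause labels from a forced aligner might be serialized into fine-tuning text, assuming pauses are marked with a special token; the `<pause>` marker and the function below are hypothetical, not the paper's data format.

```python
def insert_pause_tokens(words: list[str], pause_after: set[int], marker: str = "<pause>") -> str:
    """Rebuild a sentence, inserting a marker after each word index labeled as a pause."""
    out = []
    for i, word in enumerate(words):
        out.append(word)
        if i in pause_after:
            out.append(marker)
    return " ".join(out)

# Words from a force-aligned utterance, with pauses detected after words 1 and 3.
print(insert_pause_tokens(["오늘은", "날씨가", "맑고", "바람이", "붑니다"], {1, 3}))
# -> "오늘은 날씨가 <pause> 맑고 바람이 <pause> 붑니다"
```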


LLaMA2 Models with Feedback for Improving Document-Grounded Dialogue System (피드백 기법을 이용한 LLama2 모델 기반의 Zero-Shot 문서 그라운딩된 대화 시스템 성능 개선)

  • Min-Kyo Jung;Beomseok Hong;Wonseok Choi;Youngsub Han;Byoung-Ki Jeon;Seung-Hoon Na
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.275-280
    • /
    • 2023
  • We propose a method for improving the response quality of a document-grounded dialogue system. We apply zero-shot in-context learning to Llama2, a pretrained large language model (LLM), to generate a response to the user's last question in a dialogue. The proposed response generation first produces an initial response with reference to the retrieved top-1 document and the dialogue history, then re-ranks the retrieved documents based on the generated initial response. Finally, a final response is generated using the highest-ranked documents. We compared the proposed method against a baseline that generates responses directly from the retrieved top documents. As a result, the proposed method improved F1, BLEU, ROUGE, and METEOR scores over the baseline in experiments based on the retrieved results.
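
The abstract describes a three-step loop: draft from the top-1 document, re-rank against the draft, then answer again. A minimal sketch of that control flow is below; `generate` and `score` are hypothetical placeholders for the Llama2 call and a similarity scorer, not the paper's actual components.

```python
from typing import Callable

def respond_with_feedback(
    dialogue: str,
    documents: list[str],
    generate: Callable[[str, list[str]], str],  # (dialogue, docs) -> response
    score: Callable[[str, str], float],         # (draft, doc) -> relevance score
    top_k: int = 3,
) -> str:
    """Draft from the top-1 document, re-rank all documents against the draft,
    then generate the final response from the re-ranked top-k documents."""
    draft = generate(dialogue, documents[:1])
    reranked = sorted(documents, key=lambda doc: score(draft, doc), reverse=True)
    return generate(dialogue, reranked[:top_k])
```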


A Study of Fine Tuning Pre-Trained Korean BERT for Question Answering Performance Development (사전 학습된 한국어 BERT의 전이학습을 통한 한국어 기계독해 성능개선에 관한 연구)

  • Lee, Chi Hoon;Lee, Yeon Ji;Lee, Dong Hee
    • Journal of Information Technology Services
    • /
    • v.19 no.5
    • /
    • pp.83-91
    • /
    • 2020
  • Language models such as BERT have been an important factor in deep learning-based natural language processing. Pre-training transformer-based language models is computationally expensive, since they consist of deep and wide attention-based architectures and require huge amounts of training data. Hence, it has become standard practice to fine-tune large pre-trained language models released by Google or other companies that can afford the resources and cost. There are various techniques for fine-tuning language models; this paper examines three of them: data augmentation, hyperparameter tuning, and partial reconstruction of the neural network. For data augmentation, we use no-answer augmentation and back-translation. Useful combinations of hyperparameters are also identified through a series of experiments. Finally, we add GRU and LSTM networks on top of the pre-trained BERT model to boost performance. We fine-tune the pre-trained Korean language model using the methods above and push the F1 score from the baseline up to 89.66. Moreover, some failed attempts provide important lessons and point toward further directions.
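
As a rough sketch of the "partly re-constructing the neural network" idea, the module below runs a bidirectional GRU over BERT's token representations before predicting answer-span start and end positions; the hidden sizes and layer choice are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BertGruSpanHead(nn.Module):
    """Run a bidirectional GRU over a BERT-style encoder's token representations,
    then predict start/end logits for extractive question answering.
    `encoder` is any model whose output exposes last_hidden_state
    (e.g., a HuggingFace BertModel); sizes here are illustrative."""
    def __init__(self, encoder: nn.Module, hidden: int = 768):
        super().__init__()
        self.encoder = encoder
        self.gru = nn.GRU(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.span_logits = nn.Linear(hidden, 2)  # start and end logits per token

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.gru(hidden)
        start_logits, end_logits = self.span_logits(hidden).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```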