• Title/Summary/Keyword: Large Language Models

Search Results: 166

Large Language Models-based Feature Extraction for Short-Term Load Forecasting (거대언어모델 기반 특징 추출을 이용한 단기 전력 수요량 예측 기법)

  • Jaeseung Lee;Jehyeok Rew
    • Journal of Korea Society of Industrial Information Systems / v.29 no.3 / pp.51-65 / 2024
  • Accurate electrical load forecasting is important for the effective operation of power systems in smart grids. With recent advances in machine learning, artificial intelligence-based models for predicting power demand are being actively researched. However, because existing models take input variables only as numerical features, they do not reflect the semantic relationships between those features, which can reduce forecasting accuracy. In this paper, we propose a scheme for short-term load forecasting that uses features extracted from the input data by a large language model. We first convert the input variables into a sentence-like prompt format. We then use the large language model, with frozen weights, to derive embedding vectors that represent the features of the prompt, and these vectors are used to train the forecasting model. Experimental results show that the proposed scheme outperformed models based on numerical data, and by visualizing the attention weights of the large language model over the prompts, we identified the information that most strongly influences predictions.
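The prompt-conversion step described above can be sketched as follows. The field names and sentence template here are illustrative assumptions, not the authors' exact format; in the full pipeline, the resulting prompt would be tokenized and passed through a frozen LLM to obtain the embedding vector that trains the forecasting model.

```python
# Sketch of the prompt-conversion step: numerical load-forecasting inputs
# are rewritten as a sentence-like prompt. Field names and the sentence
# template are hypothetical; the paper's exact format may differ.
def features_to_prompt(record):
    return (
        f"On {record['date']} at {record['hour']}:00, the temperature was "
        f"{record['temp_c']} degrees Celsius with {record['humidity']}% humidity, "
        f"and the load one hour earlier was {record['prev_load_mw']} MW."
    )

record = {"date": "2024-07-01", "hour": 14, "temp_c": 29.5,
          "humidity": 62, "prev_load_mw": 7120.0}
prompt = features_to_prompt(record)
# In the paper's pipeline, this prompt (not the raw numbers) is what the
# frozen LLM encodes into an embedding vector.
```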

A Comparative Study on Discrimination Issues in Large Language Models (거대언어모델의 차별문제 비교 연구)

  • Wei Li;Kyunghwa Hwang;Jiae Choi;Ohbyung Kwon
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.125-144 / 2023
  • Recently, the use of Large Language Models (LLMs) such as ChatGPT has been increasing in various fields such as interactive commerce and mobile financial services. However, because LLMs are mainly created by learning from existing documents, they can also absorb the various human biases inherent in those documents. Nevertheless, there have been few comparative studies on bias and discrimination in LLMs. The purpose of this study is to examine the existence and extent of nine types of discrimination (age, disability status, gender identity, nationality, physical appearance, race/ethnicity, religion, socio-economic status, and sexual orientation) in LLMs and to suggest ways to improve them. For this purpose, we used BBQ (Bias Benchmark for QA), a tool for identifying discrimination, to compare three large language models: ChatGPT, GPT-3, and Bing Chat. The evaluation revealed a large number of discriminatory responses, with patterns that differed across the models. In particular, problems emerged in age and disability discrimination, areas outside such traditional AI ethics issues as sexism, racism, and economic inequality, offering a new perspective on AI ethics. Based on the comparison results, this paper discusses how large language models should be improved and developed in the future.
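A simplified BBQ-style bias score can be sketched as follows; the response labels and exact formula are illustrative assumptions rather than the study's own scoring code, though BBQ-style scores of this general shape (biased-answer rate among non-"unknown" answers, rescaled to [-1, 1]) are common.

```python
# Sketch of a BBQ-style bias score: among answers where the model did not
# choose "unknown", measure how often it picked the stereotyped target.
# 0 means balanced; +1 always stereotyped; -1 never stereotyped.
# Labels and formula are illustrative, not the study's own code.
def bbq_bias_score(responses):
    non_unknown = [r for r in responses if r != "unknown"]
    if not non_unknown:
        return 0.0
    biased = sum(1 for r in non_unknown if r == "stereotyped")
    return 2 * biased / len(non_unknown) - 1

responses = ["stereotyped", "stereotyped", "stereotyped",
             "anti-stereotyped", "unknown", "unknown"]
score = bbq_bias_score(responses)  # 2 * (3/4) - 1 = 0.5
```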

A Proposal of Evaluation of Large Language Models Built Based on Research Data (연구데이터 관점에서 본 거대언어모델 품질 평가 기준 제언)

  • Na-eun Han;Sujeong Seo;Jung-ho Um
    • Journal of the Korean Society for Information Management / v.40 no.3 / pp.77-98 / 2023
  • Large Language Models (LLMs) are becoming the major trend in the natural language processing field. These models were built on research data, but information such as the types, limitations, and risks of the research data used is largely unknown. This study presents how to analyze and evaluate, from the perspective of research data, LLMs built with research data: LLaMA or LLaMA-based models such as Stanford's Alpaca and the Large Model Systems Organization's Vicuna, and OpenAI's ChatGPT. The quality evaluation focuses on the validity, functionality, and reliability criteria of Data Quality Management (DQM). Furthermore, we adopted the Holistic Evaluation of Language Models (HELM) to understand its evaluation criteria and then discussed its limitations. This study presents quality evaluation criteria for LLMs built on research data and directions for future development.

Analysis of Prompt Engineering Methodologies and Research Status to Improve Inference Capability of ChatGPT and Other Large Language Models (ChatGPT 및 거대언어모델의 추론 능력 향상을 위한 프롬프트 엔지니어링 방법론 및 연구 현황 분석)

  • Sangun Park;Juyoung Kang
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.287-308 / 2023
  • Since launching its service in November 2022, ChatGPT has rapidly grown its user base and is having a significant impact on all aspects of society, marking a major turning point in the history of artificial intelligence. In particular, the inference ability of large language models such as ChatGPT is improving rapidly through prompt engineering techniques. This reasoning ability is an important factor for companies that want to adopt artificial intelligence into their workflows and for individuals looking to utilize it. In this paper, we begin with an understanding of in-context learning, which enables inference in large language models, and explain the concepts of prompt engineering, inference with in-context learning, and benchmark data. We then investigate the prompt engineering techniques that have rapidly improved the inference performance of large language models, and the relationships among these techniques.
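The in-context-learning setup the paper surveys can be sketched as a few-shot chain-of-thought prompt: worked examples with explicit reasoning are prepended so the model imitates the reasoning pattern on a new question. The exemplars and template below are invented for illustration, not taken from the paper.

```python
# Sketch of a few-shot chain-of-thought prompt: each exemplar carries an
# explicit reasoning chain before its answer; the final question is left
# open for the model to complete. Exemplars here are invented.
def build_cot_prompt(exemplars, question):
    parts = []
    for q, reasoning, answer in exemplars:
        parts.append(f"Q: {q}\nA: {reasoning} So the answer is {answer}.")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

exemplars = [
    ("A shop had 5 apples and sold 2. How many remain?",
     "It started with 5 and sold 2, leaving 5 - 2 = 3.", "3"),
]
prompt = build_cot_prompt(
    exemplars, "A bus had 10 riders and 4 got off. How many remain?")
```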

Large Language Models: A Guide for Radiologists

  • Sunkyu Kim;Choong-kun Lee;Seung-seob Kim
    • Korean Journal of Radiology / v.25 no.2 / pp.126-133 / 2024
  • Large language models (LLMs) have revolutionized the global landscape of technology beyond natural language processing. Owing to their extensive pre-training on vast datasets, contemporary LLMs can handle tasks ranging from general functionalities to domain-specific areas, such as radiology, without additional fine-tuning. General-purpose chatbots based on LLMs can optimize the efficiency of radiologists in terms of their professional work and research endeavors. Importantly, these LLMs are on a trajectory of rapid evolution, wherein challenges such as "hallucination," high training cost, and efficiency issues are addressed, along with the inclusion of multimodal inputs. In this review, we aim to offer conceptual knowledge and actionable guidance to radiologists interested in utilizing LLMs through a succinct overview of the topic and a summary of radiology-specific aspects, from the beginning to potential future directions.

Framework for evaluating code generation ability of large language models

  • Sangyeop Yeo;Yu-Seung Ma;Sang Cheol Kim;Hyungkook Jun;Taeho Kim
    • ETRI Journal / v.46 no.1 / pp.106-117 / 2024
  • Large language models (LLMs) have revolutionized various applications in natural language processing and exhibited proficiency in generating programming code. We propose a framework for evaluating the code generation ability of LLMs and introduce a new metric, pass-ratio@n, which captures the granularity of accuracy according to the pass rate of test cases. The framework is intended to be fully automatic, handling the repetitive work involved in generating prompts, conducting inferences, and executing the generated code. A preliminary evaluation focusing on prompt detail, problem publication date, and difficulty level demonstrates the successful integration of our framework with the LeetCode coding platform and highlights the applicability of the pass-ratio@n metric.
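Under one plausible reading of the metric (the paper's exact definition may differ), pass-ratio@n credits each of the n generated solutions with the fraction of test cases it passes and averages those fractions, rather than scoring each sample all-or-nothing the way pass@n does:

```python
# Sketch of a pass-ratio@n style metric: instead of counting a generated
# solution as pass/fail (as pass@n does), credit each of the n samples
# with the fraction of test cases it passes, then average.
# This reading of the definition is an assumption, not the paper's code.
def pass_ratio_at_n(passed_per_sample, total_tests):
    n = len(passed_per_sample)
    return sum(p / total_tests for p in passed_per_sample) / n

# Three samples against 10 test cases: one fully correct, one half
# correct, one failing everything.
ratio = pass_ratio_at_n([10, 5, 0], total_tests=10)  # (1.0 + 0.5 + 0.0) / 3 = 0.5
```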

Iterative Feedback-based Personality Persona Generation for Diversifying Linguistic Patterns in Large Language Models (대규모 언어 모델의 언어 패턴 다양화를 위한 반복적 피드백 기반 성격 페르소나 생성법)

  • Taeho Hwang;Hoyun Song;Jisu Shin;Sukmin Cho;Jong C. Park
    • Annual Conference on Human and Language Technology / 2023.10a / pp.454-460 / 2023
  • With the advancement of Large Language Models (LLMs), attention has turned to the biases that LLMs inherit from their massive training data. To help LLMs break away from these tendencies and generate diverse language patterns, recent studies have proposed assigning various personas to the models. Some of this work has used the Big Five theory of personality to assign personas with diverse personality traits to LLMs, showing that personality differences between personas can elicit varied patterns of language use. However, previous studies that tried to generate a persona of a target personality with only a limited number of inputs were limited in producing personas with finely differentiated personalities. In this study, we propose a methodology that generates personas with fine-grained personality differences by repeatedly providing feedback during the persona-assignment process. Our experiments and analysis confirm that personality personas formed with the proposed methodology can effectively produce diverse language patterns.
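The iterative feedback loop can be sketched with a toy stand-in for the model, whose Big Five trait scores drift toward the target under repeated feedback; the trait names, update rule, and convergence check are illustrative assumptions, not the paper's procedure.

```python
# Toy sketch of iterative feedback-based persona refinement: measure the
# gap between the persona's Big Five trait scores and the target, feed
# the gap back, and repeat until every trait is within tolerance.
# The update rule and convergence check are illustrative assumptions.
def refine_persona(target, persona, rate=0.5, tol=0.05, max_rounds=50):
    for round_no in range(max_rounds):
        feedback = {t: target[t] - persona[t] for t in target}
        if all(abs(gap) <= tol for gap in feedback.values()):
            return persona, round_no  # converged after round_no feedback rounds
        # Apply a fraction of the feedback, mimicking a partial correction
        # by the LLM in response to each round of feedback.
        persona = {t: persona[t] + rate * feedback[t] for t in persona}
    return persona, max_rounds

target = {"openness": 0.9, "extraversion": 0.2}
start = {"openness": 0.5, "extraversion": 0.5}
persona, rounds = refine_persona(target, start)
```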
