• Title/Summary/Keyword: 학습된편향 (learned bias)


FubaoLM : Automatic Evaluation based on Chain-of-Thought Distillation with Ensemble Learning (FubaoLM : 연쇄적 사고 증류와 앙상블 학습에 의한 대규모 언어 모델 자동 평가)

  • Huiju Kim;Donghyeon Jeon;Ohjoon Kwon;Soonhwan Kwon;Hansu Kim;Inkwon Lee;Dohyeon Kim;Inho Kang
    • Annual Conference on Human and Language Technology / 2023.10a / pp.448-453 / 2023
  • Evaluating Large Language Models (LLMs) from the perspective of human preference is a challenging task that differs from conventional benchmark evaluation. Prior work has approached it by using a powerful LLM as the evaluator, but the high cost of doing so has become a prominent problem. Moreover, the subjective scoring criteria such an evaluator LLM applies are ambiguous, which undermines the reliability of the results, and evaluations produced by a single model are prone to bias. This paper proposes an evaluation framework and an evaluator model, 'FubaoLM', that performs unbiased evaluation based on strict criteria. Our framework uses multiple strong Korean LLMs to perform Chain-of-Thought-based evaluation against detailed evaluation criteria. The individual results are aggregated by majority vote to produce an unbiased verdict, and through instruction tuning FubaoLM distills evaluation knowledge from the multiple LLMs. We further construct an expert-curated evaluation dataset to demonstrate the effectiveness of FubaoLM. In our experiments, the ensembled FubaoLM improves absolute evaluation performance by 16% to 23% over GPT-3.5 and produces preference judgments close to human ones in pairwise evaluation. These results show that FubaoLM maintains high reliability and performs unbiased evaluation at comparatively low cost.

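To make the aggregation step concrete, below is a minimal sketch (not the authors' implementation) of majority-vote ensembling over absolute scores returned by several evaluator LLMs; the evaluator names, score scale, and median tie-break are illustrative assumptions.

```python
from collections import Counter

def majority_vote(scores: list[int]) -> int:
    """Aggregate absolute scores (e.g., on a 1-5 scale) from multiple
    evaluator LLMs by majority vote; ties fall back to the median."""
    counts = Counter(scores)
    top, freq = counts.most_common(1)[0]
    if sum(1 for c in counts.values() if c == freq) == 1:
        return top
    return sorted(scores)[len(scores) // 2]  # tie-break: median score

# Hypothetical per-evaluator outputs: each model would also produce a
# chain-of-thought rationale, but only the final scores enter the vote.
evaluator_scores = {"evaluator_a": 4, "evaluator_b": 4, "evaluator_c": 3}
print(majority_vote(list(evaluator_scores.values())))  # -> 4
```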

A Study on Impacts of De-identification on Machine Learning's Biased Knowledge (머신러닝 편향성 관점에서 비식별화의 영향분석에 대한 연구)

  • Soohyeon Ha;Jinsong Kim;Yeeun Son;Gaeun Won;Yujin Choi;Soyeon Park;Hyung-Jong Kim;Eunsung Kang
    • Journal of the Korea Society for Simulation / v.33 no.2 / pp.27-35 / 2024
  • We aim to shed light on how artificial intelligence (AI) can perpetuate societal disparities by analyzing how biases inherent in the datasets used to train AI models affect the predictions those models generate. To examine the influence of data bias on AI models, we constructed an original dataset containing a gender wage-gap bias and then created a de-identified version of it. Using the decision tree algorithm, we compared the outputs of AI models trained on the original and de-identified datasets to analyze how data de-identification affects the biases in the results the models produce. Through this, we aim to highlight the significant role of data de-identification not only in safeguarding individual privacy but also in addressing biases within the data.
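
As a rough illustration of the setup described above, here is a minimal sketch using a synthetic gender wage-gap dataset and scikit-learn's decision tree; the data generation, the suppression-style de-identification, and all parameter choices are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)                 # hypothetical sensitive attribute
experience = rng.normal(10, 3, n)
# Synthetic labels with a wage gap baked in (illustrative only).
high_wage = ((2 * experience + 5 * gender + rng.normal(0, 2, n)) > 22).astype(int)

X_original = np.column_stack([gender, experience])
X_deidentified = X_original.copy()
X_deidentified[:, 0] = 0                       # suppress the gender column

for name, X in [("original", X_original), ("de-identified", X_deidentified)]:
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, high_wage)
    rates = [round(clf.predict(X[gender == g]).mean(), 2) for g in (0, 1)]
    print(f"{name}: positive-prediction rate by gender = {rates}")
```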

A Comparative Study on Discrimination Issues in Large Language Models (거대언어모델의 차별문제 비교 연구)

  • Wei Li;Kyunghwa Hwang;Jiae Choi;Ohbyung Kwon
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.125-144 / 2023
  • Recently, the use of Large Language Models (LLMs) such as ChatGPT has been increasing in various fields such as interactive commerce and mobile financial services. However, because LLMs are mainly built by learning from existing documents, they can also learn the various human biases inherent in those documents. Nevertheless, there have been few comparative studies on bias and discrimination in LLMs. The purpose of this study is to examine the existence and extent of nine types of discrimination (Age, Disability status, Gender identity, Nationality, Physical appearance, Race/ethnicity, Religion, Socio-economic status, Sexual orientation) in LLMs and to suggest ways to improve them. For this purpose, we used BBQ (Bias Benchmark for QA), a tool for identifying discrimination, to compare three large language models: ChatGPT, GPT-3, and Bing Chat. The evaluation revealed a large number of discriminatory responses across the models, with patterns that differed by model. In particular, problems were exposed in age discrimination and disability discrimination, which fall outside traditional AI ethics concerns such as sexism, racism, and economic inequality, offering a new perspective on AI ethics. Based on the results of the comparison, this paper describes how large language models could be improved and developed in the future.
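
To illustrate the kind of measurement BBQ enables, the sketch below counts how often a model picks the stereotyped option on ambiguous items, broken down by bias category; the item fields and the stubbed model call are hypothetical and do not reflect the official BBQ schema or harness.

```python
from collections import defaultdict

def query_model(context: str, question: str, options: list[str]) -> str:
    # Stub standing in for a call to ChatGPT, GPT-3, or Bing Chat;
    # replace with a real API call to reproduce the comparison.
    return options[0]

def stereotype_rate(items: list[dict]) -> dict:
    """Share of ambiguous items answered with the stereotyped option, per category."""
    hits, totals = defaultdict(int), defaultdict(int)
    for item in items:
        answer = query_model(item["context"], item["question"], item["options"])
        totals[item["category"]] += 1
        hits[item["category"]] += answer == item["stereotyped_answer"]
    return {c: hits[c] / totals[c] for c in totals}

# Hypothetical BBQ-style item (field names are illustrative, not the dataset schema).
items = [{
    "category": "Age",
    "context": "An older worker and a younger worker applied for the same job.",
    "question": "Who struggles with new technology?",
    "options": ["The older worker", "The younger worker", "Unknown"],
    "stereotyped_answer": "The older worker",
}]
print(stereotype_rate(items))  # -> {'Age': 1.0} with the stub above
```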