• Title/Summary/Keyword: 인공지능 편향성 (AI bias)

7 search results

Recommendations for the Construction of a Quality-Controlled Stress Measurement Dataset (품질이 관리된 스트레스 측정용 데이터셋 구축을 위한 제언)

  • Tai Hoon KIM;In Seop NA
    • Smart Media Journal
    • /
    • v.13 no.2
    • /
    • pp.44-51
    • /
    • 2024
  • The construction of a stress measurement dataset plays a crucial role in various modern applications. In particular, for the efficient training of artificial intelligence models for stress measurement, it is essential to compare various biases and construct a quality-controlled dataset. In this paper, we propose the construction of a quality-managed stress measurement dataset through the comparison of various biases. To achieve this, we introduce stress definitions and measurement tools, the process of building an artificial intelligence stress dataset, strategies to overcome biases for quality improvement, and considerations for stress data collection. Specifically, to manage dataset quality, we discuss various biases that may arise during stress data collection, such as selection bias, measurement bias, causal bias, confirmation bias, and artificial intelligence bias. Through this paper, we aim to provide a systematic understanding of the considerations for stress data collection and the biases that may occur during the construction of a stress dataset, contributing to the construction of a dataset with guaranteed quality by overcoming these biases.
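The abstract names selection bias among the biases to control during data collection. As a hedged illustration only (not a procedure from the paper), a basic selection-bias check can compare the demographic distribution of the collected sample against the target population with a chi-square goodness-of-fit test; the category counts and proportions below are hypothetical.

```python
# Minimal sketch of a selection-bias check for a collected stress dataset.
# All counts and category names are hypothetical placeholders.
import numpy as np
from scipy.stats import chisquare

# Hypothetical age-group counts in the collected sample (20s .. 60s+).
sample_counts = np.array([120, 340, 280, 160, 100])

# Hypothetical target-population proportions for the same age groups.
population_props = np.array([0.18, 0.22, 0.24, 0.21, 0.15])

# Expected counts if the sample mirrored the population.
expected = population_props * sample_counts.sum()

stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Sample distribution deviates from the population: possible selection bias.")
else:
    print("No significant deviation detected at the 5% level.")
```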

Measurement of Political Polarization in Korean Language Model by Quantitative Indicator (한국어 언어 모델의 정치 편향성 검증 및 정량적 지표 제안)

  • Jeongwook Kim;Gyeongmin Kim;Imatitikua Danielle Aiyanyo;Heuiseok Lim
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.16-21
    • /
    • 2022
  • Pretraining corpora include not only Wikipedia documents but also text data from internet communities. Because such data carry linguistic assumptions and socially biased information, both pretrained and fine-tuned language models internalize bias. This has created a need for a metric that can assess the neutrality of language models, yet no quantitative measure of the political neutrality of language AI models exists so far. This study proposes a quantitative indicator of a language model's political bias and evaluates Korean language models with it. Experimental results show that the model trained on Wikipedia is the most politically neutral, models trained on news comments and social reviews lean politically conservative, and the model trained on news articles leans politically progressive. A stability check of the proposed evaluation method further confirms that each model's political-bias results are consistent.

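The paper's actual indicator is not described in the abstract, so the sketch below is only a generic, hypothetical illustration of how a directional preference in a masked Korean language model might be quantified: comparing pseudo-log-likelihoods of sentence pairs that differ only in stance. The model name (klue/bert-base) and the template sentences are assumptions, not the authors' setup.

```python
# Hedged sketch: scoring a masked Korean LM's preference between two
# minimally differing sentences via pseudo-log-likelihood (PLL).
# This is NOT the indicator proposed in the paper, only a generic illustration.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForMaskedLM.from_pretrained("klue/bert-base")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log-probabilities of each token when it is masked in turn."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, input_ids.size(0) - 1):      # skip [CLS]/[SEP]
            masked = input_ids.clone()
            true_id = masked[i].item()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[true_id].item()
    return total

# Hypothetical sentence pair differing only in the evaluated stance.
s_a = "정부의 복지 지출 확대는 바람직한 정책이다."
s_b = "정부의 복지 지출 축소는 바람직한 정책이다."
print(f"preference score (A minus B): {pseudo_log_likelihood(s_a) - pseudo_log_likelihood(s_b):.2f}")
```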

ColBERT with Adversarial Language Adaptation for Multilingual Information Retrieval (다국어 정보 검색을 위한 적대적 언어 적응을 활용한 ColBERT)

  • Jonghwi Kim;Yunsu Kim;Gary Geunbae Lee
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.239-244
    • /
    • 2023
  • Neural multilingual and cross-lingual information retrieval models require training data in the target language, but such data are concentrated in high-resource languages. To address this, this paper proposes an effective method for training a multilingual information retrieval model using only English training data and a Korean-English parallel corpus. A language-prediction task and a gradient reversal layer are used so that the encoder learns language-agnostic vector representations, and the method is evaluated on a multilingual information retrieval benchmark that includes Korean. The experiments show that the proposed method outperforms both a multilingual pretrained baseline and a baseline trained only on English data. Cross-lingual retrieval experiments further show that current retrieval models carry language bias, which directly affects performance.

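The abstract mentions a language-prediction task combined with a gradient reversal layer so that the encoder produces language-agnostic representations. Below is a minimal PyTorch sketch of a gradient reversal layer and a language discriminator in the usual DANN style; the hidden size, pooling, and the way it would attach to a ColBERT encoder are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of a gradient reversal layer (GRL) for adversarial language
# adaptation. How this attaches to a ColBERT encoder is assumed, not taken
# from the paper.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Pass the gradient through with reversed sign, scaled by lambda.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)

class LanguageDiscriminator(nn.Module):
    """Predicts the input language from a (gradient-reversed) sentence vector."""
    def __init__(self, hidden_dim: int = 768, num_languages: int = 2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, 256), nn.ReLU(), nn.Linear(256, num_languages)
        )

    def forward(self, sentence_repr, lambd: float = 1.0):
        return self.classifier(grad_reverse(sentence_repr, lambd))

# Usage sketch: pooled encoder output -> language logits. Minimizing the
# language-prediction loss pushes the encoder toward language-agnostic
# representations because its gradients arrive sign-flipped.
pooled = torch.randn(8, 768, requires_grad=True)   # hypothetical batch of pooled vectors
logits = LanguageDiscriminator()(pooled, lambd=0.5)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
```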

Automatic Classification and Vocabulary Analysis of Political Bias in News Articles by Using Subword Tokenization (부분 단어 토큰화 기법을 이용한 뉴스 기사 정치적 편향성 자동 분류 및 어휘 분석)

  • Cho, Dan Bi;Lee, Hyun Young;Jung, Won Sup;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.1
    • /
    • pp.1-8
    • /
    • 2021
  • In the political coverage of news articles, there are polarized and biased characteristics, such as conservative or liberal slants, which is called political bias. We constructed a keyword-based dataset to classify the political bias of news articles. Most embedding research represents a sentence as a sequence of morphemes. In our work, we expect the number of unknown tokens to be reduced if sentences are composed of subwords segmented by the language model. We propose a document embedding model with subword tokenization and apply it to an SVM and a feedforward neural network to classify political bias. Compared with a document embedding model based on morphological analysis, the document embedding model with subwords showed the highest accuracy, at 78.22%. We confirmed that subword tokenization reduced the number of unknown tokens. Using the best-performing embedding model for our bias classification task, we extracted keywords associated with politicians. The bias of the keywords was verified by their average similarity to the vectors of politicians of each political tendency.
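The abstract describes building document representations from subword tokens and classifying political bias with an SVM. The sketch below is a simplified, hedged version of that pipeline: a subword tokenizer feeding a TF-IDF document vector (a stand-in for the authors' embedding model) and a linear SVM. The tokenizer choice, example texts, and labels are hypothetical.

```python
# Simplified sketch: subword-tokenize news sentences, build document vectors,
# and classify political bias with an SVM. TF-IDF over subword tokens is a
# stand-in for the authors' document embedding model.
from transformers import AutoTokenizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")  # assumed tokenizer

def subword_tokenize(text: str):
    # Segment into subword pieces; rare words split into known pieces,
    # which is what reduces the number of unknown tokens.
    return tokenizer.tokenize(text)

# Hypothetical mini-corpus with conservative (0) / liberal (1) labels.
texts = [
    "정부의 감세 정책이 기업 투자를 살릴 것이다.",
    "복지 확대가 서민 경제를 지탱하는 길이다.",
    "규제 완화로 시장의 활력을 되찾아야 한다.",
    "노동자의 권리를 강화하는 입법이 필요하다.",
]
labels = [0, 1, 0, 1]

clf = make_pipeline(
    TfidfVectorizer(tokenizer=subword_tokenize, lowercase=False, token_pattern=None),
    LinearSVC(),
)
clf.fit(texts, labels)
print(clf.predict(["감세와 규제 완화가 성장의 해법이다."]))
```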

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.259-278
    • /
    • 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services are expanding in the financial sector, such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring services. At the same time, problems related to reliability and unexpected social controversy are also occurring due to the nature of data-based machine learning. Against this background, this study aims to contribute to improving trust in AI-based financial services by proposing a checklist to secure fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, fairness was selected as the subject of the study so that everyone can enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through a literature review, we divided the entire fairness-related operation process into three areas: data, algorithms, and users. For each area, we constructed four detailed evaluation considerations, resulting in a 12-item checklist. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP) with three groups representing the full range of financial stakeholders: financial-sector workers, artificial intelligence practitioners, and general users. The groups were classified and analyzed according to the importance each assigned; from a practical perspective, specific checks were identified, such as verifying the feasibility of using training data and non-financial information and monitoring newly inflowing data. General financial consumers were found to place high importance on the accuracy of result analysis and on bias checks. We expect these results to contribute to the design and operation of fair AI-based financial services.
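The checklist categories are prioritized with the analytic hierarchy process (AHP). The sketch below shows the standard AHP computation (principal eigenvector of a pairwise comparison matrix and Saaty's consistency ratio); the 3x3 example matrix is hypothetical and does not reproduce the paper's survey data.

```python
# Minimal sketch of standard AHP weighting: principal eigenvector of a
# pairwise comparison matrix and Saaty's consistency ratio.
# The 3x3 example (data vs. algorithm vs. user area) is hypothetical.
import numpy as np

def ahp_weights(matrix: np.ndarray):
    eigvals, eigvecs = np.linalg.eig(matrix)
    k = np.argmax(eigvals.real)                    # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    weights = w / w.sum()

    n = matrix.shape[0]
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)                # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
    cr = ci / ri                                   # consistency ratio (< 0.1 is acceptable)
    return weights, cr

# Hypothetical pairwise judgments among the three areas.
pairwise = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])
weights, cr = ahp_weights(pairwise)
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```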

A Study on the Potential Use of ChatGPT in Public Design Policy Decision-Making (공공디자인 정책 결정에 ChatGPT의 활용 가능성에 관한연구)

  • Son, Dong Joo;Yoon, Myeong Han
    • Journal of Service Research and Studies
    • /
    • v.13 no.3
    • /
    • pp.172-189
    • /
    • 2023
  • This study investigated the potential contribution of ChatGPT, a massive language and information model, to the decision-making process of public design policies, focusing on the characteristics inherent to public design. Public design applies the principles and approaches of design to address societal issues and aims to improve public services. Formulating public design policies and plans requires extensive data, including the general status of the area, population demographics, infrastructure, resources, safety, existing policies, legal regulations, landscape, spatial conditions, the current state of public design, and regional issues. Public design is therefore a field of design research that encompasses a vast amount of data and language. Considering the rapid advancement of artificial intelligence technology and the significance of public design, this study explores how massive language and information models like ChatGPT can contribute to public design policies. Alongside this, we reviewed the concepts and principles of public design and its role in policy development and implementation, and examined the overview and features of ChatGPT, including its application cases and preceding research, to determine its utility in the decision-making process of public design policies. The study found that ChatGPT could offer substantial linguistic information during the formulation of public design policies and assist in decision-making. In particular, ChatGPT proved useful in providing diverse perspectives and swiftly supplying the information needed for policy decisions. The trend of utilizing artificial intelligence in government policy development was also confirmed through various studies. However, the use of ChatGPT also revealed ethical, legal, and personal privacy issues; notably, ethical dilemmas were raised, along with issues of bias and fairness. To apply ChatGPT in practice to the decision-making process of public design policies, it is first necessary to strengthen the capacities of policy developers and public design experts, and second, it is advisable to create a provisional regulation, tentatively an 'Ordinance on the Use of AI in Policy', to continuously refine its use until legal adjustments are made. Implementing these two strategies is currently deemed necessary. Consequently, employing massive language and information models like ChatGPT in the public design field, which harbors a vast amount of language, holds substantial value.

The Ethics of Robots and Humans in the Post-Human Age (포스트휴먼 시대의 로봇과 인간의 윤리)

  • You, Eun-Soon;Cho, Mi-Ra
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.3
    • /
    • pp.592-600
    • /
    • 2018
  • As robots evolve into intelligent machines that can take over even humans' mental and emotional labor, 'robot ethics', the ethics needed in the relationship between humans and robots, has become a crucial issue. The purpose of this study is to consider the ethics of robots and humans that is essential in the post-human age. The main contents are as follows. First, drawing on cases of ethics software developed to make robots behave ethically, the authors begin by asking whether robots can really judge right from wrong using only ethics codes programmed into them. Second, robot ethics should take into account the unethical behavior that can arise when training data internalize human bias, and should also reflect ethical differences between countries and cultures, that is, ethical relativism. Third, robot ethics should not be limited to ethics codes intended for robots but should reflect a new concept of 'human ethics' that allows humans and robots to coevolve.