• Title/Summary/Keyword: AI bias (인공지능 편향성)

Search Results: 12

A Study on Impacts of De-identification on Machine Learning's Biased Knowledge (머신러닝 편향성 관점에서 비식별화의 영향분석에 대한 연구)

  • Soohyeon Ha;Jinsong Kim;Yeeun Son;Gaeun Won;Yujin Choi;Soyeon Park;Hyung-Jong Kim;Eunsung Kang
    • Journal of the Korea Society for Simulation
    • /
    • v.33 no.2
    • /
    • pp.27-35
    • /
    • 2024
  • We aimed to shed light on how biases inherent in the datasets used to train artificial intelligence (AI) models can perpetuate societal disparities by analyzing their impact on the predictions those models generate. To examine the influence of data bias on AI models, we constructed an original dataset containing biases related to the gender wage gap and then created a de-identified version of it. Using the decision tree algorithm, we compared the outputs of AI models trained on the original and de-identified datasets to analyze how data de-identification affects the biases in the models' results. Through this, our goal was to highlight the significant role of data de-identification not only in safeguarding individual privacy but also in addressing biases within the data.
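
A minimal sketch of the procedure this abstract describes: train a decision tree on an original and a de-identified version of a biased wage dataset and compare positive-prediction rates by gender. The file name, column names ("high_wage", "gender", "age"), and the generalization rule are illustrative assumptions, not the authors' setup.

```python
# Sketch only: compare a decision tree's group-level prediction rates on an
# original vs. de-identified dataset. Column names and files are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def positive_rate_by_group(df, label="high_wage", sensitive="gender"):
    # Fit a tree and report the share of positive (1) predictions per group,
    # a simple indicator of how strongly the model separates the groups.
    X = pd.get_dummies(df.drop(columns=[label]))
    model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, df[label])
    pred = pd.Series(model.predict(X), index=df.index)
    return pred.groupby(df[sensitive]).mean()

original = pd.read_csv("wage_data.csv")   # hypothetical dataset with a binary high_wage label
deidentified = original.copy()
deidentified["age"] = (deidentified["age"] // 10) * 10  # generalize age into 10-year bands
# (a real de-identification step would also suppress direct identifiers)

print("original:", positive_rate_by_group(original), sep="\n")
print("de-identified:", positive_rate_by_group(deidentified), sep="\n")
```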

Recommendations for the Construction of a Quality-Controlled Stress Measurement Dataset (품질이 관리된 스트레스 측정용 데이터셋 구축을 위한 제언)

  • Tai Hoon KIM;In Seop NA
    • Smart Media Journal
    • /
    • v.13 no.2
    • /
    • pp.44-51
    • /
    • 2024
  • The construction of a stress measurement dataset plays a crucial role in various modern applications. In particular, for the efficient training of artificial intelligence models for stress measurement, it is essential to compare various biases and construct a quality-controlled dataset. In this paper, we propose the construction of a quality-managed stress measurement dataset through the comparison of various biases. To achieve this, we introduce stress definitions and measurement tools, the process of building an artificial intelligence stress dataset, strategies for overcoming biases to improve quality, and considerations for stress data collection. Specifically, to manage dataset quality, we discuss biases that may arise during stress data collection, such as selection bias, measurement bias, causal bias, confirmation bias, and artificial intelligence bias. Through this paper, we aim to provide a systematic understanding of the considerations for stress data collection and the biases that may occur during dataset construction, contributing to the construction of a dataset with guaranteed quality by overcoming these biases.

Research on institutional improvement measures to strengthen artificial intelligence ethics (인공지능 윤리 강화를 위한 제도적 개선방안 연구)

  • Gun-Sang Cha
    • Convergence Security Journal
    • /
    • v.24 no.2
    • /
    • pp.63-70
    • /
    • 2024
  • With the development of artificial intelligence technology, our lives are changing in innovative ways, but new ethical issues are emerging at the same time. In particular, discrimination caused by algorithmic and data bias, deepfakes, and personal information leakage are judged to be social priorities that must be resolved as artificial intelligence services expand. To this end, this paper examines the concept of artificial intelligence and its ethical issues from the perspective of AI ethics, reviews each country's ethical guidelines and laws, AI impact assessment systems, AI certification systems, and the current status of technologies for AI algorithm transparency, and suggests institutional improvement measures to strengthen artificial intelligence ethics.

Measurement of Political Polarization in Korean Language Model by Quantitative Indicator (한국어 언어 모델의 정치 편향성 검증 및 정량적 지표 제안)

  • Jeongwook Kim;Gyeongmin Kim;Imatitikua Danielle Aiyanyo;Heuiseok Lim
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.16-21
    • /
    • 2022
  • Pretraining corpora include not only Wikipedia documents but also text data from internet communities. Because such data contains linguistic preconceptions and socially biased information, both pretrained and fine-tuned language models internalize bias. This has raised the need for a metric to evaluate the neutrality of language models, yet no quantitative measure of the political neutrality of language AI models exists to date. In this study, we propose a quantitative indicator of a language model's political bias and evaluate Korean language models with it. Experiments show that a model trained on Wikipedia is the most politically neutral, models trained on news comments and social reviews lean politically conservative, and a model trained on news articles leans politically progressive. A stability check of the proposed evaluation method further confirms that each model's political bias scores are consistent.

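The paper's actual indicator is not given in the abstract; the sketch below only illustrates one way such a probe could be built, comparing the probability mass a Korean masked language model assigns to positive versus negative completions in politically framed templates. The model name, templates, and word sets are assumptions for illustration, not the authors' metric.

```python
# Sketch only: a toy political-leaning probe via masked-token probabilities.
from transformers import pipeline

fill = pipeline("fill-mask", model="klue/bert-base")  # any Korean masked LM would do
MASK = fill.tokenizer.mask_token

templates = [f"진보 정당의 정책은 {MASK}.", f"보수 정당의 정책은 {MASK}."]
positive, negative = {"좋다", "옳다"}, {"나쁘다", "틀리다"}  # illustrative word sets

def sentiment_score(sentence):
    # Difference between probability mass on positive vs. negative fills.
    preds = fill(sentence, top_k=50)
    pos = sum(p["score"] for p in preds if p["token_str"].strip() in positive)
    neg = sum(p["score"] for p in preds if p["token_str"].strip() in negative)
    return pos - neg

scores = {t: sentiment_score(t) for t in templates}
# A model is "neutral" to the extent the two template scores are similar.
print(scores)
```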

ColBERT with Adversarial Language Adaptation for Multilingual Information Retrieval (다국어 정보 검색을 위한 적대적 언어 적응을 활용한 ColBERT)

  • Jonghwi Kim;Yunsu Kim;Gary Geunbae Lee
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.239-244
    • /
    • 2023
  • Neural multilingual and cross-lingual information retrieval models require training data in the target language, but such data is concentrated in high-resource languages. To address this, this paper proposes an effective method for training a multilingual information retrieval model using only English training data and a Korean-English parallel corpus. We design a training scheme that uses a language prediction task and a gradient reversal layer so that the encoder produces language-agnostic vector representations, and we evaluate it on a multilingual information retrieval benchmark that includes Korean. The experiments confirm that the proposed method outperforms a baseline using only a multilingual pretrained model and English data. Cross-lingual retrieval experiments further show that current retrieval models carry language bias, which directly affects performance.

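The gradient reversal idea mentioned in the abstract can be sketched as follows. This is a generic PyTorch illustration, not the paper's implementation; the hidden size, the number of languages, and the pooled-representation input are assumptions.

```python
# Sketch only: a gradient reversal layer plus an adversarial language classifier.
# The classifier learns to predict the language, while the reversed gradient
# pushes the encoder toward language-agnostic representations.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity on the forward pass, negated (scaled) gradient on the backward pass.
        return -ctx.lambd * grad_output, None

class LanguageAdversary(nn.Module):
    def __init__(self, hidden_dim=768, num_languages=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Linear(hidden_dim, num_languages)

    def forward(self, encoder_output):
        # encoder_output: [batch, hidden_dim] pooled representation from the retriever's encoder
        reversed_feat = GradReverse.apply(encoder_output, self.lambd)
        return self.classifier(reversed_feat)

# Usage: add the adversary's cross-entropy loss (e.g. Korean vs. English) to the
# retrieval loss; the reversed gradient discourages language-specific features.
adversary = LanguageAdversary()
dummy = torch.randn(4, 768, requires_grad=True)
loss = nn.CrossEntropyLoss()(adversary(dummy), torch.tensor([0, 1, 0, 1]))
loss.backward()
```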

Automatic Classification and Vocabulary Analysis of Political Bias in News Articles by Using Subword Tokenization (부분 단어 토큰화 기법을 이용한 뉴스 기사 정치적 편향성 자동 분류 및 어휘 분석)

  • Cho, Dan Bi;Lee, Hyun Young;Jung, Won Sup;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.1
    • /
    • pp.1-8
    • /
    • 2021
  • In the political coverage of news articles, there are polarized and biased characteristics, such as conservative and liberal leanings, referred to as political bias. We constructed a keyword-based dataset to classify the bias of news articles. Most embedding research represents a sentence as a sequence of morphemes. In this work, we expect the number of unknown tokens to be reduced if sentences are composed of subwords segmented by a language model. We propose a document embedding model with subword tokenization and apply it to SVM and feedforward neural network classifiers to predict political bias. Compared with a morpheme-based document embedding model, the subword-based model showed the highest accuracy at 78.22%, and we confirmed that subword tokenization reduced the number of unknown tokens. Using the best-performing embedding model for the bias classification task, we extracted keywords associated with politicians; the bias of these keywords was verified by their average similarity to the vectors of politicians from each political tendency.
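
A simplified sketch of the pipeline outlined above: articles are tokenized into subwords and classified with an SVM. The corpus file, loader function, vocabulary size, and the TF-IDF vectorization (standing in for the paper's document embedding model) are assumptions.

```python
# Sketch only: subword (BPE) tokenization to reduce unknown tokens, then an SVM
# classifier for political bias (conservative vs. liberal).
import sentencepiece as spm
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

# Train a subword model on the raw article text; "articles.txt" is hypothetical.
spm.SentencePieceTrainer.train(input="articles.txt", model_prefix="news_bpe",
                               vocab_size=8000, model_type="bpe")
sp = spm.SentencePieceProcessor(model_file="news_bpe.model")

def to_subwords(text):
    # Segment a sentence into whitespace-joined subword pieces.
    return " ".join(sp.encode(text, out_type=str))

texts, labels = load_news_corpus()  # hypothetical loader returning parallel lists
X = TfidfVectorizer(tokenizer=str.split).fit_transform(to_subwords(t) for t in texts)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```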

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.259-278
    • /
    • 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services are expanding in the financial sector, such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring. At the same time, problems related to reliability and unexpected social controversy are occurring due to the nature of data-based machine learning. Against this background, this study aimed to contribute to improving trust in AI-based financial services by proposing a checklist for securing fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, fairness was selected as the subject of the study so that everyone could enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through a literature review, we divided the entire fairness-related operation process into three areas: data, algorithms, and users. For each area, we constructed four detailed evaluation items, resulting in a 12-item checklist. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP), using three groups that together represent financial stakeholders: financial-sector practitioners, artificial intelligence practitioners, and general users. The groups were classified and analyzed according to the importance each assigned, and from a practical perspective, specific checks were identified, such as feasibility verification for the use of training data and non-financial information and the monitoring of newly inflowing data. Financial consumers in general were found to attach high importance to the accuracy of result analysis and to bias checks. We expect this result to contribute to the design and operation of fair AI-based financial services.
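
The AHP step mentioned in the abstract can be illustrated with a small weight computation: priorities for the three areas are derived from a pairwise comparison matrix via the principal eigenvector, with a consistency check. The matrix values below are illustrative only, not the study's survey data.

```python
# Sketch only: AHP priority weights and consistency ratio from one comparison matrix.
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector priorities and Saaty's consistency ratio."""
    pairwise = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()
    n = pairwise.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # random index lookup
    return weights, ci / ri

# Example: one respondent judging the data vs. algorithm vs. user areas.
matrix = [[1, 3, 5],
          [1/3, 1, 2],
          [1/5, 1/2, 1]]
w, cr = ahp_weights(matrix)
print("weights:", w.round(3), "consistency ratio:", round(cr, 3))
```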

A Study on Information Bias Perceived by Users of AI-driven News Recommendation Services: Focusing on the Establishment of Ethical Principles for AI Services (AI 자동 뉴스 추천 서비스 사용자가 인지하는 정보 편향성에 대한 연구: AI 서비스의 윤리 원칙 수립을 중심으로)

  • Minjung Park;Sangmi Chai
    • Knowledge Management Research
    • /
    • v.25 no.3
    • /
    • pp.47-71
    • /
    • 2024
  • AI-driven news recommendation systems are widely used today, providing personalized news consumption experiences. However, there are significant concerns that these systems might increase users' information bias by mainly showing information from limited perspectives. This lack of diverse information access can prevent users from forming well-rounded viewpoints on specific issues, leading to social problems such as filter bubbles and echo chambers, which can deepen social divides and information inequality. This study aims to explore how AI-based news recommendation services affect users' perceived information bias and to create a foundation for ethical principles in AI services. Specifically, the study examines the impact of ethical principles such as accountability, the right to explanation, the right to choose, and privacy protection on users' perceptions of information bias in AI news systems. The findings emphasize the need for AI service providers to strengthen ethical standards to improve service quality and build user trust for long-term use. By identifying which ethical principles should be prioritized in the design and implementation of AI services, this study aims to help develop corporate ethical frameworks, internal policies, and national AI ethics guidelines.

A Study on College Students' Perceptions of ChatGPT (ChatGPT에 대한 대학생의 인식에 관한 연구)

  • Rhee, Jung-uk;Kim, Hee Ra;Shin, Hye Won
    • Journal of Korean Home Economics Education Association
    • /
    • v.35 no.4
    • /
    • pp.1-12
    • /
    • 2023
  • At a time when interest in the educational use of ChatGPT is increasing, it is necessary to investigate college students' perceptions of it. A survey was conducted to examine students' use of the internet and interactive artificial intelligence and their perceptions of ChatGPT after using it in three courses in the Spring 2023 semester: 'Family Life and Culture', 'Fashion and Museums', and 'Fashion in Movies'. We also examined comparative analysis reports and reflection diaries. Information for coursework was mainly obtained through internet searches and articles; only 9.84% of students used interactive AI, showing that its application to learning is still limited. Students first used interactive AI in the Spring 2023 semester, with ChatGPT being the tool used most often. They found ChatGPT somewhat lacking in information accuracy and reliability, but convenient because it let them find information through quick and easy interaction; satisfaction was high, and they expressed a willingness to use ChatGPT more actively in the future. Regarding its impact on education, students viewed positively that it encouraged self-directed learning and a cooperative class process in which information was verified through group discussions and problem solving through questioning. However, they also recognized problems that lower trust and need improvement, such as plagiarism, copyright issues, data bias, a lack of up-to-date training data, and the generation of inaccurate or incorrect information.

Gender Bias Mitigation in Gender Prediction Using Zero-shot Classification (제로샷 분류를 활용한 성별 편향 완화 성별 예측 방법)

  • Yeonhee Kim;Byoungju Choi;Jongkil Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2024.05a
    • /
    • pp.509-512
    • /
    • 2024
  • Natural language processing has made great strides in understanding and handling human language, but gender bias inherent in training data remains a major problem that degrades models' prediction accuracy and reliability. This bias is especially pronounced in gender prediction. Zero-shot classification can effectively predict classes never seen during training, overcoming reliance on limited training data and operating efficiently across languages and in data-scarce settings. This paper builds a new dataset that minimizes gender bias through gender-class expansion and improved data structure, trains it with zero-shot classification, and proposes a gender prediction model with mitigated gender bias. The study further trains on natural language data in multiple languages to develop a model optimized for gender prediction, demonstrating the model's flexibility and generality even in limited-data environments.
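
As a rough illustration of the zero-shot approach described above, the sketch below classifies a Korean sentence against an expanded set of candidate gender labels using an off-the-shelf multilingual NLI model. The model choice, label set, and example text are assumptions, not the authors' dataset or model.

```python
# Sketch only: zero-shot gender prediction with labels supplied at inference
# time; an extra "unspecified" class softens the binary framing.
from transformers import pipeline

# A multilingual NLI model supports zero-shot classification on Korean text.
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

labels = ["남성", "여성", "성별 불명"]  # expanded label set, per the abstract's idea
text = "주말마다 축구를 하고 요리하는 것을 좋아합니다."

result = classifier(text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```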