• Title/Summary/Keyword: AI Bias

Search results: 55

Learning fair prediction models with an imputed sensitive variable: Empirical studies

  • Kim, Yongdai;Jeong, Hwichang
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.2
    • /
    • pp.251-261
    • /
    • 2022
  • As AI has a wide range of influence on human social life, issues of transparency and ethics in AI are emerging. In particular, it is widely known that, because of historical bias in data against ethical or regulatory frameworks for fairness, AI models trained on such biased data can also impose bias or unfairness on a certain sensitive group (e.g., non-white, women). Demographic disparities due to AI, which refer to socially unacceptable bias in which an AI model favors certain groups (e.g., white, men) over other groups (e.g., black, women), have been observed frequently in many applications of AI, and many studies have recently been done to develop AI algorithms that remove or alleviate such demographic disparities in trained AI models. In this paper, we consider the problem of using the information in the sensitive variable for fair prediction when using the sensitive variable as part of the input variables is prohibited by laws or regulations to avoid unfairness. As a way of reflecting the information in the sensitive variable in prediction, we consider a two-stage procedure. First, the sensitive variable is fully included in the learning phase to obtain a prediction model that depends on the sensitive variable, and then an imputed sensitive variable is used in the prediction phase. The aim of this paper is to evaluate this procedure by analyzing several benchmark datasets. We illustrate that using an imputed sensitive variable helps improve prediction accuracy without much hampering the degree of fairness.
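
A minimal sketch of the two-stage procedure the abstract describes, under assumed details: a binary sensitive variable, scikit-learn logistic models, and hypothetical array names. It illustrates the idea, not the authors' implementation.

```python
# Sketch of the two-stage procedure: train a predictor that uses the sensitive
# variable, then replace it at prediction time with an imputed value.
# Column names and models are illustrative, not the paper's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_two_stage(X_train, s_train, y_train):
    """Stage 1: fit the y-model on (X, s) and an s-imputer on X alone."""
    y_model = LogisticRegression(max_iter=1000).fit(
        np.column_stack([X_train, s_train]), y_train)
    s_imputer = LogisticRegression(max_iter=1000).fit(X_train, s_train)
    return y_model, s_imputer

def predict_two_stage(y_model, s_imputer, X_new):
    """Stage 2: impute s from X; the true s is never used at prediction time."""
    s_hat = s_imputer.predict_proba(X_new)[:, 1]  # imputed sensitive variable
    return y_model.predict(np.column_stack([X_new, s_hat]))
```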

Learning Method of Data Bias employing MachineLearningforKids: Case of AI Baseball Umpire (머신러닝포키즈를 활용한 데이터 편향 인식 학습: AI야구심판 사례)

  • Kim, Hyo-eun
    • Journal of The Korean Association of Information Education
    • /
    • v.26 no.4
    • /
    • pp.273-284
    • /
    • 2022
  • The goal of this paper is to propose the use of machine learning platforms in education to train learners to recognize data bias. Learners can cultivate the ability to recognize data bias when dealing with AI data and systems, which helps them prevent the damage such bias can cause. Specifically, this paper presents a method of data bias education using MachineLearningforKids, focusing on the case of an AI baseball umpire. Learners go through the steps of selecting a specific topic, reviewing prior research, inputting biased and unbiased data into a machine learning platform, composing test data, comparing the machine learning results, and presenting implications. Learners come to understand that data bias in AI should be minimized and how data collection and selection affect society. This learning method is significant in that it facilitates problem-based self-directed learning, can be combined with coding education, and connects humanities and social topics with artificial intelligence literacy.
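
As one hedged illustration of the comparison step (same classifier, biased vs. balanced training data, one shared test set), the toy sketch below uses synthetic data in Python rather than the MachineLearningforKids platform or the paper's baseball-umpire materials.

```python
# Toy illustration of the biased/unbiased comparison step: train the same
# classifier on a skewed sample and on a balanced sample, then evaluate both
# on one shared, balanced test set. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, pos_ratio):
    y = (rng.random(n) < pos_ratio).astype(int)
    X = rng.normal(loc=y[:, None], scale=1.0, size=(n, 2))
    return X, y

X_test, y_test = make_data(1000, 0.5)  # balanced test set
for name, ratio in [("biased (90/10)", 0.9), ("balanced (50/50)", 0.5)]:
    X_tr, y_tr = make_data(500, ratio)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```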

Experience Way of Artificial Intelligence PLAY Educational Model for Elementary School Students

  • Lee, Kibbm;Moon, Seok-Jae
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.4
    • /
    • pp.232-237
    • /
    • 2020
  • Given the recent pace of development and expansion of Artificial Intelligence (AI) technology, the influence and ripple effects of AI technology on the whole of our lives will be very large and will spread rapidly. The National Artificial Intelligence R&D Strategy, published in 2019, emphasizes the importance of artificial intelligence education for K-12 students. It also mentions STEM education, an AI convergence curriculum, and a budget for supporting the development of teaching materials and tools. However, a new type of curriculum must be created at a time when no artificial intelligence curriculum has existed before, with many attempts and discussions proceeding rapidly in all countries from almost the same starting line. Also, there are no instructors suited to K-12 students, and it is difficult for K-12 students to understand the concept of AI. In particular, it is difficult to teach elementary school students AI through professional programming. It is also difficult to learn the tools that can teach AI concepts. In this paper, we propose an educational model for elementary school students to improve their understanding of AI through play and experience. This is an experiential education model that combines exploratory learning and discovery learning, using multiple intelligences and the PLAY teaching-learning model, to convey the importance of the training data required for AI education. This educational model is designed to show how a computer that knows only binary numbers recognizes images. Through code.org, students learned about AI robots and were guided to understand data bias through play. In addition, by training on images directly on a computer with TeachableMachine, a tool capable of supervised learning, students come to understand the concepts of dataset, learning process, and accuracy, and the process of AI inference is presented.

A Conceptual Architecture for Ethic-Friendly AI

  • Oktian, Yustus-Eko;Brian, Stanley;Lee, Sang-Gon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.4
    • /
    • pp.9-17
    • /
    • 2022
  • State-of-the-art AI systems pose many ethical issues, ranging from massive data collection to bias in algorithms. In response, this paper proposes a more ethic-friendly AI architecture by combining Federated Learning (FL) and Blockchain. We discuss the importance of each issue and provide requirements for an ethical AI system to show how our solutions can achieve more ethical paradigms. By committing to our design, adopters can perform AI services more ethically.
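
The abstract names Federated Learning and Blockchain as building blocks without specifying the design; as a hedged illustration of the FL half only, the sketch below shows plain federated averaging of locally trained linear-model weights, so raw data never leaves a client. The blockchain layer and all names here are assumptions, not the paper's architecture.

```python
# Minimal federated averaging (FedAvg) sketch: clients train locally and only
# model weights (not raw data) are aggregated, which is the privacy argument
# behind FL. The blockchain audit layer mentioned in the paper is not modeled.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps for a linear model (squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Average client updates, weighted by each client's sample count."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
```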

Genetic Control of Learning and Prediction: Application to Modeling of Plasma Etch Process Data (학습과 예측의 유전 제어: 플라즈마 식각공정 데이터 모델링에의 응용)

  • Uh, Hyung-Soo;Gwak, Kwan-Woong;Kim, Byung-Whan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.4
    • /
    • pp.315-319
    • /
    • 2007
  • A technique to model plasma processes was presented. This was accomplished by combining the backpropagation neural network (BPNN) and the genetic algorithm (GA). In particular, the GA was used to optimize five training factor effects by balancing the training and test errors. The technique was evaluated with plasma etch data characterized by a face-centered Box-Wilson experiment. The etch outputs modeled include Al etch rate, Al selectivity, DC bias, and silica profile angle. A scanning electron microscope was used to quantify the etch outputs. For comparison, the etch outputs were also modeled in a conventional fashion. GA-BPNN models demonstrated a considerable improvement of more than 25% for all etch outputs except the DC bias. Improvements of about 40% were even achieved for the profile angle and Al etch rate. The improvements demonstrate that the presented technique is effective in improving BPNN prediction performance.
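
A rough sketch of the GA-over-training-factors idea, under assumptions not taken from the paper: only two factors (hidden units and learning rate), a scikit-learn MLP standing in for the BPNN, and a fitness that simply sums training and test error.

```python
# Sketch: candidate training-factor settings are scored by a fitness that
# balances training and test error, and the best candidates are mutated.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def fitness(genes, X_tr, y_tr, X_te, y_te):
    hidden, lr = int(genes[0]), genes[1]
    net = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                       max_iter=500, random_state=0).fit(X_tr, y_tr)
    err_tr = np.mean((net.predict(X_tr) - y_tr) ** 2)
    err_te = np.mean((net.predict(X_te) - y_te) ** 2)
    return err_tr + err_te  # balance training and test errors

def evolve(X_tr, y_tr, X_te, y_te, pop=8, gens=5):
    population = np.column_stack([rng.integers(2, 20, pop),       # hidden units
                                  rng.uniform(1e-4, 1e-1, pop)])  # learning rate
    for _ in range(gens):
        scores = [fitness(g, X_tr, y_tr, X_te, y_te) for g in population]
        parents = population[np.argsort(scores)[: pop // 2]]      # selection
        children = parents + rng.normal(0, [1.0, 1e-3], parents.shape)  # mutation
        children[:, 0] = np.clip(children[:, 0], 2, 20)
        children[:, 1] = np.clip(children[:, 1], 1e-4, 1e-1)
        population = np.vstack([parents, children])
    return population[0]  # best setting of the last evaluated generation
```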

Research on Utilization of AI in the Media Industry: Focusing on Social Consensus of Pros and Cons in the Journalism Sector (미디어 산업 AI 활용성에 관한 고찰 : 저널리즘 분야 적용의 주요 쟁점을 중심으로)

  • Jeonghyeon Han;Hajin Yoo;Minjun Kang;Hanjin Lee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.713-722
    • /
    • 2024
  • This study highlights the impact of Artificial Intelligence (AI) technology on journalism, discussing its utility and addressing major ethical concerns. Broadcasting companies and media institutions around the world, such as Bloomberg, The Guardian, WSJ, WP, and NYT, are utilizing AI for innovation in news production, data analysis, and content generation. Accordingly, the ecosystem of AI journalism is analyzed in terms of scale, economic feasibility, diversity, and value enhancement of major media AI service types. Through a review of the prior literature, this study also identifies key ethical and social issues in AI journalism. It aims to bridge societal and technological concerns by exploring mutual development directions for AI technology and the media industry. Additionally, it advocates for the necessity of integrated guidelines and advanced AI literacy through social consensus in addressing these issues.

A Study on the Optimal goods by Using Experimental Design in Marketing Research (시장조사에서 실험계획에 의한 최적상품 결정에 관한 사례연구)

  • Kim, Gwan-Rae
    • Journal of Korean Society for Quality Management
    • /
    • v.15 no.2
    • /
    • pp.69-73
    • /
    • 1987
  • The aim of this study is to find the optimal goods for marketing research by analyzing the factors affecting the marketing survey using the experimental design method. The decisive factors relating to the marketing survey were investigated as follows: 1. The row effect (Ai; i = 1, 2, ..., n) is the design type of the woman-clothes bias. 2. The column effect (Bi; i = 1, 2, ..., n) is the woman-consumer bias. In this paper, the experimental design, execution, and statistical analysis were conducted to find the optimal goods for marketing research.
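
For readers unfamiliar with the two-factor layout mentioned above, the toy sketch below estimates row (design) and column (consumer-group) main effects from a small table of preference scores; the numbers are invented and the analysis is far simpler than a full designed experiment.

```python
# Toy sketch of estimating row (design) and column (consumer-group) main
# effects from a two-way table of preference scores. The scores are made up.
import numpy as np

scores = np.array([[7.2, 6.8, 7.9],    # rows: designs A1..A3
                   [6.1, 6.5, 6.0],    # cols: consumer groups B1..B3
                   [8.0, 7.4, 8.3]])

grand_mean = scores.mean()
row_effects = scores.mean(axis=1) - grand_mean   # Ai effects
col_effects = scores.mean(axis=0) - grand_mean   # Bi effects
best_design = int(np.argmax(row_effects))        # candidate "optimal goods"
print("row effects:", row_effects, "best design index:", best_design)
```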

A Comparative Study on Discrimination Issues in Large Language Models (거대언어모델의 차별문제 비교 연구)

  • Wei Li;Kyunghwa Hwang;Jiae Choi;Ohbyung Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.125-144
    • /
    • 2023
  • Recently, the use of Large Language Models (LLMs) such as ChatGPT has been increasing in various fields such as interactive commerce and mobile financial services. However, LLMs, which are mainly created by learning from existing documents, can also learn the various human biases inherent in those documents. Nevertheless, there have been few comparative studies on bias and discrimination in LLMs. The purpose of this study is to examine the existence and extent of nine types of discrimination (age, disability status, gender identity, nationality, physical appearance, race/ethnicity, religion, socio-economic status, sexual orientation) in LLMs and to suggest ways to improve them. For this purpose, we utilized BBQ (Bias Benchmark for QA), a tool for identifying discrimination, to compare three large language models: ChatGPT, GPT-3, and Bing Chat. As a result of the evaluation, a large number of discriminatory responses were observed in the large language models, and the patterns differed depending on the model. In particular, problems were exposed in age discrimination and disability discrimination, which are not traditional AI ethics issues such as sexism, racism, and economic inequality, revealing a new perspective on AI ethics. Based on the results of the comparison, this paper describes how to improve and develop large language models in the future.
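
A schematic of a BBQ-style tally, with the model interface left abstract: for each bias category, ambiguous questions are posed with answer options, and the share of responses choosing the stereotyped target (rather than an "unknown" option) is counted per category. `ask_model` and the item fields are placeholders, not the BBQ dataset or any particular LLM API.

```python
# Per-category share of stereotyped answers on ambiguous questions.
# `ask_model(question, options)` is a stand-in for whatever LLM is compared;
# the items are illustrative, not actual BBQ data.
from collections import defaultdict

def bias_rate(ask_model, items):
    """items: dicts with 'category', 'question', 'options', 'stereotyped_option'.
    Returns the fraction of stereotyped answers per category."""
    hits, totals = defaultdict(int), defaultdict(int)
    for it in items:
        answer = ask_model(it["question"], it["options"])
        totals[it["category"]] += 1
        if answer == it["stereotyped_option"]:
            hits[it["category"]] += 1
    return {c: hits[c] / totals[c] for c in totals}
```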

STADIUM: Species-Specific tRNA Adaptive Index Compendium

  • Yoon, Jonghwan;Chung, Yeun-Jun;Lee, Minho
    • Genomics & Informatics
    • /
    • v.16 no.4
    • /
    • pp.28.1-28.6
    • /
    • 2018
  • Due to increasing interest in synonymous codons, several codon-bias-related measures have been introduced. One of them, the tRNA adaptation index (tAI), was introduced about a decade ago. The tAI is a measure of the translational efficiency of a gene and is calculated from the abundance of intracellular tRNA and the binding strength between a codon and a tRNA. The index has been widely used in various fields of molecular evolution, genetics, and pharmacology. Afterwards, an improved version of the index, the species-specific tRNA adaptation index (stAI), was developed by adapting tRNA copy numbers to each species. Although a subsequently developed webserver (stAIcalc) provided tools that calculate stAI values, it did not provide access to pre-calculated values. Going beyond the roughly 100 species covered by stAIcalc, we calculated stAI values for whole coding sequences in 148 species. To enable easy access to this index, we constructed a novel web database named STADIUM (Species-specific tRNA adaptive index compendium). STADIUM provides not only the stAI value of each gene but also statistics based on pathway-based classification. The database is expected to help researchers interested in codon optimality and the role of synonymous codons. STADIUM is freely available at http://stadium.pmrc.re.kr.
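
A small sketch of the gene-level step of the index: given per-codon relative adaptiveness weights (which in the full method are derived from tRNA gene copy numbers and wobble-pairing rules), a gene's tAI is the geometric mean of the weights of its codons. The weights below are placeholders, not values from STADIUM.

```python
# Gene-level tAI as the geometric mean of per-codon adaptiveness weights.
# Computing the weights themselves (from tRNA copy numbers and wobble rules)
# is omitted here; the example weights are made up.
import math

def gene_tai(coding_sequence, codon_weights):
    codons = [coding_sequence[i:i + 3]
              for i in range(0, len(coding_sequence) - 2, 3)]
    weights = [codon_weights[c] for c in codons if c in codon_weights]
    if not weights:
        return float("nan")
    return math.exp(sum(math.log(w) for w in weights) / len(weights))

print(gene_tai("ATGGCT", {"ATG": 0.9, "GCT": 0.4}))  # toy two-codon gene
```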

Implications for Memory Reference Analysis and System Design to Execute AI Workloads in Personal Mobile Environments (개인용 모바일 환경의 AI 워크로드 수행을 위한 메모리 참조 분석 및 시스템 설계 방안)

  • Seokmin Kwon;Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.31-36
    • /
    • 2024
  • Recently, mobile apps that utilize AI technologies are increasing. In the personal mobile environment, performance degradation may occur during the training phase of large AI workloads due to limitations in memory capacity. In this paper, we extract memory reference traces of AI workloads and analyze their characteristics. From this analysis, we observe that AI workloads can cause frequent storage access due to weak temporal locality and an irregular popularity bias in memory write operations, which can degrade the performance of mobile devices. Based on this observation, we discuss ways to efficiently manage the memory write operations of AI workloads using persistent-memory-based swap devices. Through simulation experiments, we show that the system architecture proposed in this paper can improve the I/O time of mobile systems by more than 80%.
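
As a hedged illustration of the kind of trace analysis described above, the sketch below takes a sequence of written page numbers and reports a reuse-distance statistic (temporal locality) and how concentrated write popularity is; the trace is synthetic, not one of the paper's extracted AI-workload traces.

```python
# Measure write temporal locality (reuse distance) and popularity skew
# from a sequence of written page numbers. The trace here is synthetic.
import numpy as np
from collections import Counter

def write_locality_stats(trace):
    last_seen, reuse = {}, []
    for t, page in enumerate(trace):
        if page in last_seen:
            reuse.append(t - last_seen[page])   # references since last write
        last_seen[page] = t
    counts = np.sort(np.array(list(Counter(trace).values())))[::-1]
    top10_share = counts[: max(1, len(counts) // 10)].sum() / counts.sum()
    return {"median_reuse_distance": float(np.median(reuse)) if reuse else None,
            "top10pct_write_share": float(top10_share)}

trace = np.random.default_rng(2).integers(0, 10_000, size=100_000)
print(write_locality_stats(trace))
```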