AI Fairness


The Regulation of AI: Striking the Balance Between Innovation and Fairness

  • Kwang-min Lee
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.9-22 / 2023
  • In this paper, we propose a balanced approach to AI regulation, focused on harnessing the potential benefits of artificial intelligence while upholding fairness and ethical responsibility. With the increasing integration of AI systems into daily life, it is essential to develop regulations that prevent harmful biases and the unfair disadvantaging of certain demographic groups. Our approach involves analyzing regulatory frameworks and case studies of AI applications to ensure responsible development and deployment. We aim to contribute to ongoing discussions around AI regulation, helping to establish policies that balance innovation with fairness, thereby driving economic progress and societal advancement in the age of artificial intelligence.

Price Fairness Perception on the AI Algorithm Pricing of Fashion Online Platform (패션 온라인 플랫폼의 AI 알고리즘 가격설정에 대한 가격 공정성 지각)

  • Jeong, Ha-eok;Choo, Ho Jung;Yoon, Namhee
    • Journal of the Korean Society of Clothing and Textiles / v.45 no.5 / pp.892-906 / 2021
  • This study explores the effects of information provision on price fairness perception and intention of continuous use on an online fashion platform, given a price difference due to AI algorithm pricing. We investigated the moderating roles of price inequality (loss vs. gain) and technology insecurity. The experiment used four stimuli crossing price inequality (loss vs. gain) with information provision (provided or not). We developed a mock website and presented a scenario in which product prices were set by an AI pricing algorithm. Participants in their 20s and 30s were randomly allocated to one of the stimuli. To test the hypotheses, a total of 257 responses were analyzed using the PROCESS macro 3.4. According to the results, price fairness perception mediated between information provision and continuous use intention when consumers saw the price inequality as a gain. When consumers perceived high technology insecurity, information provision affected the intention of continuous use, mediated by price fairness perception.
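
The mediation logic this study tests (PROCESS-style) can be sketched with a plain bootstrap test of an indirect effect. The variable names and the simulated data below are invented for illustration; only the a-path x b-path structure mirrors the analysis described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical variables mirroring the study's design (names are invented):
# info = information provision (0/1), fair = price-fairness perception,
# use = continuous-use intention.
info = rng.integers(0, 2, n).astype(float)
fair = 0.5 * info + rng.normal(size=n)               # a-path
use = 0.4 * fair + 0.1 * info + rng.normal(size=n)   # b-path and direct effect

def ols(y, X):
    # OLS via least squares; returns [intercept, slopes...]
    X1 = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def indirect(idx):
    a = ols(fair[idx], [info[idx]])[1]                # info -> fairness
    b = ols(use[idx], [fair[idx], info[idx]])[1]      # fairness -> use, controlling info
    return a * b

# Percentile bootstrap CI for the indirect effect a*b.
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))  # a CI excluding 0 supports mediation
```

The bootstrap percentile interval is the same decision rule PROCESS reports for indirect effects; moderation would add an interaction term to the two regressions.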

Exploratory Analysis of AI-based Policy Decision-making Implementation

  • SunYoung SHIN
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.203-214 / 2024
  • This study seeks to provide implications for related domestic policies through an exploratory analysis supporting AI-based policy decision-making. The following should be considered when establishing an AI-based decision-making model in Korea. First, we need to understand the impact that the use of AI will have on policy and the service sector. The positive and negative impacts of AI use need to be better understood, guided by a public-value perspective and taking into account the different levels of governance and interests across public policy and service sectors. Second, reliability is essential for implementing innovative AI systems. In most organizations today, comprehensive AI model frameworks to enable and operationalize trust, accountability, and transparency are insufficient or absent, with limited access to effective guidance, key practices, or government regulations. Third, the AI system must be accountable. The OECD AI Principles set out five value-based principles for responsible stewardship of trustworthy AI: inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability. Korea should build an AI-based decision-making system that reflects these principles and supports policy accordingly. A limitation of this study is that it is an exploratory analysis of existing research data; future research should collect the opinions of experts in related fields. The expected contribution of this study is analytical research on AI-based decision-making systems, supporting policy establishment and research in related fields.

Learning fair prediction models with an imputed sensitive variable: Empirical studies

  • Kim, Yongdai;Jeong, Hwichang
    • Communications for Statistical Applications and Methods / v.29 no.2 / pp.251-261 / 2022
  • As AI has a wide range of influence on human social life, issues of the transparency and ethics of AI are emerging. In particular, it is widely known that, due to historical bias in data against ethical or regulatory frameworks for fairness, AI models trained on such biased data can also impose bias or unfairness on certain sensitive groups (e.g., non-white people, women). Demographic disparities due to AI, which refer to socially unacceptable bias in which an AI model favors certain groups (e.g., white people, men) over others (e.g., black people, women), have been observed frequently in many applications of AI, and many recent studies have developed AI algorithms that remove or alleviate such demographic disparities in trained models. In this paper, we consider the problem of using the information in the sensitive variable for fair prediction when using the sensitive variable as an input variable is prohibited by laws or regulations to avoid unfairness. As a way of reflecting the information in the sensitive variable in prediction, we consider a two-stage procedure. First, the sensitive variable is fully included in the learning phase to obtain a prediction model depending on the sensitive variable; then an imputed sensitive variable is used in the prediction phase. The aim of this paper is to evaluate this procedure by analyzing several benchmark datasets. We illustrate that using an imputed sensitive variable helps improve prediction accuracy without hampering fairness much.
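
The two-stage procedure described in this abstract is concrete enough to sketch on synthetic data. The data-generating process, the linear models, and the 0.5 imputation threshold below are all invented for illustration; the paper itself evaluates the idea on benchmark datasets.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 3))                                   # non-sensitive features
s = (x[:, 0] + 0.2 * rng.normal(size=n) > 0).astype(float)    # sensitive variable, predictable from x
y = x @ np.array([1.0, -0.5, 0.3]) + 1.5 * s + rng.normal(scale=0.3, size=n)

def fit(X, t):
    X1 = np.column_stack([np.ones(len(t)), X])
    return np.linalg.lstsq(X1, t, rcond=None)[0]

def predict(w, X):
    return np.column_stack([np.ones(len(X)), X]) @ w

# Stage 1: the learning phase uses the true sensitive variable.
w_full = fit(np.column_stack([x, s]), y)
# Stage 2: impute s from the non-sensitive features and use the
# imputation (here a hard 0.5 threshold) at prediction time.
s_hat = (predict(fit(x, s), x) > 0.5).astype(float)
y_hat = predict(w_full, np.column_stack([x, s_hat]))

# Baseline that simply drops s at both stages.
w_drop = fit(x, y)
mse_two_stage = float(np.mean((y - y_hat) ** 2))
mse_drop = float(np.mean((y - predict(w_drop, x)) ** 2))
print(mse_two_stage < mse_drop)
```

In this toy setting the imputed sensitive variable recovers most of the predictive signal that dropping the sensitive variable entirely would lose, which is the effect the paper reports empirically on benchmark data.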

Engineering Students' Ethical Sensitivity on Artificial Intelligence Robots (공학전공 대학생의 AI 로봇에 대한 윤리적 민감성)

  • Lee, Hyunok;Ko, Yeonjoo
    • Journal of Engineering Education Research / v.25 no.6 / pp.23-37 / 2022
  • This study evaluated engineering students' ethical sensitivity to an AI emotion-recognition robot scenario and explored its characteristics. For data collection, 54 students (27 majoring in Convergence Electronic Engineering and 27 in Computer Software) were asked to list five factors regarding the AI robot scenario. For the analysis of ethical sensitivity, we checked whether the students acknowledged AI ethical principles in the scenario, such as safety, controllability, fairness, accountability, and transparency. We also categorized students' levels as either informed or naive, based on whether they inferred specific situations and diverse outcomes and felt a responsibility to take action as engineers. As a result, 40.0% of students' responses contained AI ethical principles: safety 57.1%, controllability 10.7%, fairness 20.5%, accountability 11.6%, and transparency 0.0%. More students demonstrated ethical sensitivity at the naive level (76.8%) than at the informed level (23.2%). This study presents an ethical sensitivity evaluation tool that can be used in educational settings and applies it to engineering students, illustrating specific cases with varying levels of ethical sensitivity.

ML-based Interactive Data Visualization System for Diversity and Fairness Issues

  • Min, Sey;Kim, Jusub
    • International Journal of Contents / v.15 no.4 / pp.1-7 / 2019
  • As recent developments in artificial intelligence, particularly machine learning, impact every aspect of society, they are also increasingly influencing creative fields, manifested as new artistic tools and inspirational sources. However, as more artists integrate the technology into their creative work, issues of diversity and fairness are also emerging in AI-based creative practice. The data dependency of machine-learning algorithms can amplify the social injustice that exists in the real world. In this paper, we present an interactive visualization system for raising awareness of diversity and fairness issues. Rather than resorting to education, campaigns, or laws, we have developed a web- and ML-based interactive data visualization system. By providing an interactive visual experience of the issues in interesting ways, in the form of web content that anyone can access from anywhere, we strive to raise public awareness and alleviate these important ethical problems. We present the process of developing the ML-based interactive visualization system and discuss the results of the project. The proposed approach can be applied to other areas requiring attention to these issues.

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.259-278 / 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services are expanding in the financial sector, such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring services. At the same time, problems related to reliability and unexpected social controversy are also occurring due to the nature of data-based machine learning. Against this background, this study aimed to contribute to improving trust in AI-based financial services by proposing a checklist to secure fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, fairness was selected as the subject of the study so that everyone can enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through literature research, we divided the entire fairness-related operation process into three areas: data, algorithms, and users. For each area, we constructed four detailed considerations for evaluation, resulting in a 12-item checklist. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP), using three groups representing all financial stakeholders: financial-field workers, AI-field workers, and general users. The three groups were classified and analyzed according to the importance assigned by each stakeholder, and from a practical perspective, specific checks were identified, such as feasibility verification for using learning data and non-financial information and monitoring newly inflowing data. Moreover, financial consumers in general were found to be highly concerned with the accuracy of result analysis and bias checks. We expect this result to contribute to the design and operation of fair AI-based financial services.
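
The AHP step the authors describe, deriving relative importance from pairwise comparisons, can be sketched as follows. The 3x3 judgment matrix over the paper's three areas (data, algorithms, users) is invented; only the eigenvector-and-consistency machinery is standard AHP.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over the three checklist areas
# (data, algorithms, users); the judgment values are invented, not the study's.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal eigenvector via power iteration gives the priority weights.
w = np.ones(3)
for _ in range(100):
    w = A @ w
    w /= w.sum()

lam = (A @ w / w).mean()   # estimate of the principal eigenvalue
ci = (lam - 3) / 2         # consistency index, n = 3
cr = ci / 0.58             # consistency ratio, random index RI(3) = 0.58
print(np.round(w, 3), round(cr, 3))
```

A consistency ratio below 0.1 is the usual AHP acceptance threshold for the judgments; the weights here would rank the data area highest under the invented matrix.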

On Power Splitting under User-Fairness for Correlated Superposition Coding NOMA in 5G System

  • Chung, Kyuhyuk
    • International journal of advanced smart convergence / v.9 no.2 / pp.68-75 / 2020
  • Non-orthogonal multiple access (NOMA) has gained significant attention in fifth generation (5G) mobile communication, which enables the advanced smart convergence of artificial intelligence (AI), the internet of things (IoT), and many state-of-the-art technologies. Recently, correlated superposition coding (SC) has been proposed for NOMA to achieve near-perfect successive interference cancellation (SIC) bit-error-rate (BER) performance for stronger-channel users, and to mitigate the severe BER performance degradation for weaker-channel users. In the correlated SC NOMA scheme, the stronger-channel user's BER performance is even better than the perfect-SIC BER performance for some range of the power allocation factor. However, such excessively good BER performance works against user fairness, i.e., more power to the weaker-channel user and less power to the stronger-channel user, because the excessively good BER performance of the stronger-channel user results in worse BER performance for the weaker-channel user. Therefore, in this paper, we propose power splitting to establish user fairness between the two users. First, we derive a closed-form expression for the power splitting factor. Then we show that, in terms of BER performance, user fairness is established between the two users. As a result, the power splitting scheme could be considered in correlated SC NOMA for user fairness.
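
The paper derives a closed-form BER-based power splitting factor, which is not reproduced here. As a stand-in, the sketch below uses the textbook two-user downlink NOMA rate model and bisects on the strong user's power share until the two users' achievable rates match, one simple notion of user fairness; the SNR values are invented.

```python
import math

# Standard two-user downlink NOMA rates (textbook model, not the paper's
# BER-based derivation): the strong user decodes after SIC; the weak user
# treats the strong user's signal as interference.
def rates(alpha, snr_strong=100.0, snr_weak=10.0):
    r_strong = math.log2(1 + alpha * snr_strong)
    r_weak = math.log2(1 + (1 - alpha) * snr_weak / (alpha * snr_weak + 1))
    return r_strong, r_weak

# Bisect on alpha, the strong user's power share, until both rates match.
lo, hi = 1e-6, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    rs, rw = rates(mid)
    lo, hi = (lo, mid) if rs > rw else (mid, hi)

alpha = (lo + hi) / 2
rs, rw = rates(alpha)
print(round(alpha, 4), round(rs, 3), round(rw, 3))
```

The small resulting alpha illustrates the abstract's point: fairness pushes most of the power toward the weaker-channel user.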

Development and Validation of Ethical Awareness Scale for AI Technology (인공지능기술 윤리성 인식 척도개발 연구)

  • Kim, Doeyon;Ko, Younghwa
    • Journal of Digital Convergence / v.20 no.1 / pp.71-86 / 2022
  • The purpose of this study is to develop and validate a scale measuring the ethical awareness of users who adopt artificial intelligence technology or services. To this end, the constructs and properties of AI ethics were identified through a literature analysis of AI ethics. An open-ended survey of men and women (N=133) in their 10s to 70s nationwide was conducted to extract initial items, which were then reviewed by experts; reliability and validity were assessed through a preliminary survey (N=273). The results of an online survey of men and women (N=500) were refined by confirmatory factor analysis. Finally, an AI technology ethics awareness scale was developed, with 16 items across four factors (transparency, safety, fairness, accountability), so that general awareness of AI-related ethics can be measured by detailed factors. Follow-up research using the scale can reveal its relationships with measurement variables in various fields.
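
Reliability assessment of a multi-item factor, one step in the validation process the abstract describes, can be sketched with Cronbach's alpha on simulated responses. The single-factor data below are invented and unrelated to the study's survey; confirmatory factor analysis itself would need a dedicated library.

```python
import numpy as np

# Simulated responses for one hypothetical 4-item factor (e.g. "transparency"):
# each item loads on a common latent trait plus item-specific noise.
rng = np.random.default_rng(2)
latent = rng.normal(size=(500, 1))
items = latent + 0.6 * rng.normal(size=(500, 4))

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance).
k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)
print(round(alpha, 3))
```

With these loadings the items are strongly intercorrelated, so alpha comes out well above the conventional 0.7 threshold for scale reliability.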

On Power Calculation for First and Second Strong Channel Users in M-user NOMA System

  • Chung, Kyuhyuk
    • International journal of advanced smart convergence / v.9 no.3 / pp.49-58 / 2020
  • Non-orthogonal multiple access (NOMA) has been recognized as a significant technology in fifth generation (5G) and beyond mobile communication, which encompasses the advanced smart convergence of artificial intelligence (AI) and the internet of things (IoT). In NOMA, since the channel resources are shared by many users, it is essential to establish user fairness. Such fairness is achieved by the power allocation among the users; in turn, less power is allocated to the stronger-channel users. In particular, the first and second strongest-channel users have to share an extremely small amount of power. In this paper, we consider the power optimization for these two users with small power. First, the closed-form expression for the power allocation is derived, and the results are validated numerically. Furthermore, with the derived analytical expression, the optimal power allocation is investigated for various channel environments, and the impact of the channel gain difference on the power allocation is analyzed.