• Title/Summary/Keyword: Algorithmic Discrimination

Algorithmic Price Discrimination and Negative Word-of-Mouth: The Chain Mediating Role of Deliberate Attribution and Negative Emotion

  • Wei-Jia Li;Yue-Jun Wang;Zi-Yang Liu
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.229-239 / 2023
  • This study explores the impact of algorithmic price discrimination on negative word-of-mouth (NWOM) through the lens of attribution theory, examining the mediating roles of deliberate attribution and negative emotion as well as the moderating effect of price sensitivity. For this study, 772 consumers who had purchased flight tickets completed a questionnaire survey, and the collected data were analyzed using SPSS 27.0 and AMOS 24.0. The findings reveal that algorithmic price discrimination has a significant positive impact on deliberate attribution, negative emotion, and NWOM. Specifically, deliberate attribution and negative emotion form a chain that mediates the relationship between algorithmic price discrimination and NWOM, while price sensitivity positively moderates the relationship between negative emotion and NWOM. Companies should therefore consider disclosing algorithm details transparently in their marketing strategies to mitigate consumers' negative emotions, and implement targeted strategies for consumers with different levels of price sensitivity to encourage positive word-of-mouth.
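
The chain mediation this abstract describes (X → M1 → M2 → Y, fitted in AMOS) can be approximated with a series of ordinary regressions. The sketch below is illustrative only: the variable names and the synthetic data are hypothetical stand-ins, and in practice the indirect effect's significance is assessed with bootstrap confidence intervals rather than a point estimate.

```python
# Minimal sketch of a chain mediation test (X -> M1 -> M2 -> Y), analogous
# to the paper's model but using OLS regressions rather than AMOS structural
# equation modeling. All variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 772  # sample size reported in the abstract

# Synthetic stand-ins: apd = perceived algorithmic price discrimination,
# attribution = deliberate attribution, emotion = negative emotion,
# nwom = negative word-of-mouth intention.
apd = rng.normal(size=n)
attribution = 0.5 * apd + rng.normal(size=n)
emotion = 0.4 * attribution + 0.2 * apd + rng.normal(size=n)
nwom = 0.3 * emotion + 0.2 * attribution + 0.1 * apd + rng.normal(size=n)
df = pd.DataFrame(dict(apd=apd, attribution=attribution,
                       emotion=emotion, nwom=nwom))

# Path regressions for the chain X -> M1 -> M2 -> Y.
a1 = smf.ols("attribution ~ apd", df).fit().params["apd"]
d21 = smf.ols("emotion ~ attribution + apd", df).fit().params["attribution"]
b2 = smf.ols("nwom ~ emotion + attribution + apd", df).fit().params["emotion"]

# The chain-mediated effect is the product of the three path coefficients.
print("chain indirect effect a1*d21*b2 =", a1 * d21 * b2)
```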

Current Issues with the Big Data Utilization from a Humanities Perspective (인문학적 관점으로 본 빅데이터 활용을 위한 당면 문제)

  • Park, Eun-ha;Jeon, Jin-woo
    • The Journal of the Korea Contents Association / v.22 no.6 / pp.125-134 / 2022
  • This study critically discusses, from a humanities perspective, the problems that must be solved before big data can be put to use. It identifies and discusses three research problems that may arise in collecting, processing, and using big data. First, regarding problems with the data itself, it examines fake information in circulation, specifically article-style advertisements and politically motivated fake news. Second, regarding the processing of big data and its results, it examines algorithmic discrimination, observed when searching for engineers on a portal site. Finally, it examines the invasion of personal information in three categories: the right to privacy, the right to informational self-determination, and the right to be forgotten. The study is meaningful in that it points out the problems facing big data utilization from a humanities perspective and discusses, in turn, the problems that can arise in the collection, processing, and use of big data.

Efficient Eye Location for Biomedical Imaging using Two-level Classifier Scheme

  • Nam, Mi-Young;Wang, Xi;Rhee, Phill-Kyu
    • International Journal of Control, Automation, and Systems / v.6 no.6 / pp.828-835 / 2008
  • We present a novel method for eye location by means of a two-level classifier scheme. Locating the eyes by machine inspection of an image or video is an important problem in computer vision and is of particular value to applications in biomedical imaging. Our method aims to meet the significant challenge of an eye locator that maintains high accuracy despite highly variable changes in the environment. A first level of computational analysis processes the image context; this is followed by object detection by means of a two-class discrimination classifier (the second algorithmic level). We tested our eye location system using the FERET and BioID databases, compared the performance of the two-level classifier with that of a single-level classifier, and found that the two-level scheme performs better.
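
The abstract does not name the specific classifiers, so the following is a minimal sketch of the general two-level idea under stated assumptions: a first-level model groups inputs by imaging context, and a context-specific second-level model performs the two-class eye / non-eye discrimination. The features, models, and data below are hypothetical stand-ins.

```python
# Minimal sketch of a two-level classification scheme in the spirit of the
# abstract: level 1 assigns each candidate patch to a context cluster (e.g.,
# lighting condition); a context-specific level-2 classifier then performs
# the two-class eye / non-eye discrimination. All choices here (k-means,
# RBF SVMs, random features) are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, d = 600, 64                      # candidate patches, feature dimension
X = rng.normal(size=(n, d))         # stand-in for patch features
y = rng.integers(0, 2, size=n)      # 1 = eye patch, 0 = non-eye patch

# Level 1: unsupervised context model groups patches by imaging context.
context_model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
contexts = context_model.labels_

# Level 2: one two-class discriminator trained per context.
experts = {}
for c in np.unique(contexts):
    mask = contexts == c
    experts[c] = SVC(kernel="rbf", gamma="scale").fit(X[mask], y[mask])

def locate_eye(patch_features):
    """Route a patch through level 1, then classify with the matching expert."""
    c = context_model.predict(patch_features.reshape(1, -1))[0]
    return experts[c].predict(patch_features.reshape(1, -1))[0]

print("prediction for one patch:", locate_eye(X[0]))
```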

Does Artificial Intelligence Algorithm Discriminate Certain Groups of Humans? (인공지능 알고리즘은 사람을 차별하는가?)

  • Oh, Yoehan;Hong, Sungook
    • Journal of Science and Technology Studies / v.18 no.3 / pp.153-216 / 2018
  • Big-data-based automated decision-making algorithms are now widely deployed, not only because we expect algorithmic decision making to distribute social resources more efficiently, but also because we hope algorithms will make fairer decisions than humans do with their prejudice, bias, and arbitrary judgment. However, there are increasingly many claims that algorithmic decision making does not do justice to those affected by its outcomes. These unfair examples raise important new questions, such as how decision making was translated into algorithmic processes and which factors should be considered constitutive of fair decision making. This paper reviews a body of research on three areas of algorithmic application: criminal justice, law enforcement, and national security. In doing so, it addresses whether artificial intelligence algorithms discriminate against certain groups of humans and what the criteria of a fair decision-making process are. Prior to the review, it discusses factors at each stage of data mining that could, deliberately or unintentionally, lead to discriminatory results. The paper concludes with the implications of this theoretical and practical analysis for contemporary Korean society.
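
As background for the fairness criteria this literature debates, one widely used operationalization is demographic parity, often checked via the disparate-impact ("four-fifths") ratio. The sketch below computes it on synthetic decisions; it is illustrative only and not drawn from the paper itself.

```python
# Minimal sketch of one fairness criterion discussed in this literature:
# demographic parity, checked via the disparate-impact ratio (the
# "four-fifths rule"). The decisions and group labels are synthetic and
# purely illustrative; the reviewed paper is a theoretical analysis.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # 0 = group A, 1 = group B
# Biased synthetic decisions: group A receives favorable outcomes more often.
favorable = rng.random(1000) < np.where(group == 0, 0.6, 0.45)

rate_a = favorable[group == 0].mean()   # favorable-outcome rate, group A
rate_b = favorable[group == 1].mean()   # favorable-outcome rate, group B
di_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"P(favorable | A) = {rate_a:.2f}, P(favorable | B) = {rate_b:.2f}")
print(f"disparate-impact ratio = {di_ratio:.2f}  (< 0.8 flags potential bias)")
```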