• Title/Summary/Keyword: PredPol algorithm

Search Result 1

Does Artificial Intelligence Algorithm Discriminate Certain Groups of Humans? (인공지능 알고리즘은 사람을 차별하는가?)

  • Oh, Yoehan; Hong, Sungook
    • Journal of Science and Technology Studies, v.18 no.3, pp.153-216, 2018
  • Big-Data-based automated decision-making algorithms are widely deployed not only because we expect algorithmic decision making to distribute social resources more efficiently, but also because we hope algorithms will make fairer decisions than humans do with their prejudice, bias, and arbitrary judgment. However, there are increasingly more claims that algorithmic decision making does not do justice to those affected by its outcomes. These unfair examples raise important new questions, such as how decision making is translated into computational processes and which factors constitute fair decision making. This paper reviews a body of research addressing three areas of algorithmic application: criminal justice, law enforcement, and national security. In doing so, it addresses whether artificial intelligence algorithms discriminate against certain groups of humans and what the criteria of a fair decision-making process are. Prior to the review, factors at each stage of data mining that could, either deliberately or unintentionally, lead to discriminatory results are discussed. The paper concludes with the implications of this theoretical and practical analysis for contemporary Korean society.
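The discriminatory feedback the abstract alludes to in the law-enforcement context can be illustrated with a toy simulation. This is not the paper's own analysis; it is a minimal sketch in the spirit of the runaway-feedback-loop critiques of predictive policing, with all numbers and the greedy-dispatch rule chosen purely for illustration: if patrols are sent where past incidents were recorded, and incidents are only recorded where patrols are present, a small initial disparity in the data grows even when the underlying rates are identical.

```python
# Hypothetical feedback-loop sketch (illustration only, not PredPol's method).
# Two districts have the SAME true incident rate, but district 0 starts with
# slightly more *recorded* incidents. Patrols follow the records, and only
# patrolled districts generate new records, so the data disparity compounds.

true_rate = [10, 10]        # identical true incidents per period
recorded = [12.0, 10.0]     # historical records: small head start for district 0

for period in range(50):
    # greedy dispatch: patrol only the district with more recorded incidents
    d = 0 if recorded[0] >= recorded[1] else 1
    recorded[d] += true_rate[d]  # incidents are recorded only where patrols go

share0 = recorded[0] / (recorded[0] + recorded[1])
print(round(share0, 3))  # district 0's share of all records approaches 1
```

A two-record head start, with no difference in actual behavior, leaves district 0 holding nearly all recorded incidents after 50 periods; this is the kind of stage-wise, unintentional distortion of the data-mining pipeline the review sets out to examine.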