
Do Artificial Intelligence Algorithms Discriminate Against Certain Groups of Humans?

Oh, Yoehan (Department of Science and Technology Studies, RPI)
Hong, Sungook (Program in History and Philosophy of Science / School of Biological Sciences, Seoul National University)
Publication Information
Journal of Science and Technology Studies, Vol. 18, No. 3, 2018, pp. 153-216
Abstract
Big-Data-based automated decision-making algorithms are now widely deployed, not only because we expect algorithmic decision making to distribute social resources more efficiently, but also because we hope algorithms will make fairer decisions than humans, whose judgments are shaped by prejudice, bias, and arbitrariness. However, there are increasingly many claims that algorithmic decision making fails to do justice to those affected by its outcomes. These unfair cases raise important new questions, such as how decision making was translated into computational processes and which factors should be considered constitutive of fair decision making. This paper reviews a body of research addressing three areas of algorithmic application: criminal justice, law enforcement, and national security. In doing so, it examines whether artificial intelligence algorithms discriminate against certain groups of humans and what the criteria of a fair decision-making process are. Prior to the review, it discusses factors at each stage of data mining that could, deliberately or unintentionally, lead to discriminatory results. The paper concludes with the implications of this theoretical and practical analysis for contemporary Korean society.
Keywords
artificial intelligence; algorithm; big data; discrimination; COMPAS algorithm; PredPol algorithm; border control algorithm