1. Oxford Dictionary. (n.d.). artificial intelligence. In Oxford English Dictionary. Retrieved October 28, 2021, from https://www.oed.com/viewdictionaryentry/Entry/271625
2. Ötting, S. K., & Maier, G. W. (2018). The importance of procedural justice in human-machine interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior, 89, 27-39.
3. Savage, M. (2019, March 19). Meet Tengai, the job interview robot who won't judge you. BBC News Online. https://www.bbc.com/news/business-47442953
4. Schein, C., & Gray, K. (2018). The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review, 22(1), 32-70.
5. Schein, C., Ritter, R. S., & Gray, K. (2016). Harm mediates the disgust-immorality link. Emotion, 16(6), 862.
6. Tollon, F. (2021). The artificial view: Toward a non-anthropocentric account of moral patiency. Ethics and Information Technology, 23(2), 147-155.
7. Torrance, S. (2006). The ethical status of artificial agents - with and without consciousness. In Ethics of human interaction with robotic, bionic and AI systems: Concepts and policies (pp. 60-66). Istituto Italiano per gli Studi Filosofici.
8. Torrance, S. (2008). Ethics and consciousness in artificial agents. AI & Society, 22(4), 495-521.
9. Verma, N., & Dombrowski, L. (2018, April). Confronting social criticisms: Challenges when adopting data-driven policing strategies. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).
10. Wang, W. (2017). Smartphones as social actors? Social dispositional factors in assessing anthropomorphism. Computers in Human Behavior, 68, 334-344.
11. Wang, R., Harper, F. M., & Zhu, H. (2020, April). Factors influencing perceived fairness in algorithmic decision-making: Algorithm outcomes, development procedures, and individual differences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
12. Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219-232.
13. Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117.
14. Wegner, D. M., & Gray, K. (2017). The mind club: Who thinks, what feels, and why it matters. Penguin.
15. Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2020). Robots at work: People prefer, and forgive, service robots with perceived feelings. Journal of Applied Psychology.
16. Artificial intelligence: Go master Lee Se-dol wins against AlphaGo program. (2016, March 13). BBC News Online. https://www.bbc.com/news/technology-35797102
17. Kwon, H. Y. (2019). Artificial intelligence (AI) and the legal field: Ethical and regulatory considerations [in Korean]. Economic Regulation and Law, 12(2), 69-80.
18. Adler-Milstein, J., Holmgren, A. J., Kralovec, P., Worzala, C., Searcy, T., & Patel, V. (2017). Electronic health record adoption in US hospitals: The emergence of a digital "advanced use" divide. Journal of the American Medical Informatics Association, 24(6), 1142-1148. https://doi.org/10.1093/jamia/ocx080
19. Ajunwa, I., Friedler, S., Scheidegger, C. E., & Venkatasubramanian, S. (2016). Hiring by algorithm: Predicting and preventing disparate impact. Available at SSRN.
20. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica.
21. Asaro, P. M. (2011). A body to kick, but still no soul to damn: Legal perspectives on robotics. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (p. 169). MIT Press.
22. Ayasdi. (2018). Ayasdi for Payers: White paper. Ayasdi. https://s3.amazonaws.com/cdn.ayasdi.com/wp-content/uploads/2018/10/05102657/WP-Ayasdi-for-Payers.pdf
23. Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34.
24. Cantarero, K., Szarota, P., Stamkou, E., Navas, M., & Dominguez Espinosa, A. D. C. (2021). The effects of culture and moral foundations on moral judgments: The ethics of authority mediates the relationship between power distance and attitude towards lying to one's supervisor. Current Psychology, 40(2), 675-683.
25. Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017, August). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797-806).
26. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114.
27. Curry, O. S., Chesters, M. J., & Van Lissa, C. J. (2019). Mapping morality with a compass: Testing the theory of 'morality-as-cooperation' with a new questionnaire. Journal of Research in Personality, 78, 106-124.
28. Dash, S., Shakyawar, S. K., Sharma, M., & Kaushik, S. (2019). Big data in healthcare: Management, analysis and future prospects. Journal of Big Data, 6(1), 1-25.
29. Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56-62.
30. Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864.
31. Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5.
32. Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology (Vol. 47, pp. 55-130). Academic Press.
33. Graham, J., Haidt, J., Motyl, M., Meindl, P., Iskiwitch, C., & Mooijman, M. (2018). Moral foundations theory: On the advantages of moral pluralism over moral monism. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 211-222). The Guilford Press.
34. Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029.
35. Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.
36. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108.
37. Gray, K., Jenkins, A. C., Heberlein, A. S., & Wegner, D. M. (2011). Distortions of mind perception in psychopathology. Proceedings of the National Academy of Sciences, 108(2), 477-479.
38. Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130.
39. Gray, K., & Wegner, D. M. (2012). Morality takes two: Dyadic morality and mind perception. In M. Mikulincer & P. R. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil. American Psychological Association.
40. Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
41. Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65(4), 613.
42. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814.
43. Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage.
44. High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019, April 9). Ethics guidelines for trustworthy AI. European Commission.
45. Hollister, B., & Bonham, V. L. (2018). Should electronic health record-derived social and behavioral data be used in precision medicine research? AMA Journal of Ethics, 20(9), 873-880.
46. Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347-480). Rand McNally.
47. Larsen, R. R. (2020). Psychopathy as moral blindness: A qualifying exploration of the blindness-analogy in psychopathy theory and research. Philosophical Explorations, 23(3), 214-233.
48. Kohlberg, L. (2016). Stages of moral development as a basis for moral education. In C. Beck, B. Crittenden, & E. Sullivan (Eds.), Moral education (pp. 23-92). University of Toronto Press. https://doi.org/10.3138/9781442656758-004
49. Kuncel, N. R., Klieger, D. M., & Ones, D. S. (2014). In hiring, algorithms beat instinct. Harvard Business Review, 92(5), 32.
50. Laakasuo, M., Palomäki, J., & Köbis, N. (2021). Moral uncanny valley: A robot's appearance moderates how its decisions are judged. International Journal of Social Robotics, 1-10.
51. Lee, D. (2016, March 25). Tay: Microsoft issues apology over racist chatbot fiasco. BBC News Online. https://www.bbc.com/news/technology-35902104
52. Li, M., & Suh, A. (2021, January). Machinelike or humanlike? A literature review of anthropomorphism in AI-enabled technology. In Proceedings of the 54th Hawaii International Conference on System Sciences (p. 4053).
53. MacDorman, K. F. (2005, July). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it? In CogSci-2005 Workshop: Toward Social Mechanisms of Android Science (pp. 106-118).
54. MacDorman, K. F., & Entezari, S. O. (2015). Individual differences predict sensitivity to the uncanny valley. Interaction Studies, 16(2), 141-172.
55. MacDorman, K. F., Green, R. D., Ho, C. C., & Koch, C. T. (2009). Too real for comfort? Uncanny responses to computer generated faces. Computers in Human Behavior, 25(3), 695-710.
56. Moosa, M. M., & Ud-Dean, S. M. (2010). Danger avoidance: An evolutionary explanation of uncanny valley. Biological Theory, 5(1), 12-14.
57. Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015, March). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 117-124). IEEE.
58. Malle, B. F., Scheutz, M., Forlizzi, J., & Voiklis, J. (2016, March). Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 125-132). IEEE.
59. Min, J., Kim, S., Park, Y., & Sohn, Y. W. (2018). A comparative study of potential job candidates' perceptions of an AI recruiter and a human recruiter. Journal of the Korea Convergence Society, 9(5), 191-202.
60. Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7, 33-35.
61. Morse, S. J. (2008). Psychopathy and criminal responsibility. Neuroethics, 1(3), 205-212.
62. Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.
63. Natarajan, M., & Gombolay, M. (2020, March). Effects of anthropomorphism and accountability on trust in human robot interaction. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 33-42).
64. Newborn, M. (2012). Kasparov versus Deep Blue: Computer chess comes of age. Springer Science & Business Media.
65. O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.