• Title/Summary/Keyword: gender-biases

Search Result 22, Processing Time 0.017 seconds

Networks among the UN SDGs: A Content Analysis of Research Trends (유엔 지속가능발전목표(SDGs) 국제 연구동향 분석: 17개 목표 연결망 분석을 중심으로)

  • Lee, Jinyoung;Sohn, Hyuk-Sang;Yi, Ilcheong
    • International Area Studies Review
    • /
    • v.22 no.2
    • /
    • pp.189-209
    • /
    • 2018
  • The purpose of this study is to identify international research trends of SDGs by analyzing the networks among the 17 goals of the SDGs. The research scope covers the World Development and the Journal of Development Studies which are the top impact journals in the field of international development. The interconnected 17 SDGs are divided into five categories of people, planet, partnership, peace and prosperity. In this study, we analyzed the abstracts of the papers of the above two journals using Atlas.ti, a qualitative analysis software, in order to identify the connections between 17 goals. The findings from the analysis of 730 abstracts published in two journals since 2015 are summarized as follows. First, issues related to gender have featured prominently in both journals. Second, China and India have been the most popular case countries in both journals. In particular south-south cooperation led by China and India has been dealt with by the World Development. Thirdly, both journals have their own biases towards to certain SDGs. For instance, the World Development have not had many articles on SDG 11, 12, 13, 14, 15, 16 and 17. The SDGs closely associated with the environment and climate change such as 6, 12, 13, 14, and 15 have also been sidelined by the Journal of Development Studies. More balanced research paying attention to all the SDGs in an integrated and balanced manner is required to provide evidence and knowledge conducive to realizing the transformative vision of the 2030 Agenda for Sustainable Development.

A Comparative Study on Discrimination Issues in Large Language Models (거대언어모델의 차별문제 비교 연구)

  • Wei Li;Kyunghwa Hwang;Jiae Choi;Ohbyung Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.125-144
    • /
    • 2023
  • Recently, the use of Large Language Models (LLMs) such as ChatGPT has been increasing in various fields such as interactive commerce and mobile financial services. However, LMMs, which are mainly created by learning existing documents, can also learn various human biases inherent in documents. Nevertheless, there have been few comparative studies on the aspects of bias and discrimination in LLMs. The purpose of this study is to examine the existence and extent of nine types of discrimination (Age, Disability status, Gender identity, Nationality, Physical appearance, Race ethnicity, Religion, Socio-economic status, Sexual orientation) in LLMs and suggest ways to improve them. For this purpose, we utilized BBQ (Bias Benchmark for QA), a tool for identifying discrimination, to compare three large-scale language models including ChatGPT, GPT-3, and Bing Chat. As a result of the evaluation, a large number of discriminatory responses were observed in the mega-language models, and the patterns differed depending on the mega-language model. In particular, problems were exposed in elder discrimination and disability discrimination, which are not traditional AI ethics issues such as sexism, racism, and economic inequality, and a new perspective on AI ethics was found. Based on the results of the comparison, this paper describes how to improve and develop large-scale language models in the future.