• Title/Summary/Keyword: artificial intelligence risk (인공지능 위험)

Search Results: 242

The Analysis of the Mediating and Moderating Effects of Perceived Risks on the Relationship between Knowledge, Feelings and Acceptance Intention towards AI (인공지능에 대한 지식, 감정, 수용의도 관계에서 위험인식의 매개 및 조절효과 분석)

  • Hwang, SeoI;Nam, YoungJa
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.8
    • /
    • pp.350-358
    • /
    • 2020
  • The objective of this empirical study is to examine the mediating and moderating effects of perceived risks on the relationship between knowledge, feelings, and acceptance intention towards AI. Subjects in their teens to forties were surveyed, and the final sample comprised 1,969 subjects. Data were analyzed using mediation via multiple regression and moderated multiple regression. Results showed that people's knowledge of and feelings towards AI affected their acceptance intention of AI. Results also showed that the perceived risks of AI partially mediated and moderated the relationship between feelings and acceptance intention towards AI, and moderated but did not mediate the relationship between knowledge and acceptance intention towards AI. Overall, these results suggest that people's perceived risks of AI are associated more strongly with their feelings towards AI than with their knowledge of AI. Implications and directions for future research are discussed in relation to increasing the general population's acceptance intention towards AI.
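The moderated multiple regression design named in this abstract can be illustrated with a minimal numpy sketch. The data are simulated and the effect sizes hypothetical (not the study's actual variables or coefficients); the point is only that moderation is tested by adding a feelings × perceived-risk product term to the regression and checking its coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
feelings = rng.normal(size=n)   # feelings towards AI (standardized, simulated)
risk = rng.normal(size=n)       # perceived risk of AI (standardized, simulated)
# Simulated acceptance intention with a feelings x risk interaction (moderation);
# the coefficients here are illustrative assumptions, not the study's estimates.
intention = (0.5 * feelings - 0.3 * risk
             - 0.2 * feelings * risk
             + rng.normal(scale=0.5, size=n))

# Moderated multiple regression: include the product term and fit by least squares.
X = np.column_stack([np.ones(n), feelings, risk, feelings * risk])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
# A sizeable coefficient on the product term (beta[3]) indicates moderation.
print(beta.round(2))
```

With real survey data one would also report significance tests for the product term, e.g. via a hierarchical (stepwise) comparison of models with and without the interaction.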

Case Study on Artificial Intelligence and Risk Management - Focusing on RAI Toolkit (인공지능과 위험관리에 대한 사례 연구 - RAI Toolkit을 중심으로)

  • Sunyoung Shin
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.115-123
    • /
    • 2024
  • The purpose of this study is to contribute to how the advantages of artificial intelligence (AI) services and their associated limitations can be addressed simultaneously, using the keywords AI and risk management. To achieve this, two cases were introduced: (1) a risk monitoring process utilizing AI, and (2) an operational toolkit to minimize the limitations emerging in the development and operation of AI services. Through case analysis, the following implications are proposed. First, as AI services deeply influence our lives, processes are needed to minimize these emerging limitations. Second, for effective risk management monitoring using AI, priority should be given to obtaining suitable and reliable data. Third, to overcome the limitations arising in the development and operation of AI services, applying a risk management process at each stage of the workflow, with continuous monitoring, is essential. This study is a research effort on approaches to minimizing the limitations posed by advancing AI, and it can contribute to research on risk management in the future growth and development of the related market by examining ways to mitigate the limitations posed by evolving AI technologies.

Analysis of Safety Considerations for Application of Artificial Intelligence in Marine Software Systems (해양 소프트웨어 시스템의 인공지능 적용을 위한 안전 고려사항에 관한 분석)

  • Lee, Changui;Kim, Hyoseung;Lee, Seojeong
    • Journal of Navigation and Port Research
    • /
    • v.46 no.3
    • /
    • pp.269-279
    • /
    • 2022
  • With the development of artificial intelligence (AI), AI is being introduced to automate systems throughout industry. In the maritime industry, AI is being applied step by step through the paradigm of autonomous ships. In line with this trend, ABS and DNV have published guidelines for autonomous vessels. However, because these classification guidelines describe requirements from the perspective of ship operation and marine service, there is a possibility that the risks of AI have not been sufficiently considered. Thus, in this study, using the standards established by the ISO/IEC JTC 1/SC 42 artificial intelligence committee, the classification requirements are categorized as causes of risk, and a measure is proposed that evaluates risks through combinations of risk causes and AI metrics. The combination of the AI risk causes proposed in this study and the characteristics used to evaluate them should be beneficial in defining and identifying the risks arising from introducing AI into marine systems, and is expected to enable the creation of more detailed and specific safety requirements for autonomous ships.

Why should we worry about controlling AI? (우리는 왜 인공지능에 대한 통제를 고민해야 하는가?)

  • Rheey, Sang-hun
    • Journal of Korean Philosophical Society
    • /
    • v.147
    • /
    • pp.261-281
    • /
    • 2018
  • This paper covers recent discussions on the risks to human beings arising from the development of artificial intelligence (AI). We consider AI research in terms of artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI). First, we examine the risks of ANI, or weak AI systems. To maximize efficiency, humans will use autonomous AI extensively, and we can predict the risks that arise from transferring a great deal of authority to autonomous AI that judges and acts without human intervention. However sophisticated, human-made AI systems are incomplete, and virus infections or bugs can cause errors, so there should be limits to what we entrust to AI. Typically, we do not believe that lethal autonomous weapons systems should be allowed. Strong-AI researchers are optimistic about the emergence of AGI and ASI. A superintelligence is an AI system that surpasses human ability in all respects, so it may act against human interests or harm human beings; the problem of controlling superintelligence, i.e. the control problem, is therefore being seriously considered. In this paper, we outline how superintelligence might be controlled based on the proposed control schemes. If superintelligence emerges, we judge that there is currently no way for humans to control it completely. The emergence of superintelligence may, however, be a fictitious assumption; even in this case, research on the control problem has practical value in setting the direction of future AI research.

Taxonomy and Countermeasures for Generative Artificial Intelligence Crime Threats (생성형 인공지능 관련 범죄 위협 분류 및 대응 방안)

  • Woobeen Park;Minsoo Kim;Yunji Park;Hyejin Ryu;Doowon Jeong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.301-321
    • /
    • 2024
  • Generative artificial intelligence is currently developing rapidly and expanding industrially. The development of generative AI is expected to improve productivity in most industries. However, generative AI can be exploited, and cases that actually lead to crime are emerging. Compared to the fast-growing technology, there is no legislation regulating generative AI. In Korea, the crimes and risks related to generative AI have not been clearly classified for legislative purposes. In addition, existing research on responsibility for illegal data learned by generative AI, or on the illegality of the generated data, is insufficient. Therefore, this study classifies crimes related to generative AI, for domestic legislation, into crimes targeting generative AI, crimes using generative AI as a tool, and other crimes, based on ECRM. Furthermore, it suggests technical countermeasures against crime and risk, as well as measures to improve the legal system. This study is significant in that it provides realistic methods by presenting technical countermeasures based on the development stage of AI.

Development of AI Sensor-Based Ship Berthing/Unberthing Information Extraction Technology (인공지능 센서 기반 선박 접/이안 정보 추출 기술 개발)

  • Kim, Dong-Hun;Kim, Han-Geun;Lee, Sang-Min;Kim, Jeong-Min;Park, Byeol-Teo
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2020.11a
    • /
    • pp.128-130
    • /
    • 2020
  • For a ship to berth safely at its designated berth, the assistance of a pilot and tugboats is required. During berthing, the ship's remaining distance to the berth and its berthing speed often depend on the naked-eye judgment of the pilot and crew aboard, which poses a safety risk. In this study, we developed an AI sensor-based ship berthing/unberthing information extraction technology that reduces the risk factors in the berthing process and assists berthing. The developed technology runs on an embedded system and a cloud system, and provides the motion information of a berthing ship, measured using AI image-processing and sensor-fusion technologies, as a service to berthing stakeholders.

Governance research for Artificial intelligence service (인공지능 서비스 거버넌스 연구)

  • Soonduck Yoo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.2
    • /
    • pp.15-21
    • /
    • 2024
  • The purpose of this study is to propose a framework for the introduction and evaluation of artificial intelligence (AI) services, not only in general applications but also in public policy. To achieve this, the study explores AI service management and governance toolkits, providing insights into how to introduce AI services into public policy. First, it offers guidelines on the direction of AI service development and on what to avoid. Second, in the development phase, it recommends using an AI governance toolkit to review content through checklists at each stage of design, development, and deployment. Third, when operating AI services, it emphasizes adhering to principles concerning 1) planning and design, 2) the lifecycle, 3) model construction and validation, 4) deployment and monitoring, and 5) accountability. The governance perspective on AI services is crucial for mitigating the risks associated with service provision, and research on its risk management aspects should be conducted. While embracing the advantages of AI, policymakers should take proactive measures to address its limitations and risks, and should strive to formulate policies that use AI technology efficiently to create high value and meaningful societal impact.

A Study on the Use and Risk of Artificial Intelligence (Focusing on the Property Appraisal Industry) (인공지능의 활용과 위험성에 관한 연구 (감정 평가 산업 중심으로))

  • Hong, Seok-Do;You, Yen-Yoo
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.7
    • /
    • pp.81-88
    • /
    • 2022
  • This study investigates domestic appraisers' perceptions of the potential uses of artificial intelligence (AI) in the appraisal industry and the risks related to its use. We conducted a mobile survey of appraisers from February 10 to 18, 2022, collecting data from 193 respondents. Frequency analysis and multiple response analysis were performed for the basic analysis, and factor analysis was used to analyze the types of risk arising when AI is used in the appraisal industry. Although appraisers have a positive perception of introducing AI into the appraisal industry, they identified collateral valuation, consulting, and taxation as the areas where AI is most likely to be used as a replacement, and were concerned mainly about negative effects related to job losses and job replacement. They were highly aware of the risk of AI substituting for human labor, and of risks concerning responsibility, privacy and security, and technical errors. However, fairness, transparency, and reliability were generally perceived as low-risk issues. Existing studies have mainly applied AI to mass appraisal models, whereas this study focused on the use and risks of AI. Understanding industry experts' perceptions of AI utilization will help minimize potential risks when AI is introduced on a large scale.

Artificial Intelligence Security Issues (인공지능 보안 이슈)

  • Park, Sohee;Choi, Daeseon
    • Review of KIISC
    • /
    • v.27 no.3
    • /
    • pp.27-32
    • /
    • 2017
  • Artificial intelligence technology, centered on machine learning, is being applied in diverse fields. Machine learning models have shown high performance on test data, but cases of malfunction on maliciously crafted data have been reported. Other new attack types, such as poisoning the training data and stealing trained models, have also been reported. The security and privacy of the training data used in machine learning are likewise important issues. Consideration of, and preparation for, these risks are essential in the development and application of AI technology.
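The "maliciously crafted data" failure mode this abstract refers to can be illustrated with a toy FGSM-style evasion sketch in numpy. The weights and input below are hypothetical, chosen only to show the mechanism: for a linear score the input gradient is the weight vector itself, so a small step against its sign flips a logistic classifier's decision.

```python
import numpy as np

# Toy logistic classifier with fixed, hypothetical weights (illustration only).
w = np.array([2.0, -3.0, 1.0])

def predict(x):
    # Probability of the positive class under a logistic model with zero bias.
    return 1 / (1 + np.exp(-(w @ x)))

x = np.array([0.5, 0.2, 0.1])   # clean input: score 0.5, classified positive
eps = 0.3                       # per-feature perturbation budget
# FGSM-style evasion: the gradient of the linear score w.r.t. x is w itself,
# so stepping each feature against sign(w) lowers the score as fast as possible
# within the budget.
x_adv = x - eps * np.sign(w)    # adversarial input: score -1.3, classified negative
```

The same idea, applied through the network gradient rather than a fixed weight vector, underlies the adversarial-example attacks on deep models that the abstract mentions.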

Analysis of Key Factors in Corporate Adoption of Generative Artificial Intelligence Based on the UTAUT2 Model

  • Yongfeng Hu;Haojie Jiang;Chi Gong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.7
    • /
    • pp.53-71
    • /
    • 2024
  • Generative Artificial Intelligence (AI) has become the focus of societal attention due to its wide range of applications and profound impact. This paper constructs a comprehensive theoretical model based on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), integrating variables such as Personal Innovativeness and Perceived Risk to study the key factors influencing enterprises' adoption of Generative AI. We employed Structural Equation Modeling (SEM) to verify the hypothesized paths and used the Bootstrapping method to test the mediating effect of Behavioral Intention. Additionally, we explored the moderating effect of Perceived Risk through Hierarchical Regression Analysis. The results indicate that Performance Expectancy, Effort Expectancy, Social Influence, Price Value, and Personal Innovativeness have significant positive impacts on Behavioral Intention. Behavioral Intention plays a significant mediating role between these factors and Use Behavior, while Perceived Risk negatively moderates the relationship between Behavioral Intention and Use Behavior. This study provides theoretical and empirical support for how enterprises can effectively adopt Generative AI, offering important practical implications.
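The bootstrap test of a mediating effect used in this study can be sketched in numpy. The data and path coefficients below are simulated assumptions (not the study's SEM estimates): the indirect effect a·b (predictor → mediator → outcome) is resampled, and mediation is supported when the percentile confidence interval excludes zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                        # e.g. performance expectancy (simulated)
m = 0.6 * x + rng.normal(scale=0.8, size=n)   # behavioral intention (mediator)
y = 0.5 * m + 0.1 * x + rng.normal(scale=0.8, size=n)  # use behavior

def indirect(idx):
    # a-path: x -> m ; b-path: m -> y controlling for x ; indirect effect = a * b
    a = np.polyfit(x[idx], m[idx], 1)[0]
    Xb = np.column_stack([np.ones(len(idx)), m[idx], x[idx]])
    b = np.linalg.lstsq(Xb, y[idx], rcond=None)[0][1]
    return a * b

# Percentile bootstrap of the indirect effect.
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
# Mediation is supported when the bootstrap CI excludes zero.
print(round(lo, 2), round(hi, 2))
```

A full SEM treatment (as in the study) would estimate all paths simultaneously and add fit indices, but the bootstrap CI on a·b is the core of the mediation test.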