• Title/Summary/Keyword: AI Security

Self-Driving and Safety Security Response: Convergence Strategies in the Semiconductor and Electronic Vehicle Industries

  • Dae-Sung Seo
    • International journal of advanced smart convergence
    • /
    • v.13 no.2
    • /
    • pp.25-34
    • /
    • 2024
  • The paper investigates how the semiconductor and electric vehicle industries are addressing safety and security concerns in the era of autonomous driving, emphasizing the prioritization of safety over security for market competitiveness. Collaboration between these sectors is deemed essential for maintaining competitiveness and value. The research suggests solutions such as advanced autonomous driving technologies and enhanced battery safety measures, with the integration of AI chips playing a pivotal role. However, challenges persist, including the limitations of big data and the potential for semiconductor-related errors. Legacy automotive manufacturers are transitioning toward software-driven cars, leveraging artificial intelligence to mitigate safety and security risks. Conflicting safety expectations and security concerns can lead to accidents, underscoring the continuous need for safety improvements. We analyzed the expansion of electric vehicles as a means to enhance safety within a framework of converging security concerns, with AI chips instrumental in this process. Ultimately, the paper advocates for informed safety and security decisions to drive technological advancement in electric vehicles, ensuring significant strides in safety innovation.

A Study on the Improvement of Domestic Policies and Guidelines for Secure AI Services (안전한 AI 서비스를 위한 국내 정책 및 가이드라인 개선방안 연구)

  • Jiyoun Kim;Byougjin Seok;Yeog Kim;Changhoon Lee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.975-987
    • /
    • 2023
  • With the advancement of Artificial Intelligence (AI) technologies, the provision of data-driven AI services that enable automation and intelligence is increasing across industries, raising concerns about the AI security risks that may arise from their use. Accordingly, foreign countries recognize the need for and importance of AI regulation and are focusing on developing related policies and regulations. This movement is also underway in Korea, where AI regulations have not yet been specified, so it is necessary to compare and analyze existing policy proposals and guidelines to derive common factors, identify complementary points, and discuss the direction of domestic AI regulation. In this paper, we investigate the AI security risks that may arise across the AI life cycle and, through analysis of each risk, derive six points to be considered in establishing domestic AI regulations. Based on this, we analyze AI policy proposals and recommendations in Korea and identify additional issues. Finally, drawing on a review of the main content of AI laws in the US and EU together with this paper's analysis, we propose measures to improve domestic guidelines and policies in the field of AI.

A Study on Effective Interpretation of AI Model based on Reference (Reference 기반 AI 모델의 효과적인 해석에 관한 연구)

  • Hyun-woo Lee;Tae-hyun Han;Yeong-ji Park;Tae-jin Lee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.3
    • /
    • pp.411-425
    • /
    • 2023
  • Today, AI (Artificial Intelligence) technology is widely used in various fields, performing classification and regression tasks according to the purpose of use, and research is also actively progressing. In the security field in particular, unexpected threats must be detected, and unsupervised learning-based anomaly detection techniques, which can detect threats without adding known threat information to the model training process, are promising methods. However, most preceding studies that provide interpretability for AI judgments are designed for supervised learning, so they are difficult to apply to unsupervised learning models with fundamentally different learning methods. In addition, previous vision-centered studies of AI mechanism interpretation are not suitable for the security field, whose data is not expressed as images. Therefore, in this paper, we use a technique that provides interpretability for detected anomalies by searching for and comparing optimized references corresponding to the sources of intrusion attacks, and we propose additional logic that searches for the reference closest to the real data. By grounding the comparison in real data, the method aims to provide a more intuitive interpretation of anomalies and to promote effective use of anomaly detection models in the security field. A rough sketch of this reference-based idea follows.
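
The reference-based interpretation the abstract describes can be illustrated in rough outline: explain a detected anomaly by retrieving the normal sample nearest to it and comparing the two feature by feature. Below is a minimal sketch under assumed conditions (tabular feature vectors and plain L2 distance; the helper names find_reference and explain_anomaly are hypothetical, not from the paper):

    import numpy as np

    def find_reference(anomaly, normal_data):
        """Return the normal sample closest to the anomaly under L2 distance."""
        dists = np.linalg.norm(normal_data - anomaly, axis=1)
        idx = int(np.argmin(dists))
        return normal_data[idx], float(dists[idx])

    def explain_anomaly(anomaly, reference, feature_names, top_k=5):
        """Rank features by how much they separate the anomaly from its reference."""
        gap = np.abs(anomaly - reference)
        order = np.argsort(gap)[::-1][:top_k]
        return [(feature_names[i], float(gap[i])) for i in order]

The per-feature gaps then serve as a real-data-grounded account of why the sample was flagged.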

Cybersecurity Development Status and AI-Based Ship Network Security Device Configuration for MASS

  • Yunja Yoo;Kyoung-Kuk Yoon;David Kwak;Jong-Woo Ahn;Sangwon Park
    • Journal of Navigation and Port Research
    • /
    • v.47 no.2
    • /
    • pp.57-65
    • /
    • 2023
  • In 2017, the International Maritime Organization (IMO) adopted MSC.428(98), which recommends establishing a cyber-risk management system in Ship Safety Management Systems (SMSs) from January 2021. The 27th conference of the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA) also discussed prioritizing cyber-security (cyber-risk management) in developing systems to support Maritime Autonomous Surface Ship (MASS) operations (IALA guideline on developments in maritime autonomous surface ships). In response to these international discussions, Korea initiated the Korea Autonomous Surface Ship technology development project (KASS project) in 2020 and has been carrying out detailed tasks for cybersecurity technology development since 2021. This paper outlines the basic concept of ship network security equipment for supporting MASS operation within the detailed cybersecurity technology development task, and defines the ship network security equipment interface for MASS applications.

A Study on Countermeasures Against Adversarial Attacks on AI Models (AI 모델의 적대적 공격 대응 방안에 대한 연구)

  • Jae-Gyung Park;Jun-Seo Chang
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.619-620
    • /
    • 2023
  • This paper studies the adversarial attacks to which AI models can be exposed. As AI chatbots have been exposed to adversarial attacks, a number of security breach cases have recently occurred. In response, this paper investigates what adversarial attacks are and examines ways to respond to them or defend against them in advance. It surveys four types of adversarial attack together with countermeasures, and emphasizes the importance of AI model security. The paper concludes that further countermeasures should be investigated so that such adversarial attacks can be defended against; one representative attack is illustrated below.
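
The abstract does not name the four attack types it surveys; as a concrete illustration of one widely known adversarial attack, the Fast Gradient Sign Method (FGSM), here is a minimal PyTorch-style sketch (an assumed example for orientation, not code from the paper):

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        """Fast Gradient Sign Method: one gradient-sign step that increases the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + eps * x_adv.grad.sign()  # perturb toward higher loss
        return x_adv.clamp(0, 1).detach()        # keep inputs in a valid range

Defenses such as adversarial training work by folding examples like these back into the training set.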

Top-Level Implementation of AI4SE, SE4AI for the AI-SE convergence in the Defense Acquisition (무기체계 획득에서 인공지능-시스템엔지니어링 융화를 위한 최상위 수준의 AI4SE, SE4AI 구현방안)

  • Min Woo Lee
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.19 no.2
    • /
    • pp.135-144
    • /
    • 2023
  • Artificial Intelligence (AI) is a prominent topic in almost every field. In Korea, Systems Engineering (SE) procedures are applied in defense acquisition, and it is anticipated that SE procedures will also be applied to systems incorporating AI capabilities. This study explores the applicability of the concepts "AI4SE (AI for SE)" and "SE4AI (SE for AI)," which have been proposed in the United States, to the Korean context; it examines the feasibility of applying these concepts, identifies necessary tasks, and proposes implementation strategies. For AI4SE, many attempts and studies apply AI to SE processes spanning requirements and architecture definition, system implementation and V&V, and sustainment, and these applications require explainability and security. For SE4AI, the functional AI implementation level, the quality and security of the dataset, AI ethics, and review policies are needed. Furthermore, the paper provides perspectives on how these two concepts should ultimately converge and suggests future directions for development.

A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models (AI 모델의 Robustness 향상을 위한 효율적인 Adversarial Attack 생성 방안 연구)

  • Si-on Jeong;Tae-hyun Han;Seung-bum Lim;Tae-jin Lee
    • Journal of Internet Computing and Services
    • /
    • v.24 no.4
    • /
    • pp.25-36
    • /
    • 2023
  • Today, as AI (Artificial Intelligence) technology is introduced in various fields, including security, technology development is accelerating. However, alongside the development of AI technology, attack techniques that cleverly bypass malicious-behavior detection are also developing. In the classification process of AI models, adversarial attacks have emerged that induce misclassification and reduced reliability through fine adjustments to input values. The attacks that appear in the future will often be not new attacks created from scratch but methods of evading detection systems by slightly modifying existing attacks, as adversarial attacks do, so developing a robust model that can respond to these malware variants is necessary. In this paper, we propose two efficient adversarial attack generation techniques for improving the robustness of AI models: an XAI-based attack that uses XAI techniques, and a reference-based attack that searches the model's decision boundary. We then constructed a classification model on a malicious-code dataset to compare performance against PGD, one of the existing adversarial attacks. In terms of generation speed, the XAI-based and reference-based attacks take 0.35 seconds and 0.47 seconds, respectively, versus 20 minutes for the existing PGD attack, and the reference-based attack achieves a 97.7% generation success rate, higher than PGD's 75.5%. The proposed techniques therefore enable more efficient adversarial attack generation and are expected to contribute to future research on building robust AI models; the PGD baseline is sketched below.
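
For context on that baseline: PGD is essentially FGSM iterated, with each step projected back into a small neighborhood of the original input. A minimal PyTorch-style sketch (hyperparameter values are illustrative assumptions, not the paper's settings):

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
        """Projected Gradient Descent: repeated gradient-sign steps, each projected
        back into the L-infinity ball of radius eps around the original input."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)  # project
        return x_adv

The iteration count is what makes PGD comparatively slow, which is exactly the gap the paper's faster XAI-based and reference-based generators target.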

The Empirical Analysis of Factors Affecting the Intention of College Students to Use Generative AI Services (대학생의 생성형 AI 서비스 이용의도에 영향을 미치는 요인에 대한 실증분석)

  • Chang, Soo-jin;Chung, Byoung-gyu
    • Journal of Venture Innovation
    • /
    • v.6 no.4
    • /
    • pp.153-170
    • /
    • 2023
  • Generative AI services, including ChatGPT, are becoming increasingly widespread. This study empirically analyzed the factors that promote and hinder the diffusion of such services from a consumer perspective. A research model was developed based on the Value-based Adoption Model (VAM) framework, addressing both benefit and sacrifice factors: the benefits identified were usefulness and enjoyment, and the sacrifices were security and hallucination. The study analyzed how these factors affect the intention to use generative AI services. A survey of college students was conducted for the empirical analysis, and 200 valid responses were analyzed using structural equation modeling with AMOS 24. The results showed that usefulness and enjoyment had a significant positive impact on perceived value, while security and hallucination had a significant negative impact; the order of influence on perceived value was usefulness, hallucination, security, and then enjoyment. Perceived value, in turn, had a significant positive impact on usage intention and was found to mediate the relationships between the four factors and the intention to use generative AI services (the structural relations are summarized below). These findings expand the research horizon academically by validating generative AI services against an established adoption model and demonstrate the continued practical importance of usefulness.
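
In equation form, the model the abstract describes reduces to two structural relations (the symbols below are my own shorthand, not the paper's notation): perceived value (PV) as a function of the two benefit and two sacrifice factors, and usage intention (UI) as a function of perceived value.

    \begin{aligned}
    \mathrm{PV} &= \beta_1\,\mathrm{Usefulness} + \beta_2\,\mathrm{Enjoyment} + \beta_3\,\mathrm{Security} + \beta_4\,\mathrm{Hallucination} + \varepsilon_1 \\
    \mathrm{UI} &= \gamma\,\mathrm{PV} + \varepsilon_2
    \end{aligned}

The reported results correspond to β1, β2 > 0 and β3, β4 < 0 with γ > 0, and to PV mediating the four factors' effects on UI.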

Autonomous Vehicles as Safety and Security Agents in Real-Life Environments

  • Al-Absi, Ahmed Abdulhakim
    • International journal of advanced smart convergence
    • /
    • v.11 no.2
    • /
    • pp.7-12
    • /
    • 2022
  • Safety and security are the topmost priorities in every environment. With the aid of Artificial Intelligence (AI), many objects are becoming more intelligent, conscious, and curious about their surroundings. Recent scientific breakthroughs in autonomous vehicle design and development, powered by AI, networks of sensors, and the rapid growth of the Internet of Things (IoT), could be utilized to maintain safety and security in our environments. AI based on deep learning architectures and models, such as Deep Neural Networks (DNNs), is being applied worldwide in automotive fields like computer vision, natural language processing, sensor fusion, object recognition, and autonomous driving projects. These features are well known for their identification, detection, and tracking abilities. With sensors, cameras, GPS, RADAR, LIDAR, and on-board computers embedded in many of the autonomous vehicles being developed, these vehicles can properly map their positions and proximity to everything around them. In this paper, we explored in detail several ways in which the capabilities embedded in these autonomous vehicles, such as sensor-fusion networks, computer vision and image processing, natural language processing, and activity awareness, could be tapped and utilized to safeguard our lives and environment.

Improving Adversarial Robustness via Attention (Attention 기법에 기반한 적대적 공격의 강건성 향상 연구)

  • Jaeuk Kim;Myung Gyo Oh;Leo Hyun Park;Taekyoung Kwon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.4
    • /
    • pp.621-631
    • /
    • 2023
  • Adversarial training improves the robustness of deep neural networks against adversarial examples. However, previous adversarial training methods focus only on the adversarial loss function, ignoring the fact that even a small perturbation of the input layer causes a significant change in the hidden-layer features. Consequently, the accuracy of a defended model is reduced in various untrained situations, such as clean samples or other attack techniques, so an architectural perspective on improving feature representation power is necessary to solve this problem. In this paper, we apply an attention module that generates an attention map of the input image to a general model and perform PGD adversarial training on the augmented model. In our experiments on the CIFAR-10 dataset, the attention-augmented model showed higher accuracy than the general model regardless of the network structure. In particular, the robust accuracy of our approach was consistently higher for various attacks such as PGD, FGSM, and BIM, as well as against more powerful adversaries. By visualizing the attention map, we further confirmed that the attention module extracts features of the correct class even for adversarial examples. A minimal sketch of such an attention module follows.
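
The abstract does not specify the attention architecture used; as a rough sketch of the general idea, the following minimal PyTorch-style spatial-attention module (the class name, kernel size, and placement are assumptions, not the paper's design) generates a map from the feature tensor and reweights the features by it:

    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        """Generate a per-location attention map and reweight the feature map by it."""
        def __init__(self, channels):
            super().__init__()
            self.conv = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

        def forward(self, x):
            attn = torch.sigmoid(self.conv(x))  # (B, 1, H, W) map in [0, 1]
            return x * attn                     # emphasize attended regions

A module like this is dropped into an ordinary backbone, the augmented model is trained with PGD adversarial training as usual, and visualizing attn is what allows checking that the model attends to class-relevant regions even under attack.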