• Title/Summary/Keyword: Explainable Artificial Intelligence (설명가능 인공지능)

A Proposal of Sensor-based Time Series Classification Model using Explainable Convolutional Neural Network

  • Jang, Youngjun;Kim, Jiho;Lee, Hongchul
    • Journal of the Korea Society of Computer and Information, v.27 no.5, pp.55-67, 2022
  • Sensor data can support fault diagnosis for equipment, but the cause of a diagnosed fault is rarely explained. In this study, we propose an explainable convolutional neural network framework for sensor-based time series classification. We used a sensor-based time series dataset acquired from vehicles equipped with sensors, the Wafer dataset acquired from a manufacturing process, and a Cycle Signal dataset acquired from real-world mechanical equipment. Scaling and jittering were used as data augmentation methods to train our deep learning models. The proposed classification models are convolutional neural network based models, FCN, 1D-CNN, and ResNet, which we evaluate against each other. Our experimental results show that ResNet provides promising results for time series classification, with accuracy and F1 score reaching 95%, a 3% improvement over the previous study. Furthermore, we apply the XAI methods Class Activation Map and Layer Visualization to interpret the results. These methods visualize the time series intervals that matter most for sensor data classification.
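
The Class Activation Map named in the abstract can be illustrated compactly. Below is a minimal sketch, not the paper's code: it assumes a CNN whose last convolutional layer feeds a global average pooling layer and a dense classifier, so a class's activation map is the channel activations weighted by that class's dense-layer weights. All shapes and names are illustrative.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights, target_class):
    """Compute a 1D Class Activation Map.

    feature_maps : (T, K) activations of the last conv layer
                   (T time steps, K channels), taken before
                   global average pooling.
    class_weights: (K, C) weights of the final dense layer.
    target_class : index of the class to explain.

    Returns a length-T importance curve over the time series.
    """
    cam = feature_maps @ class_weights[:, target_class]   # (T,)
    cam -= cam.min()                                      # rescale to [0, 1]
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy usage: 128 time steps, 64 conv channels, 2 classes.
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((128, 64))
weights = rng.standard_normal((64, 2))
cam = class_activation_map(fmaps, weights, target_class=1)
print(cam.shape, cam.argmax())  # the time step the model attends to most
```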

A Study on Efficient AI Model Drift Detection Methods for MLOps (MLOps를 위한 효율적인 AI 모델 드리프트 탐지방안 연구)

  • Ye-eun Lee;Tae-jin Lee
    • Journal of Internet Computing and Services, v.24 no.5, pp.17-27, 2023
  • As AI (Artificial Intelligence) technology develops and its practicality increases, it is widely used in many real-life application fields. An AI model is trained on the statistical properties of its training data and then deployed to a system, but unexpected changes in rapidly changing data degrade the model's performance. In the security field in particular, where deployed models must respond to constantly emerging, previously unknown attacks, detecting drift signals of deployed models is important, and the need for lifecycle management of the whole model is gradually emerging. In principle, drift can be detected through changes in the model's accuracy and error rate (loss), but this requires ground-truth labels for the model's predictions, which limits the usage environment, and it locates the actual onset of drift poorly: the error rate is strongly influenced by external environmental factors, model selection, parameter settings, and new input data, so the error rate alone cannot precisely determine when drift in the data actually occurs. This paper therefore proposes a method to detect when actual drift occurs through an anomaly analysis technique based on XAI (eXplainable Artificial Intelligence). In tests on a classification model that detects DGA (Domain Generation Algorithm) domains, anomaly scores were extracted from the SHAP (SHapley Additive exPlanations) values of post-deployment data, and the results confirmed that efficient detection of the drift point is possible.
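
A minimal sketch of the SHAP-based idea follows. The paper's code is not given here, so the model, features, and scoring rule are assumptions: drift is flagged when the SHAP attribution vectors of post-deployment samples move away from the centroid of the training-time SHAP distribution, using the `shap` library's TreeExplainer on a gradient-boosting classifier.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-in features: a training window and a later, drifted window.
X_train = rng.normal(0.0, 1.0, size=(500, 8))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_new = rng.normal(0.8, 1.3, size=(200, 8))      # shifted distribution

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
explainer = shap.TreeExplainer(model)
shap_train = explainer.shap_values(X_train)      # (n, n_features) attributions
shap_new = explainer.shap_values(X_new)

# Anomaly score: distance of each new sample's SHAP vector from the
# centroid of the training-time SHAP distribution.
centroid = shap_train.mean(axis=0)
train_scores = np.linalg.norm(shap_train - centroid, axis=1)
new_scores = np.linalg.norm(shap_new - centroid, axis=1)
threshold = np.percentile(train_scores, 99)      # calibrated on training data
print(f"flagged {np.mean(new_scores > threshold):.0%} of new samples as drifted")
```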

An Availability of Low Cost Sensors for Machine Fault Diagnosis

  • SON, JONG-DUK
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference, 2012.10a, pp.394-399, 2012
  • In recent years, MEMS sensors have attracted great interest for machine condition monitoring thanks to their advantages in power, size, cost, mobility, and flexibility. They can be integrated into smart sensors, and because MEMS sensors are batch-produced, they are inexpensive. This paper presents an experimental study of their suitability for condition monitoring: a comparative performance test of machine fault classification using MEMS sensor signals and three intelligent classifiers. We validate the accuracy and reliability of the MEMS sensor signals and compare the performance of the classifiers. A MEMS accelerometer and MEMS current sensors were employed in the tests, and simple feature extraction and cross-validation methods were applied to confirm the sensors' availability. The results show that low-cost MEMS sensors are usable for fault classification.
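
The "simple feature extraction and cross-validation" step can be sketched as follows. The paper's exact features and three classifiers are not named here, so the RMS/crest-factor/kurtosis features and the SVM, kNN, and random forest classifiers below are stand-ins, evaluated with scikit-learn's 5-fold cross-validation on synthetic vibration windows.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def features(window):
    """Common time-domain features for vibration signals:
    RMS, crest factor, and kurtosis."""
    rms = np.sqrt(np.mean(window ** 2))
    return [rms, np.max(np.abs(window)) / rms, kurtosis(window)]

# Synthetic stand-in: 'healthy' vs 'faulty' (spiky) vibration windows.
rng = np.random.default_rng(1)
healthy = [rng.normal(0, 1, 2048) for _ in range(100)]
faulty = [rng.normal(0, 1, 2048) + 3 * (rng.random(2048) < 0.01)
          for _ in range(100)]
X = np.array([features(w) for w in healthy + faulty])
y = np.array([0] * 100 + [1] * 100)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```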

Evaluation of Data-based Expansion Joint-gap for Digital Maintenance (디지털 유지관리를 위한 데이터 기반 교량 신축이음 유간 평가)

  • Jongho Park;Yooseong Shin
    • Journal of the Korea Institute for Structural Maintenance and Inspection, v.28 no.2, pp.1-8, 2024
  • The expansion joint is installed to accommodate the expansion of the superstructure and must ensure a sufficient gap throughout its service life. The detailed guidelines for bridge safety inspection and precise safety diagnosis specify damage due to insufficient or excessive gap, but standards for judging abnormal behavior of the superstructure are lacking. In this study, a data-based maintenance approach was proposed by continuously monitoring the gap of the same expansion joint. A total of 2,756 data points were collected from 689 expansion joints, taking seasonal effects into account. We developed a method to evaluate changes in the expansion joint gap that can analyze thermal movement from four or more measurements at the same location, classified the factors that affect superstructure behavior, and analyzed the influence of each factor through deep learning and explainable artificial intelligence (AI). Abnormal superstructure behavior was classified into narrowing and functional failure using the expansion joint-gap evaluation graph. The influence-factor analysis using deep learning and explainable AI is considered reliable because its results can be explained by the existing expansion gap calculation formula and bridge design.
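
The thermal-movement check behind this evaluation can be sketched with the standard expansion formula ΔL = α·L·ΔT: the gap measured at the same joint across seasons should shrink roughly linearly as temperature rises. The sketch below is an illustration under stated assumptions (a typical concrete expansion coefficient, made-up measurements), not the paper's method.

```python
import numpy as np

ALPHA_CONCRETE = 1.0e-5   # thermal expansion coefficient (1/degC), typical value

def joint_gap_slope(temps_c, gaps_mm):
    """Fit gap = a*T + b over repeated inspections of one joint.

    The superstructure expands as temperature rises, so the joint gap
    should narrow: the fitted slope should be close to -alpha * L,
    where L is the expansion length in mm."""
    slope, intercept = np.polyfit(temps_c, gaps_mm, 1)
    return slope, intercept

# Four seasonal measurements of one joint (illustrative numbers).
temps = np.array([-5.0, 8.0, 18.0, 30.0])    # air temperature, degC
gaps = np.array([52.0, 48.5, 45.9, 42.8])    # measured gap, mm
slope, _ = joint_gap_slope(temps, gaps)

L = 30_000.0                                  # assumed expansion length, mm
expected = -ALPHA_CONCRETE * L                # theoretical slope, mm/degC
print(f"measured {slope:.3f} mm/degC vs expected {expected:.3f} mm/degC")
if abs(slope) < 0.5 * abs(expected):
    print("possible functional failure (joint barely moves)")
```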

A Case Study on the Effect of the Artificial Intelligence Storytelling(AI+ST) Learning Method (인공지능 스토리텔링(AI+ST) 학습 효과에 관한 사례연구)

  • Yeo, Hyeon Deok;Kang, Hye-Kyung
    • Journal of The Korean Association of Information Education, v.24 no.5, pp.495-509, 2020
  • This study is a theoretical investigation of ways to learn AI effectively in the intelligent-information age driven by artificial intelligence (hereinafter AI). The emphasis is on presenting a teaching method that makes AI education accessible not only to students majoring in mathematics, statistics, or computer science, but also to other majors such as the humanities and social sciences and to the general public. Given the need for 'Explainable AI (XAI: eXplainable AI)' and 'the importance of storytelling for a sensible and intelligent machine (AI)' noted by Patrick Winston at the MIT AI Institute [33], research on an AI storytelling learning model is significant. To this end, we discuss its possibility through a pilot study targeting general students of a university in Daegu. First, we introduce the AI storytelling (AI+ST) learning method [30] and review its educational goals, system of contents, learning methodology, and use of new AI tools. Then the learners' results are compared and analyzed, focusing on two research questions: 1) Can the AI+ST learning method complement algorithm-driven or developer-centered learning methods? 2) Is the AI+ST learning method effective for students, helping them develop their AI comprehension, interest, and application skills?

SIEM System Performance Enhancement Mechanism Using Active Model Improvement Feedback Technology (능동형 모델 개선 피드백 기술을 활용한 보안관제 시스템 성능 개선 방안)

  • Shin, Youn-Sup;Jo, In-June
    • The Journal of the Korea Contents Association, v.21 no.12, pp.896-905, 2021
  • In the field of SIEM (security information and event management), many studies try to use a feedback system to address the incompleteness of training data and the false positives on new attack events that occur in actual operation. However, current feedback systems require too much human input to improve a running model, and feedback from inexperienced analysts can even degrade model performance. We therefore propose an "active model-improving feedback technology" to address the shortage of security analysts, rising false positive rates, and degrading model performance. First, we cluster similar predicted events during operation, calculate feedback priorities for those clusters, and select and present representative events from the highly prioritized clusters using XAI (eXplainable AI)-based event visualization. Once these events receive feedback, we exclude dissimilar events and propagate the feedback throughout the clusters. Finally, the events are incrementally trained into the existing model. To verify the effectiveness of our proposal, we compared three distinct scenarios using PKDD2007 and CSIC2012. As a result, our proposal achieved 30% higher performance on all indicators compared to both a model with no feedback and the current feedback system.
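
The cluster-then-prioritize step can be sketched as follows. The paper's exact clustering algorithm and priority formula are not given here, so this illustration assumes k-means over character-n-gram TF-IDF vectors of event payloads and a priority of cluster size times mean model uncertainty, with the representative event chosen nearest the centroid.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy predicted-attack events (HTTP payloads) with model confidences.
events = [
    "GET /index.php?id=1 UNION SELECT password FROM users",
    "GET /index.php?id=2 UNION SELECT * FROM accounts",
    "GET /search?q=<script>alert(1)</script>",
    "GET /search?q=<img src=x onerror=alert(2)>",
    "GET /login?user=admin'--",
    "GET /home",
]
confidence = np.array([0.62, 0.58, 0.91, 0.88, 0.55, 0.51])

X = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(events)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Priority: big clusters the model is unsure about go to analysts first.
for c in range(3):
    idx = np.where(km.labels_ == c)[0]
    uncertainty = 1.0 - confidence[idx].mean()
    priority = len(idx) * uncertainty
    # Representative event: the member closest to the cluster centroid.
    d = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
    rep = events[idx[d.argmin()]]
    print(f"cluster {c}: priority={priority:.2f}, representative={rep!r}")
```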

Study on the Selection of Optimal Operation Position Using AI Techniques (인공지능 기법에 의한 최적 운항자세 선정에 관한 연구)

  • Dong-Woo Park
    • Journal of the Korean Society of Marine Environment & Safety, v.29 no.6, pp.681-687, 2023
  • The optimal operation position selection technique is used to present the initial bow and stern draft with minimum resistance, that is, optimal fuel consumption efficiency, at a given operating displacement and speed. The main purpose of this study is to develop a program that selects the optimal operating position with maximum energy efficiency under given operating conditions, based on the effective power data of the target ship. The program was written as a Python-based GUI (Graphical User Interface) using artificial intelligence techniques so that ship owners can use it easily. The paper explains the target ship, the collection of effective power data through computational fluid dynamics (CFD), the training of the effective power model using deep learning, and the program that presents the optimal operation position using the deep neural network (DNN) model. Ships are loaded and unloaded on each voyage, which changes the cargo load and hence the displacement. The shipowner wants to know the optimal operating position with minimum resistance, that is, maximum energy efficiency, for the given speed at each displacement. The developed GUI can be installed on the ship's tablet PC and used to determine the optimal operating position.
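
A minimal sketch of such a program follows. The CFD data, network shape, and toy resistance model are assumptions; the point is the workflow the abstract describes: fit a regression model to effective-power samples, then grid-search the trim with minimum predicted power at a given draft and speed. scikit-learn's MLPRegressor stands in for the paper's DNN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for CFD effective-power samples: mean draft (m),
# trim = aft draft - fore draft (m), and speed (knots).
n = 2000
mean_draft = rng.uniform(5.0, 8.0, n)
trim = rng.uniform(-1.5, 1.5, n)
speed = rng.uniform(10.0, 16.0, n)
# Toy resistance model: power ~ speed^3, penalized away from an
# optimum trim that shifts slightly with draft.
power = speed**3 * (1 + 0.02 * (trim - 0.1 * (mean_draft - 6.5)) ** 2)
X = np.column_stack([mean_draft, trim, speed])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
).fit(X, power)

def best_trim(mean_draft_m, speed_kn):
    """Grid-search the trim with minimum predicted effective power
    at a fixed displacement (represented here by the mean draft)."""
    trims = np.linspace(-1.5, 1.5, 61)
    grid = np.column_stack([
        np.full_like(trims, mean_draft_m), trims, np.full_like(trims, speed_kn)
    ])
    pred = model.predict(grid)
    return trims[pred.argmin()], pred.min()

t, p = best_trim(6.5, 14.0)
print(f"recommended trim: {t:+.2f} m (predicted effective power {p:.0f})")
```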

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems, v.9 no.12, pp.291-306, 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the Internet of Things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises major privacy concerns. Edge computing, where distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve user privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures for deep learning models across multiple nodes of the edge computing platform, presents privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful, and finally discusses open issues and challenges in leveraging deep learning within edge computing.
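
Among the inference architectures such a review covers is split (device-edge) inference: the first layers run on the resource-poor IoT end device, and only a compact intermediate activation crosses the network to the edge node, which runs the heavier layers. The numpy sketch below is a toy illustration of this idea under assumed layer sizes, not an architecture taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights of a 3-layer MLP, split after the first layer.
W1 = rng.standard_normal((32, 16)); b1 = np.zeros(16)
W2 = rng.standard_normal((16, 16)); b2 = np.zeros(16)
W3 = rng.standard_normal((16, 4));  b3 = np.zeros(4)

def device_part(x):
    """Runs on the IoT end device: the first layer only; the smaller
    intermediate activation is what gets sent to the edge node."""
    return np.maximum(x @ W1 + b1, 0.0)

def edge_part(h):
    """Runs on the edge node: the remaining, heavier layers."""
    h = np.maximum(h @ W2 + b2, 0.0)
    logits = h @ W3 + b3
    return logits.argmax(axis=-1)

x = rng.standard_normal((1, 32))   # raw sensor reading
h = device_part(x)                 # 16 floats cross the network, not 32
print("bytes sent:", h.nbytes, "-> prediction:", edge_part(h))
```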

Solving the Monkey and Banana Problem Using DNA Computing (DNA 컴퓨팅을 이용한 원숭이와 바나나 문제 해결)

  • 박의준;이인희;장병탁
    • Korean Journal of Cognitive Science, v.14 no.2, pp.15-25, 2003
  • The Monkey and Banana Problem is a common example for illustrating simple problem solving. It can be solved by conventional approaches, but these process inferences procedurally, which becomes a limiting condition when solving complex problems. DNA computing methods, by contrast, naturally realize massive parallel processing, so the Monkey and Banana Problem can be solved effectively without weakening the fundamental aims above. In this paper, we design a representation of the problem using DNA molecules and show through computer simulations based on this design that various solutions are generated. The simulation results are interesting precisely because they contrast with the conventional Prolog implementation of the Monkey and Banana Problem, which yields only one optimal solution. That is, DNA computing overcomes the limitations of conventional approaches.
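
The generate-and-filter principle of DNA computing can be mimicked in silico: synthesize a large random pool of candidate "strands" (here, action sequences) and extract those satisfying the goal, which is how multiple distinct solutions appear at once. The world model and action names below are illustrative, not the paper's molecular encoding.

```python
import random

ACTIONS = ("walk_to_box", "push_box_under_banana", "climb_box", "grasp_banana")

def succeeds(plan):
    """Execute an action sequence against a tiny world model and
    report whether the monkey ends up holding the banana."""
    at_box = box_under_banana = on_box = has_banana = False
    for action in plan:
        if action == "walk_to_box":
            at_box, on_box = True, False
        elif action == "push_box_under_banana" and at_box and not on_box:
            box_under_banana = True
        elif action == "climb_box" and at_box:
            on_box = True
        elif action == "grasp_banana" and on_box and box_under_banana:
            has_banana = True
    return has_banana

# Generate-and-filter, as in DNA computing: create a large random pool
# of candidate strands, then extract every strand meeting the goal.
random.seed(0)
pool = {tuple(random.choices(ACTIONS, k=5)) for _ in range(100_000)}
solutions = sorted(p for p in pool if succeeds(p))
print(f"{len(solutions)} distinct successful plans, e.g. {solutions[0]}")
```

Unlike a depth-first Prolog search that stops at its first proof, the filtered pool retains every successful plan, mirroring the abstract's observation that the DNA approach yields various solutions.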

A study on integration of semantic topic based Knowledge model (의미적 토픽 기반 지식모델의 통합에 관한 연구)

  • Chun, Seung-Su;Lee, Sang-Jin;Bae, Sang-Tea
    • Proceedings of the Korean Information Science Society Conference, 2012.06b, pp.181-183, 2012
  • Recently, methods for efficiently generating and analyzing semantic knowledge models using natural language and formal language processing and artificial intelligence algorithms have been proposed. Such semantic knowledge models are used for efficient decision making trees and for systematic problem solving path analysis in specific situations. In particular, for the analysis of various complex systems and social networks, they form the basis of simulation models that support static indicator generation, regression analysis, trend analysis through behavioral models, and macro forecasting. In this study, we present a method and a formal algorithm for integrating topic models derived through text mining into such semantic knowledge models. To this end, we first explain how keyword maps derived from text mining are converted into equivalent knowledge maps and integrated into a semantic knowledge model. We also propose a method for projecting meaningful topic maps from keyword maps and an algorithm for deriving semantically equivalent models. The integrated semantic knowledge model enables relational semantic analysis, such as structural rules between topics and degree, closeness, and betweenness centrality, and can serve as practical foundational research for the semantic analysis and utilization of large-scale unstructured documents.
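
The integration and centrality analysis described above can be sketched with a graph library. The sketch below assumes topic maps are simple co-occurrence graphs and that topics sharing a label are semantically equivalent merge points; it then computes the degree, closeness, and betweenness centralities the abstract mentions, using networkx. The topic labels are made up for illustration.

```python
import networkx as nx

# Two topic maps extracted from different document sets; edges link
# co-occurring topics. Shared topic labels act as merge points.
g1 = nx.Graph([("AI", "deep learning"), ("AI", "XAI"), ("XAI", "SHAP")])
g2 = nx.Graph([("AI", "edge computing"), ("edge computing", "IoT"),
               ("deep learning", "IoT")])

# Integration: union of nodes and edges; topics with the same label
# are merged automatically as semantically equivalent.
merged = nx.compose(g1, g2)

# Relational analysis of the integrated knowledge model.
for name, cent in [("degree", nx.degree_centrality(merged)),
                   ("closeness", nx.closeness_centrality(merged)),
                   ("betweenness", nx.betweenness_centrality(merged))]:
    top = max(cent, key=cent.get)
    print(f"{name:12s} top topic: {top} ({cent[top]:.2f})")
```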