• Title/Summary/Keyword: Explainable Artificial Intelligence (설명가능한 인공지능)


Crowd counting based on Deep Learning (딥러닝 기반 인원 계수 방안)

  • Sim, Gun-Wu;Sohn, Jung-Mo;Kang, Gun-Ha
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.17-20
    • /
    • 2021
  • This study applies a deep learning algorithm to crowd counting. Crowd counting can be applied in safety management and in commerce. For example, when a fire breaks out in a building, the counted number of occupants can be used to minimize casualties. As another example, commercial districts can be analyzed from foot-traffic data to maximize economic efficiency. As the importance of such data grows, crowd counting research is also active; examples include deep learning-based counting such as object detection and sensor-based counting. In this study, people were counted using the deep learning algorithm VGGNet, yielding a Mean Absolute Percentage Error (MAPE) of about 5.9%. To examine the results, Grad-CAM, one of the explainable AI (XAI) algorithms, was applied.

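The pipeline above pairs a VGGNet regressor with Grad-CAM for inspecting what the counter looks at. Below is a minimal Grad-CAM sketch on a torchvision VGG16, assuming a single scalar output to explain; the untrained stand-in model, the random input tensor, and the choice of the top logit as the target score are illustrative, not the authors' counting head.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Untrained stand-in; in practice, load the fine-tuned counting model.
model = models.vgg16(weights=None).eval()
target_layer = model.features[28]  # last Conv2d layer in VGG16

activations, gradients = {}, {}
target_layer.register_forward_hook(
    lambda m, i, o: activations.update(v=o.detach()))
target_layer.register_full_backward_hook(
    lambda m, gi, go: gradients.update(v=go[0].detach()))

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed crowd image
score = model(x).max()            # scalar to explain (e.g., predicted count)
model.zero_grad()
score.backward()

weights = gradients["v"].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
cam = F.relu((weights * activations["v"]).sum(dim=1))    # weighted activations
cam = cam / (cam.max() + 1e-8)                           # saliency map in [0, 1]
```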

BERT-based Two-Stage Classification Models for Alzheimer's Disease and Schizophrenia Diagnosis (BERT 기반 2단계 분류 모델을 이용한 알츠하이머병 치매와 조현병 진단)

  • Jung, Min-Kyo;Na, Seung-Hoon;Kim, Ko Woon;Shin, Byong-Soo;Chung, Young-Chul
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.558-563
    • /
    • 2021
  • We propose two-stage classification models for diagnosing Alzheimer's disease and schizophrenia. We combined classification based on the perplexity difference between paired language models for the speech of control and patient groups with classification based on fine-tuning a single BERT model. The perplexity-based classification performed well for both Alzheimer's disease and schizophrenia, and the performance of the schizophrenia classification model improved slightly with the combination. Further performance gains can be expected from applying explainable AI techniques in future work.

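The first stage above classifies by the perplexity gap between paired language models. Since BERT is a masked LM, its perplexity is usually computed as pseudo-perplexity; the sketch below shows that computation, with the base multilingual checkpoint standing in for the paper's two models fine-tuned on control and patient speech, and the zero threshold purely illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def pseudo_perplexity(text, model, tokenizer):
    """Mask each token in turn and average the negative log-likelihood."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    nlls = []
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        nlls.append(-torch.log_softmax(logits, dim=-1)[ids[i]])
    return torch.exp(torch.stack(nlls).mean()).item()

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
# Placeholders: the paper would use two LMs fine-tuned on control/patient speech.
control_lm = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
patient_lm = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

utterance = "The patient described the picture in short sentences."
diff = (pseudo_perplexity(utterance, control_lm, tok)
        - pseudo_perplexity(utterance, patient_lm, tok))
label = "patient" if diff > 0 else "control"   # threshold is illustrative
print(label, diff)
```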

Development of a real-time prediction model for intraoperative hypotension using Explainable AI and Transformer (Explainable AI와 Transformer를 이용한 수술 중 저혈압 실시간 예측 모델 개발)

  • EunSeo Jung;Sang-Hyun Kim;Jiyoung Woo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.35-36
    • /
    • 2024
  • Intraoperative hypotension under general anesthesia causes various complications, so predicting it in advance and responding is critically important. In this study, we select variables with SHAP and predict the occurrence of hypotension with a Transformer model to support clinical decision-making. Unlike previous studies, the model is built on data routinely collected in the operating room and therefore has high generality. It achieved an RMSE of 9.46 and a MAPE of 4.4% for non-invasive blood pressure prediction, and in hypotension classification it achieved a strong F1-score of 0.75 for the hypotension class.

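A hedged sketch of SHAP-driven variable selection as described above: a tree-based surrogate is one common way to obtain SHAP rankings, but the paper's exact model and features are not given here, so the synthetic data, the gradient-boosting surrogate, and the choice of five variables are all assumptions.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 12))                      # stand-in intraoperative signals
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)      # stand-in hypotension label

surrogate = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(surrogate)
shap_values = explainer.shap_values(X)         # (n_samples, n_features)

importance = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per variable
top_k = np.argsort(importance)[::-1][:5]       # keep the strongest variables
X_selected = X[:, top_k]                       # input for the Transformer model
print("selected feature indices:", top_k)
```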

An Availability of Low Cost Sensors for Machine Fault Diagnosis

  • SON, JONG-DUK
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2012.10a
    • /
    • pp.394-399
    • /
    • 2012
  • In recent years, MEMS sensors have attracted great interest in machine condition monitoring, owing to their advantages in power, size, cost, mobility, and flexibility. They can be integrated with smart sensors, and because MEMS sensors are batch-produced, they are inexpensive. Their suitability for condition monitoring is investigated here by experimental study. This paper presents a comparative study and performance test of MEMS sensors for machine fault classification using three intelligent classifiers. We validate the accuracy and reliability of MEMS sensor signals and compare classifier performance. A MEMS accelerometer and MEMS current sensors were employed in the experiments. In addition, simple feature extraction and cross-validation methods were applied to confirm the availability of MEMS sensors. The results show that MEMS sensors are well suited to fault classification.

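As a rough illustration of the comparative test above, the following compares three off-the-shelf classifiers with k-fold cross-validation; the synthetic feature matrix and the specific classifier choices stand in for the paper's MEMS vibration/current features and its three intelligent classifiers.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 4))            # e.g., RMS, kurtosis, crest factor, peak
y = rng.integers(0, 3, size=200)    # e.g., normal / unbalance / misalignment

for name, clf in [("SVM", SVC()),
                  ("k-NN", KNeighborsClassifier()),
                  ("MLP", MLPClassifier(max_iter=1000))]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```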

Evaluation of Data-based Expansion Joint-gap for Digital Maintenance (디지털 유지관리를 위한 데이터 기반 교량 신축이음 유간 평가)

  • Jongho Park;Yooseong Shin
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.28 no.2
    • /
    • pp.1-8
    • /
    • 2024
  • The expansion joint is installed to offset the expansion of the superstructure and must ensure a sufficient gap throughout its service life. The detailed guidelines for bridge safety inspection and precise safety diagnosis specify damage caused by an insufficient or excessive gap, but standards for judging abnormal behavior of the superstructure are lacking. In this study, a data-based maintenance approach was proposed by continuously monitoring the gap data of the same expansion joints. A total of 2,756 data points were collected from 689 expansion joints, taking seasonal effects into account. We developed an evaluation method for changes in the expansion joint gap that analyzes thermal movement from four or more measurements at the same location, classified the factors that affect superstructure behavior, and analyzed the influence of each factor through deep learning and explainable artificial intelligence (AI). Abnormal superstructure behavior was classified into narrowing and functional failure using the expansion joint-gap evaluation graph. The influence factor analysis using deep learning and explainable AI is considered reliable because its results can be explained by the existing expansion-gap calculation formula and bridge design.
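
A minimal sketch of the idea of analyzing thermal movement from four or more readings at one location: a linear fit of gap versus temperature can be checked against the standard design relation dL = alpha * L * dT. The readings, the concrete expansion coefficient, and the derived expansion length below are all illustrative assumptions, not the paper's data.

```python
import numpy as np

# Four seasonal gap readings at one joint location (all values assumed).
temps = np.array([-2.0, 8.5, 18.0, 29.5])   # temperature at measurement (C)
gaps = np.array([62.0, 55.5, 49.0, 42.0])   # measured joint gap (mm)

slope, intercept = np.polyfit(temps, gaps, 1)   # gap change per degree (mm/C)
alpha = 1.0e-5                                  # assumed expansion coeff. (1/C)
L = -slope / alpha                              # implied expansion length (mm)
print(f"{slope:.2f} mm/C -> implied expansion length {L / 1000:.1f} m")
```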

Face Recognition Network using gradCAM (gradCam을 사용한 얼굴인식 신경망)

  • Chan Hyung Baek;Kwon Jihun;Ho Yub Jung
    • Smart Media Journal
    • /
    • v.12 no.2
    • /
    • pp.9-14
    • /
    • 2023
  • In this paper, we propose a face recognition network that attempts to use more facial features while using a smaller training set. When combining neural networks for face recognition, we want to use networks that rely on different parts of the face. However, training chooses randomly which facial features each network learns to use. On the other hand, the basis of a network's judgment can be expressed as a saliency map through gradCAM. Therefore, in this paper, we use gradCAM to visualize where the trained face recognition model made its observations and recognition judgments, so that the network combination can be constructed based on the different facial features used. Using this approach, we trained a network for a small face recognition problem. In a simple toy face recognition example, the proposed recognition network improves accuracy by 1.79% and reduces the equal error rate (EER) by 0.01788 compared to the conventional approach.
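
One way to operationalize "combine networks that use different facial features", as suggested above, is to compare their Grad-CAM maps; the overlap metric below is an illustrative assumption, not necessarily the paper's selection rule.

```python
import numpy as np

def saliency_overlap(cam_a, cam_b, thresh=0.5):
    """IoU of the high-activation regions of two normalized Grad-CAM maps."""
    a, b = cam_a >= thresh, cam_b >= thresh
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

rng = np.random.default_rng(0)
cam1 = rng.random((14, 14))   # stand-ins for CAMs from two candidate networks
cam2 = rng.random((14, 14))
# Lower overlap suggests the two networks attend to different facial regions.
print(f"saliency IoU: {saliency_overlap(cam1, cam2):.2f}")
```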

A Case Study on the Effect of the Artificial Intelligence Storytelling(AI+ST) Learning Method (인공지능 스토리텔링(AI+ST) 학습 효과에 관한 사례연구)

  • Yeo, Hyeon Deok;Kang, Hye-Kyung
    • Journal of The Korean Association of Information Education
    • /
    • v.24 no.5
    • /
    • pp.495-509
    • /
    • 2020
  • This study is theoretical research exploring ways to learn AI effectively in the intelligent information age driven by artificial intelligence (hereinafter referred to as AI). The emphasis is on presenting a teaching method that makes AI education accessible not only to students majoring in mathematics, statistics, or computer science, but also to other majors such as the humanities and social sciences and to the general public. Given the need for 'Explainable AI (XAI: eXplainable AI)' and 'the importance of storytelling for a sensible and intelligent machine (AI)' noted by Patrick Winston at the MIT AI Institute [33], we can find significance in research on an AI storytelling learning model. To this end, we discuss its possibility through a pilot study targeting general students of a university in Daegu. First, we introduce the AI storytelling (AI+ST) learning method [30] and review its educational goals, system of contents, learning methodology, and use of new AI tools. Then, the learners' results are compared and analyzed, focusing on two research questions: 1) Can the AI+ST learning method complement algorithm-driven or developer-centered learning methods? 2) Is the AI+ST learning method effective for students, helping them develop AI comprehension, interest, and application skills?

SIEM System Performance Enhancement Mechanism Using Active Model Improvement Feedback Technology (능동형 모델 개선 피드백 기술을 활용한 보안관제 시스템 성능 개선 방안)

  • Shin, Youn-Sup;Jo, In-June
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.12
    • /
    • pp.896-905
    • /
    • 2021
  • In the field of SIEM (security information and event management), many studies use a feedback system to address the incompleteness of training data and the false positives on new attack events that occur in actual operation. However, the current feedback system requires too much human input to improve the running model, and even then, feedback from inexperienced analysts can affect model performance negatively. Therefore, we propose "active model improvement feedback technology" to address the shortage of security analysts, rising false positive rates, and degrading model performance. First, we cluster similar predicted events during operation, calculate feedback priorities for those clusters, and select and present representative events from the highly prioritized clusters using XAI (eXplainable AI)-based event visualization. Once these events receive feedback, we exclude less analogous events and propagate the feedback throughout the clusters. Finally, the existing model is incrementally trained on these events. To verify the effectiveness of our proposal, we compared three distinct scenarios using PKDD2007 and CSIC2012. As a result, our proposal achieved 30% higher performance on all indicators than both the model without feedback and the current feedback system.
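
A hedged sketch of the first steps above, clustering predicted events and surfacing one representative per cluster for analyst feedback; the k-means choice, the feature vectors, and the size-based priority rule are assumptions standing in for the paper's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
events = rng.random((1000, 16))       # stand-in event feature vectors
km = KMeans(n_clusters=10, n_init=10).fit(events)

for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    priority = len(members)           # size as priority; the paper's rule differs
    dists = np.linalg.norm(events[members] - km.cluster_centers_[c], axis=1)
    rep = members[np.argmin(dists)]   # most central event as representative
    print(f"cluster {c}: priority={priority}, representative index={rep}")
```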

Study on the Selection of Optimal Operation Position Using AI Techniques (인공지능 기법에 의한 최적 운항자세 선정에 관한 연구)

  • Dong-Woo Park
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.6
    • /
    • pp.681-687
    • /
    • 2023
  • The optimal operation position selection technique is used to present the initial bow and stern draft with minimum resistance, that is, optimal fuel consumption efficiency, at a given operating displacement and speed. The main purpose of this study is to develop a program that selects the optimal operating position with maximum energy efficiency under given operating conditions, based on the effective power data of the target ship. The program was written as a Python-based GUI (graphical user interface) using artificial intelligence techniques so that shipowners can use it easily. The introduction of the target ship, the collection of effective power data through computational fluid dynamics (CFD), the training of the effective power model using deep learning, and the program that presents the optimal operation position using the deep neural network (DNN) model are explained in detail. Ships are loaded and unloaded on each voyage, which changes the cargo load and hence the displacement. Shipowners want to know the operating position with minimum resistance, that is, maximum energy efficiency, for the given speed at each displacement. The developed GUI can be installed on a ship's tablet PC and used to select the optimal operating position.
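
A minimal sketch of the final step, scanning fore/aft draft combinations with a trained power model and picking the minimum; the MLP regressor, the synthetic effective power surface, and the draft ranges are assumptions in place of the paper's CFD-trained DNN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in samples: [fore draft (m), aft draft (m), speed (kn)] -> power (kW)
X = rng.random((300, 3)) * [2.0, 2.0, 10.0] + [4.0, 4.0, 8.0]
y = 100 + 5 * (X[:, 0] - 5.0) ** 2 + 5 * (X[:, 1] - 5.2) ** 2 + 2 * X[:, 2]

dnn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

speed = 12.0                                   # given operating speed
fore, aft = np.meshgrid(np.linspace(4, 6, 41), np.linspace(4, 6, 41))
grid = np.column_stack([fore.ravel(), aft.ravel(), np.full(fore.size, speed)])
best = grid[np.argmin(dnn.predict(grid))]      # minimum predicted power
print(f"optimal fore/aft draft at {speed} kn: {best[0]:.2f} m / {best[1]:.2f} m")
```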

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.12
    • /
    • pp.291-306
    • /
    • 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve the quality of life and the productivity of businesses. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the internet of things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises privacy issues, which are a major concern. Edge computing, in which distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve the privacy of users. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of an edge computing platform. It also covers privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful. Finally, it discusses open issues and challenges in leveraging deep learning within edge computing.
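
As one concrete instance of the distributed inference architectures such a review surveys, the sketch below partitions a small network so early layers run on the end device and the rest on an edge server; the toy model, the cut point, and the input are all illustrative assumptions.

```python
import torch
import torch.nn as nn

full = nn.Sequential(                 # toy model standing in for a deep network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
cut = 2                               # partition point (an assumption)
device_part, server_part = full[:cut], full[cut:]

x = torch.randn(1, 3, 64, 64)         # frame captured on the end device
intermediate = device_part(x)         # only this tensor leaves the device
logits = server_part(intermediate)    # inference completed on the edge server
print(logits.shape)
```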