• Title/Summary/Keyword: XAI

Search results: 87

Characterization of the xaiF Gene Encoding a Novel Xylanase-activity-increasing Factor, XaiF

  • Cho, Ssang-Goo; Choi, Yong-Jin
    • Journal of Microbiology and Biotechnology / v.8 no.4 / pp.378-387 / 1998
  • The DNA sequence immediately following the xynA gene of Bacillus stearothermophilus 236 [an approximately 1-kb region downstream from the translational termination codon (TAA) of the xynA gene] was found to enhance the xylanase activity of the upstream xynA gene. An 849-bp ORF was identified in the downstream region and confirmed to encode a novel protein of 283 amino acids, designated XaiF (xylanase-activity-increasing factor). From the nucleotide sequence of the xaiF gene, the molecular mass and pI of XaiF were deduced to be 32,006 Da and 4.46, respectively. XaiF was overproduced in E. coli cells from the cloned xaiF gene using the T7 expression system. The transcriptional initiation site was determined by primer extension analysis, and the putative promoter and ribosome-binding regions were also identified. A BLAST search showed that the xaiF gene and its protein product had no homology with any gene or protein reported so far. In B. subtilis as well, xaiF trans-activated the xylanase activity at the same rate as in E. coli. In contrast, xaiF had no activating effect on the co-expressed β-xylosidase of the xylA gene derived from the same strain of B. stearothermophilus. In addition, the intracellular and extracellular fractions from E. coli cells carrying the plasmid-borne xaiF gene did not increase the activity of isolated xylanase, indicating that a protein-protein interaction between XynA and XaiF was not the causative event for the xylanase-activating effect of the xaiF gene.
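
Deducing molecular mass and pI from a sequence is routine; a minimal sketch using Biopython's ProtParam module (an assumption of this note, not tooling named in the paper; the placeholder sequence stands in for the 283-aa XaiF sequence, which the abstract does not reproduce):

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder peptide, NOT the real XaiF sequence (not given in the abstract).
EXAMPLE_SEQ = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

protein = ProteinAnalysis(EXAMPLE_SEQ)
print(f"molecular mass: {protein.molecular_weight():.0f} Da")  # 32,006 Da reported for XaiF
print(f"theoretical pI: {protein.isoelectric_point():.2f}")    # 4.46 reported for XaiF
```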


A Study on Evaluation Methods for Interpreting AI Results in Malware Analysis (악성코드 분석에서의 AI 결과해석에 대한 평가방안 연구)

  • Kim, Jin-gang; Hwang, Chan-woong; Lee, Tae-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.6 / pp.1193-1204 / 2021
  • In information security, AI technology is used to detect unknown malware. Although AI technology achieves high accuracy, it inevitably produces false positives, so the introduction of XAI to interpret the results predicted by AI is being considered. However, most XAI studies only provide simple interpretation results; studies that evaluate or verify the interpretation itself are lacking. XAI evaluation is essential to ensure safety and to determine which technique interprets more accurately. In this paper, we interpret AI results as the features that contributed most to the AI prediction in the malware domain, and present an evaluation method for this interpretation. Interpretation is performed using two XAI techniques on a tree-based AI model with an accuracy of about 94%, and the interpretations are evaluated by analyzing descriptive accuracy and sparsity. The experiments confirmed that the interpretation of the AI results was properly computed. In the future, the adoption and utilization of XAI is expected to gradually increase thanks to XAI evaluation, greatly improving the reliability and transparency of AI.
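
A hedged sketch of the kind of evaluation described above: explain a tree-based classifier with SHAP, then score sparsity (how concentrated the attributions are) and a descriptive-accuracy proxy (how much the predicted probability drops when the top-attributed features are removed). The synthetic data, thresholds, and model are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for malware feature vectors (e.g., API-call counts).
X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

sv = shap.TreeExplainer(model).shap_values(X[:100])
# shap returns a per-class list (older versions) or a 3-D array (newer ones);
# keep the attributions for the positive class either way.
attr = sv[1] if isinstance(sv, list) else sv[..., 1]

# Sparsity: fraction of near-zero attributions; higher means the explanation
# concentrates on few features.
print("sparsity:", float((np.abs(attr) < 1e-3).mean()))

# Descriptive-accuracy proxy: zero out each sample's top-5 features and
# measure the drop in the predicted probability of the positive class.
top5 = np.argsort(-np.abs(attr), axis=1)[:, :5]
X_mask = X[:100].copy()
np.put_along_axis(X_mask, top5, 0.0, axis=1)
drop = model.predict_proba(X[:100])[:, 1] - model.predict_proba(X_mask)[:, 1]
print("mean probability drop:", float(drop.mean()))
```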

XAI Technology Trends in Information Security (정보보호 분야의 XAI 기술 동향)

  • Kim, Hongbi; Lee, Taejin
    • Review of KIISC / v.31 no.5 / pp.21-31 / 2021
  • With advances in computing, ML (Machine Learning) and AI (Artificial Intelligence) are being actively adopted, and their use is increasing in the information security field as well. However, because these models have black-box characteristics, their decision-making processes are difficult to understand. In information security environments, where the risk of false positives is high, this problem is a significant obstacle to the wide use of AI technology. To address it, research on XAI (eXplainable Artificial Intelligence) methodologies is attracting attention. XAI emerged to compensate for the difficulty of interpreting AI predictions; it can make the AI learning process transparent and provide confidence in predictions. This paper explains the concept of and need for XAI technology and surveys applications of XAI methodologies in the information security field. It also presents XAI evaluation methods and discusses the results of applying XAI methodologies to security systems. By providing human-centered interpretations of AI judgments, XAI technology is expected to reduce the analysis and decision-making time of security personnel, who must process large amounts of analysis data with limited staff.

Explanation of Influence Variables and Development of Tight Oil Productivity Prediction Model by Production Period using XAI Algorithm (XAI를 활용한 생산기간에 따른 치밀오일 생산성 예측 모델 개발 및 영향변수 설명)

  • Han, Dong-kwon; An, Yu-bin; Kwon, Sun-il
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.484-487 / 2022
  • This study suggests an XAI-based machine learning method to predict the productivity of tight oil reservoirs according to the production period. XAI refers to interpretable artificial intelligence and provides the basis for a predicted result and the validity of its derivation process. In this study, we propose a supervised learning model that predicts productivity in the early and late stages of production after performing data preprocessing based on field data. Then, based on the model results, the factors affecting the productivity prediction model are analyzed using XAI.
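
The abstract does not name the XAI algorithm used; as one model-agnostic possibility, a minimal sketch of ranking influence variables with permutation importance (the feature names and synthetic target below are hypothetical stand-ins for field data):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
names = ["lateral_length", "frac_stages", "porosity", "water_cut"]  # hypothetical
X = rng.normal(size=(500, len(names)))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)  # synthetic productivity proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure the drop in held-out R^2.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{names[i]:15s} {result.importances_mean[i]:+.3f}")
```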


A Study on XAI-based Clinical Decision Support System (XAI 기반의 임상의사결정시스템에 관한 연구)

  • Ahn, Yoon-Ae; Cho, Han-Jin
    • The Journal of the Korea Contents Association / v.21 no.12 / pp.13-22 / 2021
  • A clinical decision support system applies an AI model, trained by machine learning on accumulated medical data, to patient diagnosis and treatment prediction. However, existing black-box AI applications do not provide a valid reason for the results the system predicts, so they lack explainability. To compensate for this problem, this paper proposes a system model that applies XAI in the development stage of the clinical decision support system. The proposed model supplements the limitations of the black box by applying an explainable XAI technique on top of the existing AI model. To show how the proposed model can be applied, we present an example of XAI using LIME and SHAP. Through testing, it is possible to explain from various perspectives how the data affects the prediction results of the model. The proposed model has the advantage of increasing user trust by presenting a specific reason for each result. In addition, the active use of XAI is expected to overcome the limitations of existing clinical decision support systems and enable better diagnosis and decision support.
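
A minimal sketch of the LIME half of such an application, using a public clinical dataset and model as stand-ins for the system's own (both are assumptions of this note):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Public clinical dataset as a stand-in for the system's medical records.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(data.data,
                                 feature_names=list(data.feature_names),
                                 class_names=list(data.target_names),
                                 mode="classification")
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())  # top features pushing this prediction, with signed weights
```

A global SHAP summary over the full training set would complement this per-patient view, matching the paper's pairing of the two techniques.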

Esthetic Evaluation of Decision tree Visualization in XAI (XAI에서 의사결정 나무 시각화의 심미도 평가)

  • Ahn, Cheol-Yong; Park, Ji Su; Shon, Jin Gon
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.1122-1125 / 2020
  • To understand the results of AI, research on XAI (eXplainable Artificial Intelligence) is very important. Although much XAI development research is under way worldwide, there are very few studies that evaluate the developed XAI. To evaluate XAI from a usability perspective, this paper classifies AI usability factors, factors of scientific explanation, and heuristic evaluation factors, then visualizes decision trees and evaluates their esthetics.
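
For context, a minimal sketch of the kind of decision-tree rendering whose esthetics such a study would score; the dataset and layout options (node fill, rounded boxes, depth cap) are illustrative assumptions:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

fig, ax = plt.subplots(figsize=(12, 6))
plot_tree(tree, feature_names=iris.feature_names, class_names=list(iris.target_names),
          filled=True, rounded=True, ax=ax)  # the layout choices under evaluation
fig.savefig("tree.png", dpi=150)
```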

A Study on Drift Phenomenon of Trained ML (학습된 머신러닝의 표류 현상에 관한 고찰)

  • Shin, ByeongChun; Cha, YoonSeok; Kim, Chaeyun; Cha, ByungRae
    • Smart Media Journal / v.11 no.7 / pp.61-69 / 2022
  • In a trained machine learning system, drift occurs over time in both the learning model and the learning data, and the performance of the machine learning degrades with it. As a solution to this problem, we propose a concept and an evaluation method for ML drift that can determine the retraining period of machine learning. XAI tests were performed on strawberry and apple images at varying levels of image sharpness. For the strawberries, the change in the XAI analysis of the ML model according to the sharpness value was insignificant. For the apple images, apples themselves were classified normally, with sensible object and heat-map areas, but for apple flowers and buds the results were poor compared to those for strawberries and apples. This is presumed to be caused by a shortage of training images of apple flowers and buds; more images of apple flowers and buds will be studied and tested in future work.
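
The drift evaluation method is not formalized in the abstract; one common baseline for deciding a retraining period is a two-sample Kolmogorov-Smirnov test between a reference window and a current window of model scores, sketched below with synthetic scores:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.beta(2.0, 5.0, size=1000)   # scores captured at deployment time
current = rng.beta(2.8, 5.0, size=1000)     # scores after the input data shifted

stat, p = ks_2samp(reference, current)
if p < 0.01:
    print(f"drift detected (KS={stat:.3f}, p={p:.4f}); schedule retraining")
else:
    print("no significant drift")
```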

CDSS Architecture Based on Blockchain and XAI (블록체인과 XAI 기반의 CDSS 아키텍처)

  • Heo, Yoonnyoung; Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.255-256 / 2022
  • A Clinical Decision Support System assists physicians' decision-making when diagnosing and treating a patient's disease [1]. This paper proposes an architecture for a clinical decision support system that uses blockchain and XAI technology. The proposed architecture addresses data centralization and the security of medical data with blockchain technology, and secures data reliability and security by using DID, a blockchain-based security technology. In addition, an XAI module provides reliability and transparency for the prediction results, supporting the decision-making of medical staff.
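
A minimal sketch of the integrity idea behind a blockchain-backed record store, reduced to hash-chained entries; a real deployment would use a distributed ledger and DID credentials, so everything below (field names included) is illustrative:

```python
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """Append-only entry whose hash covers the record and the previous hash."""
    body = {"record": record, "prev": prev_hash, "ts": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

chain = [make_block({"patient": "anon-001", "prediction": "pneumonia",
                     "xai_top_features": ["CRP", "WBC"]}, "0" * 64)]
chain.append(make_block({"patient": "anon-002", "prediction": "normal"}, chain[-1]["hash"]))

# Verification: recompute every hash and check the linkage.
for i, blk in enumerate(chain):
    body = {k: v for k, v in blk.items() if k != "hash"}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    assert blk["hash"] == digest and (i == 0 or blk["prev"] == chain[i - 1]["hash"])
print("chain verified")
```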

Analysis of Malware Group Classification with eXplainable Artificial Intelligence (XAI기반 악성코드 그룹분류 결과 해석 연구)

  • Kim, Do-yeon; Jeong, Ah-yeon; Lee, Tae-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.4 / pp.559-571 / 2021
  • Along with the increasing prevalence of computers, the number of malware distributions by attackers against ordinary users has also increased. Research on detecting malware continues to this day, and in recent years research on malware detection and analysis using AI has been a focus. However, AI algorithms have the disadvantage that they cannot explain why they detect and classify something as malware. XAI techniques have emerged to overcome these limitations of AI and make it practical; with XAI, it is possible to provide a basis for judging the final output of the AI. In this paper, we performed malware group classification using XGBoost and Random Forest and interpreted the results with SHAP. Both classification models showed a high classification accuracy of about 99%, and when the top-20 API features derived through XAI were compared with the main APIs of the malware, the results could be interpreted and understood to more than a reasonable degree. In the future, we will build on this to study direct improvements to AI reliability.
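
A hedged sketch of the top-feature ranking step: train a multi-class XGBoost classifier and rank the top 20 features by mean |SHAP| value. The paper used API-call features extracted from malware; the synthetic features below are stand-ins.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic stand-in for per-sample API-call feature vectors.
X, y = make_classification(n_samples=2000, n_features=100, n_classes=4,
                           n_informative=20, random_state=0)
model = xgb.XGBClassifier(n_estimators=300, random_state=0).fit(X, y)

sv = shap.TreeExplainer(model).shap_values(X)
# Older shap returns one array per class; newer returns (n, features, classes).
sv = np.stack(sv, axis=-1) if isinstance(sv, list) else sv

mean_abs = np.abs(sv).mean(axis=(0, 2))     # average over samples and classes
top20 = np.argsort(mean_abs)[::-1][:20]
print("top-20 feature indices:", top20.tolist())
```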

Damage Detection and Damage Quantification of Temporary works Equipment based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee; Taehoe Koo; Namwook Park; Nakhoon Lim
    • Journal of Internet Computing and Services / v.25 no.2 / pp.11-19 / 2024
  • This paper studied a technology for detecting damage to temporary works equipment used on construction sites with explainable artificial intelligence (XAI). Temporary works equipment is mostly composed of steel or aluminum and, owing to the characteristics of these materials, is reused several times. However, because the regulation and restriction of its reuse are not strict, the use of low-quality or degraded temporary works equipment sometimes causes accidents at construction sites. Currently, safety rules such as government laws, standards, and regulations for the quality control of temporary works equipment have not been established, and inspection results often differ according to the inspector's level of training. To overcome these limitations, a method based on AI and image-processing technology was developed. Explainable artificial intelligence (XAI) technology was applied so that, using the XAI model developed for analyzing temporary works equipment, inspectors can make more exact decisions from the damage-detection results of the image analysis. In the experiments, temporary works equipment was photographed with a 4K camera, the artificial intelligence model was trained with 610 labeled data, and the accuracy was tested by analyzing the recorded images of temporary works equipment. As a result, the damage-detection accuracy of the XAI was 95.0% on the training dataset, 92.0% on the validation dataset, and 90.0% on the test dataset, demonstrating the reliability of the developed artificial intelligence. The experiments verified the usability of explainable artificial intelligence for detecting damage in temporary works equipment. However, to reach the level of commercial software, the XAI needs to be trained on more real data, and its damage-detection ability must be maintained or improved when real datasets are applied.
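
The abstract does not name the XAI technique applied to the images; Grad-CAM is a common choice for localizing the cues behind a damage prediction, sketched below with an off-the-shelf backbone (a real inspection model would be fine-tuned on the 610 labeled images, and "scaffold.jpg" is a hypothetical input):

```python
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Off-the-shelf backbone as a stand-in; the paper's own model is not public.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
x = prep(Image.open("scaffold.jpg").convert("RGB")).unsqueeze(0)  # hypothetical file

model(x)[0].max().backward()                       # gradient of the top logit

w = grads["v"].mean(dim=(2, 3), keepdim=True)      # per-channel weights
cam = F.relu((w * acts["v"]).sum(dim=1)).detach()  # weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min())  # heat map in [0, 1]
```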