• Title/Summary/Keyword: Deep Learning AI


Evaluating Unsupervised Deep Learning Models for Network Intrusion Detection Using Real Security Event Data

  • Jang, Jiho;Lim, Dongjun;Seong, Changmin;Lee, JongHun;Park, Jong-Geun;Cheong, Yun-Gyung
    • International Journal of Advanced Smart Convergence, v.11 no.4, pp.10-19, 2022
  • AI-based Network Intrusion Detection Systems (AI-NIDS) detect network attacks using machine learning and deep learning models. Unsupervised AI-NIDS methods have recently attracted attention because they require no labeling, which is crucial for building practical NIDS. This paper tests the impact of autoencoder model design choices when an unsupervised AI-NIDS is applied to real network systems. We collected security events from a legacy network security system, carried out an experiment, and report and discuss the findings.
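
The core idea behind autoencoder-based unsupervised NIDS is reconstruction-error anomaly scoring: train on benign traffic only and flag events the model reconstructs poorly. A minimal numpy toy of that idea (a linear autoencoder on synthetic events, not the paper's model or data):

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    # A linear autoencoder with tied weights is equivalent to PCA:
    # keep the top-k principal directions of the centered data.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T                      # d x k encoder/decoder weights
    return mu, W

def reconstruction_error(X, mu, W):
    Z = (X - mu) @ W                  # encode
    X_hat = Z @ W.T + mu              # decode
    return np.square(X - X_hat).sum(axis=1)

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 8))   # "normal" security events
attack = rng.normal(5.0, 1.0, size=(10, 8))    # out-of-distribution events
mu, W = fit_linear_autoencoder(benign, k=3)
threshold = np.quantile(reconstruction_error(benign, mu, W), 0.99)
flags = reconstruction_error(attack, mu, W) > threshold
print(flags.mean())                            # fraction of attacks flagged
```

A real deployment would use a nonlinear autoencoder and tune the threshold against an acceptable false-positive rate on held-out benign events.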

Application of Deep Recurrent Q Network with Dueling Architecture for Optimal Sepsis Treatment Policy

  • Do, Thanh-Cong;Yang, Hyung Jeong;Ho, Ngoc-Huynh
    • Smart Media Journal, v.10 no.2, pp.48-54, 2021
  • Sepsis is one of the leading causes of mortality globally and costs billions of dollars annually. Treating septic patients remains highly challenging, and more research is needed into a general treatment method for sepsis. In this work, we therefore propose a reinforcement learning method for learning optimal treatment strategies for septic patients. We model patient physiological time-series data as the input to a deep recurrent Q-network that learns reliable treatment policies. We evaluate our model using an off-policy evaluation method, and the experimental results indicate that it outperforms the physicians' policy, reducing patient mortality by up to 3.04%. Our model can thus serve as a tool to reduce patient mortality by supporting clinicians in making dynamic decisions.
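
The dueling architecture named in the title decomposes the Q-function into a state value plus centered action advantages. A minimal illustration of that combination step (in the paper both heads would be learned network outputs; the numbers here are arbitrary):

```python
import numpy as np

def dueling_q(value, advantages):
    # Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    # Centering the advantages makes the V/A decomposition identifiable.
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

adv = np.array([1.0, 3.0, 2.0])        # e.g. three candidate treatment actions
q = dueling_q(0.5, adv)
print(q)                               # [-0.5  1.5  0.5]
```

By construction the action-averaged Q equals V(s), which stabilizes learning when many actions have similar values.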

A Study on Impact of Deep Learning on Korean Economic Growth Factor

  • Dong Hwa Kim;Dae Sung Seo
    • International Journal of Internet, Broadcasting and Communication, v.15 no.4, pp.90-99, 2023
  • This paper studies the impact of deep learning (DL) on the factors of Korean economic growth. To classify these impact factors, we formulate a dynamic equation of the microeconomy and propose methods for studying the economic-growth impact of DL. We then fit a DL model to the dynamic equation using Korean economic data on growth-related factors, in order to identify which factors are important and dominant for building policy and education. DL is influential in many areas because, by exploiting huge datasets, it can be applied as easily as ordinary editing and speaking tasks, including code development. Younger generations will see a large impact on their job choices because generative AI can perform many tasks as well as humans can, so policy and education methods should be rearranged under a new paradigm. However, governments and officials do not yet appreciate how serious this is for policy and education. By analyzing many papers, reports, and practical experience, this paper provides policy and education methods for AI education, including generative AI.

A Study on the Explainability of Inception Network-Derived Image Classification AI Using National Defense Data (국방 데이터를 활용한 인셉션 네트워크 파생 이미지 분류 AI의 설명 가능성 연구)

  • Kangun Cho
    • Journal of the Korea Institute of Military Science and Technology, v.27 no.2, pp.256-264, 2024
  • AI has made rapid progress over the last 10 years, and image classification in particular shows excellent performance based on deep learning. Nevertheless, because deep learning models are black boxes, their lack of explainable judgements makes them difficult to use in critical decision-making domains such as national defense, autonomous driving, medical care, and finance. To overcome these limitations, this study applies a model explanation algorithm capable of local interpretation to Inception-derived networks and analyzes the grounds on which they classify national defense data. Specifically, we perform LIME analysis on the Inception v2_resnet model, comparatively analyze explainability based on confidence values, and verify the similarity between human interpretations and LIME explanations. Furthermore, by comparing the LIME explanation results for the Top-1 outputs of the Inception v3, Inception v2_resnet, and Xception models, we confirm the feasibility of using XAI to compare the efficiency and availability of deep learning networks.
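
LIME belongs to the family of perturbation-based local explanations: perturb the input, watch the model's confidence change, and attribute importance accordingly. The toy below illustrates that general idea with single-feature occlusion against a purely hypothetical logistic "classifier"; it is not LIME itself or the paper's setup:

```python
import numpy as np

def occlusion_importance(predict, x, baseline=0.0):
    # Toy local explanation: occlude one feature at a time and record
    # how much the model's confidence drops.
    base = predict(x)
    scores = np.empty(x.size)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] = baseline
        scores[i] = base - predict(x_pert)
    return scores

# Hypothetical classifier confidence: a logistic model stands in for
# the Inception-style network.
w = np.array([2.0, 0.0, -1.0])
predict = lambda x: 1.0 / (1.0 + np.exp(-(w @ x)))
imp = occlusion_importance(predict, np.array([1.0, 1.0, 1.0]))
print(imp.argmax())    # the feature with weight 2.0 matters most
```

LIME proper fits a weighted linear surrogate over many random perturbations of superpixels rather than occluding features one at a time, but the attribution principle is the same.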

Performance Improvement Analysis of Building Extraction Deep Learning Model Based on UNet Using Transfer Learning at Different Learning Rates (전이학습을 이용한 UNet 기반 건물 추출 딥러닝 모델의 학습률에 따른 성능 향상 분석)

  • Chul-Soo Ye;Young-Man Ahn;Tae-Woong Baek;Kyung-Tae Kim
    • Korean Journal of Remote Sensing, v.39 no.5_4, pp.1111-1123, 2023
  • In recent times, semantic image segmentation methods using deep learning models have been widely used to monitor changes in surface attributes from remote sensing imagery. Enhancing the performance of UNet-based deep learning models, including the prominent UNet model itself, requires a sufficiently large training dataset. However, enlarging the training dataset not only escalates the hardware requirements for processing but also significantly increases the training time. To address these issues, transfer learning is an effective approach that improves model performance even in the absence of massive training datasets. In this paper, we present three transfer learning models, UNet-ResNet50, UNet-VGG19, and CBAM-DRUNet-VGG19, which combine UNet variants with the representative pretrained VGG19 and ResNet50 models. We applied these models to building extraction tasks and analyzed the accuracy improvements resulting from transfer learning. Because the learning rate substantially affects the performance of deep learning models, we also analyzed each model's performance under different learning rate settings. We evaluated building extraction performance on three datasets: the Kompsat-3A, WHU, and INRIA datasets. Averaged over the three datasets, the accuracy improvement relative to the UNet model was 5.1% for UNet-ResNet50, while both UNet-VGG19 and CBAM-DRUNet-VGG19 achieved a 7.2% improvement.
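
The paper's point that the learning rate strongly shapes outcomes can be seen even on a one-parameter toy objective. This gradient-descent sketch is illustrative only and has nothing to do with the actual UNet training:

```python
def train(lr, steps=50):
    # Plain gradient descent on f(w) = (w - 3)^2; the learning rate
    # controls both convergence speed and stability.
    w = 0.0
    for _ in range(steps):
        w -= lr * 2.0 * (w - 3.0)
    return w

for lr in (0.01, 0.1, 1.1):
    # too small: slow progress; moderate: converges; too large: diverges
    print(lr, train(lr))
```

The same trade-off holds in deep networks, which is why fine-tuning pretrained encoders is typically done with smaller learning rates than training from scratch.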

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems, v.9 no.12, pp.291-306, 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the internet of things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delays and raises major privacy concerns. Edge computing, in which distributed computing nodes are placed close to the IoT end devices, is a viable solution that meets the high-computation and low-latency requirements and preserves user privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices, and we believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of the edge computing platform. It also surveys privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful. Finally, it discusses open issues and challenges in leveraging deep learning within edge computing.
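
One distributed-training pattern that such reviews commonly cover is federated averaging, where edge nodes train locally and only model weights are aggregated, keeping raw data on-device. A minimal numpy sketch of the aggregation step (an assumed example, not taken from the paper):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    # Federated averaging: combine model weights trained on separate
    # edge nodes, weighting each node by its local dataset size.
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Two hypothetical edge nodes; the second holds 3x more local data.
merged = fed_avg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [1, 3])
print(merged)   # [2.5 3.5]
```

Because only weights leave the device, this pattern addresses exactly the transmission-delay and privacy concerns the abstract raises.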

Fitness Measurement system using deep learning-based pose recognition (딥러닝 기반 포즈인식을 이용한 체력측정 시스템)

  • Kim, Hyeong-gyun;Hong, Ho-Pyo;Kim, Yong-ho
    • Journal of Digital Convergence, v.18 no.12, pp.97-103, 2020
  • The proposed system is composed of two parts: an AI physical fitness measurement part and an AI physical fitness management part. The AI fitness measurement part guides the physical fitness measurement and accurately calculates the measured values through deep learning-based pose recognition. Based on these measurements, the AI fitness management part designs personalized exercise programs and provides them through a dedicated smart application. To guide the measurement posture, the subject's posture is photographed with a webcam and the skeleton lines are extracted. The extracted skeleton lines are then compared with the skeleton lines of the learned preparation posture to determine whether the posture is normal, and voice guidance helps the subject maintain the normal posture.
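
The normal/abnormal decision described above boils down to comparing extracted skeleton keypoints against a learned reference pose. A toy numpy sketch with hypothetical joint coordinates and tolerance (the actual system's joints, normalization, and threshold are not specified in the abstract):

```python
import numpy as np

def pose_ok(keypoints, reference, tol=0.1):
    # Pass the posture check only if every joint of the extracted
    # skeleton lies within `tol` (normalized units) of the reference.
    dists = np.linalg.norm(keypoints - reference, axis=1)
    return bool((dists < tol).all())

reference = np.array([[0.5, 0.2], [0.5, 0.5], [0.5, 0.8]])   # e.g. head, hip, ankle
good = reference + 0.02                                       # slight, acceptable deviation
bad = reference + np.array([[0.3, 0.0], [0.0, 0.0], [0.0, 0.0]])  # head far off
print(pose_ok(good, reference), pose_ok(bad, reference))      # True False
```

The boolean result is what would drive the voice guidance: prompt a correction while it is False, start measuring once it is True.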

Deep Learning-Based Artificial Intelligence for Mammography

  • Jung Hyun Yoon;Eun-Kyung Kim
    • Korean Journal of Radiology, v.22 no.8, pp.1225-1239, 2021
  • During the past decade, researchers have investigated the use of computer-aided mammography interpretation. With the application of deep learning technology, artificial intelligence (AI)-based algorithms for mammography have shown promising results in the quantitative assessment of parenchymal density, detection and diagnosis of breast cancer, and prediction of breast cancer risk, enabling more precise patient management. AI-based algorithms may also enhance the efficiency of the interpretation workflow by reducing both the workload and interpretation time. However, more in-depth investigation is required to conclusively prove the effectiveness of AI-based algorithms. This review article discusses how AI algorithms can be applied to mammography interpretation as well as the current challenges in its implementation in real-world practice.

Detection of Anomaly Lung Sound using Deep Temporal Feature Extraction (깊은 시계열 특성 추출을 이용한 폐 음성 이상 탐지)

  • Kim-Ngoc T. Le;Gyurin Byun;Hyunseung Choo
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.605-607, 2023
  • Recent research has highlighted the effectiveness of Deep Learning (DL) techniques in automating the detection of lung sound anomalies. However, the available lung sound datasets are often limited in both size and balance, prompting DL methods to employ data preprocessing such as augmentation and transfer learning. These strategies, while valuable, increase the complexity of DL models and require substantial training memory. In this study, we propose a streamlined, lightweight DL method that nevertheless detects lung sound anomalies effectively from a small, imbalanced dataset. Using 1D dilated convolutional neural networks enhances sensitivity to lung sound anomalies by efficiently capturing deep temporal features and small variations. We conducted a comprehensive evaluation on the ICBHI dataset and achieved a notable improvement over state-of-the-art results, increasing the average of the sensitivity and specificity metrics by 2.7%.
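
A 1D dilated convolution, the building block this method relies on, spaces its kernel taps apart so the receptive field widens without adding parameters. A minimal numpy sketch of the operation (illustrative only, not the paper's network):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    # 1D dilated convolution (no padding): taps are spaced `dilation`
    # samples apart, widening the receptive field at no parameter cost.
    span = (len(kernel) - 1) * dilation
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(len(kernel)))
    return out

signal = np.arange(10, dtype=float)
diff4 = dilated_conv1d(signal, np.array([-1.0, 1.0]), dilation=4)
print(diff4)   # each output compares samples 4 steps apart: [4. 4. 4. 4. 4. 4.]
```

Stacking such layers with exponentially growing dilation lets a small model see long audio contexts, which is how deep temporal features can be captured cheaply.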

ETRI AI Strategy #1: Proactively Securing AI Core Technologies (ETRI AI 실행전략 1: 인공지능 핵심기술 선제적 확보)

  • Kim, S.M.;Yeon, S.J.
    • Electronics and Telecommunications Trends, v.35 no.7, pp.3-12, 2020
  • In this paper, we introduce ETRI AI Strategy #1, "Proactively Securing AI Core Technologies." The first goal of this strategy is to innovate artificial intelligence (AI) service technology to overcome the current limitations of AI technologies. Even though we saw a big jump in AI technology development recently due to the rise of deep learning (DL), DL still has technical limitations and problems. This paper introduces the four major parts of the advanced AI technologies that ETRI will secure to overcome the problems of DL and harmonize AI with the human world: post DL technology, human-AI collaboration technology, intelligence for autonomous things, and big data platform technology.