• Title/Summary/Keyword: deep-learning


Deep Learning Based Radiographic Classification of Morphology and Severity of Peri-implantitis Bone Defects: A Preliminary Pilot Study

  • Jae-Hong Lee;Jeong-Ho Yun
    • Journal of Korean Dental Science / v.16 no.2 / pp.156-163 / 2023
  • Purpose: The aim of this study was to evaluate the feasibility of deep learning techniques for classifying the morphology and severity of peri-implantitis bone defects based on periapical radiographs. Materials and Methods: Using a pre-trained and fine-tuned ResNet-50 deep learning algorithm, the morphology and severity of peri-implantitis bone defects on periapical radiographs were classified into six groups (class I/II and slight/moderate/severe). Accuracy, precision, recall, and F1 scores were calculated to measure classification performance. Results: A total of 971 dental images were included in this study. Deep-learning-based classification achieved an accuracy of 86.0%, with precision, recall, and F1 score values of 84.45%, 81.22%, and 82.80%, respectively. The class II and moderate groups had the highest F1 scores (92.23%), whereas the class I and severe groups had the lowest F1 scores (69.33%). Conclusion: The artificial intelligence-based deep learning technique is promising for classifying the morphology and severity of peri-implantitis. However, further studies are required to validate its feasibility in clinical practice.
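
A minimal sketch of the transfer-learning setup this abstract describes: a pre-trained ResNet-50 with its final layer replaced for six-class output. The framework (PyTorch/torchvision), learning rate, and input pipeline are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical fine-tuning sketch: pre-trained ResNet-50, new 6-class head.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # class I/II x slight/moderate/severe (assumed encoding)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed LR

def train_one_epoch(loader):
    """One pass over a DataLoader yielding (images, labels) batches."""
    model.train()
    for images, labels in loader:  # images: (B, 3, 224, 224) tensors
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```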

A Deep Learning Approach for Intrusion Detection

  • Roua Dhahbi;Farah Jemili
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.89-96 / 2023
  • Intrusion detection has been widely studied in both industry and academia, but cybersecurity analysts always want more accuracy and global threat analysis to secure their systems in cyberspace. Big data represents a great challenge for intrusion detection systems, making it hard to monitor and analyze this large volume of data using traditional techniques. Recently, deep learning has emerged as a new approach that enables the use of big data with a low training time and a high accuracy rate. In this paper, we propose an IDS approach based on cloud computing and the integration of big data and deep learning techniques to detect different attacks as early as possible. To demonstrate the efficacy of this system, we implement the proposed system within the Microsoft Azure Cloud, as it provides both processing power and storage capabilities, using a convolutional neural network (CNN-IDS) with the distributed computing environment Apache Spark, integrated with the Keras deep learning library. We study the performance of the model in two categories of classification (binary and multiclass) using the CSE-CIC-IDS2018 dataset. Our system showed strong performance owing to the integration of the deep learning technique and the Apache Spark engine.
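
To make the CNN-IDS idea concrete, the following is a hedged sketch of a 1-D convolutional classifier over per-flow features; the feature count, layer sizes, and class count are assumptions, and the Azure/Apache Spark deployment described in the abstract is omitted.

```python
# Sketch of a 1-D CNN over tabular flow features (illustrative sizes only).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FEATURES = 78   # assumed number of per-flow features after preprocessing
NUM_CLASSES = 15    # illustrative; set to the label count of the prepared data

model = models.Sequential([
    layers.Input(shape=(NUM_FEATURES, 1)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # 1 sigmoid unit for binary
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```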

Hyperparameter optimization for Lightweight and Resource-Efficient Deep Learning Model in Human Activity Recognition using Short-range mmWave Radar (mmWave 레이더 기반 사람 행동 인식 딥러닝 모델의 경량화와 자원 효율성을 위한 하이퍼파라미터 최적화 기법)

  • Jiheon Kang
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.6 / pp.319-325 / 2023
  • In this study, we proposed a method for hyperparameter optimization in the building and training of a deep learning model designed to process point cloud data collected by a millimeter-wave radar system. The primary aim of this study is to facilitate the deployment of a baseline model on resource-constrained IoT devices. We evaluated a RadHAR baseline deep learning model trained on a public dataset composed of point clouds representing five distinct human activities. Additionally, we introduced a coarse-to-fine hyperparameter optimization procedure, showing substantial potential to enhance model efficiency without compromising predictive performance. Experimental results show the feasibility of significantly reducing model size without adversely impacting performance. Specifically, the optimized model demonstrated a 3.3% improvement in classification accuracy despite a 16.8% reduction in the number of parameters compared to the baseline model. In conclusion, this research offers valuable insights for the development of deep learning models for resource-constrained IoT devices, underscoring the potential of hyperparameter optimization and model size reduction strategies. This work contributes to enhancing the practicality and usability of deep learning models in real-world environments, where high levels of accuracy and efficiency in data processing and classification tasks are required.
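
The coarse-to-fine procedure can be illustrated with a simple two-stage search: evaluate a sparse grid first, then a denser grid around the best coarse point. The hyperparameters, ranges, and placeholder objective below are assumptions, not the paper's actual search space.

```python
# Illustrative coarse-to-fine hyperparameter search (assumed search space).
import itertools

def evaluate(units, lr):
    # Placeholder objective standing in for "train the model, return
    # validation accuracy"; it peaks near units=48, lr=3e-3 for demonstration.
    return -abs(units - 48) / 48 - abs(lr - 3e-3) / 3e-3

def coarse_to_fine(units_grid=(16, 64, 256), lr_grid=(1e-4, 1e-3, 1e-2)):
    # Coarse stage: sparse grid over the full range.
    u0, lr0 = max(itertools.product(units_grid, lr_grid),
                  key=lambda p: evaluate(*p))
    # Fine stage: denser grid centred on the coarse optimum.
    fine_units = (max(1, u0 // 2), u0, u0 * 2)
    fine_lrs = (lr0 / 2, lr0, lr0 * 2)
    return max(itertools.product(fine_units, fine_lrs),
               key=lambda p: evaluate(*p))

print(coarse_to_fine())  # e.g. (32, 0.002) under the placeholder objective
```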

Research on High-resolution Seafloor Topography Generation using Feature Extraction Algorithm Based on Deep Learning (딥러닝 기반의 특징점 추출 알고리즘을 활용한 고해상도 해저지형 생성기법 연구)

  • Hyun Seung Kim;Jae Deok Jang;Chul Hyun;Sung Kyun Lee
    • Journal of the Korean Society of Systems Engineering / v.20 no.spc1 / pp.90-96 / 2024
  • In this paper, we propose a technique to model high-resolution seafloor topography at 1 m intervals using actual water depth data near the east coast of Korea collected at 1.6 km distance intervals. Using a deep-learning-based Harris-corner feature point extraction algorithm, the locations of the centers of seafloor mountains were calculated and the surrounding topography was modeled. The modeled high-resolution seafloor topography based on deep learning was verified to be within a 1.1 m mean error of the actual water depth data, and the average error of the deep-learning-based calculation was 54.4% lower than in the case where deep learning was not applied. The proposed algorithm is expected to generate high-resolution underwater topography for the entire Korean Peninsula and to support path planning for the autonomous navigation of underwater vehicles.
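
As a rough illustration of the feature-point step, the snippet below runs a classical Harris corner detector over a gridded depth map with OpenCV; the grid, normalization, and response threshold are assumptions, and the deep learning component of the paper's pipeline is not reproduced here.

```python
# Harris corner extraction on a gridded depth map (classical step only).
import cv2
import numpy as np

depth = np.random.rand(512, 512).astype(np.float32)    # stand-in bathymetry grid
scaled = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX)

response = cv2.cornerHarris(scaled, blockSize=2, ksize=3, k=0.04)
peaks = np.argwhere(response > 0.01 * response.max())  # candidate feature points
print(f"{len(peaks)} candidate seafloor feature points")
```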

Performance Analysis of Deep Learning-Based Detection/Classification for SAR Ground Targets with the Synthetic Dataset (합성 데이터를 이용한 SAR 지상표적의 딥러닝 탐지/분류 성능분석)

  • Ji-Hoon Park
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.2 / pp.147-155 / 2024
  • Based on recently developed deep learning technology, many studies have been conducted on deep learning networks that simultaneously detect and classify targets of interest in synthetic aperture radar (SAR) images. Although numerous research results have been derived, mainly with open SAR ship datasets, there is a lack of work on deep learning networks aimed at detecting and classifying SAR ground targets and trained with synthetic datasets generated from electromagnetic scattering simulations. In this respect, this paper presents a deep learning network trained with such a synthetic dataset and applies it to detecting and classifying real SAR ground targets. Based on the experimental results, this paper also analyzes the network performance according to the composition ratio between the real measured data and the synthetic data involved in network training. Finally, the summary and limitations are discussed to inform future research directions.
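
A hedged sketch of the kind of composition-ratio experiment mentioned above: resample the training set so that a chosen fraction comes from real measured chips and the rest from simulated chips, then retrain and evaluate at each ratio. The helper name and sizes are assumptions.

```python
# Compose a training set with a given real/synthetic ratio (assumed helper).
import random

def mix_datasets(real_samples, synthetic_samples, real_ratio, total_size):
    """Draw a training set in which `real_ratio` of the samples are real."""
    n_real = int(round(total_size * real_ratio))
    subset = (random.sample(real_samples, n_real)
              + random.sample(synthetic_samples, total_size - n_real))
    random.shuffle(subset)
    return subset

# Sweep the ratio and retrain/evaluate the detector at each point, e.g.:
# for ratio in (0.0, 0.25, 0.5, 0.75, 1.0):
#     train_set = mix_datasets(real_chips, synthetic_chips, ratio, 2000)
```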

Improving Deep Learning Models Considering the Time Lags between Explanatory and Response Variables

  • Chaehyeon Kim;Ki Yong Lee
    • Journal of Information Processing Systems / v.20 no.3 / pp.345-359 / 2024
  • A regression model represents the relationship between explanatory and response variables. In real life, explanatory variables often affect a response variable with a certain time lag, rather than immediately. For example, the marriage rate affects the birth rate with a time lag of 1 to 2 years. Although deep learning models have been successfully used to model various relationships, most of them do not consider the time lags between explanatory and response variables. Therefore, in this paper, we propose an extension of deep learning models that automatically finds the time lags between explanatory and response variables. The proposed method finds which of the past values of the explanatory variables minimize the error of the model, and uses the found values to determine the time lag between each explanatory variable and the response variable. After determining the time lags, the proposed method trains the deep learning model again by reflecting these time lags. Through various experiments applying the proposed method to a few deep learning models, we confirm that it can find a more accurate model whose error is reduced by more than 60% compared to the original model.
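
The lag-search idea can be sketched as a simple scan: shift each explanatory variable by candidate lags, measure how well the shifted series predicts the response, and keep the lag with the smallest error. The snippet below uses a one-variable least-squares fit as a cheap stand-in for retraining the deep learning model; the maximum lag and error measure are assumptions.

```python
# Per-variable lag scan (least-squares fit stands in for the deep model).
import numpy as np

def best_lag(x, y, max_lag=24):
    """Return the shift of x (in time steps) that best predicts y."""
    best, best_err = 0, np.inf
    for lag in range(max_lag + 1):
        x_lagged = x[:len(x) - lag] if lag else x  # x[t - lag] aligned with y[t]
        y_aligned = y[lag:]
        a, b = np.polyfit(x_lagged, y_aligned, 1)
        err = np.mean((a * x_lagged + b - y_aligned) ** 2)
        if err < best_err:
            best, best_err = lag, err
    return best
```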

Bi-directional Electricity Negotiation Scheme based on Deep Reinforcement Learning Algorithm in Smart Building Systems (스마트 빌딩 시스템을 위한 심층 강화학습 기반 양방향 전력거래 협상 기법)

  • Donggu Lee;Jiyoung Lee;Chanuk Kyeong;Jin-Young Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.5 / pp.215-219 / 2021
  • In this paper, we propose a deep reinforcement learning-based bi-directional electricity negotiation scheme in which a smart building and the utility grid adjust and propose the prices at which they want to trade. By employing a deep Q-network (DQN) algorithm, a kind of deep reinforcement learning algorithm, the proposed scheme adjusts the price proposals of the smart building and the utility grid. From the simulation results, it can be verified that reaching consensus on the electricity price requires an average of 43.78 negotiation rounds. The negotiation process under the simulation settings and scenario can also be confirmed from the simulation results.
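
A minimal sketch of the DQN-style price adjustment described above: each party keeps a small Q-network and, at every round, picks one of three actions (lower, hold, raise) for its proposed price. The state definition, network size, and action step are assumptions, not the authors' settings.

```python
# DQN-style price adjustment (illustrative settings only).
import random
import torch
import torch.nn as nn

ACTIONS = (-1.0, 0.0, +1.0)  # lower / hold / raise the proposed price

class QNet(nn.Module):
    def __init__(self, state_dim=2, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, state):
        return self.net(state)

def select_action(qnet, state, epsilon=0.1):
    """Epsilon-greedy choice over price adjustments."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(torch.tensor(state, dtype=torch.float32)).argmax())

# The state could be (own last proposal, counterpart's last proposal); a
# negotiation ends when the two proposals are close enough to agree.
```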

PartitionTuner: An operator scheduler for deep-learning compilers supporting multiple heterogeneous processing units

  • Misun Yu;Yongin Kwon;Jemin Lee;Jeman Park;Junmo Park;Taeho Kim
    • ETRI Journal / v.45 no.2 / pp.318-328 / 2023
  • Recently, embedded systems, such as mobile platforms, have multiple processing units that can operate in parallel, such as central processing units (CPUs) and neural processing units (NPUs). We can use deep-learning compilers to generate machine code optimized for these embedded systems from a deep neural network (DNN). However, the deep-learning compilers proposed so far generate code that sequentially executes DNN operators on a single processing unit or parallel code for graphics processing units (GPUs). In this study, we propose PartitionTuner, an operator scheduler for deep-learning compilers that supports multiple heterogeneous PUs including CPUs and NPUs. PartitionTuner can generate an operator-scheduling plan that uses all available PUs simultaneously to minimize overall DNN inference time. Operator scheduling is based on the analysis of the DNN architecture and the performance profiles of individual and group operators measured on heterogeneous processing units. In experiments on seven DNNs, PartitionTuner generates scheduling plans that perform 5.03% better than a static type-based operator-scheduling technique for SqueezeNet. In addition, PartitionTuner outperforms recent profiling-based operator-scheduling techniques for ResNet50, ResNet18, and SqueezeNet by 7.18%, 5.36%, and 2.73%, respectively.
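
As a simplified illustration of profile-based operator-to-PU assignment (not PartitionTuner's actual algorithm), the sketch below greedily places each operator on the processing unit that would finish it earliest given current PU load; it ignores inter-PU data-transfer costs and any dependencies beyond topological order.

```python
# Greedy profile-based operator scheduling over heterogeneous PUs (sketch).
def schedule(operators, profiles, pus):
    """
    operators: op names in topological order
    profiles:  dict[(op, pu)] -> measured latency in ms
    pus:       processing unit names, e.g. ["cpu", "npu"]
    """
    finish = {pu: 0.0 for pu in pus}  # when each PU next becomes free
    plan = {}
    for op in operators:
        pu = min(pus, key=lambda p: finish[p] + profiles[(op, p)])
        finish[pu] += profiles[(op, pu)]
        plan[op] = pu
    return plan

plan = schedule(
    ["conv1", "relu1", "conv2"],
    {("conv1", "cpu"): 4.0, ("conv1", "npu"): 1.2,
     ("relu1", "cpu"): 0.3, ("relu1", "npu"): 0.5,
     ("conv2", "cpu"): 5.1, ("conv2", "npu"): 1.4},
    ["cpu", "npu"],
)
print(plan)  # e.g. {'conv1': 'npu', 'relu1': 'cpu', 'conv2': 'npu'}
```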

Improving Dynamic Missile Defense Effectiveness Using Multi-Agent Deep Q-Network Model (멀티에이전트 기반 Deep Q-Network 모델을 이용한 동적 미사일 방어효과 개선)

  • Min Gook Kim;Dong Wook Hong;Bong Wan Choi;Ji Hoon Kyung
    • Journal of Korean Society of Industrial and Systems Engineering / v.47 no.2 / pp.74-83 / 2024
  • The threat of North Korea's long-range firepower is recognized as a typical asymmetric threat, and South Korea is prioritizing the development of a Korean-style missile defense system to defend against it. Previous research modeled North Korean long-range artillery attacks as a Markov Decision Process (MDP) and used approximate dynamic programming as the missile defense algorithm, but due to its limitations, this study applies deep reinforcement learning techniques that incorporate deep learning. In this paper, we aim to develop a missile defense algorithm by applying a modified DQN with multi-agent-based deep reinforcement learning. Through this, we investigate how efficiently a missile defense system can respond to enemy missile attacks, considering the style of attacks seen in recent wars, and we show that the policies learned through deep reinforcement learning achieve superior outcomes.

Morphological analysis of virtual teeth generated by deep learning (딥러닝으로 생성된 가상 치아의 형태학적 분석 연구)

  • Eun-Jeong Bae
    • Journal of Technologic Dentistry / v.46 no.3 / pp.93-100 / 2024
  • Purpose: This study aimed to generate virtual mandibular first molars using deep learning technology, specifically a deep convolutional generative adversarial network (DCGAN), and to evaluate the accuracy and reliability of these virtual teeth by analyzing their morphological characteristics. These morphological characteristics were classified based on various evaluation criteria, facilitating the assessment of the practical applicability of deep learning-based dental prosthesis production. Methods: Based on our previous research, 1,000 virtual mandibular first molars were generated and, based on morphological criteria, categorized as matching, non-matching, or partially matching. The generated first molars and their categorization were analyzed through the expert judgment of dental technicians. Results: Among the 1,000 generated virtual teeth, 143 (14.3%) met all five evaluation criteria, whereas 76 (7.6%) were judged as completely non-matching. The most frequent issue, with 781 (78.1%) instances (including some overlapping instances), was related to occlusal buccal cusp discrepancies. Conclusion: The study reveals the potential of DCGAN-generated virtual teeth as substitutes for real teeth; however, additional research and improvements in data quality are necessary to enhance accuracy. Continued data collection and refinement of generation methods can maximize the practicality and utility of deep learning-based dental prosthesis production.
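
For illustration only, the snippet below sketches a DCGAN generator of the kind the abstract refers to, producing small grayscale images from random noise; the latent size, filter counts, and 64x64 resolution are assumptions rather than the study's actual architecture.

```python
# DCGAN generator sketch: latent vector -> 64x64 grayscale image (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

LATENT_DIM = 100

generator = models.Sequential([
    layers.Input(shape=(LATENT_DIM,)),
    layers.Dense(8 * 8 * 256, use_bias=False),
    layers.BatchNormalization(), layers.ReLU(),
    layers.Reshape((8, 8, 256)),
    layers.Conv2DTranspose(128, 4, strides=2, padding="same", use_bias=False),
    layers.BatchNormalization(), layers.ReLU(),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same", use_bias=False),
    layers.BatchNormalization(), layers.ReLU(),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
])

noise = tf.random.normal((4, LATENT_DIM))
fake_teeth = generator(noise)  # shape (4, 64, 64, 1)
```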