• Title/Summary/Keyword: Skip Connection

34 search results

Enhanced Deep Feature Reconstruction : Texture Defect Detection and Segmentation through Preservation of Multi-scale Features (개선된 Deep Feature Reconstruction : 다중 스케일 특징의 보존을 통한 텍스쳐 결함 감지 및 분할)

  • Jongwook Si;Sungyoung Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.6 / pp.369-377 / 2023
  • In industrial manufacturing, quality control is pivotal for minimizing defect rates; inadequate management can result in additional costs and production delays. This study underscores the significance of detecting texture defects in manufactured goods and proposes a more precise defect detection technique. While the DFR (Deep Feature Reconstruction) model adopted an approach based on feature-map amalgamation and reconstruction, it had inherent limitations. To overcome these constraints, we incorporated a new loss function based on statistical methodology, integrated a skip-connection structure, and tuned the parameters. When the enhanced model was applied to the texture categories of the MVTec-AD dataset, it recorded a 2.3% higher defect-segmentation AUC than previous methods, and overall defect detection performance improved. These findings attest to the contribution of the proposed method to defect detection through the reconstruction of feature-map combinations.
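The skip-connection structure this abstract refers to can be illustrated with a minimal NumPy sketch; the linear layer and weights below are hypothetical stand-ins, not the DFR architecture itself:

```python
import numpy as np

def transform(x, w):
    # Stand-in for a learned layer (hypothetical): linear map + ReLU
    return np.maximum(0.0, x @ w)

def skip_connection_block(x, w):
    # Skip connection: the block's input is added back to its output,
    # so earlier (multi-scale) features are preserved downstream even
    # when the learned transform loses detail
    return transform(x, w) + x

x = np.ones((2, 4))
w = np.zeros((4, 4))
y = skip_connection_block(x, w)
# With zero weights the transform contributes nothing, so the
# skip path passes the input through unchanged (y equals x)
```

This identity-preserving path is also what eases gradient flow when such blocks are stacked in a deep network.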

Performance Evaluation of YOLOv5 Model according to Various Hyper-parameters in Nuclear Medicine Phantom Images (핵의학 팬텀 영상에서 초매개변수 변화에 따른 YOLOv5 모델의 성능평가)

  • Min-Gwan Lee;Chanrok Park
    • Journal of the Korean Society of Radiology / v.18 no.1 / pp.21-26 / 2024
  • One of the best-known deep learning models for object detection is the you-only-look-once version 5 (YOLOv5) framework, which is based on a one-stage architecture. The YOLOv5 model has also shown high performance for accurate lesion detection thanks to its bottleneck CSP layers and skip connections. The purpose of this study was to evaluate the performance of the YOLOv5 framework under various hyperparameters on positron emission tomography (PET) phantom images. The dataset, consisting of 500 slices, was obtained from the QIN PET segmentation challenge. Ground-truth bounding boxes were annotated with the labelImg software. For network training, the hyperparameters were varied across optimization functions (SGD, Adam, and AdamW), activation functions (SiLU, LeakyReLU, Mish, and Hardswish), and YOLOv5 model sizes (nano, small, large, and xlarge). The intersection over union (IoU) metric was used for performance evaluation. As a result, the best performance was obtained with the AdamW optimizer, the Hardswish activation function, and the nano model size. In conclusion, we confirmed the usefulness of the YOLOv5 network for object detection in nuclear medicine images.
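The IoU metric used in the evaluation above can be sketched as follows, assuming boxes are given as corner coordinates (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    # Intersection rectangle of the two boxes
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    # Union = sum of areas minus the overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A detection is typically counted as correct when its IoU with the ground-truth box exceeds a threshold such as 0.5.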

Scalable Video Coding using Super-Resolution based on Convolutional Neural Networks for Video Transmission over Very Narrow-Bandwidth Networks (초협대역 비디오 전송을 위한 심층 신경망 기반 초해상화를 이용한 스케일러블 비디오 코딩)

  • Kim, Dae-Eun;Ki, Sehwan;Kim, Munchurl;Jun, Ki Nam;Baek, Seung Ho;Kim, Dong Hyun;Choi, Jeung Won
    • Journal of Broadcast Engineering / v.24 no.1 / pp.132-141 / 2019
  • The need to transmit video over narrow-bandwidth channels persists even though broadband video services are common. In this paper, we propose a scalable video coding framework for low-resolution video transmission over a very narrow-bandwidth network: decoded base-layer frames are super-resolved with a convolutional neural network (CNN)-based super-resolution technique, and the result is used as a prediction for the enhancement layer to improve coding efficiency. In contrast to the conventional scalable high-efficiency video coding (SHVC) standard, in which upscaling is performed with a fixed filter, the proposed framework replaces the fixed upscaling filter with the trained super-resolution CNN. For this, we designed a neural network with skip connections and residual learning and trained it according to the application scenario of the video coding framework. For the scenario in which a 352×288 video at 8 fps is encoded at 110 kbps, the quality of the proposed scalable video coding framework is higher than that of SHVC.
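The residual-learning scheme described above, in which a network predicts only the detail missing from an upscaled base-layer frame, can be sketched roughly; the nearest-neighbor upscaler and the zero residual below are placeholders, not the paper's trained CNN or SHVC's actual filter:

```python
import numpy as np

def nearest_upscale(frame, scale=2):
    # Simple fixed upscaler standing in for a fixed interpolation filter
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def reconstruct(low_res, predict_residual):
    # Residual learning: the network only predicts the high-frequency
    # detail to add on top of the upscaled base-layer frame
    up = nearest_upscale(low_res)
    return up + predict_residual(up)

lr = np.arange(4.0).reshape(2, 2)
out = reconstruct(lr, lambda up: np.zeros_like(up))
# A zero residual leaves the fixed-filter upscale unchanged
```

Learning the residual rather than the full high-resolution frame is what makes the task easier for the network: most of the signal is already supplied by the upscaler.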

S-PRESENT Cryptanalysis through Known-Plaintext Attack Based on Deep Learning (딥러닝 기반의 알려진 평문 공격을 통한 S-PRESENT 분석)

  • Se-jin Lim;Hyun-Ji Kim;Kyung-Bae Jang;Yea-jun Kang;Won-Woong Kim;Yu-Jin Yang;Hwa-Jeong Seo
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.2 / pp.193-200 / 2023
  • Cryptanalysis can be performed by various techniques, such as known-plaintext attacks, differential attacks, and side-channel analysis. Recently, many studies have applied deep learning to cryptanalysis. A known-plaintext attack is a technique that uses known plaintext-ciphertext pairs to recover the key. In this paper, we use deep learning to perform a known-plaintext attack against S-PRESENT, a reduced version of the lightweight block cipher PRESENT. This paper is significant in that it is the first deep-learning-based known-plaintext attack performed on a reduced lightweight block cipher. For cryptanalysis, MLP (multi-layer perceptron), 1D CNN, and 2D CNN (convolutional neural network) models are used and optimized, and the performance of the three models is compared. The 2D CNN showed the highest performance, but only part of the key space could be attacked. From this, it can be seen that known-plaintext attacks using MLP and CNN models are limited in the number of attackable key bits.
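The known-plaintext setup described above can be illustrated with a toy sketch; the XOR cipher here is a deliberately weak placeholder for reduced-round S-PRESENT, shown only to make clear how labeled (plaintext, ciphertext) → key training samples for an MLP or CNN would be formed:

```python
import random

def toy_encrypt(plaintext_bits, key_bits):
    # Toy XOR cipher standing in for the real block cipher (hypothetical)
    return [p ^ k for p, k in zip(plaintext_bits, key_bits)]

def make_dataset(n, key_len=8, seed=0):
    # Each sample pairs a known (plaintext, ciphertext) tuple with the
    # key bits as the label; a model would be trained to predict the key
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        key = [rng.randint(0, 1) for _ in range(key_len)]
        pt = [rng.randint(0, 1) for _ in range(key_len)]
        ct = toy_encrypt(pt, key)
        data.append(((pt, ct), key))
    return data

samples = make_dataset(4)
(pt, ct), key = samples[0]
# For this toy cipher the key is trivially recoverable: key[i] == pt[i] ^ ct[i];
# for a real cipher, that mapping is what the deep learning model must approximate
```

The paper's finding that only part of the key space is attackable reflects how hard that mapping becomes for a real cipher as more key bits are involved.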