• Title/Summary/Keyword: Synthetic Dataset (합성 데이터 셋)


A Study on Lane Detection Based on Split-Attention Backbone Network (Split-Attention 백본 네트워크를 활용한 차선 인식에 관한 연구)

  • Song, In seo;Lee, Seon woo;Kwon, Jang woo;Won, Jong hoon
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.19 no.5 / pp.178-188 / 2020
  • This paper proposes a lane-recognition CNN that uses a split-attention network as the backbone for feature extraction. Split-attention assigns a weight to each channel of the feature map during CNN feature extraction, so image features can be extracted reliably even in the rapidly changing driving environment of a vehicle. The proposed networks were trained and evaluated on the Tusimple dataset, and the change in performance with the number of backbone layers was compared and analyzed. The model achieved an accuracy of up to 96.26%, comparable to the latest research, and showed the best false-negative (FN) result. These results suggest that the proposed model enables stable lane recognition without misrecognition even in real driving environments.
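
The channel-weighting idea described above can be illustrated with a short, hedged sketch (assuming PyTorch; a simplified radix-2 split-attention block, not the authors' full backbone): channel splits of a feature map are re-weighted by softmax attention computed from global context.

```python
# Minimal split-attention-style block (radix-2, single cardinal group); an
# illustrative sketch of channel weighting, not the paper's exact network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitAttention(nn.Module):
    def __init__(self, channels: int, radix: int = 2, reduction: int = 4):
        super().__init__()
        self.radix = radix
        inter = max(channels * radix // reduction, 8)
        # One grouped conv produces `radix` parallel splits of the feature map.
        self.conv = nn.Conv2d(channels, channels * radix, 3, padding=1, groups=radix)
        self.fc1 = nn.Conv2d(channels, inter, 1)
        self.fc2 = nn.Conv2d(inter, channels * radix, 1)

    def forward(self, x):
        b, c = x.shape[0], x.shape[1]
        splits = self.conv(x).view(b, self.radix, c, *x.shape[2:])
        gap = splits.sum(dim=1).mean(dim=(2, 3), keepdim=True)    # global context per channel
        attn = self.fc2(F.relu(self.fc1(gap))).view(b, self.radix, c, 1, 1)
        attn = F.softmax(attn, dim=1)                              # weights across the splits
        return (attn * splits).sum(dim=1)                          # channel-weighted fusion

# y = SplitAttention(64)(torch.randn(1, 64, 56, 56))   # -> (1, 64, 56, 56)
```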

Shadow Removal based on the Deep Neural Network Using Self Attention Distillation (자기 주의 증류를 이용한 심층 신경망 기반의 그림자 제거)

  • Kim, Jinhee;Kim, Wonjun
    • Journal of Broadcast Engineering / v.26 no.4 / pp.419-428 / 2021
  • Shadow removal plays a key role in the pre-processing stage of image processing techniques such as object tracking and detection. With the advances in image recognition based on deep convolutional neural networks, research on shadow removal has been actively conducted. In this paper, we propose a novel method for shadow removal that utilizes self attention distillation to extract semantic features. The proposed method gradually refines the shadow-detection results extracted from each layer of the network via top-down distillation. Specifically, the training procedure can be performed efficiently by learning contextual information for shadow removal without shadow masks. Experimental results on various datasets show the effectiveness of the proposed method for shadow removal in real-world environments.
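
As a rough illustration of self attention distillation (a minimal sketch assuming PyTorch, not the authors' exact network or loss weighting): each layer's activation is collapsed into a spatial attention map, and shallower maps are trained to imitate deeper ones in a top-down fashion.

```python
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Collapse a (B, C, H, W) feature map into a normalized (B, 1, H, W) attention map."""
    amap = feat.pow(2).mean(dim=1, keepdim=True)           # channel-wise energy
    b, _, h, w = amap.shape
    return F.softmax(amap.view(b, -1), dim=1).view(b, 1, h, w)

def distillation_loss(feats: list) -> torch.Tensor:
    """Top-down distillation: layer i imitates the attention of the deeper layer i+1."""
    loss = feats[0].new_zeros(())
    for shallow, deep in zip(feats[:-1], feats[1:]):
        a_s = attention_map(shallow)
        a_d = attention_map(deep).detach()                 # the deeper map acts as the teacher
        if a_s.shape[2:] != a_d.shape[2:]:                 # match spatial resolution
            a_d = F.interpolate(a_d, size=a_s.shape[2:], mode="bilinear", align_corners=False)
        loss = loss + F.mse_loss(a_s, a_d)
    return loss
```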

Grad-CAM based deep learning network for location detection of the main object (주 객체 위치 검출을 위한 Grad-CAM 기반의 딥러닝 네트워크)

  • Kim, Seon-Jin;Lee, Jong-Keun;Kwak, Nae-Jung;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.2 / pp.204-211 / 2020
  • In this paper, we propose an optimal deep learning network architecture for main-object location detection through weakly supervised learning. The proposed network adds convolution blocks to improve the localization accuracy of the main object: five additional blocks of convolutional layers are stacked on top of VGG-16. The network was trained by weakly supervised learning, which does not require ground-truth location information for objects. In addition, Grad-CAM was used to compensate for the weakness of global average pooling (GAP) in CAM, one of the weakly supervised learning methods. The proposed network was tested on the CUB-200-2011 dataset and achieved a top-1 localization error of 50.13%, showing higher accuracy in detecting the main object than the existing method.
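
Grad-CAM itself is a standard technique; a compact sketch follows (assuming PyTorch and a recent torchvision, with a plain VGG-16 as a stand-in backbone, so the extra convolution blocks of the proposed network are not reproduced here): gradients of the class score are pooled into channel weights and used to combine the last convolutional feature map.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_class, target_layer):
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(image)[0, target_class]      # class score for the target class
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)   # GAP over the gradients
    cam = F.relu((weights * feats[0]).sum(dim=1))       # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)

model = models.vgg16(weights=None).eval()
image = torch.randn(1, 3, 224, 224)
cam = grad_cam(model, image, target_class=0, target_layer=model.features[-1])
```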

Deep Learning based Raw Audio Signal Bandwidth Extension System (딥러닝 기반 음향 신호 대역 확장 시스템)

  • Kim, Yun-Su;Seok, Jong-Won
    • Journal of IKEEE / v.24 no.4 / pp.1122-1128 / 2020
  • Bandwidth extension refers to restoring a narrowband signal (NB), which has been degraded in the encoding and decoding process due to limited channel capacity or the characteristics of the codec installed in a mobile communication device, and converting it into a wideband signal (WB). Bandwidth extension research has mainly focused on speech signals and on frequency-domain techniques such as SBR (Spectral Band Replication) and IGF (Intelligent Gap Filling), which restore lost or damaged high bands through complex feature-extraction processes. In this paper, we propose a model that outputs a bandwidth-extended signal based on an autoencoder using the residual connections of one-dimensional convolutional neural networks (CNN); the bandwidth is extended by inputting a time-domain signal of a fixed length without complicated pre-processing. In addition, we confirmed that the damaged high band can be restored even when training on a dataset containing various types of sound sources, including music, rather than being limited to speech.
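
A hedged sketch of the general architecture described above, assuming PyTorch (layer counts, kernel sizes, and input length are illustrative, not the paper's configuration): a 1-D convolutional encoder-decoder with residual skips maps a narrowband waveform window directly to a wideband estimate.

```python
import torch
import torch.nn as nn

class BWEAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(1, 32, 9, stride=2, padding=4), nn.PReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(32, 64, 9, stride=2, padding=4), nn.PReLU())
        self.dec2 = nn.Sequential(
            nn.ConvTranspose1d(64, 32, 9, stride=2, padding=4, output_padding=1), nn.PReLU())
        self.dec1 = nn.ConvTranspose1d(32, 1, 9, stride=2, padding=4, output_padding=1)

    def forward(self, x):                 # x: (B, 1, T) narrowband waveform window
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2) + e1           # residual connection from the encoder
        d1 = self.dec1(d2) + x            # predict the missing high band on top of the input
        return d1

# wb = BWEAutoencoder()(torch.randn(1, 1, 8192))   # -> (1, 1, 8192) wideband estimate
```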

Deep Learning-Based Brain Tumor Classification in MRI images using Ensemble of Deep Features

  • Kang, Jaeyong;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information / v.26 no.7 / pp.37-44 / 2021
  • Automatic classification of brain MRI images plays an important role in the early diagnosis of brain tumors. In this work, we present a deep learning-based brain tumor classification model for MRI images using an ensemble of deep features. In the proposed framework, three different deep features are extracted from a brain MR image using three different pre-trained models and fed to the classification module. There, each of the three deep features is first passed through its own fully-connected layer to reduce its dimension; the outputs are then concatenated and fed into a final fully-connected layer to predict the output. To evaluate the proposed model, we use an openly accessible brain MRI dataset from the web. Experimental results show that the proposed model outperforms other machine learning-based models.
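
The ensemble-of-deep-features pipeline can be sketched as below, assuming PyTorch/torchvision; the particular backbones (ResNet-18, ResNet-34, DenseNet-121) and feature dimensions are stand-ins, since the abstract does not name the three pre-trained models.

```python
import torch
import torch.nn as nn
from torchvision import models

class FeatureEnsemble(nn.Module):
    def __init__(self, num_classes: int = 4, reduced: int = 128):
        super().__init__()
        # Three backbones with their classification heads removed.
        self.b1 = models.resnet18(weights=None);    self.b1.fc = nn.Identity()
        self.b2 = models.resnet34(weights=None);    self.b2.fc = nn.Identity()
        self.b3 = models.densenet121(weights=None); self.b3.classifier = nn.Identity()
        # Per-backbone fully-connected layers for dimension reduction.
        self.r1 = nn.Linear(512, reduced)
        self.r2 = nn.Linear(512, reduced)
        self.r3 = nn.Linear(1024, reduced)
        # Final fully-connected layer over the concatenated features.
        self.head = nn.Linear(3 * reduced, num_classes)

    def forward(self, x):
        f = torch.cat([self.r1(self.b1(x)), self.r2(self.b2(x)), self.r3(self.b3(x))], dim=1)
        return self.head(f)

# logits = FeatureEnsemble()(torch.randn(2, 3, 224, 224))   # -> (2, 4)
```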

A USB classification system using deep neural networks (인공신경망을 이용한 USB 인식 시스템)

  • Woo, Sae-Hyeong;Park, Jisu;Eun, Seongbae;Cha, Shin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.535-538 / 2022
  • For Plug & Play of IoT devices, we develop a module that recognizes the type of USB connector, the typical wired interface of IoT devices, through image recognition. Driving an IoT device requires drivers for its communication interface and device hardware, so the wired interface used to connect to the device is recognized from an image captured with a smartphone camera. For USB, the most popular wired interface, connector types are classified with artificial neural network-based machine learning. To secure a sufficient dataset for the neural networks, USB images are collected from the Internet, and additional image data are generated through image processing. In addition to convolutional neural networks, recognizers are implemented with various deep neural networks, and their performance is compared and evaluated.
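
A minimal sketch of the data-handling side of such a system, assuming PyTorch/torchvision and an ImageFolder directory layout (the directory name, class folders, and MobileNet backbone are illustrative, not the authors' setup): augmentation transforms enlarge the collected USB images before a small CNN classifier is trained.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Augmentations used to expand a small set of USB connector photos.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

# Expects e.g. data/usb_a, data/usb_c, data/micro_b (illustrative folder names).
dataset = datasets.ImageFolder("data", transform=augment)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A compact CNN classifier over the USB connector classes.
model = models.mobilenet_v3_small(weights=None, num_classes=len(dataset.classes))
```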


Effective Classification Method of Hierarchical CNN for Multi-Class Outlier Detection (다중 클래스 이상치 탐지를 위한 계층 CNN의 효과적인 클래스 분할 방법)

  • Kim, Jee-Hyun;Lee, Seyoung;Kim, Yerim;Ahn, Seo-Yeong;Park, Saerom
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.81-84 / 2022
  • Anomaly detection in the manufacturing industry is an important factor in ensuring product quality and reducing operating costs, and it has recently been automated using deep learning. CNNs are a common deep learning technique for anomaly detection, and many previous studies have shown that a hierarchically organized CNN can improve performance over a single CNN model. Using the MVTec-AD dataset, we therefore investigated whether a hierarchical CNN is effective for the multi-class anomaly detection problem. In our experiments, the accuracy of a single CNN was 0.7715 and that of the hierarchical CNN was 0.7838, confirming that the hierarchical approach can improve algorithm performance on multi-class anomaly detection. However, a hierarchical CNN has the drawback that the number of models, the number of parameters, and the resource usage grow exponentially compared with a single CNN. To keep the advantages of the hierarchical CNN while saving resources, we constructed hierarchical CNNs using new class groupings produced by K-means, GMM, and hierarchical clustering algorithms, obtaining accuracies of 0.7930, 0.7891, and 0.7936, respectively. This confirms that, when objects are grouped appropriately with a clustering algorithm, performance similar to or better than building a separate state-judgment model per object can be achieved while reducing resource usage.
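
The clustering-based class grouping can be sketched as follows, assuming scikit-learn (feature extraction and the per-group CNNs are outside this snippet, and the function and class names are illustrative): class prototypes are clustered with K-means so that each cluster gets its own detection model.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_classes(class_features: dict, n_groups: int = 3) -> dict:
    """class_features maps class name -> (N_i, D) feature matrix for its samples."""
    names = list(class_features)
    prototypes = np.stack([class_features[n].mean(axis=0) for n in names])  # one prototype per class
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(prototypes)
    groups = {}
    for name, g in zip(names, labels):
        groups.setdefault(int(g), []).append(name)
    return groups   # e.g. {0: ['bottle', 'capsule'], 1: ['screw', ...], ...}

# Each group then gets its own detection CNN, giving a two-level
# (group classifier -> per-group detector) hierarchy instead of one model per class.
```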


Gaussian Blending: Improved 3D Gaussian Splatting for Model Light-Weighting and Deep Learning-Based Performance Enhancement

  • Yeong-In Lee;Jin-Nyeong Heo;Ji-Hwan Moon;Ha-Young Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.8 / pp.23-32 / 2024
  • NVS (Novel View Synthesis) is a field in computer vision that reconstructs new views of a scene from a set of input views. Real-time rendering and high performance are essential for NVS technology to be effectively utilized in various applications. Recently, 3D-GS (3D Gaussian Splatting) has gained popularity due to its faster training and inference times compared to those of NeRF (Neural Radiance Fields)-based methodologies. However, since 3D-GS reconstructs a 3D (Three-Dimensional) scene by splitting and cloning (Density Control) Gaussian points, the number of Gaussian points continuously increases, causing the model to become heavier as training progresses. To address this issue, we propose two methodologies: 1) Gaussian blending, an improved density control methodology that removes unnecessary Gaussian points, and 2) a performance enhancement methodology using a depth estimation model to minimize the loss in representation caused by the blending of Gaussian points. Experiments on the Tanks and Temples Dataset show that the proposed methodologies reduce the number of Gaussian points by up to 4% while maintaining performance.
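
As a heavily simplified sketch of pruning during density control (plain NumPy; the paper's actual Gaussian-blending criterion and depth-guided refinement are not reproduced here), Gaussians that contribute little can be dropped by thresholding their opacity and scale.

```python
import numpy as np

def prune_gaussians(positions, opacities, scales, opacity_thresh=0.01, scale_thresh=5.0):
    """Drop Gaussians that contribute little (low opacity) or have degenerated (huge scale).

    positions: (N, 3), opacities: (N,), scales: (N, 3) after a density-control step.
    The thresholds here are illustrative placeholders.
    """
    keep = (opacities > opacity_thresh) & (scales.max(axis=1) < scale_thresh)
    return positions[keep], opacities[keep], scales[keep]
```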

Diagnosis of Valve Internal Leakage for Ship Piping System using Acoustic Emission Signal-based Machine Learning Approach (선박용 밸브의 내부 누설 진단을 위한 음향방출신호의 머신러닝 기법 적용 연구)

  • Lee, Jung-Hyung
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.1 / pp.184-192 / 2022
  • Valve internal leakage is caused by damage to the internal parts of the valve, resulting in accidents and shutdowns of the piping system. This study investigated the possibility of a real-time leak detection method using the acoustic emission (AE) signal generated from the piping system during the internal leakage of a butterfly valve. Datasets of raw time-domain AE signals were collected and postprocessed for each operation mode of the valve in a systematic manner to develop a data-driven model for the detection and classification of internal leakage, by applying machine learning algorithms. The aim of this study was to determine whether it is possible to treat leak detection as a classification problem by applying two classification algorithms: support vector machine (SVM) and convolutional neural network (CNN). The results showed different performances for the algorithms and datasets used. The SVM-based binary classification models, based on feature extraction of data, achieved an overall accuracy of 83% to 90%, while in the case of a multiple classification model, the accuracy was reduced to 66%. By contrast, the CNN-based classification model achieved an accuracy of 99.85%, which is superior to those of any other models based on the SVM algorithm. The results revealed that the SVM classification model requires effective feature extraction of the AE signals to improve the accuracy of multi-class classification. Moreover, the CNN-based classification can be a promising approach to detect both leakage and valve opening as long as the performance of the processor does not degrade.
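
The SVM branch of such a pipeline can be sketched as below, assuming scikit-learn and SciPy; the specific features and kernel are illustrative, not the paper's exact feature set: simple time-domain statistics are extracted from AE signal windows and fed to an RBF-kernel SVM, while the CNN branch would consume the raw windows instead.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def ae_features(window: np.ndarray) -> np.ndarray:
    """Simple time-domain statistics of one AE signal window."""
    return np.array([
        np.sqrt(np.mean(window ** 2)),        # RMS
        np.max(np.abs(window)),               # peak amplitude
        kurtosis(window),
        skew(window),
    ])

def train_svm(X: np.ndarray, y: np.ndarray):
    """X: (n_windows, window_len) raw AE windows, y: leak / no-leak labels."""
    feats = np.apply_along_axis(ae_features, 1, X)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(feats, y)
```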

Automatic Sagittal Plane Detection for the Identification of the Mandibular Canal (치아 신경관 식별을 위한 자동 시상면 검출법)

  • Pak, Hyunji;Kim, Dongjoon;Shin, Yeong-Gil
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.31-37 / 2020
  • Identification of the mandibular canal path in Computed Tomography (CT) scans is important in dental implantology. Typically, prior to implant planning, dentists find the sagittal plane in which the mandibular canal path is maximally visible in order to identify the canal manually; this is time-consuming and requires extensive experience. In this paper, we propose a deep learning-based framework that detects the desired sagittal plane automatically. It combines two main techniques: 1) a modified version of the iterative transformation network (ITN) method for obtaining initial planes, and 2) a fine searching method based on a convolutional neural network (CNN) classifier for detecting the desirable sagittal plane. This combination enables accurate plane detection, which is a limitation of the stand-alone ITN method. Tests on a number of CT datasets demonstrate that the proposed method achieves more satisfactory results than the ITN method, allowing dentists to identify the mandibular canal path efficiently and providing a foundation for future research into more efficient, automatic mandibular canal detection methods.
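
The "fine searching" step can be sketched at a high level as follows (a hedged PyTorch-flavoured sketch; `extract_plane` and the plane's `rotated` helper are assumed interfaces, not taken from the paper): candidate planes around the ITN-initialized plane are scored by the CNN classifier and the best-scoring one is kept.

```python
import torch

def fine_search(volume, initial_plane, classifier, extract_plane, angle_offsets):
    """volume: 3-D CT tensor; extract_plane(volume, plane) -> (1, 1, H, W) resampled slice."""
    best_plane, best_score = initial_plane, float("-inf")
    classifier.eval()
    with torch.no_grad():
        for offset in angle_offsets:                        # small rotations around the initial plane
            candidate = initial_plane.rotated(offset)       # assumed plane-parameter helper
            slice_2d = extract_plane(volume, candidate)
            score = classifier(slice_2d).squeeze().item()   # "desirable plane" score
            if score > best_score:
                best_plane, best_score = candidate, score
    return best_plane
```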