• Title/Abstract/Keywords: deep transfer learning

Search results: 259

딥러닝 기반 객체 인식을 통한 철계 열처리 부품의 인지에 관한 연구 (Deep Learning-based Material Object Recognition Research for Steel Heat Treatment Parts)

  • 박혜정;황창하;김상권;여국현;서상우
    • 열처리공학회지 / Vol. 35, No. 6 / pp. 327-336 / 2022
  • In this study, a model was developed to automatically recognize several steel parts through a camera before material charging, under the assumption that the temperature distribution in the atmosphere prior to charging is known. For model development, datasets were collected both in random environments and in factories. The YOLO-v5 model, a YOLO variant with strengths in real-time detection in the field of object detection, was used, and the drawback of the long time required to collect images and train models was mitigated through transfer learning. The derived model showed excellent performance, achieving 0.927 in mAP@0.5. It will be applied in a follow-up study in which the model accurately recognizes each material and matches it against the atmosphere's temperature distribution to determine, before charging, whether the material layout is suitable.
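The mAP@0.5 figure above counts a detection as correct when its overlap (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that matching criterion, assuming an `(x1, y1, x2, y2)` box format (the paper itself does not specify one):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, gt, threshold=0.5):
    """A prediction counts toward mAP@0.5 when IoU >= 0.5."""
    return iou(pred, gt) >= threshold
```

mAP@0.5 then averages precision over recall levels and classes using this per-box decision.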

YOLO 네트워크를 활용한 전이학습 기반 객체 탐지 알고리즘 (Transfer Learning-based Object Detection Algorithm Using YOLO Network)

  • 이동구;선영규;김수현;심이삭;이계산;송명남;김진영
    • 한국인터넷방송통신학회논문지 / Vol. 20, No. 1 / pp. 219-223 / 2020
  • In deep-learning-based object detection and image processing, securing a large amount of data is essential to guarantee a model's recognition rate and accuracy. This paper proposes a transfer-learning-based object detection algorithm that achieves high model performance even when training data are scarce. For object detection, a transfer-learning network was constructed by combining a pre-trained Resnet-50 network with the YOLO (You Only Look Once) network. The network was trained on a subset of the Leeds Sports Pose dataset to detect the person occupying the largest area in an image. The experiments recorded a detection rate of 84% and a detection accuracy of 97%.
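The core idea of reusing a pre-trained backbone such as Resnet-50 while training only a small task-specific head can be shown in miniature: the feature extractor stays frozen and only the new head's weights are updated. This is a conceptual numpy sketch, not the paper's network; the toy data, dimensions, and logistic head are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained backbone: a fixed projection
# whose weights are never updated during fine-tuning.
W_backbone = rng.standard_normal((8, 4))
def frozen_features(x):
    return np.tanh(x @ W_backbone)

# Tiny synthetic binary task: two separable clusters.
x = np.vstack([rng.standard_normal((20, 8)) + 2.0,
               rng.standard_normal((20, 8)) - 2.0])
y = np.array([1.0] * 20 + [0.0] * 20)

# Only the new head (w, b) is trained.
w, b = np.zeros(4), 0.0
for _ in range(500):
    f = frozen_features(x)                    # backbone output (frozen)
    p = 1.0 / (1.0 + np.exp(-(f @ w + b)))    # sigmoid head
    grad = p - y                              # logistic-loss gradient
    w -= 0.1 * f.T @ grad / len(y)
    b -= 0.1 * grad.mean()

accuracy = ((p > 0.5) == (y == 1)).mean()
```

In the paper's setting the frozen projection is replaced by Resnet-50 features and the head by YOLO's detection layers, but the freeze-and-retrain pattern is the same.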

딥러닝 전이학습을 이용한 경량 트렌드 분석 시스템 설계 및 구현 (Design and implementation of trend analysis system through deep learning transfer learning)

  • 신종호;안수빈;박태영;방승철;노기섭
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2022년도 추계학술대회 / pp. 87-89 / 2022
  • As more consumers have spent time at home because of COVID-19, time spent on contactless digital consumption such as SNS and OTT services has naturally increased. Since the outbreak in 2019, digital consumption has roughly doubled from 44% to 82%, and given how quickly digital trends shift, analyzing consumer sentiment to identify and apply trends rapidly and accurately is important. However, there are practical constraints on implementing sentiment-analysis services in small-scale systems rather than enterprise-level ones, and few such services actually reach deployment. Yet even a small system that can easily analyze consumer trends would be valuable in a fast-changing society. This paper proposes a lightweight trend analysis system that builds its learning network by fine-tuning (transfer learning) a BERT model and integrates a crawler for real-time data collection.
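Downstream of the fine-tuned BERT classifier, trend analysis reduces to aggregating per-post sentiment over time windows. A minimal sketch of that aggregation step; the classifier is stubbed out with a keyword check, and the label names and daily windowing are illustrative assumptions:

```python
from collections import Counter, defaultdict

def classify(text):
    """Stub standing in for the fine-tuned BERT sentiment classifier."""
    return "positive" if "good" in text else "negative"

def trend_by_day(posts):
    """posts: iterable of (date_string, text); returns per-day label counts."""
    daily = defaultdict(Counter)
    for date, text in posts:
        daily[date][classify(text)] += 1
    return {d: dict(c) for d, c in daily.items()}

posts = [("2022-10-01", "good product"),
         ("2022-10-01", "bad service"),
         ("2022-10-02", "good value")]
```

In the proposed system the crawler feeds `posts` continuously and `classify` is the fine-tuned BERT model.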


Deep Learning-Enabled Detection of Pneumoperitoneum in Supine and Erect Abdominal Radiography: Modeling Using Transfer Learning and Semi-Supervised Learning

  • Sangjoon Park;Jong Chul Ye;Eun Sun Lee;Gyeongme Cho;Jin Woo Yoon;Joo Hyeok Choi;Ijin Joo;Yoon Jin Lee
    • Korean Journal of Radiology / Vol. 24, No. 6 / pp. 541-552 / 2023
  • Objective: Detection of pneumoperitoneum using abdominal radiography, particularly in the supine position, is often challenging. This study aimed to develop and externally validate a deep learning model for the detection of pneumoperitoneum using supine and erect abdominal radiography. Materials and Methods: A model that can utilize "pneumoperitoneum" and "non-pneumoperitoneum" classes was developed through knowledge distillation. To train the proposed model with limited training data and weak labels, it was trained using a recently proposed semi-supervised learning method called distillation for self-supervised and self-train learning (DISTL), which leverages the Vision Transformer. The proposed model was first pre-trained with chest radiographs to exploit knowledge shared across modalities, then fine-tuned and self-trained on labeled and unlabeled abdominal radiographs in both supine and erect positions. In total, 191,212 chest radiographs (CheXpert data) were used for pre-training, and 5,518 labeled and 16,671 unlabeled abdominal radiographs were used for fine-tuning and self-supervised learning, respectively. The proposed model was internally validated on 389 abdominal radiographs and externally validated on 475 and 798 abdominal radiographs from two external institutions. We evaluated the performance in diagnosing pneumoperitoneum using the area under the receiver operating characteristic curve (AUC) and compared it with that of radiologists. Results: In the internal validation, the proposed model had an AUC, sensitivity, and specificity of 0.881, 85.4%, and 73.3% for the supine position and 0.968, 91.1%, and 95.0% for the erect position, respectively. In the external validation at the two institutions, the AUCs were 0.835 and 0.852 for the supine position and 0.909 and 0.944 for the erect position. In the reader study, the readers' performance improved with the assistance of the proposed model.
Conclusion: The proposed model trained with the DISTL method can accurately detect pneumoperitoneum on abdominal radiography in both the supine and erect positions.
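The AUC values reported above can be computed directly from model scores. By the Mann-Whitney interpretation, AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case, with ties counting half. A minimal sketch of that computation (not the study's evaluation code):

```python
def roc_auc(scores, labels):
    """AUC via pairwise comparison: P(score_pos > score_neg), ties = 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, a model that ranks every pneumoperitoneum case above every normal case scores 1.0, while random ranking averages 0.5.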

High-Resolution Satellite Image Super-Resolution Using Image Degradation Model with MTF-Based Filters

  • Minkyung Chung;Minyoung Jung;Yongil Kim
    • 대한원격탐사학회지 / Vol. 39, No. 4 / pp. 395-407 / 2023
  • Super-resolution (SR) has great significance in image processing because it enables downstream vision tasks with high spatial resolution. Recently, SR studies have adopted deep learning networks and achieved remarkable SR performance compared to conventional example-based methods. Deep-learning-based SR models generally require low-resolution (LR) images and the corresponding high-resolution (HR) images as a training dataset. Due to the difficulty of obtaining real-world LR-HR pairs, most SR models have used only HR images and generated LR images with a predefined degradation such as bicubic downsampling. However, SR models trained on such simple image degradation do not reflect the properties of real images and often yield deteriorated SR quality when applied to real-world images. In this study, we propose an image degradation model for HR satellite images based on the modulation transfer function (MTF) of an imaging sensor. Because the proposed method determines the image degradation based on the sensor properties, it is more suitable for training SR models on remote sensing images. Experimental results on HR satellite image datasets demonstrated the effectiveness of applying MTF-based filters to construct a more realistic LR-HR training dataset.
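The degradation pipeline described, low-pass filtering consistent with the sensor response followed by downsampling, can be sketched with a Gaussian kernel standing in for the MTF-derived filter. The kernel shape, size, and scale factor are assumptions for illustration; the paper derives its filters from actual sensor MTF curves:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian surrogate for an MTF-derived low-pass filter."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(hr, scale=2, sigma=1.0):
    """Low-pass filter an HR image, then subsample to produce its LR pair."""
    k = gaussian_kernel(5, sigma)
    padded = np.pad(hr, 2, mode="reflect")
    blurred = np.zeros_like(hr, dtype=float)
    h, w = hr.shape
    for i in range(h):
        for j in range(w):                      # direct 5x5 convolution
            blurred[i, j] = (padded[i:i+5, j:j+5] * k).sum()
    return blurred[::scale, ::scale]

hr = np.arange(64, dtype=float).reshape(8, 8)   # toy HR image
lr = degrade(hr, scale=2)
```

Training pairs `(lr, hr)` built this way reflect the sensor's blur rather than a generic bicubic kernel.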

딥 트랜스퍼 러닝 기반의 아기 울음소리 식별 (Infant cry recognition using a deep transfer learning method)

  • 박철;이종욱;오스만;박대희;정용화
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2020년도 추계학술발표대회 / pp. 971-974 / 2020
  • Infants express their physical and emotional needs to the outside world mainly through crying. However, most parents find it challenging to understand the reason behind their babies' cries. Failure to correctly understand the cause of a baby's cry and take appropriate action can affect the cognitive and motor development of newborns undergoing rapid brain development. In this paper, we propose an infant cry recognition system based on deep transfer learning to help parents identify crying babies' needs the way a specialist would. The proposed system transforms the waveform of the cry signal into a log-mel spectrogram, then uses the VGGish model pre-trained on AudioSet to extract a 128-dimensional feature vector from the spectrogram. Finally, a softmax function classifies the extracted feature vector to recognize the corresponding type of cry. The experimental results show that our method achieves good performance, exceeding 0.96 in precision, recall, and F1-score.
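The final stage, a softmax over class logits computed from the 128-dimensional VGGish embedding, can be sketched as follows. The cry-type labels and the linear layer mapping embedding to logits are illustrative assumptions, not the paper's exact classifier:

```python
import numpy as np

CRY_TYPES = ["hungry", "tired", "discomfort"]   # illustrative labels

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_cry(embedding, weights, bias):
    """Map a 128-d VGGish embedding to a probability over cry types."""
    probs = softmax(weights @ embedding + bias)
    return CRY_TYPES[int(np.argmax(probs))], probs

rng = np.random.default_rng(1)
weights = rng.standard_normal((3, 128)) * 0.01   # toy classifier weights
bias = np.zeros(3)
label, probs = classify_cry(rng.standard_normal(128), weights, bias)
```

The softmax guarantees a proper probability distribution over cry types, so the argmax label can be reported with a confidence score.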

딥러닝 기반 작물 질병 탐지 및 분류 시스템 (Deep Learning-based system for plant disease detection and classification)

  • 고유진;이현준;정희자;위리;김남호
    • 스마트미디어저널 / Vol. 12, No. 7 / pp. 9-17 / 2023
  • Because pests and diseases affect the growth of many crops, identifying them early is very important. Many machine learning (ML) models have already been used to inspect and classify crop diseases and pests, and advances in deep learning (DL), a subset of ML, have brought considerable progress to this research area. In this study, a YOLOX detector and a MobileNet classifier were used to inspect diseases and pests in abnormal crops and to classify maturity for normal crops. This approach effectively extracts diverse disease and pest features; for the experiments, image datasets of various resolutions covering strawberry, pepper, and tomato were prepared and used for crop disease classification. The experimental results showed an average test accuracy of 84% on images with complex background conditions and a maturity classification accuracy of 83.91%. The model effectively detected six diseases across the three crops in natural conditions and classified the maturity of each crop.
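The two-stage design, where a YOLOX-style detector flags each crop region as normal or abnormal and a MobileNet-style classifier then assigns either a disease label or a maturity grade, amounts to a routing function. Both model calls are stubbed below and the labels are illustrative assumptions:

```python
def detect(crop):
    """Stub for the YOLOX detector: returns 'normal' or 'abnormal'."""
    return "abnormal" if crop.get("lesion") else "normal"

def classify_disease(crop):
    """Stub for the MobileNet disease classifier."""
    return "leaf_blight"

def classify_maturity(crop):
    """Stub for the MobileNet maturity classifier."""
    return "ripe"

def route(crop):
    """Abnormal crops go to disease classification; normal ones to maturity grading."""
    if detect(crop) == "abnormal":
        return ("disease", classify_disease(crop))
    return ("maturity", classify_maturity(crop))
```

Separating detection from classification lets each model specialize, which is the rationale behind the paper's pipeline.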

Remote Sensing Image Classification for Land Cover Mapping in Developing Countries: A Novel Deep Learning Approach

  • Lynda, Nzurumike Obianuju;Nnanna, Nwojo Agwu;Boukar, Moussa Mahamat
    • International Journal of Computer Science & Network Security / Vol. 22, No. 2 / pp. 214-222 / 2022
  • Convolutional neural networks (CNNs) are a category of deep learning networks that have proven very effective in computer vision tasks such as image classification. Notwithstanding, they have seen little use for remote sensing image classification in developing countries, largely due to the scarcity of training data. Recently, transfer learning has successfully been used to develop state-of-the-art models for remote sensing (RS) image classification using training and testing data from well-known RS data repositories. However, the ability of such models to classify RS test data from a different dataset has not been sufficiently investigated. In this paper, we propose a deep CNN model that can classify RS test data drawn from a dataset different from the training dataset. To achieve this, we first re-trained a ResNet-50 model on EuroSAT, a large-scale RS dataset, to develop a base model, then integrated augmentation and ensemble learning to improve its generalization ability. We further evaluated this model's ability to classify a novel dataset (Nig_Images). The final classification results show that our model achieves 96% and 80% accuracy on the EuroSAT and Nig_Images test data, respectively. Adequate knowledge and usage of this framework is expected to encourage research into, and the use of, deep CNNs for land cover mapping where training data are lacking, as is common in developing countries.
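The ensemble step, averaging the class probabilities of several model variants or augmented views before taking the argmax, can be sketched with numpy. The member probabilities below are illustrative numbers, not results from the paper:

```python
import numpy as np

def ensemble_predict(prob_sets):
    """Average per-member class probabilities, then pick the argmax class."""
    mean_probs = np.mean(prob_sets, axis=0)
    return int(np.argmax(mean_probs)), mean_probs

# Three members: the third prefers class 0, but averaging settles on class 1.
member_probs = np.array([
    [0.2, 0.5, 0.3],
    [0.1, 0.6, 0.3],
    [0.4, 0.35, 0.25],
])
cls, probs = ensemble_predict(member_probs)
```

Averaging smooths out individual members' errors, which is what improves generalization to an unseen dataset such as Nig_Images.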

수중에서의 특징점 매칭을 위한 CNN기반 Opti-Acoustic변환 (CNN-based Opti-Acoustic Transformation for Underwater Feature Matching)

  • 장혜수;이영준;김기섭;김아영
    • 로봇학회논문지 / Vol. 15, No. 1 / pp. 1-7 / 2020
  • In this paper, we introduce a methodology that utilizes a deep-learning-based front end to enhance underwater feature matching. Both optical cameras and sonar are widely used sensors in underwater research; however, each has its own weaknesses, such as sensitivity to lighting conditions and turbidity for the optical camera, and noise for the sonar. To overcome these problems, we propose an opti-acoustic transformation method. Since feature detection in sonar images is challenging, we convert the sonar image into an optic-style image. While preserving the main content of the sonar image, a CNN-based style transfer method changes the image's style in a way that facilitates feature detection. Finally, we verify the result using cosine-similarity comparison and feature matching against the original optical image.
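The verification step compares feature descriptors of the transformed sonar image and the original optical image by cosine similarity. A minimal sketch of that measure (the input vectors are illustrative, not real descriptors):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two descriptor vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Values near 1.0 indicate that the transformed image's descriptors closely match those of the optical image, i.e., the style transfer preserved the content.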

Fight Detection in Hockey Videos using Deep Network

  • Mukherjee, Subham;Saini, Rajkumar;Kumar, Pradeep;Roy, Partha Pratim;Dogra, Debi Prosad;Kim, Byung-Gyu
    • Journal of Multimedia Information System / Vol. 4, No. 4 / pp. 225-232 / 2017
  • Understanding actions in videos is an important task: it helps in finding anomalies present in videos, such as fights, and fight detection becomes especially crucial in sports. This paper focuses on finding fight scenes in hockey videos using blur and Radon transforms together with convolutional neural networks (CNNs). First, the local motion within the video frames is extracted using blur information. Next, the fast Fourier and Radon transforms are applied to the local motion. Video frames containing fight scenes are then identified through transfer learning with the pre-trained deep learning model VGG-Net. Finally, the methodology is compared against feed-forward neural networks. Accuracies of 56.00% and 75.00% were achieved using the feed-forward neural network and VGG16-Net, respectively.
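The front end described, extracting local motion between frames and then moving it into the frequency domain, can be sketched with frame differencing followed by a 2-D FFT. This is a simplification of the paper's blur-information and Radon-transform steps, with toy frames as illustrative input:

```python
import numpy as np

def local_motion(frame_a, frame_b):
    """Absolute inter-frame difference as a crude local-motion map."""
    return np.abs(frame_b.astype(float) - frame_a.astype(float))

def frequency_signature(motion):
    """Magnitude spectrum of the motion map via 2-D fast Fourier transform."""
    return np.abs(np.fft.fft2(motion))

a = np.zeros((8, 8))
b = np.zeros((8, 8))
b[3:5, 3:5] = 1.0                 # a small blob appearing between frames
motion = local_motion(a, b)
sig = frequency_signature(motion)
```

The resulting spectrum (and, in the paper, its Radon projection) serves as the motion descriptor fed to the classifier.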