• Title/Abstract/Keyword: deep transfer learning

253 search results

딥러닝 기반의 도메인 적응 기술: 서베이 (Deep Learning based Domain Adaptation: A Survey)

  • 나재민;황원준
    • 방송공학회논문지 / Vol. 27, No. 4 / pp.511-518 / 2022
  • Supervised deep learning has achieved remarkable progress in a wide range of applications. However, most supervised learning methods rest on the common assumption that training and test data are drawn from the same distribution. When this assumption is violated, a deep network trained on the training domain is likely to suffer a sharp drop in performance on the test domain because of the distribution gap between the two. Domain adaptation is a branch of transfer learning that trains a deep network to make successful inferences in a label-scarce test domain (the target domain) by leveraging knowledge learned in a label-rich training domain (the source domain). In particular, unsupervised domain adaptation addresses the setting in which only completely unlabeled image data are available in the target domain. This paper surveys such unsupervised domain adaptation techniques.
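
The survey's scope is unsupervised domain adaptation in general; as one representative family of methods (not necessarily the paper's own contribution), an adversarial, gradient-reversal (DANN-style) training step might look like the following PyTorch sketch. The network sizes, the lambda value, and the random batches are illustrative assumptions.

```python
# Illustrative DANN-style unsupervised domain adaptation step (PyTorch).
# Network sizes, the lambda value, and the data shapes are assumptions for the sketch.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda on the way back."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
label_classifier = nn.Linear(256, 10)        # trained on labelled source data only
domain_discriminator = nn.Linear(256, 2)     # source vs. target
params = list(feature_extractor.parameters()) + list(label_classifier.parameters()) \
         + list(domain_discriminator.parameters())
optimizer = torch.optim.SGD(params, lr=1e-2)
criterion = nn.CrossEntropyLoss()

def train_step(x_src, y_src, x_tgt, lambd=0.1):
    optimizer.zero_grad()
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)
    # Task loss: only the source domain has labels.
    cls_loss = criterion(label_classifier(f_src), y_src)
    # Domain loss: gradient reversal pushes features to become domain-invariant.
    feats = torch.cat([f_src, f_tgt])
    domains = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    dom_loss = criterion(domain_discriminator(GradReverse.apply(feats, lambd)), domains)
    (cls_loss + dom_loss).backward()
    optimizer.step()
    return cls_loss.item(), dom_loss.item()

# Example usage with random tensors standing in for source/target batches.
losses = train_step(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)),
                    torch.randn(8, 3, 32, 32))
```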

전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론 (Deep Learning-based Professional Image Interpretation Using Expertise Transplant)

  • 김태진;김남규
    • 지능정보연구 / Vol. 26, No. 2 / pp.79-104 / 2020
  • Thanks to the recent remarkable advances in text and image deep learning, interest in image captioning, which lies at the intersection of the two fields, has surged. Image captioning automatically generates a caption for a given image and therefore handles image understanding and text generation simultaneously. Owing to its wide applicability, it has become one of the core research areas of artificial intelligence, and efforts to improve its performance from various angles continue. Despite these many recent efforts to advance image captioning, however, studies that interpret an image from the perspective of a domain expert rather than a layperson are hard to find. Even for the same image, the parts that attract attention differ according to the viewer's field of expertise, and the way the image is interpreted and expressed also differs with the level of expertise. This study therefore proposes a method that exploits expert knowledge to generate captions specialized to the relevant field. Concretely, the proposed methodology pre-trains on a large amount of general data and then transplants the expertise of the target field through transfer learning on a small amount of expert data. To resolve the inter-observation interference that arises in this process, we also propose a 'feature-independent transfer learning' scheme. To examine the feasibility of the proposed methodology, we pre-trained on the MSCOCO image-caption dataset and transplanted expertise using 'image-expert caption' data created with the advice of art therapists. The experimental results show that, whereas captions generated after training only on general data contain much content irrelevant to expert interpretation, captions generated by the proposed methodology reflect the transplanted expertise. This study proposes a new research goal, professional image interpretation, and presents a new use of transfer learning together with a method for generating captions specialized to a particular domain.
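
The abstract describes a two-stage schedule: pre-train a captioning model on a large general corpus, then "transplant" expertise by continuing training on a small expert-annotated set. The sketch below illustrates only that schedule; the tiny encoder/decoder, the random stand-in data, and the choice to freeze the visual encoder in stage 2 are assumptions, not the authors' actual architecture or feature-independent transfer method.

```python
# Minimal two-stage transfer sketch: pre-train everything on general data,
# then fine-tune only the language decoder/head on a small specialist set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

VOCAB, MAXLEN = 1000, 12

class TinyCaptioner(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(16, 64))
        self.embed = nn.Embedding(VOCAB, 64)
        self.decoder = nn.GRU(64, 64, batch_first=True)
        self.head = nn.Linear(64, VOCAB)

    def forward(self, images, caption_in):
        h0 = self.encoder(images).unsqueeze(0)          # image feature as initial state
        out, _ = self.decoder(self.embed(caption_in), h0)
        return self.head(out)                           # (batch, length, vocab)

def run_epochs(model, loader, optimizer, epochs):
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, captions in loader:
            optimizer.zero_grad()
            logits = model(images, captions[:, :-1])    # teacher forcing
            loss = criterion(logits.reshape(-1, VOCAB), captions[:, 1:].reshape(-1))
            loss.backward()
            optimizer.step()

def fake_set(n):    # stand-in for (image, caption) pairs
    return TensorDataset(torch.randn(n, 3, 64, 64), torch.randint(0, VOCAB, (n, MAXLEN)))

model = TinyCaptioner()
# Stage 1: pre-train all parameters on the large general corpus (e.g. MSCOCO).
run_epochs(model, DataLoader(fake_set(64), batch_size=16, shuffle=True),
           torch.optim.Adam(model.parameters(), lr=1e-4), epochs=1)
# Stage 2: freeze the visual encoder and update only the decoder/head on the
# small expert-caption set, with a smaller learning rate.
for p in model.encoder.parameters():
    p.requires_grad = False
run_epochs(model, DataLoader(fake_set(8), batch_size=4, shuffle=True),
           torch.optim.Adam(list(model.decoder.parameters()) + list(model.head.parameters()),
                            lr=1e-5), epochs=1)
```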

A Novel Transfer Learning-Based Algorithm for Detecting Violence Images

  • Meng, Yuyan;Yuan, Deyu;Su, Shaofan;Ming, Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 6 / pp.1818-1832 / 2022
  • Violence in the Internet era poses a new challenge to counter-riot work, and research and analysis indicate that most violent incidents are related to the dissemination of violence images. Using popular deep learning neural networks to automatically analyze the massive number of images on the Internet has become one of the important tools in current counter-violence work. This paper focuses on the use of transfer learning techniques and the introduction of an attention mechanism into the residual network (ResNet) model for the classification and identification of violence images. Firstly, the feature elements of violence images are identified and a targeted dataset is constructed; secondly, because positive samples of violence images are scarce, pre-training and attention mechanisms are introduced to improve the traditional residual network; finally, the improved model is trained and tested on the constructed dedicated dataset. The results show that the improved network model can quickly and accurately identify violence images, with an average accuracy of 92.20%, thus effectively reducing the cost of manual identification and providing decision support for combating rebel organization activities.
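
The abstract does not specify which attention module was added to ResNet; purely as an illustration, the sketch below bolts a squeeze-and-excitation (SE) style channel-attention block onto a pre-trained torchvision ResNet-50 and re-heads it for a binary violence/non-violence decision. The block placement, backbone choice, and class count are assumptions.

```python
# Illustrative only: a pre-trained ResNet with an SE-style channel-attention
# block inserted before the classifier head.
import torch
import torch.nn as nn
from torchvision import models

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                          # x: (batch, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))            # squeeze -> excite -> (batch, C)
        return x * w[:, :, None, None]             # reweight channels

class ViolenceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # up to last conv block
        self.attention = SEBlock(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(2048, 2)             # violence / non-violence

    def forward(self, x):
        x = self.attention(self.features(x))
        return self.head(self.pool(x).flatten(1))

model = ViolenceClassifier()
logits = model(torch.randn(2, 3, 224, 224))        # -> shape (2, 2)
```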

Food Detection by Fine-Tuning Pre-trained Convolutional Neural Network Using Noisy Labels

  • Alshomrani, Shroog;Aljoudi, Lina;Aljabri, Banan;Al-Shareef, Sarah
    • International Journal of Computer Science & Network Security / Vol. 21, No. 7 / pp.182-190 / 2021
  • Deep learning is an advanced technology for large-scale data analysis, with numerous promising applications such as image processing and object detection. It has become customary to use transfer learning and fine-tune a pre-trained CNN model for most image recognition tasks. People taking photos and tagging themselves provide a valuable source of data. However, these tags and labels may be noisy, since the people who annotate the images are not necessarily experts. This paper explores the impact of noisy labels on fine-tuning pre-trained CNN models. The effect is measured on a food recognition task using Food101 as a benchmark. Four pre-trained CNN models are included in this study: InceptionV3, VGG19, MobileNetV2, and DenseNet121. Symmetric label noise is added at different ratios. In all cases, models based on DenseNet121 outperformed the other models. When noisy labels were introduced to the data, the performance of all models degraded almost linearly with the amount of added noise.
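
The two ingredients of the experiment are symmetric label noise at a chosen ratio and fine-tuning a pre-trained backbone on Food101. The sketch below shows both in PyTorch; the study itself compared several Keras-style backbones, so the framework, learning rate, and noise function here are illustrative assumptions rather than the paper's setup.

```python
# (1) Symmetric label noise at a given ratio; (2) a pre-trained DenseNet121
# re-headed for the 101 Food101 classes.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def add_symmetric_noise(labels, num_classes, noise_ratio, seed=0):
    """Flip a fraction `noise_ratio` of labels uniformly to a *different* class."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    flip = rng.random(len(labels)) < noise_ratio
    for i in np.where(flip)[0]:
        choices = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels

noisy = add_symmetric_noise(np.random.randint(0, 101, size=1000), 101, noise_ratio=0.2)

# Fine-tuning setup: ImageNet weights, classifier replaced for 101 food classes.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 101)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```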

양방향 인재매칭을 위한 BERT 기반의 전이학습 모델 (A BERT-based Transfer Learning Model for Bidirectional HR Matching)

  • 오소진;장문경;송희석
    • Journal of Information Technology Applications and Management / Vol. 28, No. 4 / pp.33-43 / 2021
  • While youth unemployment has recorded its lowest level since the global COVID-19 pandemic, SMEs (small and medium-sized enterprises) are still struggling to fill vacancies. Because of information mismatch, it is difficult for SMEs to find good candidates and for job seekers to find appropriate job offers. To overcome this information mismatch, this study proposes a fine-tuning model for bidirectional HR matching based on the pre-trained language model BERT (Bidirectional Encoder Representations from Transformers). The proposed model can recommend job openings suitable for an applicant, or applicants appropriate for a job, through sufficient pre-training on terms including technical jargon. The experimental results demonstrate the superior performance of our model in terms of precision, recall, and F1-score compared to an existing content-based metric learning model. This study provides insights for developing practical models for job recommendation and offers suggestions for future research.
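
The abstract does not give implementation details; one plausible realization of BERT-based bidirectional matching is a cross-encoder fine-tuned to score (job posting, applicant) pairs. The Hugging Face sketch below assumes a multilingual BERT checkpoint, toy texts, and binary match labels, none of which come from the paper.

```python
# Illustrative cross-encoder fine-tuning step for job/applicant matching.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)           # match / no match

job_postings = ["Backend developer, Python, Django, 3+ years"]
applicants = ["Web developer with 4 years of Python and REST API experience"]
labels = torch.tensor([1])                                   # 1 = good match

# Sentence-pair encoding: [CLS] job [SEP] applicant [SEP]
batch = tokenizer(job_postings, applicants, padding=True, truncation=True,
                  return_tensors="pt")
outputs = model(**batch, labels=labels)                      # returns loss and logits
outputs.loss.backward()                                      # one fine-tuning step
torch.optim.AdamW(model.parameters(), lr=2e-5).step()

# At inference time the softmax over the logits ranks pairs in either direction:
# jobs for a given applicant, or applicants for a given job.
match_prob = outputs.logits.softmax(dim=-1)[:, 1]
```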

Effect of deep transfer learning with a different kind of lesion on classification performance of pre-trained model: Verification with radiolucent lesions on panoramic radiographs

  • Yoshitaka Kise;Yoshiko Ariji;Chiaki Kuwada;Motoki Fukuda;Eiichiro Ariji
    • Imaging Science in Dentistry / Vol. 53, No. 1 / pp.27-34 / 2023
  • Purpose: The aim of this study was to clarify the influence of training with a different kind of lesion on the performance of a target model. Materials and Methods: A total of 310 patients (211 men, 99 women; average age, 47.9±16.1 years) were selected and their panoramic images were used in this study. We created a source model using panoramic radiographs including mandibular radiolucent cyst-like lesions (radicular cyst, dentigerous cyst, odontogenic keratocyst, and ameloblastoma). The model was then transferred, in a simulated setting, and trained on images of Stafne's bone cavity. The learning model was created using a customized DetectNet built in DIGITS version 5.0 (NVIDIA, Santa Clara, CA). Two machines (Machines A and B) with identical specifications were used to simulate transfer learning: the source model was created on Machine A from the data consisting of ameloblastoma, odontogenic keratocyst, dentigerous cyst, and radicular cyst, then transferred to Machine B and trained on additional data of Stafne's bone cavity to create target models. To investigate the effect of the number of cases, we created several target models with different numbers of Stafne's bone cavity cases. Results: When the Stafne's bone cavity data were added to the training, both the detection and classification performance for this pathology improved. Even for lesions other than Stafne's bone cavity, the detection sensitivities tended to increase with the number of Stafne's bone cavity cases. Conclusion: This study showed that using different lesions for transfer learning improves the performance of the model.
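
The study used NVIDIA DIGITS with a customized DetectNet; as a framework-neutral illustration of the same handoff (train a source detector on Machine A, move the weights, continue training on the new lesion type on Machine B), the PyTorch sketch below uses torchvision's Faster R-CNN as a stand-in detector. The class layout, file names, and weight-filtering step are assumptions.

```python
# Generic "source model -> transfer -> target model with one extra lesion class" sketch.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def lesion_detector(num_classes):
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

# --- Machine A: source model over background + 4 radiolucent lesion classes ---
source_model = lesion_detector(num_classes=1 + 4)
# ... train on ameloblastoma / keratocyst / dentigerous cyst / radicular cyst ...
torch.save(source_model.state_dict(), "source_model.pth")

# --- Machine B: target model adds Stafne's bone cavity as a 5th lesion class ---
target_model = lesion_detector(num_classes=1 + 5)
state = torch.load("source_model.pth")
tgt_state = target_model.state_dict()
# Keep every transferred weight whose shape still matches; the enlarged class
# head is re-initialized and learned from the added Stafne's bone cavity cases.
compatible = {k: v for k, v in state.items() if tgt_state[k].shape == v.shape}
target_model.load_state_dict(compatible, strict=False)
# ... continue training on the Stafne's bone cavity images ...
```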

반려견 자동 품종 분류를 위한 전이학습 효과 분석 (Analysis of Transfer Learning Effect for Automatic Dog Breed Classification)

  • 이동수;박구만
    • 방송공학회논문지 / Vol. 27, No. 1 / pp.133-145 / 2022
  • Compared with the continuously growing pet-dog population and industry in Korea, systematic analysis of related data and research on breed classification methods remain very scarce. In this paper, we performed automatic breed classification using deep learning for the 14 major dog breeds raised in Korea. To this end, we first collected dog images and built a dataset for deep learning, and then created breed classification algorithms by performing transfer learning with VGG-16 and ResNet-34 as backbone networks. To examine the transfer learning effect of the two models on dog images, we compared experiments that used the pre-trained weights as-is with experiments that updated the weights; when fine-tuning was performed on the VGG-16 backbone, the final model achieved a Top-1 accuracy of about 89% and a Top-3 accuracy of about 94%. The breed classification method and dataset construction for major domestic dog breeds proposed in this paper can also be used for various applications, such as identifying the breeds of abandoned or lost dogs at animal shelters or applications in the pet food industry.
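
The comparison in the abstract hinges on two transfer-learning settings: using the ImageNet pre-trained backbone as a frozen feature extractor versus fine-tuning (updating) its weights. A minimal sketch of that setup for VGG-16 with the 14 breed classes follows; the hyperparameters are illustrative assumptions.

```python
# Frozen feature extractor vs. full fine-tuning of an ImageNet pre-trained VGG-16.
import torch.nn as nn
from torch.optim import SGD
from torchvision import models

NUM_BREEDS = 14

def build_vgg16(fine_tune):
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = fine_tune             # False = frozen feature extractor
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_BREEDS)
    return model

frozen_model = build_vgg16(fine_tune=False)      # only the new head is trained
finetune_model = build_vgg16(fine_tune=True)     # the whole backbone is updated

optimizer = SGD((p for p in finetune_model.parameters() if p.requires_grad),
                lr=1e-3, momentum=0.9)
```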

Plant Disease Identification using Deep Neural Networks

  • Mukherjee, Subham;Kumar, Pradeep;Saini, Rajkumar;Roy, Partha Pratim;Dogra, Debi Prosad;Kim, Byung-Gyu
    • Journal of Multimedia Information System / Vol. 4, No. 4 / pp.233-238 / 2017
  • Automatic identification of plant diseases from leaves is one of the most challenging tasks for researchers. Diseases degrade plant performance and result in a huge reduction in agricultural output. Therefore, early and accurate diagnosis of such diseases is of the utmost importance. Advances in deep Convolutional Neural Networks (CNNs) have changed the way images are processed compared to traditional image processing techniques. Deep learning architectures are composed of multiple processing layers that learn representations of data at multiple levels of abstraction, and they have therefore proved highly effective compared to many state-of-the-art approaches. In this paper, we present a methodology for identifying plant diseases from their leaves using deep CNNs. For this, we adopt GoogLeNet, a powerful deep learning architecture, to identify the disease types. Transfer learning was used to fine-tune the pre-trained model. An accuracy of 85.04% was recorded in the identification of four disease classes in apple leaves. Finally, a comparison with other models was performed to show the effectiveness of the approach.
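
As a sketch of the fine-tuning step named in the abstract, the code below replaces the final layer of an ImageNet pre-trained GoogLeNet with a four-class head and runs one training step. The handling of the auxiliary classifiers, the optimizer, and the random stand-in images are assumptions rather than details from the paper.

```python
# Fine-tuning an ImageNet pre-trained GoogLeNet for four apple-leaf disease classes.
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)       # 4 apple-leaf disease classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
images = torch.randn(4, 3, 224, 224)                # stand-in leaf images
labels = torch.randint(0, 4, (4,))
optimizer.zero_grad()
outputs = model(images)
# Depending on the torchvision version, training-mode GoogLeNet may return a
# namedtuple that includes auxiliary logits; keep only the main logits.
logits = outputs.logits if isinstance(outputs, tuple) else outputs
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```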

콘크리트 균열 탐지를 위한 딥 러닝 기반 CNN 모델 비교 (Comparison of Deep Learning-based CNN Models for Crack Detection)

  • 설동현;오지훈;김홍진
    • 대한건축학회논문집:구조계 / Vol. 36, No. 3 / pp.113-120 / 2020
  • The purpose of this study is to compare deep learning-based Convolutional Neural Network (CNN) models for concrete crack detection. The compared models are AlexNet, GoogLeNet, VGG16, VGG19, ResNet-18, ResNet-50, ResNet-101, and SqueezeNet, all of which competed in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). To train, validate, and test these models, we constructed 3,000 training images and 12,000 validation images at 256×256 pixel resolution, consisting of cracked and non-cracked images, and 5 test images at 4160×3120 pixel resolution showing concrete with cracks. To increase training efficiency, transfer learning was performed by taking the weights from the pre-trained networks provided by MATLAB. With the trained networks, the validation data were classified into crack and non-crack images, yielding True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts, from which six performance indicators were calculated: False Negative Rate (FNR), False Positive Rate (FPR), Error Rate, Recall, Precision, and Accuracy. Each test image was scanned twice with a 256×256 pixel sliding window to classify cracks, resulting in a crack map. From the comparison of the performance indicators and the crack maps, it was concluded that VGG16 and VGG19 were the most suitable for detecting concrete cracks.
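
The crack map is produced by sliding a 256×256 patch classifier over the large test image. A minimal version of that evaluation loop is sketched below; the ResNet-18 stand-in, the stride, and the single-pass scan (the paper scans twice) are assumptions.

```python
# Sliding-window crack map from a trained 256x256 patch classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

patch_classifier = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
patch_classifier.fc = nn.Linear(patch_classifier.fc.in_features, 2)  # crack / no crack
patch_classifier.eval()

def crack_map(image, window=256, stride=256):
    """image: float tensor (3, H, W); returns an (H//stride, W//stride) 0/1 map."""
    _, H, W = image.shape
    grid = np.zeros((H // stride, W // stride), dtype=np.uint8)
    with torch.no_grad():
        for i in range(0, H - window + 1, stride):
            for j in range(0, W - window + 1, stride):
                patch = image[:, i:i + window, j:j + window].unsqueeze(0)
                grid[i // stride, j // stride] = patch_classifier(patch).argmax(1).item()
    return grid

test_image = torch.randn(3, 3120, 4160)       # stand-in for a 4160x3120 concrete photo
print(crack_map(test_image).shape)            # -> (12, 16)
```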

Image-based Soft Drink Type Classification and Dietary Assessment System Using Deep Convolutional Neural Network with Transfer Learning

  • Rubaiya Hafiz;Mohammad Reduanul Haque;Aniruddha Rakshit;Amina khatun;Mohammad Shorif Uddin
    • International Journal of Computer Science & Network Security / Vol. 24, No. 2 / pp.158-168 / 2024
  • There is hardly any person in modern times who has not taken soft drinks instead of drinking water. Because soft drink consumption is surprisingly high, researchers around the world have repeatedly cautioned that these drinks lead to weight gain, raise the risk of non-communicable diseases, and so on. Therefore, in this work an image-based tool is developed to monitor the nutritional information of soft drinks using a deep convolutional neural network with transfer learning. First, visual saliency, mean-shift segmentation, thresholding, and noise reduction techniques, collectively referred to as 'pre-processing', are used to locate the drink region. After removing the background and segmenting out only the desired area of the image, a Discrete Wavelet Transform (DWT) based resolution enhancement technique is applied to improve image quality. A transfer learning model is then employed to classify the drinks. Finally, the nutritional value of each drink is estimated using Bag-of-Features (BoF) based classification and a Euclidean distance-based ratio calculation technique. To achieve this, a dataset of the ten most consumed soft drinks in Bangladesh was built from the ImageNet dataset and the internet, and the proposed method detects and recognizes the different drink types with an accuracy of 98.51%.
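
The abstract names a DWT-based resolution enhancement step but not its exact variant. The sketch below shows one common DWT/IDWT upscaling scheme (decompose the low-resolution crop, upscale the detail sub-bands, use the original image as the approximation band, reconstruct at twice the size) using PyWavelets and OpenCV; it is a generic illustration, not the paper's implementation.

```python
# Generic DWT-based resolution enhancement sketch (requires pywavelets and opencv-python).
import cv2
import numpy as np
import pywt

def dwt_enhance(gray):
    """gray: 2-D uint8/float array; returns an image ~2x larger via DWT/IDWT."""
    ll, (lh, hl, hh) = pywt.dwt2(gray.astype(np.float32), "haar")
    size = (gray.shape[1], gray.shape[0])                   # cv2 expects (width, height)
    lh, hl, hh = (cv2.resize(b, size, interpolation=cv2.INTER_CUBIC) for b in (lh, hl, hh))
    # The original image stands in for the upscaled approximation (LL) band.
    return pywt.idwt2((gray.astype(np.float32), (lh, hl, hh)), "haar")

low_res = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in drink crop
print(dwt_enhance(low_res).shape)                                  # -> (256, 256)
```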