• Title/Abstract/Keywords: VGG16 models


비디오 분류에 기반 해석가능한 딥러닝 알고리즘 (An Explainable Deep Learning Algorithm based on Video Classification)

  • 김택위;조인휘
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2023년도 추계학술발표대회 / pp.449-452 / 2023
  • The rapid development of the Internet has led to a significant increase in multimedia content on social networks, making better analysis and improvement of video classification models an important task. Deep learning models have typical "black box" characteristics and therefore require explainable analysis. This article uses two classification models, ConvLSTM and VGG16+LSTM, and combines them with the explainable method LRP (Layer-wise Relevance Propagation) to generate visualized explainable results. In the experiments, the classification models reached accuracies of 75.94% (ConvLSTM) and 92.50% (VGG16+LSTM). We then conducted explainable analysis of the VGG16+LSTM model with the LRP method and found that the VGG16+LSTM classifier tends to use frames from the latter half of the video, and especially the last frame, as the basis for classification.
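The LRP step mentioned above redistributes a model's output relevance back onto its inputs layer by layer. As a minimal illustration (not the paper's code), here is the epsilon-rule for a single dense layer in NumPy; the weights and inputs are made-up toy values:

```python
import numpy as np

def lrp_epsilon(W, b, x, R_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer: redistribute the output
    relevance R_out onto the inputs x in proportion to each input's
    contribution x_i * W_ij to the pre-activation z_j."""
    z = x @ W + b                     # pre-activations, shape (out,)
    denom = z + eps * np.sign(z)      # stabilized denominator
    s = R_out / denom                 # per-output scaling, shape (out,)
    return x * (W @ s)                # input relevances, shape (in,)

# Toy layer: 3 inputs, 2 outputs, random weights
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
b = np.zeros(2)
x = rng.normal(size=3)
R_out = np.maximum(x @ W + b, 0.0)    # start from the ReLU output scores
R_in = lrp_epsilon(W, b, x, R_out)
```

With zero bias and a small epsilon, the relevance is (approximately) conserved: the input relevances sum to the output relevance, which is what makes the resulting heatmaps interpretable as a decomposition of the score.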

Optimized Deep Learning Techniques for Disease Detection in Rice Crop using Merged Datasets

  • Muhammad Junaid;Sohail Jabbar;Muhammad Munwar Iqbal;Saqib Majeed;Mubarak Albathan;Qaisar Abbas;Ayyaz Hussain
    • International Journal of Computer Science & Network Security / Vol. 23 No. 3 / pp.57-66 / 2023
  • Rice is an important food crop for much of the world's population and is widely cultivated in Pakistan, where it not only meets domestic food demand but also contributes to the country's wealth. Its production, however, can be affected by climate change: climate irregularities can cause several diseases such as brown spot, bacterial blight, tungro, and leaf blast. Detecting these diseases is necessary for suitable treatment, and they can be detected effectively using deep learning methods such as Convolutional Neural Networks. Because the dataset is small, transfer learning with models such as VGG16 can detect the diseases effectively. In this paper, the VGG16, Inception, and Xception models are used; they achieved validation accuracies of 99.22%, 88.48%, and 93.92%, respectively, with the epoch value set to 10. The models were also evaluated using accuracy, recall, precision, and the confusion matrix.
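The evaluation mentioned above (accuracy, recall, precision, confusion matrix) can be sketched in a few lines of NumPy; the labels below are made-up toy values, not the paper's data:

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Binary confusion-matrix metrics for a disease classifier."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)       # diseased, predicted diseased
    tn = np.sum(~y_true & ~y_pred)     # healthy, predicted healthy
    fp = np.sum(~y_true & y_pred)      # healthy, predicted diseased
    fn = np.sum(y_true & ~y_pred)      # diseased, predicted healthy
    return {
        "confusion": [[tn, fp], [fn, tp]],
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Toy labels: 1 = diseased leaf, 0 = healthy leaf
m = classification_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```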

저선량 흉부 CT를 이용한 VGGNet 폐기종 검출 유용성 평가 (Effectiveness of the Detection of Pulmonary Emphysema using VGGNet with Low-dose Chest Computed Tomography Images)

  • 김두빈;박영준;홍주완
    • 한국방사선학회논문지 / Vol. 16 No. 4 / pp.411-417 / 2022
  • In this study, we trained VGGNet with low-dose chest CT images to implement a pulmonary emphysema detection model and evaluated its performance. The low-dose chest CT images used in the study comprised 8,000 images with a normal diagnosis and 3,189 with an emphysema diagnosis; for model training, the normal and emphysema data were randomly split into train, validation, and test datasets at 60%, 24%, and 16%, respectively. VGG16 and VGG19 from the VGGNet family were used as the neural networks for training, and the trained models were evaluated with accuracy, loss, confusion matrix, precision, recall, specificity, and F1-score. For VGG16 and VGG19 respectively, emphysema detection accuracy was 92.35% and 95.88%, loss 0.21% and 0.09%, precision 91.60% and 96.55%, recall 98.36% and 97.39%, specificity 77.08% and 92.72%, and F1-score 94.86% and 96.97%. Based on these metrics, the VGG19 model is judged to outperform the VGG16 model in emphysema detection. This study is expected to serve as baseline material for research on emphysema detection models using VGGNet and artificial neural networks.
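The 60/24/16 random split described above can be sketched as follows; this is a minimal NumPy illustration (the seed and function name are assumptions, not the paper's code):

```python
import numpy as np

def split_indices(n, train=0.60, val=0.24, seed=42):
    """Randomly partition n sample indices into train/validation/test
    subsets at a 60/24/16 ratio, as in the described experiment."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)                  # shuffle all indices once
    n_train = int(n * train)
    n_val = int(n * val)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])            # remainder (~16%) is the test set

# 8,000 normal + 3,189 emphysema images = 11,189 samples in total
tr, va, te = split_indices(8000 + 3189)
```

Taking the test set as the remainder guarantees the three subsets are disjoint and cover every sample exactly once.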

Accuracy Measurement of Image Processing-Based Artificial Intelligence Models

  • Jong-Hyun Lee;Sang-Hyun Lee
    • International journal of advanced smart convergence / Vol. 13 No. 1 / pp.212-220 / 2024
  • When a typhoon or other natural disaster occurs, a significant number of orchard fruits fall, which greatly affects farmers' income. In this paper, we introduce an AI-based method to enhance low-quality raw images, focusing on apple images used as AI training data. We use both a basic image-processing program and artificial intelligence models to determine the number of apples in an apple tree image, and we evaluate performance by how close each result is to the actual count. The artificial intelligence models used in this study include a Convolutional Neural Network (CNN), VGG16, and RandomForest, alongside a model using traditional image processing techniques. The study found that the general image process identified 49 of the 87 red apples in the apple tree image, a 62% hit rate; the VGG16 model identified 61, corresponding to 88%; the RandomForest model identified 32, corresponding to 83%; and the CNN model identified 54, a 95% confirmation rate. We therefore aim to select an artificial intelligence model with outstanding performance and apply a real-time object separation method employing artificial intelligence and image processing techniques to identify orchard fruits, which can notably enhance the income and convenience of orchard farmers.

다양한 CNN 모델을 이용한 얼굴 영상의 나이 인식 연구 (A study on age estimation of facial images using various CNNs (Convolutional Neural Networks))

  • 최성은
    • Journal of Platform Technology / Vol. 11 No. 5 / pp.16-22 / 2023
  • As applications of estimating age from facial images increase, research on this technology is being actively conducted. Estimating age from a facial image requires extracting features that represent age and then accurately classifying age from the extracted features. Recently, various CNN-based deep learning models have been applied in image recognition and have greatly improved performance, and such models are also being applied to facial age estimation. In this paper, we compare the facial age estimation performance of various CNN-based deep learning models. Models for facial age estimation were built using AlexNet, VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152, all widely used in image recognition, and their performance was compared. The experimental results confirmed that the model using ResNet-34 performed best.


앙상블 학습 알고리즘을 이용한 컨벌루션 신경망의 분류 성능 분석에 관한 연구 (A Study on Classification Performance Analysis of Convolutional Neural Network using Ensemble Learning Algorithm)

  • 박성욱;김종찬;김도연
    • 한국멀티미디어학회논문지 / Vol. 22 No. 6 / pp.665-675 / 2019
  • In this paper, we compare and analyze the classification performance of Convolutional Neural Networks (CNNs) according to ensemble generation and combining techniques. We used several CNN models (VGG16, VGG19, DenseNet121, DenseNet169, DenseNet201, ResNet18, ResNet34, ResNet50, ResNet101, ResNet152, GoogLeNet) to create 10 ensemble generation combinations, and applied 6 combining techniques (average, weighted average, maximum, minimum, median, product) to the optimal combination. Experimental results showed that the DenseNet169-VGG16-GoogLeNet combination for ensemble generation and the product rule for ensemble combining performed best. Based on this, it was concluded that ensembling different models with high benchmark scores is another way to obtain good results.
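The six combining rules listed above all operate on the per-class probabilities produced by each ensemble member. A minimal NumPy sketch with made-up softmax outputs (the weights in the weighted average are illustrative, not from the paper):

```python
import numpy as np

# Softmax outputs of three models for 2 samples over 4 classes (toy values).
probs = np.array([
    [[0.7, 0.1, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1]],   # model A
    [[0.6, 0.2, 0.1, 0.1], [0.1, 0.6, 0.2, 0.1]],   # model B
    [[0.5, 0.3, 0.1, 0.1], [0.3, 0.3, 0.3, 0.1]],   # model C
])  # shape: (models, samples, classes)

COMBINERS = {
    "average": lambda p: p.mean(axis=0),
    "maximum": lambda p: p.max(axis=0),
    "minimum": lambda p: p.min(axis=0),
    "median":  lambda p: np.median(p, axis=0),
    "product": lambda p: p.prod(axis=0),
    # Weighted average: weight each model, e.g. by its validation accuracy.
    "weighted": lambda p, w=np.array([0.5, 0.3, 0.2]): np.tensordot(w, p, axes=1),
}

# Final prediction per rule = argmax over the combined class scores.
preds = {name: fn(probs).argmax(axis=1) for name, fn in COMBINERS.items()}
```

The product rule, which the paper found best, rewards classes on which all members agree: a single member assigning a class near-zero probability vetoes it.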

딥러닝을 활용한 전시 정원 디자인 유사성 인지 모형 연구 (Development of Deep Recognition of Similarity in Show Garden Design Based on Deep Learning)

  • 조우윤;권진욱
    • 한국조경학회지 / Vol. 52 No. 2 / pp.96-109 / 2024
  • The aim of this study is to present a method for evaluating the similarity of show gardens using the VGG-16 and ResNet50 deep learning models. A model for judging show-garden similarity was developed based on VGG-16 and ResNet50; we call it the DRG (deep recognition of similarity in show garden design) model. The model was built with an algorithm using GAP (global average pooling) and the Pearson correlation coefficient, and similarity accuracy was analyzed by comparing the total number of images similar to the original image retrieved at Top1, Top3, and Top5. The image data used in the DRG model consist of 278 works from the international Chaumont Garden Festival, 27 works from the Seoul Garden Show, and 17 works from the Korea Garden Show. Using the DRG model, image analysis was conducted within the same group and across groups, and guidelines for show-garden similarity were proposed on this basis. First, for whole-image similarity analysis, applying data augmentation on top of the ResNet50 model was suitable for deriving similarity. Second, for image analysis focused on internal structure and outline form, generating images with a fixed-size filter (16 cm × 16 cm) to concentrate on form and comparing similarity with the VGG-16 model proved effective. In this case, an image size of 448 × 448 pixels was effective, and using the original chromatic image as the default was proposed. Based on these results, a quantitative method for judging show-garden similarity is proposed, and future convergence research with various fields is expected to contribute to the continuous development of garden culture.
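The Top1/Top3/Top5 retrieval step described above ranks images by the Pearson correlation of their pooled features. A minimal sketch with random stand-in vectors; in the DRG model these would be GAP activations from VGG-16 or ResNet50, which is an assumption about wiring, not the authors' code:

```python
import numpy as np

def top_k_similar(query, gallery, k=5):
    """Rank gallery feature vectors by Pearson correlation with the
    query vector and return the indices of the k most similar entries."""
    def pearson(a, b):
        a, b = a - a.mean(), b - b.mean()   # center, then cosine = Pearson r
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = np.array([pearson(query, g) for g in gallery])
    return np.argsort(-scores)[:k]          # highest correlation first

rng = np.random.default_rng(1)
gallery = rng.normal(size=(10, 512))             # 10 images, 512-dim GAP features
query = gallery[3] + 0.01 * rng.normal(size=512)  # near-duplicate of image 3
ranks = top_k_similar(query, gallery, k=5)
```

Counting how often the known-similar image lands in the first 1, 3, or 5 positions of `ranks` gives exactly the Top1/Top3/Top5 accuracy the study reports.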

콘크리트 균열 탐지를 위한 딥 러닝 기반 CNN 모델 비교 (Comparison of Deep Learning-based CNN Models for Crack Detection)

  • 설동현;오지훈;김홍진
    • 대한건축학회논문집:구조계 / Vol. 36 No. 3 / pp.113-120 / 2020
  • The purpose of this study is to compare Deep Learning-based Convolutional Neural Network (CNN) models for concrete crack detection. The models compared are AlexNet, GoogLeNet, VGG16, VGG19, ResNet-18, ResNet-50, ResNet-101, and SqueezeNet, all of which won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). To train, validate, and test these models, we constructed 3,000 training images and 12,000 validation images at 256×256 pixel resolution consisting of cracked and non-cracked surfaces, and five test images at 4160×3120 pixel resolution showing concrete with cracks. To increase training efficiency, transfer learning was performed using the weights of the pre-trained networks supported by MATLAB. The trained networks classified the validation data into crack and non-crack images, yielding True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts, from which six performance indicators were calculated: False Negative Rate (FNR), False Positive Rate (FPR), Error Rate, Recall, Precision, and Accuracy. Each test image was scanned twice with a 256×256 pixel sliding window to classify cracks, producing a crack map. From the comparison of the performance indicators and the crack maps, it was concluded that VGG16 and VGG19 were the most suitable for detecting concrete cracks.
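The sliding-window scan described above can be sketched as follows. The crack classifier here is a trivial dark-pixel stand-in for the trained CNN, and all sizes except the 256×256 window are made up:

```python
import numpy as np

def crack_map(image, classify, win=256, stride=256):
    """Scan a large image with a win x win sliding window, classify each
    patch, and mark predicted-crack patches in a binary map."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=bool)
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if classify(image[y:y + win, x:x + win]):
                out[y:y + win, x:x + win] = True   # patch-level label
    return out

# Toy grayscale image with a synthetic dark "crack" band.
img = np.full((1024, 1024), 200, dtype=np.uint8)
img[300:340, 100:900] = 20
cmap = crack_map(img, classify=lambda p: p.min() < 50)
```

A second pass offset by half the window (e.g. starting the ranges at `win // 2` with the same stride) would approximate the paper's double scan, refining the crack map at patch boundaries.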

Transfer learning for crack detection in concrete structures: Evaluation of four models

  • Ali Bagheri;Mohammadreza Mosalmanyazdi;Hasanali Mosalmanyazdi
    • Structural Engineering and Mechanics / Vol. 91 No. 2 / pp.163-175 / 2024
  • The objective of this research is to improve public safety in civil engineering by recognizing fractures in concrete structures quickly and correctly. The study offers a new crack detection method based on advanced image processing and machine learning techniques, specifically transfer learning with convolutional neural networks (CNNs). Four pre-trained models (VGG16, AlexNet, ResNet18, and DenseNet161) were fine-tuned to detect fractures in concrete surfaces. These models consistently achieved accuracy rates greater than 80%, showing their ability to automate fracture identification and potentially reduce the cost of structural failures. The study then expands beyond crack detection to assessing concrete health, using a dataset with a wide range of surface defects and anomalies, including cracks. Notably, using VGG16, chosen as the most effective architecture in the first phase, the study achieves excellent accuracy in classifying concrete health, demonstrating satisfactory performance even in more complex scenarios.

Early Detection of Rice Leaf Blast Disease using Deep-Learning Techniques

  • Syed Rehan Shah;Syed Muhammad Waqas Shah;Hadia Bibi;Mirza Murad Baig
    • International Journal of Computer Science & Network Security / Vol. 24 No. 4 / pp.211-221 / 2024
  • Pakistan is a top producer and exporter of high-quality rice, yet traditional methods are still used for detecting rice diseases. This research developed an automated rice blast disease diagnosis technique based on deep learning, image processing, and transfer learning with pre-trained models such as Inception V3, VGG16, VGG19, and ResNet50. A modified skip-connection ResNet50 achieved the highest accuracy of 99.16%, while the other models achieved 98.16%, 98.47%, and 98.56%, respectively. In addition, a CNN and an ensemble K-nearest neighbor model were explored for disease prediction, and the study demonstrated superior performance and disease prediction using the recommended web-app approach.