Title/Summary/Keyword: VGG16 and Inception V3


Grading of Harvested 'Mihwang' Peach Maturity with Convolutional Neural Network (합성곱 신경망을 이용한 '미황' 복숭아 과실의 성숙도 분류)

  • Shin, Mi Hee; Jang, Kyeong Eun; Lee, Seul Ki; Cho, Jung Gun; Song, Sang Jun; Kim, Jin Gook
    • Journal of Bio-Environment Control / v.31 no.4 / pp.270-278 / 2022
  • This study used deep learning to classify the maturity of 'Mihwang' peaches from RGB images and fruit quality attributes measured during the fruit development and maturation periods. A set of 730 peach images was split into training and validation sets at a ratio of 8:2, and the remaining 170 images were used to test the deep learning models. Among the fruit quality attributes, firmness, Hue value, and a* value were adopted as indices for classifying maturity into immature, mature, and over-mature fruit. Two CNN (convolutional neural network) models were used for image classification: VGG16 and InceptionV3 of the GoogLeNet family. With the Hue value index, VGG16 and InceptionV3 achieved accuracies of 87.1% and 83.6%, respectively; with the firmness index, they achieved 72.2% and 76.9%, with loss rates of 54.3% and 62.1%. Further work on the firmness index is considered necessary to increase its field applicability in peach grading.
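
The abstract does not include code, but the kind of three-class transfer-learning classifier it describes can be sketched in Keras as follows. The directory layout, image size, and dense head here are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: three-class peach-maturity classifier on a frozen VGG16
# base, assuming images sorted into immature/mature/over_mature subfolders.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

train_ds = tf.keras.utils.image_dataset_from_directory(
    "peach/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "peach/val", image_size=(224, 224), batch_size=32)

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # reuse ImageNet features unchanged

model = models.Sequential([
    layers.Rescaling(1.0 / 255),            # map pixel values to [0, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(3, activation="softmax"),  # immature / mature / over-mature
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```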

Performance Comparison of CNN-Based Image Classification Models for Drone Identification System (드론 식별 시스템을 위한 합성곱 신경망 기반 이미지 분류 모델 성능 비교)

  • YeongWan Kim; DaeKyun Cho; GunWoo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.4 / pp.639-644 / 2024
  • Recent developments in the use of drones on battlefields, extending beyond reconnaissance to firepower support, have greatly increased the importance of technologies for early automatic drone identification. In this study, to identify an effective image classification model that can distinguish drones from other aerial targets of similar size and appearance, such as birds and balloons, we utilized a dataset of 3,600 images collected from the internet. We adopted a transfer learning approach that combines the feature extraction capabilities of three pre-trained convolutional neural network models (VGG16, ResNet50, InceptionV3) with an additional classifier. Specifically, we conducted a comparative analysis of the performance of these three pre-trained models to determine the most effective one. The results showed that the InceptionV3 model achieved the highest accuracy at 99.66%. This research represents a new endeavor in utilizing existing convolutional neural network models and transfer learning for drone identification, which is expected to make a significant contribution to the advancement of drone identification technologies.
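
As a rough illustration of the comparison described above, the sketch below trains the same small classifier head on top of each frozen pre-trained backbone and reports validation accuracy. The dataset path, class count, and hyperparameters are assumptions for illustration only.

```python
# Sketch: compare frozen VGG16, ResNet50, and InceptionV3 backbones with an
# identical classifier head, as in the transfer-learning setup described above.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

train_ds = tf.keras.utils.image_dataset_from_directory(
    "drone_data/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "drone_data/val", image_size=(224, 224), batch_size=32)

backbones = {"VGG16": VGG16, "ResNet50": ResNet50, "InceptionV3": InceptionV3}
for name, Backbone in backbones.items():
    base = Backbone(weights="imagenet", include_top=False,
                    input_shape=(224, 224, 3))
    base.trainable = False                      # feature extraction only
    model = models.Sequential([
        layers.Rescaling(1.0 / 255),
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(3, activation="softmax"),  # drone / bird / balloon
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=5, verbose=0)
    _, acc = model.evaluate(val_ds, verbose=0)
    print(f"{name}: validation accuracy = {acc:.4f}")
```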

Automatic detection of icing wind turbine using deep learning method

  • Hacıefendioglu, Kemal; Basaga, Hasan Basri; Ayas, Selen; Karimi, Mohammad Tordi
    • Wind and Structures / v.34 no.6 / pp.511-523 / 2022
  • Detecting icing on the blades of wind turbines built in cold regions with conventional methods is a laborious, expensive, and difficult task, so the use of smart systems has recently come to the agenda. Deep learning is one such approach that can address this issue. In this study, an application was implemented that detects icing on wind turbine blades from images using deep-learning-based visualization techniques. Pre-trained models of ResNet-50, VGG-16, VGG-19, and Inception-V3, which are well-known deep learning architectures, were used to classify the images automatically. The Grad-CAM, Grad-CAM++, and Score-CAM visualization techniques were then applied to the trained models to accurately predict the location of icing regions on the blades. Score-CAM was clearly shown to be the best visualization technique for localization. Finally, visualization performance was analyzed using Score-CAM on ResNet-50 under various conditions: close-up and distant photographs of a wind turbine, icing density, and lighting. The results show that these methods can detect icing on wind turbines with acceptably high accuracy.
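
Grad-CAM, the simplest of the three visualization techniques named above, weights the last convolutional feature maps by the pooled gradients of the class score. A minimal sketch on a stock ImageNet ResNet50 follows (the paper's actual models and data are not reproduced here; Score-CAM follows the same idea but weights each activation map by its effect on the class score instead of by gradients).

```python
# Minimal Grad-CAM sketch for localizing class evidence (e.g., icing) in a
# ResNet50 prediction.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

model = ResNet50(weights="imagenet")
last_conv = model.get_layer("conv5_block3_out")   # final conv feature maps
grad_model = tf.keras.Model(model.inputs, [last_conv.output, model.output])

def grad_cam(image_batch, class_index):
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image_batch)
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_maps)        # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool grads
    cam = tf.einsum("bijc,bc->bij", conv_maps, weights)
    cam = tf.nn.relu(cam)                          # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Usage: img is a (224, 224, 3) array loaded from a turbine-blade photo.
# heatmap = grad_cam(preprocess_input(img[None].astype("float32")), cls_idx)
```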

Tea Leaf Disease Classification Using Artificial Intelligence (AI) Models (인공지능(AI) 모델을 사용한 차나무 잎의 병해 분류)

  • K.P.S. Kumaratenna; Young-Yeol Cho
    • Journal of Bio-Environment Control / v.33 no.1 / pp.1-11 / 2024
  • In this study, five artificial intelligence (AI) models, Inception v3, SqueezeNet (local), VGG-16, Painters, and DeepLoc, were used to classify tea leaf diseases. Eight image categories were used: healthy, algal leaf spot, anthracnose, bird's eye spot, brown blight, gray blight, red leaf spot, and white spot. The software used was Orange 3, a Python-based visual programming tool that operates through an interface in which workflows are assembled to visually manipulate and analyze data. The precision of each AI model was recorded to select the ideal one. All models were trained using the Adam solver, the rectified linear unit activation function, 100 neurons in the hidden layer, a maximum of 200 iterations in the neural network, and a regularization factor of 0.0001. Orange 3 can be extended by installing add-ons; in this study, the Image Analytics add-on required for image analysis was installed. For the training model, the import images, image embedding, neural network, test and score, and confusion matrix widgets were used, whereas the import images, image embedding, predictions, and image viewer widgets were used for prediction. The precisions of the neural networks built on the five AI models (Inception v3, SqueezeNet (local), VGG-16, Painters, and DeepLoc) were 0.807, 0.901, 0.780, 0.800, and 0.771, respectively. The SqueezeNet (local) model was therefore selected as the optimal AI model for detecting tea diseases from leaf images, owing to its high precision and good performance throughout the confusion matrix.
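
Orange 3's neural network component is built on scikit-learn, so the reported settings correspond roughly to the following MLPClassifier applied to pre-computed image embeddings. Treat the equivalence and the cross-validation setup as assumptions for illustration.

```python
# Rough scikit-learn equivalent of the reported Orange 3 settings (Adam,
# ReLU, 100 hidden neurons, 200 iterations, regularization alpha = 0.0001),
# applied to pre-computed image-embedding vectors X with labels y.
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(
    hidden_layer_sizes=(100,),  # 100 neurons in the hidden layer
    activation="relu",
    solver="adam",
    alpha=0.0001,               # L2 regularization term
    max_iter=200,
)
# X: (n_samples, n_features) embeddings from Inception v3, SqueezeNet, etc.
# y: one of the eight healthy/disease categories per image.
# scores = cross_val_score(clf, X, y, scoring="precision_macro", cv=5)
```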

A Study on the Optimal Convolution Neural Network Backbone for Sinkhole Feature Extraction of GPR B-scan Grayscale Images (GPR B-scan 회색조 이미지의 싱크홀 특성추출 최적 컨볼루션 신경망 백본 연구)

  • Park, Younghoon
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.3 / pp.385-396 / 2024
  • To enhance the accuracy of sinkhole detection using GPR, this study derived a convolutional neural network that optimally extracts sinkhole characteristics from GPR B-scan grayscale images. Pre-trained convolutional neural networks were evaluated to be more than twice as effective as a vanilla convolutional neural network. Among the pre-trained networks, fast feature extraction (training a classifier on features pre-computed by the frozen backbone) was found to cause less overfitting than standard feature extraction. Top-1 validation accuracy and computation time were analyzed to differ depending on the type of architecture and the simulation conditions. Among the pre-trained convolutional neural networks, InceptionV3 was evaluated as the most robust for sinkhole detection in GPR B-scan grayscale images. When both top-1 validation accuracy and an architecture-efficiency index are considered, VGG19 and VGG16 were analyzed to be highly efficient backbones for extracting sinkhole features from GPR B-scan grayscale images. The MobileNetV3-Large backbone was found to be suitable for extracting sinkhole features in real time when mounted on GPR equipment.
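
For reference, "fast feature extraction" runs the frozen backbone once over the dataset and trains only a small classifier on the cached features, which is cheap and tends to overfit less with small datasets. A hedged sketch follows; the dataset path, input size, and classifier head are illustrative, not the study's configuration.

```python
# Sketch of fast feature extraction: cache frozen-backbone features once,
# then train a small dense classifier on them.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))

def extract_features(dataset):
    feats, labels = [], []
    for images, y in dataset:
        feats.append(base(images, training=False).numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# GPR B-scan grayscale images, loaded as 3-channel by default; the path and
# binary labels (sinkhole / no sinkhole) are assumptions.
ds = tf.keras.utils.image_dataset_from_directory(
    "gpr_bscan", image_size=(224, 224), batch_size=32)
X, y = extract_features(ds.map(lambda im, lb: (im / 255.0, lb)))

head = models.Sequential([
    layers.Dense(256, activation="relu", input_shape=(X.shape[1],)),
    layers.Dense(2, activation="softmax"),
])
head.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
head.fit(X, y, epochs=10, validation_split=0.2)
```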

Waste Classification by Fine-Tuning Pre-trained CNN and GAN

  • Alsabei, Amani; Alsayed, Ashwaq; Alzahrani, Manar; Al-Shareef, Sarah
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.65-70 / 2021
  • Waste accumulation is becoming a significant challenge in most urban areas and, if it continues unchecked, is poised to have severe repercussions on our environment and health. The massive industrialisation of our cities has been accompanied by commensurate waste creation that has become a bottleneck even for waste management systems. While recycling is a viable solution for waste management, it can be daunting to classify waste material for recycling accurately. In this study, transfer learning models were proposed to automatically classify waste into six material classes (cardboard, glass, metal, paper, plastic, and trash). The tested pre-trained models were ResNet50, VGG16, InceptionV3, and Xception. Data augmentation was performed using a Generative Adversarial Network (GAN) with various percentages of generated images. Models based on Xception and VGG16 were found to be more robust, whereas models based on ResNet50 and InceptionV3 were sensitive to the added machine-generated images, as their accuracy degraded significantly compared to training with no artificial data.
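
The GAN itself is out of scope for a short sketch, but the step the study varies, mixing a given percentage of machine-generated images into the real training set, can be illustrated as follows. The trained generator `G`, its latent size, and the single-class handling are all assumptions made for illustration.

```python
# Sketch: augment a real training set with pct% GAN-generated images of one
# class before fine-tuning a pre-trained CNN such as Xception.
import numpy as np
import tensorflow as tf

def augment_with_gan(X_real, y_real, G, latent_dim, class_id, pct):
    """Add pct% machine-generated images of one class to the real set."""
    n_fake = int(len(X_real) * pct / 100.0)
    z = tf.random.normal((n_fake, latent_dim))
    X_fake = G(z, training=False).numpy()    # generator output, e.g. in [0, 1]
    y_fake = np.full(n_fake, class_id)
    X = np.concatenate([X_real, X_fake])
    y = np.concatenate([y_real, y_fake])
    idx = np.random.permutation(len(X))      # shuffle real and fake together
    return X[idx], y[idx]

# e.g. add 25% synthetic 'glass' images, then fine-tune as usual:
# X_train, y_train = augment_with_gan(X_train, y_train, G, 128, GLASS_ID, 25)
```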

Comparison of Fine-Tuned Convolutional Neural Networks for Clipart Style Classification

  • Lee, Seungbin; Kim, Hyungon; Seok, Hyekyoung; Nang, Jongho
    • International Journal of Internet, Broadcasting and Communication / v.9 no.4 / pp.1-7 / 2017
  • Clipart is artificial visual content created using tools such as Illustrator to highlight some information, and its style plays a critical role in determining how it looks. However, previous studies on clipart focused only on object recognition [16], segmentation, and retrieval of clipart images using hand-crafted image features. Recently, some CNN-based clipart classification studies using style similarity have been proposed; however, they used different CNN models and experimented with different benchmark datasets, so it is very hard to compare their performances. This paper presents an experimental analysis of clipart classification based on style similarity with two well-known CNN models (Inception ResNet V2 [13] and VGG-16 [14]) and transfer learning on the same benchmark dataset (Microsoft Style Dataset 3.6K). From this experiment, we find that Inception ResNet V2 is more accurate than VGG for clipart style classification because of its greater depth and its parallel convolutions with various kernel sizes. We also find that end-to-end training can improve accuracy by more than 20% in both CNN models.
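
The end-to-end training contrast can be sketched in Keras as a two-stage schedule: first train a new head on the frozen backbone, then unfreeze everything at a small learning rate. The dataset paths, number of style classes, and epoch counts below are placeholders, not the paper's setup.

```python
# Two-stage sketch: head-only training vs. end-to-end fine-tuning on VGG-16.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

train_ds = tf.keras.utils.image_dataset_from_directory(
    "clipart/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "clipart/val", image_size=(224, 224), batch_size=32)
num_styles = 10  # hypothetical number of style classes

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(num_styles, activation="softmax"),
])

# Stage 1: train only the new classifier head on the frozen backbone.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)

# Stage 2: unfreeze everything and train end to end at a low learning rate,
# the step the paper reports improving accuracy by more than 20%.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```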

Early Detection of Rice Leaf Blast Disease using Deep-Learning Techniques

  • Syed Rehan Shah; Syed Muhammad Waqas Shah; Hadia Bibi; Mirza Murad Baig
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.211-221 / 2024
  • Pakistan is a top producer and exporter of high-quality rice, but traditional methods are still used for detecting rice diseases. This research developed an automated rice blast disease diagnosis technique based on deep learning, image processing, and transfer learning with pre-trained models such as Inception V3, VGG16, VGG19, and ResNet50. The modified skip-connection ResNet50 had the highest accuracy at 99.16%, while Inception V3, VGG16, and VGG19 achieved 98.16%, 98.47%, and 98.56%, respectively. In addition, a CNN and an ensemble model with K-nearest neighbors were explored for disease prediction, and the study demonstrated superior prediction performance through the recommended web-app approach.
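
One common way to pair a CNN with a K-nearest-neighbor classifier, as mentioned above, is to use the pre-trained network as a fixed feature extractor and fit KNN on the pooled features. The backbone choice, paths, and k below are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: KNN fitted on pooled features from a pre-trained ResNet50.
import numpy as np
import tensorflow as tf
from sklearn.neighbors import KNeighborsClassifier
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def features(dataset):
    X, y = [], []
    for images, labels in dataset:
        X.append(extractor(preprocess_input(images), training=False).numpy())
        y.append(labels.numpy())
    return np.concatenate(X), np.concatenate(y)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_leaves/train", image_size=(224, 224), batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_leaves/test", image_size=(224, 224), batch_size=32)

X_train, y_train = features(train_ds)
X_test, y_test = features(test_ds)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("KNN on CNN features:", knn.score(X_test, y_test))
```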

Image Recognition System for Early Detection of Oral Cancer (구강암 조기발견을 위한 영상인식 시스템)

  • Cahyadi, Edward Dwijayanto; Song, Mi-Hwa
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.309-311 / 2022
  • Oral cancer is a type of cancer that has a high possibility of being cured if it is treated early. Convolutional neural networks are popular because they perform well at image recognition. In this research, we compare four different CNN architectures: a plain Convnet, VGG16, Inception V3, and ResNet. Comparing these four architectures, we found that the VGG16 and ResNet models performed better, with an 85.35% accuracy rate, than the other two architectures. In the future, we expect image recognition to be developed further to identify oral cancer earlier.
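
The plain "Convnet" in such comparisons is typically a small CNN trained from scratch, serving as a baseline against the pre-trained architectures. A minimal sketch follows; the layer sizes and the binary label set are assumptions.

```python
# Sketch of a from-scratch CNN baseline for binary oral-image classification.
import tensorflow as tf
from tensorflow.keras import layers, models

convnet = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),  # cancerous / non-cancerous
])
convnet.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
# convnet.fit(train_ds, validation_data=val_ds, epochs=10)
```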

Deep Learning-Based Box Office Prediction Using the Image Characteristics of Advertising Posters in Performing Arts (공연예술에서 광고포스터의 이미지 특성을 활용한 딥러닝 기반 관객예측)

  • Cho, Yujung; Kang, Kyungpyo; Kwon, Ohbyung
    • The Journal of Society for e-Business Studies / v.26 no.2 / pp.19-43 / 2021
  • Predicting box office performance is an important issue for the performing arts industry and its institutions. For this, traditional prediction methodologies and data mining methodologies using standardized data such as cast members, performance venues, and ticket prices have been proposed. However, although audiences evidently form their intentions partly from performance posters, few attempts have been made to predict box office performance by analyzing poster images. Hence, the purpose of this study is to propose a deep learning method that can predict box office success from performance-related poster images. Prediction was performed using deep learning algorithms such as a pure CNN, VGG-16, Inception-v3, and ResNet50, with poster images published on KOPIS as the training data set. An ensemble with a traditional regression analysis methodology was also attempted. The result showed high discrimination performance, exceeding 85% box office prediction accuracy. This study is the first attempt to predict box office success using image data in the performing arts field, and the proposed method can be applied to areas of poster-based advertising such as institutional promotions and corporate product advertisements.
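
The ensemble of a poster CNN with a traditional regression model can be sketched as a weighted combination of the two models' success probabilities. The function names, the feature split between image and metadata, and the 50/50 weighting are illustrative assumptions, not the authors' design.

```python
# Sketch: combine an image-based CNN score with a logistic regression on
# structured metadata (e.g., cast, venue, ticket price).
import numpy as np
from sklearn.linear_model import LogisticRegression

def ensemble_success_probability(cnn_model, poster_batch, meta_model, meta_X,
                                 w_image=0.5):
    """Weighted average of image-based and metadata-based success scores."""
    p_image = cnn_model.predict(poster_batch)[:, 1]   # P(success | poster),
                                                      # assuming binary softmax
    p_meta = meta_model.predict_proba(meta_X)[:, 1]   # P(success | metadata)
    return w_image * p_image + (1 - w_image) * p_meta

# Usage, with hypothetical training data:
# meta_model = LogisticRegression().fit(meta_train, y_train)
# p = ensemble_success_probability(vgg_model, posters, meta_model, meta_test)
```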