• Title/Summary/Keyword: VGG Net


An Optimized Deep Learning Techniques for Analyzing Mammograms

  • Satish Babu Bandaru;Natarajasivan. D;Rama Mohan Babu. G
    • International Journal of Computer Science & Network Security / v.23 no.7 / pp.39-48 / 2023
  • Breast cancer screening makes extensive use of mammography. Even so, there has been considerable debate about the appropriate starting age and screening interval. Transfer learning is a deep learning technique for transferring knowledge learned on source tasks to target tasks. Deep neural networks have demonstrated superior performance over standard machine learning algorithms on real-world problems, but their architectures must be defined with problem-domain knowledge in mind, which normally consumes a great deal of time and computational resources. This work evaluated the efficacy of deep neural networks such as the Visual Geometry Group Network (VGG Net), Residual Network (ResNet), and Inception network for classifying mammograms, and proposed optimizing ResNet with the Teaching-Learning-Based Optimization (TLBO) algorithm to predict breast cancer from mammogram images. The proposed TLBO-ResNet is an optimized ResNet with faster convergence than other evolutionary methods for mammogram classification.
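The distinctive ingredient here is the TLBO metaheuristic used to tune the ResNet. As a rough illustration only, below is a minimal sketch of the standard TLBO teacher/learner update rules over a generic objective function; the population size, iteration count, bounds, and toy objective are placeholders, and in the paper's setting the objective would correspond to something like the ResNet's validation error as a function of the hyperparameters being optimized.

```python
import numpy as np

def tlbo(objective, bounds, pop_size=10, iterations=50, seed=0):
    """Minimal Teaching-Learning-Based Optimization sketch (minimization)."""
    bounds = np.asarray(bounds, dtype=float)
    low, high = bounds[:, 0], bounds[:, 1]
    rng = np.random.default_rng(seed)
    pop = rng.uniform(low, high, size=(pop_size, len(bounds)))
    scores = np.array([objective(x) for x in pop])

    for _ in range(iterations):
        # Teacher phase: pull the population toward the current best solution.
        teacher = pop[scores.argmin()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)  # teaching factor, 1 or 2
        cand = np.clip(pop + rng.random(pop.shape) * (teacher - tf * mean), low, high)
        cand_scores = np.array([objective(x) for x in cand])
        better = cand_scores < scores
        pop[better], scores[better] = cand[better], cand_scores[better]

        # Learner phase: each learner moves toward a randomly chosen better peer.
        for i in range(pop_size):
            j = int(rng.integers(pop_size))
            if j == i:
                continue
            step = pop[j] - pop[i] if scores[j] < scores[i] else pop[i] - pop[j]
            trial = np.clip(pop[i] + rng.random(len(bounds)) * step, low, high)
            trial_score = objective(trial)
            if trial_score < scores[i]:
                pop[i], scores[i] = trial, trial_score

    return pop[scores.argmin()], scores.min()

# Toy usage; the paper's objective would instead evaluate the ResNet classifier.
best_x, best_f = tlbo(lambda x: np.sum(x ** 2), bounds=[[-5, 5], [-5, 5]])
```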

Preliminary study of artificial intelligence-based fuel-rod pattern analysis of low-quality tomographic image of fuel assembly

  • Seong, Saerom;Choi, Sehwan;Ahn, Jae Joon;Choi, Hyung-joo;Chung, Yong Hyun;You, Sei Hwan;Yeom, Yeon Soo;Choi, Hyun Joon;Min, Chul Hee
    • Nuclear Engineering and Technology / v.54 no.10 / pp.3943-3948 / 2022
  • Single-photon emission computed tomography is one of the reliable pin-by-pin verification techniques for spent-fuel assemblies. One of the challenges with this technique is increasing the overall fuel-assembly verification speed while maintaining high verification accuracy. The aim of the present study, therefore, was to develop an artificial intelligence (AI) algorithm-based tomographic image analysis technique for partial-defect verification of fuel assemblies. With the Monte Carlo (MC) simulation technique, a tomographic image dataset consisting of 511 fuel-rod patterns of a 3 × 3 fuel assembly was generated, and the VGG16, GoogLeNet, and ResNet models were trained on these images. In an evaluation of these models across different training dataset sizes, the ResNet model showed 100% pattern estimation accuracy. Moreover, across different tomographic image qualities, all of the models showed almost 100% pattern estimation accuracy, even for low-quality images with unrecognizable fuel patterns. This study verified that an AI model can be effectively employed for accurate and fast partial-defect verification of fuel assemblies.

Detecting Similar Designs Using Deep Learning-based Image Feature Extracting Model (딥러닝 기반 이미지 특징 추출 모델을 이용한 유사 디자인 검출에 대한 연구)

  • Lee, Byoung Woo;Lee, Woo Chang;Chae, Seung Wan;Kim, Dong Hyun;Lee, Choong Kwon
    • Smart Media Journal / v.9 no.4 / pp.162-169 / 2020
  • Design is a key factor that determines the competitiveness of products in the textile and fashion industry. Measuring the similarity of a proposed design is very important for preventing unauthorized copying and confirming originality. In this study, a deep learning technique was used to quantify features from images of textile designs, and similarity was measured using Spearman correlation coefficients. To verify that similar samples were actually detected, 300 images were randomly rotated or color-changed, and the Top-3 and Top-5 results ranked by similarity were checked to see whether the rotated or color-changed samples were retrieved. As a result, the VGG-16 model recorded significantly higher performance than AlexNet. For rotated images, the VGG-16 model performed best with 64% in Top-3 and 73.67% in Top-5; for color-changed images, it reached its highest scores of 86.33% in Top-3 and 90% in Top-5.
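The pipeline described above (deep features plus Spearman correlation) can be sketched roughly as follows, assuming Keras' ImageNet-pretrained VGG16 as the feature extractor and SciPy's spearmanr as the similarity measure; the file paths are placeholders, and the exact layer used in the paper is not specified in the abstract.

```python
import numpy as np
from scipy.stats import spearmanr
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# Pretrained VGG16 without the classification head; global pooling gives a 512-d vector.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def design_features(path):
    """Load a design image and return its VGG16 feature vector."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x, verbose=0)[0]

def design_similarity(path_a, path_b):
    """Spearman rank correlation between two designs' feature vectors."""
    rho, _ = spearmanr(design_features(path_a), design_features(path_b))
    return rho

# Example: rank candidate designs against a query design (paths are placeholders).
# scores = {p: design_similarity("query.png", p) for p in ["a.png", "b.png", "c.png"]}
```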

Grading of Harvested 'Mihwang' Peach Maturity with Convolutional Neural Network (합성곱 신경망을 이용한 '미황' 복숭아 과실의 성숙도 분류)

  • Shin, Mi Hee;Jang, Kyeong Eun;Lee, Seul Ki;Cho, Jung Gun;Song, Sang Jun;Kim, Jin Gook
    • Journal of Bio-Environment Control / v.31 no.4 / pp.270-278 / 2022
  • This study used deep learning technology to classify 'Mihwang' peach maturity from RGB images and fruit quality attributes measured during the fruit development and maturation periods. A set of 730 peach images was split into training and validation data at a ratio of 8:2, and a further 170 images were used to test the deep learning models. Among the fruit quality attributes, firmness, Hue value, and a* value were adopted as indices for classifying maturity into immature, mature, and over-mature fruit. Convolutional neural network (CNN) models were used for image classification: VGG16 and GoogLeNet's InceptionV3. With the Hue value as the maturity index, classification accuracy was 87.1% for VGG16 and 83.6% for InceptionV3; with firmness, accuracy was 72.2% and 76.9%, and the loss rate was 54.3% and 62.1%, respectively. These results are expected to inform the field application of a firmness-based maturity index for peach.
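The idea of labeling maturity classes from a measured quality attribute can be illustrated with a trivial thresholding helper; note that the cut-off values below are invented placeholders, not values reported by the study.

```python
# Hypothetical Hue thresholds (degrees); the paper uses firmness, Hue, and a* as
# maturity indices but the actual cut-off values are not given in the abstract.
HUE_IMMATURE = 95.0    # placeholder: above this, label as immature
HUE_OVERMATURE = 75.0  # placeholder: below this, label as over-mature

def maturity_label(hue_deg: float) -> str:
    """Map a fruit's measured Hue value to one of the three maturity classes."""
    if hue_deg >= HUE_IMMATURE:
        return "immature"
    if hue_deg <= HUE_OVERMATURE:
        return "over-mature"
    return "mature"

# Example Hue readings -> class labels used to annotate the training images.
labels = [maturity_label(h) for h in (98.2, 88.5, 70.1)]
```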

Deep Learning Models for Autonomous Crack Detection System (자동화 균열 탐지 시스템을 위한 딥러닝 모델에 관한 연구)

  • Ji, HongGeun;Kim, Jina;Hwang, Syjung;Kim, Dogun;Park, Eunil;Kim, Young Seok;Ryu, Seung Ki
    • KIPS Transactions on Software and Data Engineering / v.10 no.5 / pp.161-168 / 2021
  • Cracks affect the robustness of infrastructure such as buildings, bridges, pavement, and pipelines. This paper presents an automated crack detection system that detects cracks in diverse surfaces. We first constructed a combined crack dataset consisting of multiple crack datasets from diverse domains presented in prior studies. Then, state-of-the-art deep learning models for computer vision tasks, including VGG, ResNet, WideResNet, ResNeXt, DenseNet, and EfficientNet, were used to validate crack detection performance. We divided the combined dataset into a train set (80%) and a test set (20%) to evaluate the employed models. DenseNet121 showed the highest accuracy at 96.20% with a relatively low number of parameters compared to the other models. Based on these validation procedures for advanced deep learning models on the crack detection task, we shed light on a cost-effective automated crack detection system that can be applied to different surfaces and structures with low computing resources.
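A rough sketch of this kind of backbone comparison, assuming a recent torchvision with ImageNet-pretrained models whose heads are swapped for a two-class crack/no-crack output and an ImageFolder-style 80/20 split; the directory names, hyperparameters, and single training epoch are illustrative.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Illustrative crack dataset laid out as ImageFolder (crack/ and no_crack/ subfolders).
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("cracks/train", transform=tfm)
test_set = datasets.ImageFolder("cracks/test", transform=tfm)

def build(name):
    """Load an ImageNet-pretrained backbone and replace its head with 2 outputs."""
    if name == "densenet121":
        m = models.densenet121(weights="DEFAULT")
        m.classifier = nn.Linear(m.classifier.in_features, 2)
    elif name == "resnet50":
        m = models.resnet50(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, 2)
    else:  # vgg16
        m = models.vgg16(weights="DEFAULT")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, 2)
    return m

def accuracy(model, loader, device):
    """Fraction of correctly classified test images."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

device = "cuda" if torch.cuda.is_available() else "cpu"
loss_fn = nn.CrossEntropyLoss()
for name in ["vgg16", "resnet50", "densenet121"]:
    model = build(name).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):  # one epoch for brevity
        opt.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        opt.step()
    print(name, accuracy(model, DataLoader(test_set, batch_size=32), device))
```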

SVM on Top of Deep Networks for Covid-19 Detection from Chest X-ray Images

  • Do, Thanh-Nghi;Le, Van-Thanh;Doan, Thi-Huong
    • Journal of information and communication convergence engineering / v.20 no.3 / pp.219-225 / 2022
  • In this study, we propose training a support vector machine (SVM) model on top of deep networks for detecting Covid-19 from chest X-ray images. We started by gathering a real chest X-ray image dataset including positive Covid-19 cases, normal cases, and other lung diseases not caused by Covid-19. Instead of training deep networks from scratch, we fine-tuned recent pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to classify chest X-ray images into one of three classes (Covid-19, normal, and other lung diseases). We propose training an SVM model on top of the deep networks to perform a nonlinear combination of their outputs, improving classification over any single deep network. The empirical test results on the real chest X-ray image dataset show that the deep network models, with the exception of ResNet50 at 82.44%, provide an accuracy of at least 92% on the test set. The proposed SVM on top of the deep networks achieved the highest accuracy of 96.16%.
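The stacking idea (an SVM combining the deep networks' outputs) can be sketched as below, assuming each fine-tuned network's per-class probabilities are concatenated into one feature vector per image and fed to an RBF-kernel SVM; the variable names and SVM settings are placeholders rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

def stack_outputs(network_probs):
    """Concatenate the networks' class-probability outputs into one vector per image.

    network_probs: list with one (n_images, 3) array per backbone
    (e.g. DenseNet121, VGG16, ...), for the classes Covid-19 / normal / other lung.
    """
    return np.concatenate(network_probs, axis=1)

def fit_stacked_svm(train_probs, y_train, test_probs, y_test):
    """Train an RBF SVM on the stacked outputs and report test accuracy."""
    svm = SVC(kernel="rbf", C=1.0, gamma="scale")  # nonlinear combination of the networks
    svm.fit(stack_outputs(train_probs), y_train)
    return accuracy_score(y_test, svm.predict(stack_outputs(test_probs)))
```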

Glaring Wall Pad classification by transfer learning (전이학습을 이용한 전반사가 있는 월패드 분류)

  • Lee, Yong-Jun;Jo, Geun-Sik
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.35-36 / 2021
  • A dataset is essential for image processing with deep learning. A wall pad is a widely deployed IoT home appliance with a variety of functions; to help users operate those functions, the manual corresponding to the specific wall pad must be provided, and deep learning-based wall pad classification can be used for this purpose. However, some wall pad models suffer severe glare (total reflection) on their screens, so image classification with deep learning performs poorly on the existing small dataset. To solve this, this paper builds an additional dataset and uses it to classify wall pads via transfer learning with models pre-trained on large-scale data, such as VGG16, VGG19, ResNet50, and MobileNet.
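A minimal transfer-learning sketch of the approach described above, assuming a frozen ImageNet-pretrained VGG16 base in Keras with a small dense head for the wall-pad classes; the class count, directory layout, and training settings are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 10  # placeholder for the number of wall-pad models to distinguish

# ImageNet-pretrained VGG16 used as a frozen feature extractor.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder directory of wall-pad photos arranged one subfolder per model.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "wallpad_images", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```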


Application and Analysis of Machine Learning for Discriminating Image Copyright (이미지 저작권 판별을 위한 기계학습 적용과 분석)

  • Kim, Sooin;Lee, Sangwoo;Kim, Hakhee;Kim, Wongyum;Hwang, Doosung
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.899-902 / 2021
  • This paper defines the determination of image copyright status as a classification problem and solves it by applying machine learning and convolutional neural network models. For training, the input data are converted to a fixed size and normalized to prepare the training dataset. In the copyright-determination experiments, the classification performance of SVM, k-NN, random forest, and VGG-Net models is compared and analyzed. The VGG-Net C model showed 10.65% higher performance than the other algorithms, and overfitting was mitigated by using batch normalization layers.
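The classical baselines mentioned above can be compared with a few lines of scikit-learn, assuming the images have already been resized and normalized into fixed-length vectors; the hyperparameters are illustrative, not the paper's.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_classical_models(X_train, y_train, X_test, y_test):
    """Compare SVM, k-NN, and random forest on flattened, normalized image vectors.

    y arrays are binary labels: 1 = copyrighted image, 0 = not copyrighted.
    """
    candidates = {
        "SVM": SVC(kernel="rbf"),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    return {name: accuracy_score(y_test, clf.fit(X_train, y_train).predict(X_test))
            for name, clf in candidates.items()}
```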

Early Detection of Rice Leaf Blast Disease using Deep-Learning Techniques

  • Syed Rehan Shah;Syed Muhammad Waqas Shah;Hadia Bibi;Mirza Murad Baig
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.211-221 / 2024
  • Pakistan is a top producer and exporter of high-quality rice, but traditional methods are still being used for detecting rice diseases. This research project developed an automated rice blast disease diagnosis technique based on deep learning, image processing, and transfer learning with pre-trained models such as Inception V3, VGG16, VGG19, and ResNet50. The modified skip-connection ResNet50 had the highest accuracy of 99.16%, while the other models achieved 98.16%, 98.47%, and 98.56%, respectively. In addition, a CNN and a K-nearest-neighbor ensemble model were explored for disease prediction, and the study demonstrated superior performance and disease prediction through the recommended web-app approach.

Comparison of Deep Learning Models for Judging Business Card Image Rotation (명함 이미지 회전 판단을 위한 딥러닝 모델 비교)

  • Ji-Hoon, Kyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.27 no.1 / pp.34-40 / 2023
  • Smart business card printing systems that automatically print business cards requested by customers online are coming into wide use. The problem is that a business card image submitted to the system by a customer may be abnormal. This paper applies artificial intelligence technology to the problem of determining whether a business card image has been abnormally rotated, assuming the card may be rotated by 0, 90, 180, or 270 degrees. Experiments were conducted by applying existing VGG, ResNet, and DenseNet artificial neural networks without designing a special network, and they were able to distinguish image rotation with an accuracy of about 97%: DenseNet161 achieved 97.9% and ResNet34 achieved 97.2%. This illustrates that, for a simple problem, sufficiently good results can be obtained even without a complex neural network.
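One way to prepare training data for the four-way rotation task is to expand each card image into labeled 0/90/180/270-degree copies; the sketch below uses PIL for the rotation, and the directory layout and file extension are assumptions rather than details from the paper.

```python
from pathlib import Path
from PIL import Image

ANGLES = [0, 90, 180, 270]  # the four rotation classes assumed in the paper

def build_rotation_dataset(src_dir, dst_dir):
    """Expand each business-card image into four rotated copies, one per class folder."""
    for path in Path(src_dir).glob("*.jpg"):
        img = Image.open(path)
        for label, angle in enumerate(ANGLES):
            out_dir = Path(dst_dir) / str(label)
            out_dir.mkdir(parents=True, exist_ok=True)
            # expand=True keeps the whole card visible after 90/270-degree turns.
            img.rotate(-angle, expand=True).save(out_dir / f"{path.stem}_{angle}.jpg")

# build_rotation_dataset("cards/raw", "cards/rotated")  # paths are placeholders
```

The resulting class folders can then be fed to any of the VGG, ResNet, or DenseNet classifiers discussed above as a standard four-class image classification problem.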