• Title/Summary/Keyword: VGG Net


A study on age estimation of facial images using various CNNs (Convolutional Neural Networks) (다양한 CNN 모델을 이용한 얼굴 영상의 나이 인식 연구)

  • Sung Eun Choi
    • Journal of Platform Technology / v.11 no.5 / pp.16-22 / 2023
  • There is growing interest in facial age estimation because many applications require age estimation from facial images. Estimating the exact age of a face requires a technique for extracting aging features from the face image and classifying age according to the extracted features. Recently, the performance of CNN-based deep learning models has improved greatly in the image recognition field, and various CNN-based models are being used to improve performance in facial age estimation. In this paper, age estimation performance was compared by learning facial features with various CNN-based models: AlexNet, VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152. The experiments confirmed that the model using ResNet-34 performed best.

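The paper does not spell out its training setup; a minimal PyTorch sketch of this kind of backbone comparison, with a hypothetical set of discrete age bins (the paper does not state its bin layout), might look like this:

```python
# Minimal sketch: swap the classifier head of each backbone for age bins.
# N_AGE_CLASSES and the bin layout are assumptions, not from the paper.
import torch
import torch.nn as nn
from torchvision import models

N_AGE_CLASSES = 8  # hypothetical age bins (0-9, 10-19, ...)

def build(name: str) -> nn.Module:
    m = getattr(models, name)(weights=None)  # ImageNet weights could be loaded here
    if name.startswith("resnet"):
        m.fc = nn.Linear(m.fc.in_features, N_AGE_CLASSES)
    else:  # alexnet, vgg16, vgg19 share the same classifier layout
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, N_AGE_CLASSES)
    return m

for name in ["alexnet", "vgg16", "vgg19", "resnet18", "resnet34",
             "resnet50", "resnet101", "resnet152"]:
    model = build(name)
    logits = model(torch.randn(1, 3, 224, 224))  # dummy face crop
    print(name, tuple(logits.shape))             # (1, 8) for every backbone
```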

Performance Improvement Analysis of Building Extraction Deep Learning Model Based on UNet Using Transfer Learning at Different Learning Rates (전이학습을 이용한 UNet 기반 건물 추출 딥러닝 모델의 학습률에 따른 성능 향상 분석)

  • Chul-Soo Ye;Young-Man Ahn;Tae-Woong Baek;Kyung-Tae Kim
    • Korean Journal of Remote Sensing / v.39 no.5_4 / pp.1111-1123 / 2023
  • In recent times, semantic image segmentation methods using deep learning models have been widely used to monitor changes in surface attributes with remote sensing imagery. To enhance the performance of UNet-based deep learning models, including the prominent UNet model itself, a sufficiently large training dataset is required. However, enlarging the training dataset not only escalates the hardware requirements for processing but also significantly increases training time. To address these issues, transfer learning is an effective approach, enabling performance improvement even in the absence of massive training datasets. In this paper, we present three transfer learning models, UNet-ResNet50, UNet-VGG19, and CBAM-DRUNet-VGG19, which combine UNet variants with the representative pretrained VGG19 and ResNet50 models. We applied these models to building extraction tasks and analyzed the accuracy improvements resulting from transfer learning. Considering the substantial impact of the learning rate on deep learning model performance, we also analyzed the performance variations of each model under different learning rate settings. We employed three datasets, namely the Kompsat-3A, WHU, and INRIA datasets, to evaluate building extraction performance. Averaged over the three datasets, the accuracy improvement relative to the UNet model was 5.1% for the UNet-ResNet50 model, while both the UNet-VGG19 and CBAM-DRUNet-VGG19 models achieved a 7.2% improvement.
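
As a rough sketch of the UNet-VGG19/UNet-ResNet50 idea (the CBAM-DRUNet variant is custom to the paper), the third-party segmentation-models-pytorch package can pair a UNet decoder with a pretrained encoder; the per-group learning rates below are assumed values, not the ones the paper tested:

```python
# Rough equivalent of UNet with a pretrained encoder, via the third-party
# segmentation-models-pytorch package (assumed installed).
import torch
import segmentation_models_pytorch as smp

def make_model(encoder: str) -> torch.nn.Module:
    # encoder: "vgg19" or "resnet50", initialized from ImageNet weights
    return smp.Unet(encoder_name=encoder, encoder_weights="imagenet",
                    in_channels=3, classes=1)  # 1 class: building mask

model = make_model("vgg19")

# Transfer learning often uses a smaller learning rate for the pretrained
# encoder than for the randomly initialized decoder; the exact rates the
# paper compared are not reproduced here.
optimizer = torch.optim.Adam([
    {"params": model.encoder.parameters(), "lr": 1e-5},   # assumed value
    {"params": model.decoder.parameters(), "lr": 1e-4},   # assumed value
    {"params": model.segmentation_head.parameters(), "lr": 1e-4},
])

mask_logits = model(torch.randn(1, 3, 256, 256))  # dummy image tile
print(mask_logits.shape)  # torch.Size([1, 1, 256, 256])
```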

A Study on Classification Performance Analysis of Convolutional Neural Network using Ensemble Learning Algorithm (앙상블 학습 알고리즘을 이용한 컨벌루션 신경망의 분류 성능 분석에 관한 연구)

  • Park, Sung-Wook;Kim, Jong-Chan;Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.22 no.6 / pp.665-675 / 2019
  • In this paper, we compare and analyze the classification performance of the Convolutional Neural Network (CNN) deep learning algorithm according to ensemble generation and combining techniques. We used several CNN models (VGG16, VGG19, DenseNet121, DenseNet169, DenseNet201, ResNet18, ResNet34, ResNet50, ResNet101, ResNet152, GoogLeNet) to create 10 ensemble generation combinations and applied 6 combining techniques (average, weighted average, maximum, minimum, median, product) to the optimal combination. In the experimental results, the DenseNet169-VGG16-GoogLeNet combination for ensemble generation and the product rule for ensemble combining showed the best performance. Based on this, it was concluded that ensembling different models with high benchmark scores is another way to obtain good results.
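
The six combining rules are straightforward to state in code. A minimal NumPy sketch over per-model softmax outputs (the probabilities and weights below are hypothetical):

```python
# Sketch of the six combining rules applied to per-model softmax outputs.
# probs has shape (n_models, n_classes).
import numpy as np

def combine(probs: np.ndarray, rule: str, weights=None) -> int:
    if rule == "average":
        score = probs.mean(axis=0)
    elif rule == "weighted":
        w = np.asarray(weights)[:, None]
        score = (w * probs).sum(axis=0) / w.sum()
    elif rule == "maximum":
        score = probs.max(axis=0)
    elif rule == "minimum":
        score = probs.min(axis=0)
    elif rule == "median":
        score = np.median(probs, axis=0)
    elif rule == "product":
        score = probs.prod(axis=0)   # the rule the paper found best
    else:
        raise ValueError(rule)
    return int(score.argmax())       # predicted class index

# Hypothetical softmax outputs of a 3-model ensemble on one image:
probs = np.array([[0.6, 0.3, 0.1],
                  [0.5, 0.4, 0.1],
                  [0.2, 0.7, 0.1]])
for rule in ["average", "weighted", "maximum", "minimum", "median", "product"]:
    print(rule, combine(probs, rule, weights=[0.5, 0.3, 0.2]))
```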

A Comparative Study of the CNN Model for AD Diagnosis

  • Vyshnavi Ramineni;Goo-Rak Kwon
    • Smart Media Journal / v.12 no.7 / pp.52-58 / 2023
  • Alzheimer's disease (AD) is one type of dementia, and its symptoms can be treated more effectively when the disease is detected at an early stage. Recently, many computer-aided diagnosis studies using magnetic resonance imaging (MRI) have shown good results in the classification of AD. The MRI images are fed to the FreeSurfer software to extract features. In this study, T1-weighted images are used and classified with convolutional neural network (CNN) models. Subcortical and cortical features of 190 subjects were taken from ADNI. To reduce model complexity, the study uses a single layer in ResNet, VGG, and AlexNet. Multi-class classification is used to distinguish four stages: CN, EMCI, LMCI, and AD. The experiments show the best accuracy with VGG at 96%, while ResNet, GoogLeNet, and AlexNet reached 91%, 93%, and 89%, respectively.
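
The paper's single-layer simplification is only loosely specified; as a loose illustration, if the FreeSurfer-derived cortical/subcortical measures are treated as a feature vector, a single-layer four-way classifier could look like this (the feature dimensionality is an assumption):

```python
# Loose illustration: one linear layer mapping FreeSurfer-derived features
# to the four stages. N_FEATURES is a hypothetical dimensionality.
import torch
import torch.nn as nn

N_FEATURES = 68                           # assumed count of cortical/subcortical measures
CLASSES = ["CN", "EMCI", "LMCI", "AD"]

model = nn.Linear(N_FEATURES, len(CLASSES))
features = torch.randn(190, N_FEATURES)   # stand-in for the 190 ADNI subjects
pred = model(features).argmax(dim=1)      # predicted stage per subject
print([CLASSES[i] for i in pred[:5]])
```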

Performance comparison of wake-up-word detection on mobile devices using various convolutional neural networks (다양한 합성곱 신경망 방식을 이용한 모바일 기기를 위한 시작 단어 검출의 성능 비교)

  • Kim, Sanghong;Lee, Bowon
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.454-460 / 2020
  • Artificial intelligence assistants that provide speech recognition operate through cloud-based voice recognition with high accuracy. In cloud-based speech recognition, Wake-Up-Word (WUW) detection plays an important role in activating devices on standby. In this paper, we compare the performance of Convolutional Neural Network (CNN)-based WUW detection models for mobile devices on Google's Speech Commands dataset, using spectrogram and mel-frequency cepstral coefficient features as inputs. The CNN models used in this paper are a multi-layer perceptron, a general convolutional neural network, VGG16, VGG19, ResNet50, ResNet101, ResNet152, and MobileNet. We also propose a network that reduces the model size to 1/25 while maintaining the performance of MobileNet.
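
A minimal sketch of the two input features being compared, computed with librosa (the 16 kHz sample rate and one-second clip length are assumptions):

```python
# Sketch: log-mel spectrogram and MFCC features from a 1-second command clip,
# as would be fed to the CNN-based WUW detectors.
import numpy as np
import librosa

sr = 16000
y = np.random.randn(sr).astype(np.float32)   # stand-in for a WUW recording

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40)
log_mel = librosa.power_to_db(mel)           # spectrogram-style CNN input
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(log_mel.shape, mfcc.shape)             # e.g. (40, 32) and (13, 32)
```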

Development of Deep Recognition of Similarity in Show Garden Design Based on Deep Learning (딥러닝을 활용한 전시 정원 디자인 유사성 인지 모형 연구)

  • Cho, Woo-Yun;Kwon, Jin-Wook
    • Journal of the Korean Institute of Landscape Architecture / v.52 no.2 / pp.96-109 / 2024
  • The purpose of this study is to propose a method for evaluating the similarity of show gardens using deep learning models, specifically VGG-16 and ResNet50. A model for judging the similarity of show gardens based on the VGG-16 and ResNet50 models was developed and named DRG (Deep Recognition of similarity in show Garden design). An algorithm using GAP and the Pearson correlation coefficient was employed to construct the model, and similarity accuracy was analyzed by comparing the total number of similar images retrieved at the 1st (Top1), 3rd (Top3), and 5th (Top5) ranks with the original images. The image data used for the DRG model consisted of 278 works from the Festival International des Jardins de Chaumont-sur-Loire, 27 works from the Seoul International Garden Show, and 17 works from the Korea Garden Show. Image analysis was conducted with the DRG model for both the same group and different groups, resulting in guidelines for assessing show garden similarity. First, overall image similarity analysis was best served by applying data augmentation techniques to the ResNet50-based model. Second, for image analysis focusing on internal structure and outer form, it was effective to apply a fixed-size filter (16 cm × 16 cm) to generate images emphasizing form and then compare similarity using the VGG-16 model. An image size of 448 × 448 pixels and the original full-color image were suggested as the optimal settings. Based on these findings, a quantitative method for assessing show gardens is proposed, which is expected to contribute to the continuous development of garden culture through future interdisciplinary research.
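
A minimal sketch of the GAP-plus-Pearson similarity idea, using an ImageNet-pretrained VGG-16 from torchvision as the feature extractor (the query and gallery tensors are stand-ins for the 448 × 448 garden photos):

```python
# Sketch of the DRG idea: global-average-pooled (GAP) CNN features compared
# with the Pearson correlation coefficient to rank similar garden images.
import numpy as np
import torch
from torchvision import models

vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()

def gap_feature(img: torch.Tensor) -> np.ndarray:
    """img: (3, H, W) tensor -> GAP over the last conv feature maps."""
    with torch.no_grad():
        fmap = vgg(img.unsqueeze(0))                  # (1, 512, h, w)
    return fmap.mean(dim=(2, 3)).squeeze(0).numpy()   # (512,)

query = gap_feature(torch.randn(3, 448, 448))
gallery = [gap_feature(torch.randn(3, 448, 448)) for _ in range(10)]

# Pearson correlation between the query vector and each gallery vector:
scores = [np.corrcoef(query, g)[0, 1] for g in gallery]
top5 = np.argsort(scores)[::-1][:5]   # Top1/Top3/Top5 rankings come from here
print(top5, [round(scores[i], 3) for i in top5])
```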

Comparison of Deep Learning-based CNN Models for Crack Detection (콘크리트 균열 탐지를 위한 딥 러닝 기반 CNN 모델 비교)

  • Seol, Dong-Hyeon;Oh, Ji-Hoon;Kim, Hong-Jin
    • Journal of the Architectural Institute of Korea Structure & Construction / v.36 no.3 / pp.113-120 / 2020
  • The purpose of this study is to compare Deep Learning-based Convolutional Neural Network (CNN) models for concrete crack detection. The compared models are AlexNet, GoogLeNet, VGG16, VGG19, ResNet-18, ResNet-50, ResNet-101, and SqueezeNet, networks known from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). To train, validate, and test these models, we constructed 3,000 training images and 12,000 validation images at 256×256 pixel resolution, consisting of cracked and non-cracked images, and 5 test images at 4160×3120 pixel resolution, consisting of concrete images with cracks. To increase training efficiency, transfer learning was performed by taking the weights from the pre-trained networks provided by MATLAB. Using the trained networks, the validation data were classified into crack and non-crack images, yielding True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts, from which six performance indicators were calculated: False Negative Rate (FNR), False Positive Rate (FPR), Error Rate, Recall, Precision, and Accuracy. Each test image was scanned twice with a 256×256 pixel sliding window to classify cracks, resulting in a crack map. From the comparison of the performance indicators and the crack maps, it was concluded that VGG16 and VGG19 were the most suitable for detecting concrete cracks.
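
The six indicators follow directly from the four confusion counts; a small sketch (the counts below are hypothetical):

```python
# The six performance indicators computed from the confusion counts.
def indicators(tp: int, tn: int, fp: int, fn: int) -> dict:
    total = tp + tn + fp + fn
    return {
        "FNR": fn / (fn + tp),          # missed cracks among true cracks
        "FPR": fp / (fp + tn),          # false alarms among non-cracks
        "Error Rate": (fp + fn) / total,
        "Recall": tp / (tp + fn),
        "Precision": tp / (tp + fp),
        "Accuracy": (tp + tn) / total,
    }

# Hypothetical counts from classifying validation patches:
print(indicators(tp=5200, tn=6300, fp=300, fn=200))
```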

Fight Detection in Hockey Videos using Deep Network

  • Mukherjee, Subham;Saini, Rajkumar;Kumar, Pradeep;Roy, Partha Pratim;Dogra, Debi Prosad;Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.4 no.4 / pp.225-232 / 2017
  • Understanding actions in videos is an important task. It helps in finding anomalies present in videos, such as fights. Detection of fights becomes more crucial when it comes to sports. This paper focuses on finding fight scenes in hockey videos using blur and Radon transforms together with convolutional neural networks (CNNs). First, the local motion within the video frames has been extracted using blur information. Next, the fast Fourier and Radon transforms have been applied to the local motion. The video frames with fight scenes have been identified using transfer learning with the pre-trained deep learning model VGG-Net. Finally, the methodology has been compared against feed-forward neural networks. Accuracies of 56.00% and 75.00% have been achieved using the feed-forward neural network and VGG16-Net, respectively.
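
A loose sketch of the motion-feature pipeline using SciPy and scikit-image; the blur-based motion estimate here is a crude stand-in for the paper's method:

```python
# Loose sketch: blur-based local motion, then FFT and the Radon transform,
# producing directional features for a downstream classifier.
import numpy as np
from scipy import ndimage
from skimage.transform import radon

frame = np.random.rand(128, 128)                   # stand-in grayscale frame
blurred = ndimage.gaussian_filter(frame, sigma=2)
local_motion = np.abs(frame - blurred)             # high-frequency (motion-like) residue

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(local_motion)))
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(spectrum, theta=theta, circle=False)  # directional energy profile
print(sinogram.shape)   # projections x angles, fed to the classifier downstream
```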

Detection of Number and Character Area of License Plate Using Deep Learning and Semantic Image Segmentation (딥러닝과 의미론적 영상분할을 이용한 자동차 번호판의 숫자 및 문자영역 검출)

  • Lee, Jeong-Hwan
    • Journal of the Korea Convergence Society / v.12 no.1 / pp.29-35 / 2021
  • License plate recognition plays a key role in intelligent transportation systems, so efficiently detecting the number and character areas is a very important step. In this paper, we propose a method to effectively detect license plate number areas by applying deep learning and a semantic image segmentation algorithm. The proposed method detects number and character areas directly from the license plate without preprocessing such as pixel projection. The license plate images were acquired from a fixed camera installed on the road and cover various real situations, taking both weather and lighting changes into account. The input images were normalized to reduce color variation, and the deep learning networks used in the experiments were Vgg16, Vgg19, ResNet18, and ResNet50. To examine the performance of the proposed method, we experimented with 500 license plate images: 300 were used for training and 200 for testing. As a result of computer simulation, ResNet50 performed best, achieving 95.77% accuracy.
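
A minimal sketch of treating number/character detection as semantic segmentation; torchvision's FCN-ResNet50 stands in for the paper's networks, and the three-class layout is an assumption:

```python
# Sketch: pixel-wise region detection on a plate image as semantic segmentation.
import torch
from torchvision.models.segmentation import fcn_resnet50

NUM_CLASSES = 3   # assumed: background, number area, character area
model = fcn_resnet50(weights=None, num_classes=NUM_CLASSES).eval()

plate = torch.randn(1, 3, 160, 320)    # stand-in normalized plate image
with torch.no_grad():
    out = model(plate)["out"]          # (1, NUM_CLASSES, 160, 320)
label_map = out.argmax(dim=1)          # per-pixel region labels
print(label_map.shape, label_map.unique())
```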

Malignant and Benign Classification of Liver Tumor in CT according to Data Pre-processing and Deep Learning Model (CT영상에서의 AlexNet과 VggNet을 이용한 간암 병변 분류 연구)

  • Choi, Bo Hye;Kim, Young Jae;Choi, Seung Jun;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research / v.39 no.6 / pp.229-236 / 2018
  • Liver cancer has one of the highest incidence rates in the world, and its mortality rate is second only to lung cancer. The purpose of this study is to evaluate the diagnostic ability of deep learning in classifying malignant and benign tumors in CT images of patients with liver tumors. We also tried to identify the best data processing methods and deep learning models for this classification. In this study, CT data were collected from 92 patients (benign liver tumors: 44, malignant liver tumors: 48) at the Gil Medical Center, yielding 3,024 cross-sectional liver tumor images. For AlexNet and VggNet, the average overall accuracy at each image size was calculated: 69.58% (AlexNet) and 69.4% (VggNet) at 200×200, 71.54% and 67% at 150×150, and 68.79% and 66.2% at 100×100. In conclusion, since neither model exceeds 80% overall accuracy, the accuracy is not at a high level. In addition, the average accuracy was 90.3% for benign tumors but only 46.2% for malignant tumors, a significant difference between the two classes. Also, AlexNet trained about 1.6 times faster than VggNet, though the difference was not statistically significant (p > 0.05). Since both models remain below 90% overall accuracy, more research and development are needed, such as training on the liver tumor data with a new model or pre-processing the images in other ways. In the future, deep learning is expected to be useful for assisting specialists in image reading.
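
A minimal sketch of the image-size comparison, rebuilding each network per input size (whether the networks were pretrained and which VGG depth "VggNet" denotes are not stated, so both are assumptions; the training loop is omitted):

```python
# Sketch: the same two networks evaluated with tumor ROIs at three sizes.
import torch
import torch.nn as nn
from torchvision import models

def make_model(name: str) -> nn.Module:
    m = getattr(models, name)(weights=None)   # pretraining not specified in the paper
    m.classifier[6] = nn.Linear(4096, 2)      # benign vs. malignant
    return m

for size in (200, 150, 100):                  # the three compared crop sizes
    for name in ("alexnet", "vgg16"):         # "VggNet" assumed to be vgg16
        model = make_model(name)
        x = torch.randn(4, 3, size, size)     # stand-in for resized CT ROIs
        print(name, size, tuple(model(x).shape))  # -> (4, 2) logits
```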