• Title/Summary/Keyword: VGGNet

Compression and Performance Evaluation of CNN Models on Embedded Board (임베디드 보드에서의 CNN 모델 압축 및 성능 검증)

  • Moon, Hyeon-Cheol;Lee, Ho-Young;Kim, Jae-Gon
    • Journal of Broadcast Engineering, v.25 no.2, pp.200-207, 2020
  • Recently, deep neural networks such as CNNs have shown excellent performance in fields such as image classification, object recognition, and visual quality enhancement. However, as the model size and computational complexity of deep learning models continue to grow, it is difficult to deploy neural networks in IoT and mobile environments. Therefore, neural network compression algorithms that reduce model size while preserving performance have been actively studied. In this paper, we apply several compression methods to CNN models and evaluate their performance in an embedded environment. The classification accuracy and inference time of the original and compressed CNN models, applied to images captured by a camera, are measured on an embedded board equipped with the QCS605 AI chip. MobileNetV2, ResNet50, and VGG-16 are compressed using pruning and matrix decomposition. The experimental results show that, compared to the original models, the compressed models achieve a model size reduction of 1.3 to 11.2 times with a classification performance loss of less than 2%, an inference time reduction of 1.2 to 2.21 times, and a memory reduction of 1.2 to 3.8 times on the embedded board.
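
A minimal sketch of the magnitude-based pruning step mentioned in the abstract above, assuming PyTorch and torchvision are available; the 30% per-layer sparsity and the MobileNetV2 backbone are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative magnitude pruning of a CNN's convolutional layers (PyTorch).
# The 30% sparsity level is an assumed value, not the paper's setting.
import torch
import torch.nn.utils.prune as prune
from torchvision import models

model = models.mobilenet_v2(weights=None)  # backbone used here only as an example

# Apply L1-norm unstructured pruning to every convolutional layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruned weights permanent

# Report the resulting global sparsity.
zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"global sparsity: {zeros / total:.2%}")
```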

Compression of CNN Using Low-Rank Approximation and CP Decomposition Methods (저계수 행렬 근사 및 CP 분해 기법을 이용한 CNN 압축)

  • Moon, HyeonCheol;Moon, Gihwa;Kim, Jae-Gon
    • Journal of Broadcast Engineering, v.26 no.2, pp.125-131, 2021
  • In recent years, Convolutional Neural Networks (CNNs) have achieved outstanding performance in computer vision tasks such as image classification, object detection, and visual quality enhancement. However, because CNN models require a huge amount of computation and memory, their application to low-power environments such as mobile or IoT devices is limited. Therefore, the need for neural network compression that reduces model size while preserving task performance as much as possible has been emerging. In this paper, we propose a method to compress CNN models by combining two matrix decomposition methods: LR (Low-Rank) approximation and CP (Canonical Polyadic) decomposition. Unlike conventional approaches that apply a single decomposition method to the whole model, we selectively apply the two methods depending on the layer type to enhance compression performance. To evaluate the proposed method, we use image classification models such as VGG-16, ResNet50, and MobileNetV2. The experimental results show that the proposed method gives improved classification performance over the same compression-ratio range of 1.5 to 12.1 times compared with the existing method that applies only LR approximation.
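
As a rough illustration of the low-rank approximation component (the CP decomposition of convolutional layers is omitted here), the sketch below factorizes a fully connected layer's weight matrix with a truncated SVD and replaces it with two smaller linear layers; the rank of 64 and the 4096-unit layer are assumed values, not the paper's settings.

```python
# Replace one nn.Linear with a rank-r factorization W ≈ U @ V (PyTorch).
import torch
import torch.nn as nn

def low_rank_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Approximate layer.weight (out x in) by two linear maps of rank `rank`."""
    W = layer.weight.data                       # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                # (out, rank)
    V_r = Vh[:rank, :]                          # (rank, in)

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = V_r.contiguous()
    second.weight.data = U_r.contiguous()
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    return nn.Sequential(first, second)

fc = nn.Linear(4096, 4096)
compressed = low_rank_linear(fc, rank=64)
x = torch.randn(1, 4096)
print(torch.norm(fc(x) - compressed(x)))  # approximation error on a random input
```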

Food Detection by Fine-Tuning Pre-trained Convolutional Neural Network Using Noisy Labels

  • Alshomrani, Shroog;Aljoudi, Lina;Aljabri, Banan;Al-Shareef, Sarah
    • International Journal of Computer Science & Network Security, v.21 no.7, pp.182-190, 2021
  • Deep learning is an advanced technology for large-scale data analysis, with numerous promising applications such as image processing and object detection. It has become customary to use transfer learning and fine-tune a pre-trained CNN model for most image recognition tasks. People taking photos and tagging themselves provide a valuable source of image data; however, these tags and labels may be noisy because the people who annotate the images are not necessarily experts. This paper explores the impact of noisy labels on fine-tuning pre-trained CNN models. The effect is measured on a food recognition task using Food101 as a benchmark. Four pre-trained CNN models are included in this study: InceptionV3, VGG19, MobileNetV2, and DenseNet121. Symmetric label noise is added at different ratios. In all cases, models based on DenseNet121 outperformed the other models. When noisy labels were introduced, the performance of all models degraded almost linearly with the amount of added noise.
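
A minimal sketch of injecting symmetric label noise of the kind studied above, assuming NumPy; the 20% noise ratio and the 101-class label array mirror the Food101 setting but are purely illustrative.

```python
# Flip a fraction of labels uniformly to other classes (symmetric label noise).
import numpy as np

def add_symmetric_noise(labels: np.ndarray, noise_ratio: float, num_classes: int,
                        seed: int = 0) -> np.ndarray:
    """Return a copy of `labels` with `noise_ratio` of entries replaced by a
    uniformly random *different* class."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    n_noisy = int(noise_ratio * len(labels))
    idx = rng.choice(len(labels), size=n_noisy, replace=False)
    for i in idx:
        choices = [c for c in range(num_classes) if c != labels[i]]
        noisy[i] = rng.choice(choices)
    return noisy

labels = np.random.randint(0, 101, size=1000)  # hypothetical Food101-style labels
noisy_labels = add_symmetric_noise(labels, noise_ratio=0.2, num_classes=101)
print("actually flipped:", np.mean(labels != noisy_labels))
```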

Evaluation of Deep Learning Model for Scoliosis Pre-Screening Using Preprocessed Chest X-ray Images

  • Min Gu Jang;Jin Woong Yi;Hyun Ju Lee;Ki Sik Tae
    • Journal of Biomedical Engineering Research, v.44 no.4, pp.293-301, 2023
  • Scoliosis is a three-dimensional deformity of the spine, induced by physical or disease-related causes, in which the spine is abnormally rotated. Early detection has a significant influence on the possibility of nonsurgical treatment. The aim of this study was to train a deep learning model with preprocessed images and to evaluate the results with and without data augmentation, enabling the diagnosis of scoliosis from a chest X-ray image alone. Preprocessed images, in which only the spine, rib contours, and some hard tissues were retained from the original chest image, were used for training along with the original images, and three CNN (Convolutional Neural Network) models (VGG16, ResNet152, and EfficientNet) were selected for training. Training with the preprocessed images yielded higher accuracy than training with the original images. When scoliosis images were added through data augmentation, the accuracy improved further, ultimately reaching a classification accuracy of 93.56% on test data with the ResNet152 model. With supplementation by future research, the proposed method is expected to allow early diagnosis of scoliosis and to reduce costs by lowering the burden of additional radiographic imaging for disease detection.
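
A minimal sketch of an image augmentation pipeline of the general kind used for X-ray classification studies like this one, assuming torchvision; the specific transforms, parameters, input size, and directory name are assumptions, not the authors' configuration.

```python
# Illustrative augmentation pipeline for grayscale chest X-ray images (torchvision).
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((224, 224)),                        # assumed input size
    transforms.RandomRotation(degrees=5),                 # small rotations
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),
    transforms.ToTensor(),
])

# Applied to each image when building the training dataset, e.g.:
# dataset = torchvision.datasets.ImageFolder("xray_train/", transform=train_transform)
```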

Face Frontalization Model with A.I. Based on U-Net using Convolutional Neural Network (합성곱 신경망(CNN)을 이용한 U-Net 기반의 인공지능 안면 정면화 모델)

  • Lee, Sangmin;Son, Wonho;Jin, ChangGyun;Kim, Ji-Hyun;Kim, JiYun;Park, Naeun;Kim, Gaeun;Kwon, Jin young;Lee, Hye Yi;Kim, Jongwan;Oh, Dukshin
    • Proceedings of the Korea Information Processing Society Conference, 2020.11a, pp.685-688, 2020
  • Face recognition has been adopted in fields such as Face ID, searching for missing children, and tracking criminals. Its recognition accuracy has recently improved through deep learning, but accuracy for side (profile) views remains relatively low because feature extraction is more difficult than for frontal views. This becomes a drawback when only a side view of a person is available and no frontal image exists, making identification through face recognition difficult. In this paper, we implement an AI-based face frontalization model that extends the situations in which face recognition can be applied by generating a frontal face from a side-view image. VGG-Face is used to extract facial features, and a U-Net structure is used to prevent the information loss that can occur during feature extraction.
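
A minimal sketch of a U-Net-style encoder-decoder with a skip connection, the general structure mentioned in this abstract, assuming PyTorch; the channel sizes and single-stage depth are illustrative and unrelated to the authors' actual model.

```python
# Toy U-Net-style block: one downsampling and one upsampling stage with a skip connection.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 3, base_ch: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base_ch, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(base_ch, base_ch * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base_ch * 2, base_ch, 2, stride=2)
        # Skip connection: decoder sees upsampled features concatenated with encoder features.
        self.dec = nn.Conv2d(base_ch * 2, in_ch, 3, padding=1)

    def forward(self, x):
        e = self.enc(x)                       # encoder features kept for the skip connection
        b = self.bottleneck(self.down(e))
        u = self.up(b)
        return self.dec(torch.cat([u, e], dim=1))

out = TinyUNet()(torch.randn(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 3, 128, 128])
```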

Transfer Learning Models for Enhanced Prediction of Cracked Tires

  • Candra Zonyfar;Taek Lee;Jung-Been Lee;Jeong-Dong Kim
    • Journal of Platform Technology, v.11 no.6, pp.13-20, 2023
  • Regularly inspecting the condition of vehicle tires is imperative for driving safety and comfort, as poorly maintained tires can pose fatal risks and lead to accidents. Manual visual tire inspection, however, is often considered laborious, whereas an automated tire inspection method can significantly enhance driver compliance and awareness and encourage routine checks. There is therefore an urgent need for automated tire inspection solutions. Here, we focus on developing a deep learning (DL) model to predict cracked tires. The main idea of this study is a comparative analysis of Convolutional Neural Network (CNN)-based transfer learning (TL) with DenseNet121, VGG-19, and EfficientNet, with a recommendation of which model is best suited for cracked-tire classification. To measure model effectiveness, we experimented on a publicly accessible dataset of 1,028 images categorized into two classes. The experiments achieve good accuracy (0.9515), showing that the model is reliable even though the tire images are characterized by homogeneous color intensity.
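
A minimal sketch of CNN transfer learning for binary (cracked vs. normal) classification in the spirit of this paper, assuming PyTorch and torchvision >= 0.13; the frozen backbone, the DenseNet121 choice, and the two-class head are illustrative, not the authors' exact setup.

```python
# Transfer learning: pretrained DenseNet121 backbone with a new two-class head.
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights="DEFAULT")   # ImageNet-pretrained backbone

# Freeze the feature extractor; only the new classifier will be trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier with a two-class head (cracked / normal).
model.classifier = nn.Linear(model.classifier.in_features, 2)

# Only the new head's parameters are passed to the optimizer, e.g.:
# optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
```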

Weather Recognition Based on 3C-CNN

  • Tan, Ling;Xuan, Dawei;Xia, Jingming;Wang, Chao
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.8, pp.3567-3582, 2020
  • Human activities are often affected by weather conditions, and automatic weather recognition is useful for traffic alerting, driving assistance, and intelligent transportation. With the advance of deep learning and AI, deep convolutional neural networks (CNNs) are utilized to identify weather conditions. In this paper, a three-channel convolutional neural network (3C-CNN) model is proposed on the basis of ResNet50. The model extracts global weather features from the whole image through the ResNet50 branch, and extracts sky and ground features from the top and bottom regions through two CNN5 branches. The global and local features are then merged by concatenation, and the weather image is finally classified by a Softmax classifier. In addition, a medium-scale dataset of 6,185 outdoor weather images, named WeatherDataset-6, is established. 3C-CNN is trained and tested on both the Two-class Weather Images dataset and WeatherDataset-6. The experimental results show that 3C-CNN achieves the best results on both datasets, with average recognition accuracies of 94.35% and 95.81% respectively, outperforming classic convolutional neural networks such as AlexNet, VGG16, and ResNet50. With further improvement, the method is also expected to work well on images taken at night.
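
A minimal sketch of the multi-branch idea described in this abstract (a global backbone plus small branches over the top and bottom image regions, merged by concatenation), assuming PyTorch and torchvision; the small-branch design, feature sizes, and six-class output are simplified assumptions rather than the authors' exact 3C-CNN.

```python
# Toy three-branch weather classifier: a global ResNet50 branch plus sky and ground branches.
import torch
import torch.nn as nn
from torchvision import models

def small_branch() -> nn.Sequential:
    """Tiny local feature extractor standing in for one CNN5 branch."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # 16-d local feature
    )

class ThreeBranchNet(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()              # 2048-d global feature
        self.global_branch = backbone
        self.sky_branch = small_branch()         # top image region
        self.ground_branch = small_branch()      # bottom image region
        self.classifier = nn.Linear(2048 + 16 + 16, num_classes)

    def forward(self, x):
        h = x.shape[2] // 2
        g = self.global_branch(x)                       # global weather features
        sky = self.sky_branch(x[:, :, :h, :])           # sky features
        ground = self.ground_branch(x[:, :, h:, :])     # ground features
        return self.classifier(torch.cat([g, sky, ground], dim=1))

logits = ThreeBranchNet()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 6])
```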

The development of food image detection and recognition model of Korean food for mobile dietary management

  • Park, Seon-Joo;Palvanov, Akmaljon;Lee, Chang-Ho;Jeong, Nanoom;Cho, Young-Im;Lee, Hae-Jeung
    • Nutrition Research and Practice, v.13 no.6, pp.521-528, 2019
  • BACKGROUND/OBJECTIVES: The aim of this study was to develop a Korean food image detection and recognition model for use in mobile devices for accurate estimation of dietary intake. MATERIALS/METHODS: We collected food images by taking pictures or by searching web images and built an image dataset for training a complex recognition model for Korean food. Augmentation techniques were applied to increase the dataset size. The dataset contained more than 92,000 images categorized into 23 groups of Korean food. All images were down-sampled to a fixed resolution of 150 × 150 and then randomly divided into training and testing groups at a ratio of 3:1, resulting in 69,000 training images and 23,000 test images. We used a Deep Convolutional Neural Network (DCNN) for the complex recognition model and compared the results with those of other networks for large-scale image recognition: AlexNet, GoogLeNet, the Very Deep Convolutional Neural Network (VGG), and ResNet. RESULTS: Our complex food recognition model, K-foodNet, showed higher test accuracy (91.3%) and faster recognition time (0.4 ms) than the other networks. CONCLUSION: K-foodNet achieved better performance in detecting and recognizing Korean food than other state-of-the-art models.
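
A minimal sketch of the data preparation described above (resizing to 150 × 150 and a random 3:1 train/test split), assuming PyTorch/torchvision and a hypothetical ImageFolder-style directory `korean_food/` with one subfolder per food class; the directory layout is an assumption.

```python
# Resize images to 150x150 and split the dataset 3:1 into train and test subsets.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((150, 150)),   # fixed resolution used in the paper
    transforms.ToTensor(),
])

# Hypothetical directory with one subfolder per food class.
dataset = datasets.ImageFolder("korean_food/", transform=transform)

n_train = int(0.75 * len(dataset))   # 3:1 ratio
train_set, test_set = torch.utils.data.random_split(
    dataset, [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(0),
)
print(len(train_set), len(test_set))
```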

Development of Fender Segmentation System for Port Structures using Vision Sensor and Deep Learning (비전센서 및 딥러닝을 이용한 항만구조물 방충설비 세분화 시스템 개발)

  • Min, Jiyoung;Yu, Byeongjun;Kim, Jonghyeok;Jeon, Haemin
    • Journal of the Korea institute for structural maintenance and inspection, v.26 no.2, pp.28-36, 2022
  • Since port structures are exposed to various extreme external loads such as wind (typhoons), sea waves, and collisions with ships, it is important to evaluate their structural safety periodically. To monitor port structures, especially rubber fenders, a fender segmentation system using a vision sensor and a deep learning method is proposed in this study. For fender segmentation, a new deep learning network is proposed that improves the encoder-decoder framework by incorporating a receptive field block convolution module, inspired by the eccentric function of the human visual system, into the DenseNet format. To train the network, images of various fender types such as BP, V, cell, cylindrical, and tire types were collected, and the images were augmented with four methods: elastic distortion, horizontal flip, color jitter, and affine transforms. The proposed algorithm was trained and verified with the collected fender images, and the results showed that the system segments fenders precisely in real time, with a higher IoU (84%) and F1 score (90%) than the conventional segmentation model, a VGG16-based U-Net. The trained network was applied to real images taken at a port in the Republic of Korea, and the fenders were segmented with high accuracy even with a small dataset.
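
A minimal sketch covering the four augmentation methods named in this abstract, using the albumentations library as one possible implementation (not necessarily the one the authors used); the probabilities and parameter ranges are assumed values.

```python
# Illustrative augmentation pipeline: elastic distortion, horizontal flip, color jitter, affine.
import albumentations as A

augment = A.Compose([
    A.ElasticTransform(p=0.5),                              # elastic distortion
    A.HorizontalFlip(p=0.5),                                # horizontal flip
    A.ColorJitter(p=0.5),                                   # color jitter
    A.Affine(scale=(0.9, 1.1), rotate=(-10, 10), p=0.5),    # affine transform
])

# For segmentation, image and mask are transformed together so they stay aligned:
# out = augment(image=image, mask=mask)
# aug_image, aug_mask = out["image"], out["mask"]
```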

A Study on the Risk of Propeller Cavitation Erosion Using Convolutional Neural Network (합성곱 신경망을 이용한 프로펠러 캐비테이션 침식 위험도 연구)

  • Kim, Ji-Hye;Lee, Hyoungseok;Hur, Jea-Wook
    • Journal of the Society of Naval Architects of Korea, v.58 no.3, pp.129-136, 2021
  • Cavitation erosion is one of the major factors causing damage by lowering the structural strength of marine propellers, and its risk has traditionally been evaluated qualitatively by each institution using its own experience-based criteria. In this study, in order to quantitatively evaluate the risk of cavitation erosion on a propeller, we implement a deep learning algorithm based on a convolutional neural network. We train and verify it using model test results, including the cavitation characteristics of various ship types. We adopt validated, well-known networks such as VGG, GoogLeNet, and ResNet, and compare the results with experts' qualitative predictions to confirm the feasibility of a CNN-based prediction algorithm.
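
A minimal sketch of the kind of comparison described above between a classifier's predicted risk levels and experts' qualitative ratings, assuming scikit-learn; the three risk categories and the example arrays are purely illustrative.

```python
# Compare predicted cavitation-erosion risk levels with expert labels.
from sklearn.metrics import accuracy_score, confusion_matrix

risk_levels = ["low", "medium", "high"]                           # assumed qualitative categories
expert = ["low", "high", "medium", "high", "low", "medium"]       # hypothetical expert ratings
predicted = ["low", "high", "medium", "medium", "low", "medium"]  # hypothetical model output

print("agreement with experts:", accuracy_score(expert, predicted))
print(confusion_matrix(expert, predicted, labels=risk_levels))
```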