• Title/Summary/Keyword: MobileNetV3

A new lightweight network based on MobileNetV3

  • Zhao, Liquan; Wang, Leilei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.1-15 / 2022
  • MobileNetV3 is specially designed for mobile devices with limited memory and computing power. To reduce the number of network parameters and improve inference speed, a new lightweight network based on MobileNetV3 is proposed. Firstly, to reduce the computation of residual blocks, a partial residual structure is designed by dividing the input feature maps into two parts; it replaces the residual block in MobileNetV3. Secondly, a dual-path feature extraction structure is designed to further reduce the computation of MobileNetV3. Different convolution kernel sizes are used in the two paths to extract feature maps of different sizes. Besides, a transition layer is designed to fuse the features and reduce the influence of the new structure on accuracy. The CIFAR-100 and ImageNet datasets are used to test the performance of the proposed partial residual structure: a ResNet built on it has fewer parameters and FLOPs than the original ResNet. The performance of the improved MobileNetV3 is tested on the CIFAR-10, CIFAR-100 and ImageNet image classification datasets. Compared with MobileNetV3, GhostNet and MobileNetV2, the improved MobileNetV3 has fewer parameters and FLOPs. The improved MobileNetV3 is also tested on a CPU and a Raspberry Pi, where it is faster than the other networks.
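The "partial residual structure" described in this abstract, which splits the input feature maps into two parts and transforms only one of them, can be sketched roughly as follows. This is a hypothetical NumPy illustration: the function name, the 1×1 convolution, and the even channel split are assumptions, since the paper's exact design is not given in the abstract.

```python
import numpy as np

def partial_residual_block(x, weight):
    """Hypothetical partial residual block: split the (C, H, W) input
    along channels, transform only the first half (here a 1x1 convolution,
    i.e. a channel-mixing matmul) with a residual connection, and pass
    the second half through unchanged before re-joining."""
    c = x.shape[0] // 2
    x1, x2 = x[:c], x[c:]                            # channel split
    y1 = np.einsum('oc,chw->ohw', weight, x1) + x1   # conv + residual on half
    return np.concatenate([y1, x2], axis=0)          # re-join both halves

x = np.random.rand(8, 4, 4).astype(np.float32)
w = np.eye(4, dtype=np.float32)   # identity 1x1 conv just for the demo
y = partial_residual_block(x, w)  # same shape, but only half the channels were computed on
```

Because only C/2 channels pass through the convolution, the block's multiply-accumulate count is roughly halved relative to a full residual block, which matches the abstract's motivation.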

A Study on Optimal Convolutional Neural Networks Backbone for Reinforced Concrete Damage Feature Extraction (철근콘크리트 손상 특성 추출을 위한 최적 컨볼루션 신경망 백본 연구)

  • Park, Younghoon
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.4 / pp.511-523 / 2023
  • Research on integrating unmanned aerial vehicles and deep learning for reinforced concrete damage detection is actively underway. As backbones, convolutional neural networks strongly influence the performance of image classification, detection, and segmentation. MobileNet, a pre-trained convolutional neural network, is efficient as a backbone for an unmanned aerial vehicle-based damage detection model because it achieves sufficient accuracy with low computational complexity. When vanilla convolutional neural networks and MobileNet were analyzed under various conditions, MobileNet achieved 6.0~9.0% higher validation accuracy than the vanilla networks with 15.9~22.9% lower computational complexity. MobileNetV2, MobileNetV3Large and MobileNetV3Small showed almost identical maximum validation accuracy, and the optimal conditions for extracting reinforced concrete damage image features with MobileNet were the RMSprop optimizer, no dropout, and average pooling. The maximum validation accuracy of 75.49% for detecting 7 types of damage with MobileNetV2, derived in this study, can be improved by accumulating images and continuous learning.
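MobileNet's low computational complexity, which this study exploits in backbone selection, comes from replacing standard convolutions with depthwise separable ones. A quick parameter count (standard arithmetic, not figures taken from the paper) shows the size of the saving:

```python
def conv_params(k, c_in, c_out):
    # k x k standard convolution: every output channel mixes all input channels
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # k x k depthwise conv (one filter per input channel) + 1x1 pointwise conv
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 256)                 # 589,824 parameters
sep = depthwise_separable_params(3, 256, 256)  #  67,840 parameters (~8.7x fewer)
```

The same ratio, roughly 1/c_out + 1/k², applies to multiply-accumulate operations, which is why MobileNet variants fit UAV-side inference budgets.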

Implementation of Sports Video Clip Extraction Based on MobileNetV3 Transfer Learning (MobileNetV3 전이학습 기반 스포츠 비디오 클립 추출 구현)

  • YU, LI
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.5 / pp.897-904 / 2022
  • Sports video is a critical information resource. High-precision extraction of the effective segments in a sports video can better assist coaches in analyzing players' actions and enable users to appreciate a player's hitting action more intuitively. To address the shortcomings of current sports video clip extraction, such as strong subjectivity, heavy workload and low efficiency, a MobileNetV3-based classification method for sports video clips is proposed to save users' time. Experiments evaluate the effectiveness of segment extraction: 97.0% of the extracted segments are effective, indicating good extraction results and laying the foundation for constructing a subsequent badminton action metadata video dataset.

A Black Ice Recognition in Infrared Road Images Using Improved Lightweight Model Based on MobileNetV2 (MobileNetV2 기반의 개선된 Lightweight 모델을 이용한 열화도로 영상에서의 블랙 아이스 인식)

  • Li, Yu-Jie; Kang, Sun-Kyoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1835-1845 / 2021
  • To accurately identify black ice and warn drivers in advance so that they can control their speed and take preventive measures, this paper proposes a lightweight black ice detection network based on infrared road images. A black ice recognition network model based on CNN transfer learning was developed. To further improve the accuracy of black ice recognition, an enhanced lightweight network based on MobileNetV2 was then developed. To reduce the amount of computation, linear bottlenecks and inverted residuals were used, organized into four bottleneck groups. At the same time, to improve the model's recognition rate, each bottleneck group was connected to a 3×3 convolutional layer to enhance regional feature extraction and increase the number of feature maps. Finally, a black ice recognition experiment was performed on the constructed infrared road black ice dataset. The proposed network model achieved a recognition accuracy of 99.07% for black ice.
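The linear bottlenecks and inverted residuals this abstract borrows from MobileNetV2 expand the channels, apply ReLU6, then project back linearly so a residual can be added. A shape-level NumPy sketch, with 1×1 convolutions written as channel matmuls (the depthwise step and the paper's four bottleneck groups are omitted, and the random weights are placeholders):

```python
import numpy as np

def inverted_residual(x, t):
    """Shape-level sketch of an inverted residual with a linear bottleneck:
    expand channels by factor t, apply ReLU6, project back linearly
    (no activation on the bottleneck), then add the residual."""
    c = x.shape[0]
    w_expand = np.random.rand(t * c, c).astype(np.float32)    # 1x1 expansion
    w_project = np.random.rand(c, t * c).astype(np.float32)   # 1x1 projection
    h = np.einsum('oc,chw->ohw', w_expand, x)
    h = np.clip(h, 0.0, 6.0)                                  # ReLU6
    y = np.einsum('oc,chw->ohw', w_project, h)                # linear bottleneck
    return x + y                                              # residual connection

x = np.random.rand(16, 4, 4).astype(np.float32)
y = inverted_residual(x, t=6)   # channels expand to 96 internally, return to 16
```

Keeping the projection linear avoids destroying information in the narrow bottleneck, which is why accuracy survives the aggressive channel reduction.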

Application and Performance Analysis of Double Pruning Method for Deep Neural Networks (심층신경망의 더블 프루닝 기법의 적용 및 성능 분석에 관한 연구)

  • Lee, Seon-Woo; Yang, Ho-Jun; Oh, Seung-Yeon; Lee, Mun-Hyung; Kwon, Jang-Woo
    • Journal of Convergence for Information Technology / v.10 no.8 / pp.23-34 / 2020
  • Recently, deep learning has been difficult to commercialize because of the high computational demands and the cost of computing resources. In this paper, we apply a double pruning technique and evaluate its performance on deep neural networks and various datasets. Double pruning combines basic network slimming and parameter pruning. The proposed technique has the advantage of removing parameters that are unimportant to learning and improving speed without compromising accuracy. After training on various datasets, the pruning ratio was increased to reduce the size of the model. NetScore performance analysis confirmed that MobileNet-V3 showed the highest performance. On the CIFAR-10 dataset, the post-pruning performance was highest for MobileNet-V3, which consists of depthwise separable convolutions, and VGGNet and ResNet among traditional convolutional neural networks also improved significantly.
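Of the two halves of double pruning, the parameter-pruning step is the simpler to illustrate: weights with small magnitudes are zeroed. A minimal sketch assuming a magnitude criterion (the Network-Slimming channel step, which prunes whole channels via batch-norm scale factors, is not shown):

```python
import numpy as np

def magnitude_prune(weights, ratio):
    """Zero the fraction `ratio` of weights with the smallest magnitudes;
    surviving weights keep their values, shrinking the effective model
    without changing its architecture."""
    threshold = np.percentile(np.abs(weights), ratio * 100)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.array([0.9, -0.05, 0.4, -0.7, 0.01, 0.3], dtype=np.float32)
pruned, mask = magnitude_prune(w, 0.5)   # keep the larger half of the weights
```

Raising `ratio` step by step, as the study does, trades model size against accuracy; the zeroed weights can also be stored sparsely to realize the size reduction.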

MobileNetV2-based Binary Classification of Dermatoscopic Images of Melanocytic Nevi and Malignant Melanoma (MobileNetV2 기술을 이용한 색소 세포성 모반과 악성 흑색종 Dermatoscopic 영상의 이진 분류)

  • Jeong, Seung Min; Lee, Seung Gun; Lee, Eui Chul
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.670-672 / 2021
  • Melanocytic nevi and malignant melanoma are similar in appearance, but in terms of harm, malignant melanoma is a cancer and therefore a dangerous disease compared with the harmless melanocytic nevus. Accordingly, previous studies have tried to distinguish melanocytic nevi from malignant melanoma, but acquiring the data incurred a high cost. To improve on this, this study used dermatoscopic images of the two lesions as training data for classification. The open-source dermatoscopic dataset HAM10000 was used for training, and MobileNetV2, an improvement over a plain CNN, was used as the model. In the experiments, training with MobileNetV2 used about one fifteenth as many parameters as a 3-layer CNN and reached close to 93% in both validation and test performance. This study achieves a large improvement in cost over previous work and suggests that a commercially viable classification method has been found.

Efficient Convolutional Neural Network with low Complexity (저연산량의 효율적인 콘볼루션 신경망)

  • Lee, Chanho; Lee, Joongkyung; Ho, Cong Ahn
    • Journal of IKEEE / v.24 no.3 / pp.685-690 / 2020
  • We propose an efficient convolutional neural network based on MobileNet V2 with much lower computational complexity and higher accuracy for mobile and edge devices. The proposed network consists of bottleneck layers with larger expansion factors and adjusted numbers of channels, and excludes a few layers; the computational complexity is therefore reduced by half. The performance of the proposed network is verified by measuring accuracy and execution times on a CPU and a GPU using the ImageNet100 dataset. In addition, the execution time on the GPU depends on the CNN architecture.

Dog-Species Classification through CycleGAN and Standard Data Augmentation

  • Chan, Park; Nammee, Moon
    • Journal of Information Processing Systems / v.19 no.1 / pp.67-79 / 2023
  • In the image field, data augmentation refers to increasing the amount of data through an editing method such as rotating or cropping a photo. In this study, a generative adversarial network (GAN) image was created using CycleGAN, and various colors of dogs were reflected through data augmentation. In particular, dog data from the Stanford Dogs Dataset and Oxford-IIIT Pet Dataset were used, and 10 breeds of dog, corresponding to 300 images each, were selected. Subsequently, a GAN image was generated using CycleGAN, and four learning groups were established: 2,000 original photos (group I); 2,000 original photos + 1,000 GAN images (group II); 3,000 original photos (group III); and 3,000 original photos + 1,000 GAN images (group IV). The amount of data in each learning group was augmented using existing data augmentation methods such as rotating, cropping, erasing, and distorting. The augmented photo data were used to train the MobileNet_v3_Large, ResNet-152, InceptionResNet_v2, and NASNet_Large frameworks to evaluate the classification accuracy and loss. The top-3 accuracy for each deep neural network model was as follows: MobileNet_v3_Large of 86.4% (group I), 85.4% (group II), 90.4% (group III), and 89.2% (group IV); ResNet-152 of 82.4% (group I), 83.7% (group II), 84.7% (group III), and 84.9% (group IV); InceptionResNet_v2 of 90.7% (group I), 88.4% (group II), 93.3% (group III), and 93.1% (group IV); and NASNet_Large of 85% (group I), 88.1% (group II), 91.8% (group III), and 92% (group IV). The InceptionResNet_v2 model exhibited the highest image classification accuracy, and the NASNet_Large model exhibited the highest increase in the accuracy owing to data augmentation.
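The classical augmentations used above (rotating, cropping, erasing, distorting) are simple image edits. As one illustration, the "erasing" step might look like the following NumPy sketch; this is an assumed implementation, since the study's patch sizes and fill values are not stated:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_erase(img, h, w):
    """Overwrite a random h x w patch of the image with noise so the
    classifier cannot rely on any single region of the subject."""
    out = img.copy()
    H, W = img.shape[:2]
    y0 = rng.integers(0, H - h + 1)   # random top-left corner of the patch
    x0 = rng.integers(0, W - w + 1)
    out[y0:y0 + h, x0:x0 + w] = rng.random((h, w) + img.shape[2:])
    return out

img = np.zeros((32, 32, 3), dtype=np.float32)
aug = random_erase(img, 8, 8)   # one 8x8 patch replaced with noise
```

Unlike the CycleGAN images, such edits never add new colorations of dogs, which is the gap the GAN-based augmentation in this study targets.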

Study the mutual robustness between parameter and accuracy in CNNs and developed an Automated Parameter Bit Operation Framework (CNN 의 파라미터와 정확도간 상호 강인성 연구 및 파라미터 비트 연산 자동화 프레임워크 개발)

  • Dong-In Lee; Jung-Heon Kim; Seung-Ho Lim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.451-452 / 2023
  • Recently, CNNs have spread across various industries, and research on lightweight models suited to IoT devices and edge computing is surging. In this paper, we propose an automated framework for bit-level operations on the parameters of CNN models, and experimentally study the relationship between parameter bits and model accuracy. The proposed framework sets the lower n bits to 0 to induce information loss, allowing the robustness between the parameters and accuracy of CNN models pre-trained on the ImageNet dataset to be tested systematically bit by bit. We evaluate the accuracy of the InceptionV3, InceptionResnetV2, ResNet50, Xception, DenseNet121, MobileNetV1 and MobileNetV2 models with bit-operated parameters. The experimental results show that the lower a model's performance, the higher the robustness between its parameters and accuracy, so it needs fewer bits to maintain accuracy than a higher-performing model.
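The bit operation described in the abstract, forcing the lower n bits of each parameter to 0, can be reproduced on float32 weights by masking their raw bit patterns. A minimal NumPy sketch (the function name and the float32 assumption are ours, not the paper's):

```python
import numpy as np

def zero_low_bits(params, n):
    """Reinterpret float32 parameters as raw 32-bit integers, clear the
    lowest n bits, and reinterpret back; for small n this perturbs only
    the least-significant mantissa bits."""
    bits = params.astype(np.float32).view(np.uint32)
    mask = np.uint32((0xFFFFFFFF << n) & 0xFFFFFFFF)  # e.g. n=8 -> 0xFFFFFF00
    return (bits & mask).view(np.float32)

w = np.array([0.1, -1.5, 3.14159], dtype=np.float32)
w8 = zero_low_bits(w, 8)   # drop the 8 least-significant mantissa bits
```

Sweeping n from 0 upward and re-evaluating accuracy after each step gives exactly the bit-by-bit robustness curve the framework automates.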

Implementation of Urinalysis Service Application based on MobileNetV3 (MobileNetV3 기반 요검사 서비스 어플리케이션 구현)

  • Gi-Jo Park; Seung-Hwan Choi; Kyung-Seok Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.41-46 / 2023
  • Urine is produced as the body excretes waste products from the blood; it is easy to collect and contains various substances. Urinalysis is used to check for diseases, health conditions, and urinary tract infections. There are three methods of urinalysis, physical property tests, chemical tests, and microscopic tests, and chemical test results can easily be confirmed using urine test strips. A variety of items can be tested on a urine test strip, through which various diseases can be identified. Recently, with the spread of smartphones, research on reading urine test strips with a smartphone has been conducted. One method detects and reads the color change of a urine test strip using the RGB values and a color difference formula, but its accuracy is lowered by various environmental factors. This paper applies a deep learning model to solve this problem. In particular, the color discrimination of a urine test strip on a smartphone is improved using a lightweight CNN (convolutional neural network) model, which makes it possible to run the deep learning model on the smartphone and extract accurate urine test results. Urine test strips were photographed in various environments to prepare training images, and a urinalysis service application was designed using MobileNetV3.
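The RGB color-difference approach the abstract contrasts with amounts to nearest-reference-color matching. A plain-Python sketch with hypothetical chart values (the real reference colors come from the strip maker's chart, and the paper does not state which distance formula was used; Euclidean distance is assumed here):

```python
import math

def rgb_distance(c1, c2):
    # Euclidean distance between two (R, G, B) triples
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def classify_pad(pad_rgb, reference_colors):
    # Assign the pad to the reference color with the smallest distance
    return min(reference_colors, key=lambda k: rgb_distance(pad_rgb, reference_colors[k]))

# Hypothetical reference chart; real values come from the test-strip maker
chart = {"negative": (250, 240, 150), "trace": (200, 220, 120), "positive": (120, 180, 90)}
result = classify_pad((205, 218, 125), chart)   # a sample pad reading
```

Because lighting shifts every measured RGB value, this rule misclassifies easily, which is the weakness the MobileNetV3 classifier in this paper is meant to remove.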