• Title/Summary/Keyword: Inception Network

77 search results

A study on evaluation method of NIDS datasets in closed military network (군 폐쇄망 환경에서의 모의 네트워크 데이터 셋 평가 방법 연구)

  • Park, Yong-bin;Shin, Sung-uk;Lee, In-sup
    • Journal of Internet Computing and Services / v.21 no.2 / pp.121-130 / 2020
  • This paper suggests evaluating closed military network data as images generated by a Generative Adversarial Network (GAN), applying image evaluation methods such as the InceptionV3-based Inception Score (IS) and Frechet Inception Distance (FID). We employed well-known image classification models in place of InceptionV3, added layers to those models, and converted the network data into images in diverse ways. Experimental results show that the DenseNet121 model with one added dense layer achieves the best performance on data converted with the arctangent algorithm at an 8 × 8 image size.
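
As a rough illustration of the evaluation idea in the abstract above, the sketch below squashes a numeric feature vector into a small grayscale image with an arctangent step and computes the Frechet Inception Distance between two sets of model activations. The 64-feature assumption, the exact squashing formula, and the function names are illustrative, not taken from the paper.

```python
# Minimal sketch: arctangent-based image conversion plus FID between real and
# GAN-generated activation sets. Feature extraction with a pretrained model is
# assumed to have already produced the (N, D) activation matrices.
import numpy as np
from scipy.linalg import sqrtm

def to_image(features, size=8):
    """Squash one 64-dim feature vector into an 8x8 image in [0, 1] via arctangent."""
    squashed = (np.arctan(features) / np.pi) + 0.5   # arctan maps R into (-pi/2, pi/2)
    return squashed.reshape(size, size)

def fid(real_acts, fake_acts):
    """Frechet distance between two sets of model activations, each of shape (N, D)."""
    mu1, mu2 = real_acts.mean(axis=0), fake_acts.mean(axis=0)
    c1 = np.cov(real_acts, rowvar=False)
    c2 = np.cov(fake_acts, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):          # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))
```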

3D Res-Inception Network Transfer Learning for Multiple Label Crowd Behavior Recognition

  • Nan, Hao;Li, Min;Fan, Lvyuan;Tong, Minglei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1450-1463 / 2019
  • Crowd behavior recognition in heavily clustered scenes is extremely challenging because of variable scales and non-uniformity. This paper proposes a crowd behavior classification framework based on a transferable hybrid network that blends a 3D ResNet with Inception-v3. First, the 3D res-inception network is presented to learn augmented visual features from UCF 101. Then the target dataset is used to fine-tune the network parameters so as to classify the behavior of densely crowded scenes. Finally, a transferred entropy function calculates the probability of multiple labels from these features. Experimental results show that the proposed method greatly improves the accuracy of crowd behavior recognition and of multiple-label classification.
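
The multi-label step in this entry can be pictured as a sigmoid head on top of a frozen, pretrained video backbone. The sketch below uses a stand-in 3D convolutional backbone in Keras rather than the paper's 3D res-inception network; the label count, clip shape, and decision threshold are assumptions.

```python
# Hedged sketch of multi-label fine-tuning on top of a transferred video backbone.
import tensorflow as tf

NUM_LABELS = 5                                  # number of crowd-behavior labels (assumed)

backbone = tf.keras.Sequential([                # stand-in for the paper's 3D res-inception network
    tf.keras.layers.Conv3D(32, 3, activation='relu', input_shape=(16, 112, 112, 3)),
    tf.keras.layers.GlobalAveragePooling3D(),
], name='pretrained_backbone')
backbone.trainable = False                      # keep transferred weights fixed during fine-tuning

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(NUM_LABELS, activation='sigmoid'),  # one independent probability per label
])
model.compile(optimizer='adam', loss='binary_crossentropy')   # multi-label objective

# After training, every label whose probability clears a threshold is reported:
# probs = model.predict(clip_batch); active_labels = probs > 0.5
```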

A Study on the Explainability of Inception Network-Derived Image Classification AI Using National Defense Data (국방 데이터를 활용한 인셉션 네트워크 파생 이미지 분류 AI의 설명 가능성 연구)

  • Kangun Cho
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.2 / pp.256-264 / 2024
  • Over the last 10 years, AI has made rapid progress, and image classification in particular shows excellent performance based on deep learning. Nevertheless, because deep learning behaves as a black box, the lack of explainability of its judgements makes it difficult to use in critical decision-making domains such as national defense, autonomous driving, medical care, and finance. To overcome these limitations, this study applies a model explanation algorithm capable of local interpretation to Inception network-derived AI and analyzes the grounds on which it classifies national defense data. Specifically, we conduct a comparative analysis of explainability based on confidence values by performing LIME analysis on the Inception v2_resnet model and verify the similarity between human interpretations and LIME explanations. Furthermore, by comparing the LIME explanation results for the Top-1 outputs of the Inception v3, Inception v2_resnet, and Xception models, we confirm the feasibility of comparing the efficiency and availability of deep learning networks using XAI.
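
The LIME analysis mentioned in this entry follows a standard recipe with the lime package: wrap the classifier in a prediction function, explain one image, and extract the superpixels that support the Top-1 class. The sketch below uses a stock ImageNet-pretrained InceptionV3 instead of the paper's defense-data models; the image path and the superpixel count are placeholders.

```python
# Hedged sketch of a LIME local explanation for an Inception-family classifier.
import numpy as np
import tensorflow as tf
from lime import lime_image

model = tf.keras.applications.InceptionV3(weights='imagenet')
prep = tf.keras.applications.inception_v3.preprocess_input

def predict_fn(images):
    """LIME passes batches of perturbed images; return class probabilities."""
    return model.predict(prep(np.array(images, dtype=np.float32)))

image = np.array(tf.keras.utils.load_img('sample.jpg', target_size=(299, 299)))  # placeholder path

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn, top_labels=1, num_samples=1000)
top_label = explanation.top_labels[0]
masked_img, mask = explanation.get_image_and_mask(top_label, positive_only=True, num_features=5)
# 'mask' marks the superpixels that most support the Top-1 class; these regions can then be
# compared with a human annotator's judgement, as in the study.
```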

A Hierarchical Deep Convolutional Neural Network for Crop Species and Diseases Classification (Deep Convolutional Neural Network(DCNN)을 이용한 계층적 농작물의 종류와 질병 분류 기법)

  • Borin, Min;Rah, HyungChul;Yoo, Kwan-Hee
    • Journal of Korea Multimedia Society / v.25 no.11 / pp.1653-1671 / 2022
  • Crop diseases affect crop production, causing losses of more than 30 billion USD globally. We propose a classification study of crop species and diseases using deep learning algorithms for corn, cucumber, pepper, and strawberry. Our study has three steps, species classification, disease detection, and disease classification, and is noteworthy for using captured images without additional processing. We designed a deep convolutional neural network approach based on the Mask R-CNN model to classify crop species, with Inception and ResNet models used sequentially for disease detection and classification. For crop species classification and segmentation, the trained Mask R-CNN network achieved a loss value of 0.72. For disease detection, InceptionV3 and ResNet101-V2 models were trained per crop-species node on 1,500 images with normal and diseased labels; InceptionV3 gave the higher accuracy and AUC, with accuracies of 0.984, 0.969, 0.956, and 0.962 for corn, cucumber, pepper, and strawberry. For disease classification, the same models were trained per node on 1,500 images with disease labels; ResNet101-V2 gave the higher accuracy and AUC for corn and cucumber (0.995 and 0.992), whereas InceptionV3 reached 0.940 and 0.988 for pepper and strawberry.
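
The hierarchical structure described in this entry (species first, then per-species disease detection and disease classification) can be sketched as a simple routing function over per-species models. The model file names and the sigmoid-output assumption below are placeholders, not the authors' released artifacts.

```python
# Hedged sketch of hierarchical routing: species model -> per-species detection -> classification.
import numpy as np
import tensorflow as tf

SPECIES = ['corn', 'cucumber', 'pepper', 'strawberry']

species_model = tf.keras.models.load_model('species_classifier.h5')                     # placeholder
detect_models = {s: tf.keras.models.load_model(f'{s}_detect.h5') for s in SPECIES}      # normal vs diseased
disease_models = {s: tf.keras.models.load_model(f'{s}_disease.h5') for s in SPECIES}    # disease type

def classify(image):
    """Return (species, 'normal') or (species, disease_index) for one preprocessed image."""
    batch = np.expand_dims(image, axis=0)
    species = SPECIES[int(np.argmax(species_model.predict(batch)))]
    diseased_prob = float(detect_models[species].predict(batch)[0, 0])   # sigmoid output assumed
    if diseased_prob < 0.5:
        return species, 'normal'
    disease_idx = int(np.argmax(disease_models[species].predict(batch)))
    return species, disease_idx
```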

Instagram image classification with Deep Learning (딥러닝을 이용한 인스타그램 이미지 분류)

  • Jeong, Nokwon;Cho, Soosun
    • Journal of Internet Computing and Services / v.18 no.5 / pp.61-67 / 2017
  • In this paper we present two experimental results on the classification of Instagram images and some valuable lessons learned from them. We ran experiments to evaluate how competitive Convolutional Neural Networks (CNNs) are at classifying real social network images such as Instagram photos. We used AlexNet and ResNet, the most outstanding models of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012 and 2015 respectively, on 240 Instagram images across 12 pre-defined categories. We also performed fine-tuning with the Inception V3 model and compared the results. Across the four cases of AlexNet, ResNet, Inception V3, and fine-tuned Inception V3, the Top-1 error rates were 49.58%, 40.42%, 30.42%, and 5.00%, and the Top-5 error rates were 35.42%, 25.00%, 20.83%, and 0.00%, respectively.
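
The fine-tuning setup in this entry is a standard transfer-learning recipe. A minimal Keras sketch follows, assuming an ImageNet-pretrained InceptionV3 backbone, a fresh 12-way softmax head, and a placeholder directory layout for the Instagram images.

```python
# Hedged sketch: fine-tune a 12-way head on frozen InceptionV3 features and track Top-1/Top-5.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights='imagenet', include_top=False, pooling='avg')
base.trainable = False                                # train only the new classifier head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1./127.5, offset=-1, input_shape=(299, 299, 3)),  # InceptionV3 expects [-1, 1]
    base,
    tf.keras.layers.Dense(12, activation='softmax'),  # 12 pre-defined Instagram categories
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy', tf.keras.metrics.TopKCategoricalAccuracy(k=5)])

train_ds = tf.keras.utils.image_dataset_from_directory(
    'instagram/train', image_size=(299, 299), label_mode='categorical')   # placeholder path
model.fit(train_ds, epochs=10)
```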

A Deep Neural Network Architecture for Real-Time Semantic Segmentation on Embedded Board (임베디드 보드에서 실시간 의미론적 분할을 위한 심층 신경망 구조)

  • Lee, Junyeop;Lee, Youngwan
    • Journal of KIISE / v.45 no.1 / pp.94-98 / 2018
  • We propose Wide Inception ResNet (WIR Net), an optimized neural network architecture, as a real-time semantic segmentation method for autonomous driving. The architecture consists of an encoder that extracts features using residual connections and inception modules, and a decoder that increases the resolution using transposed convolution and a low-layer feature map. We further improved performance by applying the ELU activation function and optimized the network by reducing the number of layers and increasing the number of filters. The performance evaluation used an NVIDIA GeForce GTX 1080 and a TX1 board to assess class and category IoU on Cityscapes driving data. The experimental results show a class IoU of 53.4 and a category IoU of 81.8, with execution speeds of 17.8 fps and 13.0 fps for 640×360 and 720×480 resolution images on the TX1 board.
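
The building blocks listed in this entry (inception-style branches, residual connections, ELU, and transposed-convolution upsampling fused with a low-layer feature map) can be combined into a toy encoder-decoder as sketched below. This is not the authors' WIR Net; filter counts, depth, and the 20-class output are assumptions.

```python
# Hedged sketch of an inception-residual encoder block and a transposed-convolution decoder.
import tensorflow as tf
from tensorflow.keras import layers

def wide_inception_res_block(x, filters):
    b1 = layers.Conv2D(filters, 1, padding='same', activation='elu')(x)
    b3 = layers.Conv2D(filters, 3, padding='same', activation='elu')(x)
    b5 = layers.Conv2D(filters, 5, padding='same', activation='elu')(x)
    merged = layers.Concatenate()([b1, b3, b5])
    merged = layers.Conv2D(filters, 1, padding='same')(merged)     # project back to 'filters' channels
    shortcut = layers.Conv2D(filters, 1, padding='same')(x)        # match channels for the residual add
    return layers.ELU()(layers.Add()([merged, shortcut]))

inputs = tf.keras.Input(shape=(360, 640, 3))
low = wide_inception_res_block(inputs, 32)                          # low-layer features kept for the decoder
down = layers.MaxPooling2D()(low)
deep = wide_inception_res_block(down, 64)
up = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='elu')(deep)
fused = layers.Concatenate()([up, low])                             # reuse the low-layer feature map
outputs = layers.Conv2D(20, 1, activation='softmax')(fused)         # per-pixel class map (class count assumed)
model = tf.keras.Model(inputs, outputs)
```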

Facial Age Estimation Using Convolutional Neural Networks Based on Inception Modules (인셉션 모듈 기반 컨볼루션 신경망을 이용한 얼굴 연령 예측)

  • Sukh-Erdene, Bolortuya;Cho, Hyun-chong
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.9 / pp.1224-1231 / 2018
  • Automatic age estimation is used in many social network applications, practical commercial applications, and human-computer interaction tasks such as visual surveillance and biometrics; however, it has rarely been explored in depth. In this paper, we propose an automatic age estimation system that includes face detection and convolutional deep learning based on an inception module; the latter is a 22-layer-deep network in the style of the inception design. To evaluate the proposed approach, we use 4,000 images covering eight age groups from the Adience age dataset and apply k-fold cross-validation (k = 5). A comparison with recent related methods is presented. The results show that the proposed method significantly outperforms existing methods in terms of exact accuracy and off-by-one accuracy, where off-by-one accuracy counts a prediction that is one adjacent age label above or below the true label as correct. For exact accuracy, the "60+" age label is classified with the highest accuracy of 76%.
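
The evaluation protocol in this entry combines 5-fold cross-validation with two metrics, exact accuracy and off-by-one accuracy. The sketch below shows both metrics and the fold split; the labels and stand-in predictions are placeholders, and the model training step is only indicated by a comment.

```python
# Hedged sketch of exact and off-by-one accuracy with a 5-fold split over 8 ordinal age groups.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def exact_and_off_by_one(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    exact = np.mean(y_true == y_pred)
    off_by_one = np.mean(np.abs(y_true - y_pred) <= 1)   # adjacent age group still counts as correct
    return exact, off_by_one

y = np.random.randint(0, 8, size=4000)                   # placeholder labels, age groups encoded 0..7
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(np.zeros((len(y), 1)), y):
    # ... train the inception-based CNN on train_idx and predict on test_idx ...
    y_pred = y[test_idx]                                  # stand-in predictions for the sketch
    print(exact_and_off_by_one(y[test_idx], y_pred))
```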

Breast Cancer Histopathological Image Classification Based on Deep Neural Network with Pre-Trained Model Architecture (사전훈련된 모델구조를 이용한 심층신경망 기반 유방암 조직병리학적 이미지 분류)

  • Mudeng, Vicky;Lee, Eonjin;Choe, Se-woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.399-401 / 2022
  • A definitive diagnosis of breast malignancy status may be obtained by microscopic analysis of a surgical open biopsy. However, this procedure requires experts specializing in histopathological image analysis and is therefore time-consuming and costly. To overcome these issues, deep learning is considered a practical and efficient way to categorize breast cancer as benign or malignant from histopathological images in order to assist pathologists. This study presents a pre-trained convolutional neural network architecture with a 100% fine-tuning scheme and the Adagrad optimizer to classify breast cancer histopathological images as benign or malignant using the 40× magnification BreaKHis dataset. The pre-trained architecture was built from the InceptionResNetV2 model, generating a modified InceptionResNetV2 by substituting the last layer with dense and dropout layers. With a training loss of 0.25%, training accuracy of 99.96%, validation loss of 3.10%, validation accuracy of 99.41%, test loss of 8.46%, and test accuracy of 98.75%, the results indicate that the modified InceptionResNetV2 model reliably predicts breast malignancy type from histopathological images. Future work should focus on k-fold cross-validation, the optimizer, the model, hyperparameter optimization, and classification at 100×, 200×, and 400× magnification.
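
The modified InceptionResNetV2 described in this entry, with the last layer replaced by dropout and dense layers, full (100%) fine-tuning, and the Adagrad optimizer, can be sketched as below. The dropout rate, learning rate, and single-sigmoid output for the benign/malignant decision are assumptions.

```python
# Hedged reconstruction of a fully fine-tuned InceptionResNetV2 with a new dropout + dense head.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    weights='imagenet', include_top=False, pooling='avg', input_shape=(299, 299, 3))
base.trainable = True                                       # 100% fine-tuning scheme

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.5),                           # dropout rate assumed
    tf.keras.layers.Dense(1, activation='sigmoid'),         # benign vs malignant
])
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=1e-3),
              loss='binary_crossentropy', metrics=['accuracy'])
```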

Aircraft Recognition from Remote Sensing Images Based on Machine Vision

  • Chen, Lu;Zhou, Liming;Liu, Jinming
    • Journal of Information Processing Systems / v.16 no.4 / pp.795-808 / 2020
  • Because the Yolov3 network yields poor evaluation indexes, such as detection accuracy and recall rate, when detecting aircraft in remote sensing images, this paper proposes a machine vision-based aircraft detection method for remote sensing images. To improve target detection, the Inception module was introduced into the Yolov3 network structure, and the dataset was cluster-analyzed using the k-means algorithm. To obtain the best aircraft detection model, we adjusted the network parameters of the pre-trained model and increased the resolution of the input image, and our method adopted a multi-scale training scheme. We ran experiments on the remote sensing aircraft images of the RSOD-Dataset and showed that our method improves several evaluation indicators. The experiments also show that our method has good detection and recognition ability for other ground objects.
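
The k-means cluster analysis mentioned in this entry is the usual anchor-selection step for YOLO-style detectors: cluster the ground-truth bounding-box sizes and use the centroids as anchors. The sketch below uses scikit-learn's Euclidean KMeans on placeholder box data; YOLOv3 pipelines often use an IoU-based distance instead.

```python
# Hedged sketch of k-means anchor selection for a YOLOv3-style detector.
import numpy as np
from sklearn.cluster import KMeans

# (width, height) of ground-truth boxes, normalized to the input resolution -- placeholder data
boxes_wh = np.random.rand(500, 2)

kmeans = KMeans(n_clusters=9, n_init=10, random_state=0).fit(boxes_wh)   # YOLOv3 uses 9 anchors
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]
print(anchors)   # sorted by area; three anchors are typically assigned to each detection scale
```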

Comparison of Image Classification Performance in Convolutional Neural Network according to Transfer Learning (전이학습에 방법에 따른 컨벌루션 신경망의 영상 분류 성능 비교)

  • Park, Sung-Wook;Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1387-1395 / 2018
  • The Convolutional Neural Network (CNN), a core deep learning algorithm, shows better performance than other machine learning algorithms. However, without sufficient data, a CNN cannot achieve satisfactory performance even if the classifier is excellent. In this situation, transfer learning has been proven to have a great effect. In this paper, we apply two transfer learning methods (freezing and retraining) to three CNN models (ResNet-50, Inception-V3, DenseNet-121) and compare and analyze how classification performance changes with the method. In a statistical significance test using various evaluation indicators, the two methods differed by factors of 1.18, 1.09, and 1.17 for ResNet-50, Inception-V3, and DenseNet-121, respectively. Based on this, we conclude that retraining may be more effective than freezing for transfer learning in image classification problems.
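
The two settings compared in this entry, freezing and retraining, differ only in whether the pretrained backbone weights are updated during training. A minimal Keras sketch for one of the three backbones (ResNet-50) follows; the class count and optimizer are assumptions, and the same construction applies to the other two models.

```python
# Hedged sketch of the freezing vs retraining transfer-learning settings for one backbone.
import tensorflow as tf

def build(num_classes, retrain_backbone):
    base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, pooling='avg')
    base.trainable = retrain_backbone          # False = freezing, True = retraining
    model = tf.keras.Sequential([base, tf.keras.layers.Dense(num_classes, activation='softmax')])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

frozen = build(num_classes=10, retrain_backbone=False)      # class count is a placeholder
retrained = build(num_classes=10, retrain_backbone=True)
# tf.keras.applications.InceptionV3 and tf.keras.applications.DenseNet121 can be swapped in for base.
```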