• Title/Summary/Keyword: CNN models

453 search results

Voting and Ensemble Schemes Based on CNN Models for Photo-Based Gender Prediction

  • Jhang, Kyoungson
    • Journal of Information Processing Systems / v.16 no.4 / pp.809-819 / 2020
  • Gender prediction accuracy increases as convolutional neural network (CNN) architectures evolve. This paper compares voting and ensemble schemes that use five already-trained CNN models to further improve gender prediction accuracy. Majority voting usually requires an odd number of models, whereas the proposed softmax-based voting can use any number of models to improve accuracy. An ensemble of CNN models combined through an additional fully-connected layer requires further tuning or training of the combined models. Experiments show that both voting and ensembles of CNN models further improve gender prediction accuracy, and that softmax-based voters always achieve better accuracy than majority voters. Compared with softmax-based voters, ensemble models show slightly better or similar accuracy at the cost of additional training of the combined CNN models. Softmax-based voting can therefore be a fast and efficient way to obtain better accuracy without further training, since selecting the most accurate of the available pre-trained CNN models usually yields accuracy similar to that of the corresponding ensemble models.
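
As a rough illustration of the two schemes compared above (a minimal PyTorch sketch, not code from the paper), majority voting takes each model's argmax class, while softmax-based voting averages the models' softmax probabilities before taking the argmax; `logits_list` below stands in for the outputs of any number of pre-trained CNN models.

```python
# Illustrative sketch: majority voting vs. softmax (probability-averaging) voting
# over the outputs of K pre-trained classifiers.
import torch
import torch.nn.functional as F

def majority_vote(logits_list):
    # Each model votes with its argmax class; torch.mode picks the most common vote.
    preds = torch.stack([logits.argmax(dim=1) for logits in logits_list])  # (K, N)
    return preds.mode(dim=0).values                                        # (N,)

def softmax_vote(logits_list):
    # Average the softmax probabilities of all models, then take the argmax.
    probs = torch.stack([F.softmax(logits, dim=1) for logits in logits_list])  # (K, N, C)
    return probs.mean(dim=0).argmax(dim=1)                                     # (N,)

# Toy example: 3 models, 4 samples, 2 classes (e.g., gender prediction).
logits_list = [torch.randn(4, 2) for _ in range(3)]
print(majority_vote(logits_list), softmax_vote(logits_list))
```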

Comparison of the Effect of Interpolation on the Mask R-CNN Model

  • Young-Pill, Ahn;Kwang Baek, Kim;Hyun-Jun, Park
    • Journal of information and communication convergence engineering / v.21 no.1 / pp.17-23 / 2023
  • Recently, several high-performance instance segmentation models have used the Mask R-CNN model, which reached a historic peak in instance segmentation in 2017, as their baseline. Numerous models are derived from Mask R-CNN, so if the performance of Mask R-CNN improves, the performance of the derived models is also expected to improve. Mask R-CNN uses interpolation to adjust the image size, and the resulting input differs depending on the interpolation method. Therefore, this study compared how the performance of Mask R-CNN changes when various interpolation methods are applied to its transform layer. The PennFudan and Balloon datasets were used to train and evaluate the models, and the AP metric was used to evaluate performance. The experiments showed that the derived Mask R-CNN model performed best when bicubic interpolation was used in the transform layer.
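
The operation being varied here, resizing the input inside the transform layer, can be sketched as follows (an assumed PyTorch illustration, not the paper's code); the same image resized with different interpolation modes yields slightly different inputs to the rest of the network. The target size used below is only a typical Mask R-CNN value, not one taken from the paper.

```python
# Minimal sketch: the same image resized with different interpolation modes,
# as done inside a Mask R-CNN-style transform layer before the backbone.
import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 300, 400)   # dummy input image (N, C, H, W)
target_size = (800, 1067)            # a typical Mask R-CNN resize target (assumed)

for mode in ("nearest", "bilinear", "bicubic"):
    kwargs = {} if mode == "nearest" else {"align_corners": False}
    resized = F.interpolate(image, size=target_size, mode=mode, **kwargs)
    print(mode, tuple(resized.shape))
```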

Comparison of Convolutional Neural Network Models for Image Super Resolution

  • Jian, Chen;Yu, Songhyun;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.63-66 / 2018
  • Recently, convolutional neural network (CNN) models have been very successful at single-image super-resolution. Residual learning improves training stability and network performance in CNNs. In this paper, we compare four CNN models for super-resolution (SR) that learn a nonlinear mapping from a low-resolution (LR) input image to a high-resolution (HR) target image. The four models are a plain CNN, a CNN with global residual learning, a CNN with local residual learning, and a CNN with both global and local residual learning. Experimental results show that performance is strongly affected by how skip connections are added to the basic CNN, and that the network trained with only global residual learning achieves the highest performance of the four models in both objective and subjective evaluations.
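
Global residual learning, the best-performing variant above, simply adds the (upscaled) input back to the network output so that only the residual has to be learned. The following is a minimal PyTorch sketch under that assumption, not the authors' network; layer counts and channel widths are illustrative.

```python
# Illustrative sketch: a small SR network with a global residual (skip) connection,
# so the network only learns the residual between the upscaled LR input and the HR target.
import torch
import torch.nn as nn

class GlobalResidualSR(nn.Module):
    def __init__(self, channels=64, num_layers=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Global residual learning: add the input back to the predicted residual.
        return x + self.body(x)

lr_upscaled = torch.rand(1, 1, 64, 64)   # bicubic-upscaled LR image (dummy)
print(GlobalResidualSR()(lr_upscaled).shape)
```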


Effects of CNN Backbone on Trajectory Prediction Models for Autonomous Vehicle

  • Seoyoung Lee;Hyogyeong Park;Yeonhwi You;Sungjung Yong;Il-Young Moon
    • Journal of information and communication convergence engineering / v.21 no.4 / pp.346-350 / 2023
  • Trajectory prediction is an essential element of autonomous driving, and various trajectory prediction models have emerged with the development of deep learning technology. The convolutional neural network (CNN) is the most commonly used architecture for extracting features from visual images, and the latest models exhibit high performance. This study was conducted to identify an efficient CNN backbone among the components of deep learning models for trajectory prediction. We replaced the existing CNN backbone used as the feature extractor in multiple-trajectory prediction models with various state-of-the-art CNN models. The experiments used nuScenes, a dataset built for the development of autonomous vehicles, and the results of each model were compared using evaluation metrics commonly applied to trajectory prediction. Analyzing the impact of the backbone can improve performance on the trajectory prediction task, and investigating its influence on additional deep learning models remains future work.
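
Swapping the backbone can be sketched as below (an illustrative PyTorch example, not the paper's code); `build_predictor` is a hypothetical helper that attaches a simple trajectory regression head to any ResNet-style torchvision backbone, and the prediction horizon of 12 steps is an assumption.

```python
# Illustrative sketch: swapping the CNN backbone used as the image feature
# extractor in front of a simple trajectory prediction head.
import torch
import torch.nn as nn
import torchvision.models as models

def build_predictor(backbone_name="resnet18", horizon=12):
    backbone = getattr(models, backbone_name)(weights=None)
    feat_dim = backbone.fc.in_features          # works for ResNet-style models
    backbone.fc = nn.Identity()                 # drop the classification head
    head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                         nn.Linear(256, horizon * 2))  # (x, y) per future step
    return nn.Sequential(backbone, head)

for name in ("resnet18", "resnet50"):
    model = build_predictor(name)
    traj = model(torch.rand(2, 3, 224, 224)).view(2, -1, 2)
    print(name, traj.shape)
```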

Performance Analysis of Optical Camera Communication with Applied Convolutional Neural Network (합성곱 신경망을 적용한 Optical Camera Communication 시스템 성능 분석)

  • Jong-In Kim;Hyun-Sun Park;Jung-Hyun Kim
    • Smart Media Journal / v.12 no.3 / pp.49-59 / 2023
  • Optical Camera Communication (OCC), regarded as a next-generation wireless communication technology, is currently under extensive research. The performance of OCC is affected by the communication environment, and various strategies for improving it are being studied. Among them, the most prominent is applying convolutional neural networks (CNNs) to the OCC receiver using deep learning. In most studies, however, the CNN is used only to detect the transmitter. In this paper, we experiment with applying a CNN not only to transmitter detection but also to the receiver's demodulation system. We hypothesize that, since the data images of an OCC system are relatively simple to classify compared with other image datasets, most CNN models will achieve high accuracy. To test this hypothesis, we designed and implemented an OCC system to collect data and ran experiments with 12 different CNN models. The results showed that not only high-capacity CNN models with many parameters but also lightweight CNN models achieved over 99% accuracy. This confirms the feasibility of running the OCC system in real time on mobile devices such as smartphones.
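
A lightweight demodulation classifier of the kind found sufficient here can be sketched as follows (an assumed PyTorch example, not the paper's model); the number of symbol classes, 16, and the input crop size are hypothetical.

```python
# Illustrative sketch: a lightweight CNN that classifies cropped OCC symbol
# images into the transmitted symbol classes.
import torch
import torch.nn as nn

class TinyDemodCNN(nn.Module):
    def __init__(self, num_symbols=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_symbols)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

print(TinyDemodCNN()(torch.rand(4, 3, 64, 64)).shape)   # (4, 16) symbol logits
```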

Comparison Study of the Performance of CNN Models with Multi-view Image Set on the Classification of Ship Hull Blocks (다시점 영상 집합을 활용한 선체 블록 분류를 위한 CNN 모델 성능 비교 연구)

  • Chon, Haemyung;Noh, Jackyou
    • Journal of the Society of Naval Architects of Korea / v.57 no.3 / pp.140-151 / 2020
  • It is important to know the location and exact identification number of ship hull blocks when scheduling the shipbuilding process. Wrong information about the location or identification number of a hull block lowers productivity, because time is spent finding the block. To solve this problem, a system is needed that tracks the location of the blocks and identifies their identification numbers automatically. There has been much research on location tracking systems for hull blocks on the stockyard, but none on identifying the blocks themselves. This study compares the performance of five convolutional neural network (CNN) models on the classification of hull blocks from multi-view image sets, in order to identify blocks on the stockyard. The CNN models are open architectures from the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). Four scaled hull block models were used to acquire the images of ship hull blocks. The CNN models were trained both from scratch and by transfer learning, using the original training data and augmented versions of it, yielding 20 tests and predictions across the five CNN models and four training conditions. To compare classification performance, accuracy and the average F1-score computed from the confusion matrix were adopted as performance measures. The ResNet-152v2 model shows the highest accuracy and average F1-score on both the full-block and the cropped-block prediction image sets.
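
The transfer-learning setup described above follows a standard pattern: take a pre-trained ILSVRC backbone, replace its classification head with one sized for the block classes, and score predictions with accuracy and an averaged F1-score. The sketch below assumes a torchvision ResNet-50, a frozen feature extractor, and macro-averaged F1 purely for illustration; it is not the paper's code, and the number of block classes is hypothetical.

```python
# Illustrative sketch: transfer learning a pre-trained ILSVRC model to a
# hull-block classification task, then scoring with accuracy and averaged F1.
import torch.nn as nn
import torchvision.models as models
from sklearn.metrics import accuracy_score, f1_score

num_blocks = 4                                    # hypothetical number of block classes
model = models.resnet50(weights="IMAGENET1K_V2")  # pre-trained backbone
for p in model.parameters():
    p.requires_grad = False                       # freeze the feature extractor
model.fc = nn.Linear(model.fc.in_features, num_blocks)  # new trainable head

# ... fine-tune model.fc on multi-view block images, then evaluate, e.g.:
y_true = [0, 1, 2, 3, 1]                          # dummy labels for illustration
y_pred = [0, 1, 2, 2, 1]
print(accuracy_score(y_true, y_pred), f1_score(y_true, y_pred, average="macro"))
```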

Compression and Performance Evaluation of CNN Models on Embedded Board (임베디드 보드에서의 CNN 모델 압축 및 성능 검증)

  • Moon, Hyeon-Cheol;Lee, Ho-Young;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.25 no.2 / pp.200-207 / 2020
  • Recently, deep neural networks such as CNNs have shown excellent performance in fields such as image classification, object recognition, and visual quality enhancement. However, as the model size and computational complexity of deep learning models grow, it is hard to deploy neural networks in IoT and mobile environments. Therefore, neural network compression algorithms that reduce model size while preserving performance have been studied. In this paper, we apply several compression methods to CNN models and evaluate their performance in an embedded environment. The classification performance and inference time of the original and compressed CNN models, on images captured by a camera, are evaluated on an embedded board equipped with the QCS605 AI chip. Three CNN models, MobileNetV2, ResNet50, and VGG-16, are compressed by pruning and matrix decomposition. The experimental results show that, compared with the original models, the compressed models reduce model size by 1.3 to 11.2 times with a classification performance loss of less than 2%, while also reducing inference time by 1.2 to 2.21 times and memory usage by 1.2 to 3.8 times on the embedded board.
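
One of the compression methods named, pruning, can be sketched with PyTorch's built-in utilities as below (an illustration, not the paper's pipeline); note that unstructured pruning only zeroes weights, so the reported size and speed gains additionally depend on how the sparse or decomposed weights are stored and executed. The 50% pruning ratio is an assumption.

```python
# Illustrative sketch: magnitude-based (L1) pruning of a CNN's convolution layers.
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision.models as models

model = models.mobilenet_v2(weights=None)
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)  # zero 50% of weights
        prune.remove(module, "weight")                            # make pruning permanent

zeros = sum((m.weight == 0).sum().item() for m in model.modules()
            if isinstance(m, nn.Conv2d))
total = sum(m.weight.numel() for m in model.modules() if isinstance(m, nn.Conv2d))
print(f"pruned {zeros / total:.1%} of conv weights")
```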

Performance Comparison of Base CNN Models in Transfer Learning for Crop Diseases Classification (농작물 질병분류를 위한 전이학습에 사용되는 기초 합성곱신경망 모델간 성능 비교)

  • Yoon, Hyoup-Sang;Jeong, Seok-Bong
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.3 / pp.33-38 / 2021
  • Recently, transfer learning with a base convolutional neural network (CNN) model has gained wide acceptance for the early detection and classification of crop diseases, increasing agricultural productivity while reducing disease spread. Classifiers based on transfer learning generally achieve over 90% classification accuracy on datasets of crop leaf images (e.g., the PlantVillage dataset), but they can only classify the diseases they were trained on. This paper provides an evaluation scheme for selecting an effective base CNN model for crop disease transfer learning, considering accuracy on both trained and untrained target crops. First, we present transfer learning models, called the CDC (crop disease classification) architecture, built on widely used pre-trained base CNN models. We then evaluate the performance of seven base CNN models on four untrained crops. The results of the performance evaluation show that DenseNet201 is one of the best base CNN models.
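
Comparing base CNN models fairly requires giving each candidate the same task-specific classification head and identical training settings; the sketch below (illustrative only, with a hypothetical class count and an arbitrary subset of torchvision models) shows how several base models could be prepared for such a comparison.

```python
# Illustrative sketch: preparing several candidate base CNNs with a common
# crop-disease classification head for comparison under identical settings.
import torch.nn as nn
import torchvision.models as models

num_diseases = 38                          # hypothetical number of disease classes
candidates = {
    "resnet50": models.resnet50(weights=None),
    "densenet201": models.densenet201(weights=None),
    "mobilenet_v2": models.mobilenet_v2(weights=None),
}
for name, net in candidates.items():
    if hasattr(net, "fc"):                                   # ResNet-style head
        net.fc = nn.Linear(net.fc.in_features, num_diseases)
    elif isinstance(net.classifier, nn.Sequential):          # MobileNetV2-style head
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, num_diseases)
    else:                                                     # DenseNet-style head
        net.classifier = nn.Linear(net.classifier.in_features, num_diseases)
    print(name, "ready for identical fine-tuning and evaluation")
```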

Cross-Domain Text Sentiment Classification Method Based on the CNN-BiLSTM-TE Model

  • Zeng, Yuyang;Zhang, Ruirui;Yang, Liang;Song, Sujuan
    • Journal of Information Processing Systems / v.17 no.4 / pp.818-833 / 2021
  • To address the low precision, insufficient feature extraction, and poor contextual modeling of existing text sentiment analysis methods, a hybrid CNN-BiLSTM-TE (convolutional neural network, bidirectional long short-term memory, and topic extraction) model is proposed. First, Chinese text data are converted into vectors by Word2Vec using transfer learning. Second, local features are extracted by the CNN. Then, contextual information is extracted by the BiLSTM network and the sentiment polarity is obtained using softmax. Finally, topics are extracted with term frequency-inverse document frequency (TF-IDF) and K-means. Compared with the CNN, BiLSTM, and gated recurrent unit (GRU) models, the CNN-BiLSTM-TE model's F1-score was higher by 0.0147, 0.006, and 0.0052, respectively; compared with the CNN-LSTM, LSTM-CNN, and BiLSTM-CNN models, it was higher by 0.0071, 0.0038, and 0.0049, respectively. The experimental results show that the CNN-BiLSTM-TE model effectively improves these indicators in application. Lastly, scalability was verified on a takeaway dataset, which is of great value in practical applications.
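
The CNN-then-BiLSTM part of the pipeline can be sketched as follows (an illustrative PyTorch model, not the authors' implementation); the embedding would be initialized from Word2Vec in practice, the vocabulary and layer sizes are assumptions, and the TF-IDF + K-means topic extraction runs separately on the raw documents.

```python
# Illustrative sketch: CNN for local features, BiLSTM for context, linear + softmax
# for sentiment polarity, in the spirit of the CNN-BiLSTM-TE model.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)        # init from Word2Vec in practice
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)   # local features
        self.bilstm = nn.LSTM(128, 64, batch_first=True, bidirectional=True)  # context
        self.fc = nn.Linear(128, num_classes)                 # softmax applied at loss time

    def forward(self, tokens):                                # tokens: (N, T) int ids
        x = self.embed(tokens).transpose(1, 2)                # (N, emb_dim, T)
        x = torch.relu(self.conv(x)).transpose(1, 2)          # (N, T, 128)
        out, _ = self.bilstm(x)                               # (N, T, 128)
        return self.fc(out[:, -1])                            # (N, num_classes) logits

print(CNNBiLSTM()(torch.randint(0, 10000, (2, 50))).shape)    # (2, 2)
```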

Comparison of Deep Learning Models Using Protein Sequence Data (단백질 기능 예측 모델의 주요 딥러닝 모델 비교 실험)

  • Lee, Jeung Min;Lee, Hyun
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.245-254 / 2022
  • Proteins are the basic unit of all life activities, and understanding them is essential for studying life phenomena. Since the emergence of machine learning methods based on artificial neural networks, many researchers have tried to predict the function of proteins from their sequences alone. Many combinations of deep learning models have been reported in the literature, but the methods differ, there is no standard methodology, and each is tailored to different data, so there has been no direct comparative analysis of which algorithms are more suitable for protein data. In this paper, the single-model performance of each algorithm is compared and evaluated, in terms of accuracy and speed, by applying the same data to the CNN, LSTM, and GRU models, which are the representative algorithms most frequently used in research on protein function prediction; the final evaluation is presented as micro-averaged precision, recall, and F1-score. The combined CNN-LSTM and CNN-GRU models were also evaluated in the same way. The study confirms that, as a single model, the LSTM performs well on simple classification problems, the overlapping CNN is suitable as a single model for complex classification problems, and the CNN-LSTM is relatively better among the combined models.
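
A combined CNN-LSTM over protein sequences, evaluated with micro-averaged metrics as described, can be sketched as follows (an illustrative PyTorch/scikit-learn example, not the paper's code); the one-hot amino-acid encoding, the number of function classes, and the kernel sizes are assumptions.

```python
# Illustrative sketch: CNN for local sequence motifs followed by an LSTM,
# with micro-averaged precision/recall/F1 evaluation.
import torch
import torch.nn as nn
from sklearn.metrics import precision_score, recall_score, f1_score

class ProteinCNNLSTM(nn.Module):
    def __init__(self, num_amino_acids=20, num_functions=10):
        super().__init__()
        self.conv = nn.Conv1d(num_amino_acids, 64, kernel_size=7, padding=3)  # local motifs
        self.lstm = nn.LSTM(64, 64, batch_first=True)                          # sequence order
        self.fc = nn.Linear(64, num_functions)

    def forward(self, x):                                    # x: (N, 20, L) one-hot sequences
        x = torch.relu(self.conv(x)).transpose(1, 2)         # (N, L, 64)
        _, (h, _) = self.lstm(x)                             # h: (1, N, 64)
        return self.fc(h[-1])                                # (N, num_functions)

logits = ProteinCNNLSTM()(torch.rand(4, 20, 100))            # dummy batch of 4 sequences
y_pred, y_true = logits.argmax(1).tolist(), [0, 1, 2, 3]     # dummy labels for illustration
print(precision_score(y_true, y_pred, average="micro"),
      recall_score(y_true, y_pred, average="micro"),
      f1_score(y_true, y_pred, average="micro"))
```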