Convolutional Neural Networks (CNNs)


Text Classification Using Parallel Word-level and Character-level Embeddings in Convolutional Neural Networks

  • Geonu Kim;Jungyeon Jang;Juwon Lee;Kitae Kim;Woonyoung Yeo;Jong Woo Kim
    • Asia Pacific Journal of Information Systems / v.29 no.4 / pp.771-788 / 2019
  • Deep learning techniques such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) show superior performance in text classification compared to traditional approaches such as Support Vector Machines (SVMs) and Naïve Bayes. When using CNNs for text classification tasks, word embedding or character embedding is the step that transforms words or characters into fixed-size vectors before feeding them into the convolutional layers. In this paper, we propose a parallel word-level and character-level embedding approach in CNNs for text classification. The proposed approach can capture word-level and character-level patterns concurrently. To show the usefulness of the proposed approach, we perform experiments with two English and three Korean text datasets. The experimental results show that character-level embedding works better on Korean and word-level embedding performs well on English. The results also reveal that the proposed approach provides better performance than traditional CNNs with word-level or character-level embedding alone in both Korean and English documents. From a more detailed investigation, we find that the proposed approach tends to perform better than the traditional embedding approaches when the amount of data is relatively small.
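
For readers who want to see the shape of such a model, here is a minimal PyTorch sketch of a CNN with parallel word-level and character-level embedding branches; the vocabulary sizes, filter counts, and fusion point are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch (not the authors' exact model): a text-classification CNN with
# parallel word-level and character-level embedding branches whose pooled
# convolutional features are concatenated before the final classifier.
import torch
import torch.nn as nn

class ParallelEmbeddingCNN(nn.Module):
    def __init__(self, word_vocab=20000, char_vocab=100,
                 word_dim=128, char_dim=32, n_filters=100, n_classes=2):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        # 1-D convolutions over the word sequence and the character sequence.
        self.word_conv = nn.Conv1d(word_dim, n_filters, kernel_size=3, padding=1)
        self.char_conv = nn.Conv1d(char_dim, n_filters, kernel_size=5, padding=2)
        self.fc = nn.Linear(2 * n_filters, n_classes)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, word_len), char_ids: (batch, char_len)
        w = self.word_emb(word_ids).transpose(1, 2)          # (batch, word_dim, word_len)
        c = self.char_emb(char_ids).transpose(1, 2)          # (batch, char_dim, char_len)
        w = torch.relu(self.word_conv(w)).max(dim=2).values  # global max pooling
        c = torch.relu(self.char_conv(c)).max(dim=2).values
        return self.fc(torch.cat([w, c], dim=1))             # fuse both branches

model = ParallelEmbeddingCNN()
logits = model(torch.randint(1, 20000, (4, 50)), torch.randint(1, 100, (4, 300)))
print(logits.shape)  # torch.Size([4, 2])
```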

Convolutional Neural Network Based on Accelerator-Aware Pruning for Object Detection in Single-Shot Multibox Detector (싱글숏 멀티박스 검출기에서 객체 검출을 위한 가속 회로 인지형 가지치기 기반 합성곱 신경망 기법)

  • Kang, Hyeong-Ju
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.141-144 / 2020
  • Convolutional neural networks (CNNs) show high performance in computer vision tasks including object detection, but they require a large amount of weight storage and computation. In this paper, a pruning scheme is applied to CNNs for object detection, which can remove a large fraction of the weights with negligible performance degradation. In contrast to previous schemes, the pruning scheme applied in this paper considers the underlying accelerator architecture. With this consideration, the pruned CNNs can be executed efficiently on an ASIC or FPGA accelerator. Even with the constrained pruning, the resulting CNN shows negligible degradation of detection performance, with a less-than-1%-point drop in mAP on the VOC0712 test set. With the proposed scheme, CNNs can be applied to object detection efficiently.
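
The abstract does not describe the exact hardware constraint, but accelerator-aware pruning is often realized as group-wise pruning, where every fixed-size group of weights keeps the same number of non-zero entries so the accelerator's parallel lanes stay balanced. The sketch below illustrates that generic idea; the group size and keep count are arbitrary assumptions rather than the paper's settings.

```python
# Hedged sketch of group-wise magnitude pruning: within every group of
# `group_size` consecutive weights, keep only the `keep` largest-magnitude
# entries, so each group has the same number of non-zero weights.
import numpy as np

def group_prune(weights, group_size=8, keep=2):
    flat = weights.reshape(-1, group_size).copy()
    # Indices of the (group_size - keep) smallest-magnitude weights per group.
    drop_idx = np.argsort(np.abs(flat), axis=1)[:, :group_size - keep]
    np.put_along_axis(flat, drop_idx, 0.0, axis=1)
    return flat.reshape(weights.shape)

w = np.random.randn(4, 16).astype(np.float32)   # toy convolution weights
pruned = group_prune(w)
print((pruned != 0).sum(axis=1))                # 4 non-zeros per row (2 per group of 8)
```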

TsCNNs-Based Inappropriate Image and Video Detection System for a Social Network

  • Kim, Youngsoo;Kim, Taehong;Yoo, Seong-eun
    • Journal of Information Processing Systems / v.18 no.5 / pp.677-687 / 2022
  • We propose a detection algorithm based on tree-structured convolutional neural networks (TsCNNs) that finds pornography, propaganda, or other inappropriate content on a social media network. The algorithm sequentially applies the typical convolutional neural network (CNN) algorithm in a tree-like structure to minimize classification errors among similar classes, and thus improves accuracy. We implemented the detection system and conducted experiments on a dataset composed of 6 ordinary classes and 11 inappropriate classes collected from the Korean military social network. Each model of the proposed algorithm was trained, and its performance was then evaluated on the identified images and videos. Experimental results with 20,005 new images showed that the overall accuracy of image identification reached 99.51%, and the algorithm reduced the identification errors of the typical CNN algorithm by 64.87%. By reducing false alarms in video identification from this domain, the TsCNNs achieved an optimal performance of 98.11% when using 10-minute frame-sampling intervals. This indicates that classification with proper sampling reduces both the computational burden and false alarms.
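
As a rough illustration of the tree-structured idea, a root classifier can first pick a coarse group and a per-group classifier can then resolve the fine class; the sketch below uses stand-in callables in place of trained CNNs, and the class groupings are invented for illustration.

```python
# Hedged sketch of tree-structured classification: a root CNN separates coarse
# groups (e.g. "ordinary" vs "inappropriate"), and a dedicated CNN per group
# resolves the fine-grained class, reducing confusion between similar classes.
def classify_tree(image, root_cnn, group_cnns):
    """root_cnn and each entry of group_cnns are callables returning a label."""
    coarse = root_cnn(image)              # e.g. "ordinary" or "inappropriate"
    fine = group_cnns[coarse](image)      # class within the chosen coarse group
    return coarse, fine

# Usage with stand-in classifiers (replace with trained CNN models):
root = lambda img: "inappropriate" if img.get("flagged") else "ordinary"
groups = {
    "ordinary": lambda img: "landscape",
    "inappropriate": lambda img: "propaganda",
}
print(classify_tree({"flagged": True}, root, groups))  # ('inappropriate', 'propaganda')
```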

Higher-Order Conditional Random Field established with CNNs for Video Object Segmentation

  • Hao, Chuanyan;Wang, Yuqi;Jiang, Bo;Liu, Sijiang;Yang, Zhi-Xin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.9 / pp.3204-3220 / 2021
  • We perform the task of video object segmentation by incorporating a conditional random field (CRF) and convolutional neural networks (CNNs). Most methods employ a CRF to refine a coarse output from fully convolutional networks. Others treat the inference process of the CRF as a recurrent neural network and then combine CNNs and the CRF into an end-to-end model for video object segmentation. In contrast to these methods, we propose a novel higher-order CRF model to solve the problem of video object segmentation. Specifically, we use CNNs to establish a higher-order dependence among pixels, and this dependence can provide critical global information for a segmentation model to enhance the global consistency of segmentation. In general, the optimization of the higher-order energy is extremely difficult. To make the problem tractable, we decompose the higher-order energy into two parts by utilizing auxiliary variables and then solve it by using an iterative process. We conduct quantitative and qualitative analyses on multiple datasets, and the proposed method achieves competitive results.
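
For context, a generic higher-order CRF energy has the form below; the CNN-derived dependence described in the abstract would play the role of the clique potential, but since the abstract does not give its exact definition, this is the standard textbook form rather than the authors' model.

```latex
% Generic higher-order CRF energy over pixel labels x (standard form, not the
% paper's exact model): unary terms, pairwise terms over neighbouring pixels,
% and higher-order terms over cliques c, where the CNN-based global dependence
% would enter.
E(\mathbf{x}) = \sum_{i} \psi_u(x_i)
              + \sum_{(i,j) \in \mathcal{N}} \psi_p(x_i, x_j)
              + \sum_{c \in \mathcal{C}} \psi_h(\mathbf{x}_c)
```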

Sparse Feature Convolutional Neural Network with Cluster Max Extraction for Fast Object Classification

  • Kim, Sung Hee;Pae, Dong Sung;Kang, Tae-Koo;Kim, Dong W.;Lim, Myo Taeg
    • Journal of Electrical Engineering and Technology / v.13 no.6 / pp.2468-2478 / 2018
  • We propose the Sparse Feature Convolutional Neural Network (SFCNN) to reduce the volume of convolutional neural networks (CNNs). Despite the superior classification performance of CNNs, their enormous network volume requires high computational cost and long processing time, making real-time applications such as online training difficult. We propose an advanced network that reduces the volume of conventional CNNs by producing a region-based sparse feature map. To produce the sparse feature map, two complementary region-based value extraction methods, cluster max extraction and local value extraction, are proposed. Cluster max is selected as the main function based on experimental results. To evaluate SFCNN, we conduct experiments with two conventional CNNs. The network trains 59 times faster and tests 81 times faster than the VGG network, with a 1.2% loss of accuracy in multi-class classification on the Caltech101 dataset. In vehicle classification on the GTI Vehicle Image Database, the network trains 88 times faster and tests 94 times faster than the conventional CNNs, with a 0.1% loss of accuracy.
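
The abstract does not define cluster max extraction precisely; the sketch below only illustrates the general region-based sparse feature map idea by keeping one maximum activation per region and zeroing the rest, with an arbitrary region size, and should not be read as the paper's exact extraction method.

```python
# Hedged sketch: build a sparse feature map by keeping only the maximum value
# inside each non-overlapping region and zeroing all other activations.
import numpy as np

def region_max_sparsify(feature_map, region=4):
    h, w = feature_map.shape
    sparse = np.zeros_like(feature_map)
    for i in range(0, h, region):
        for j in range(0, w, region):
            block = feature_map[i:i + region, j:j + region]
            r, c = np.unravel_index(np.argmax(block), block.shape)
            sparse[i + r, j + c] = block[r, c]   # keep only the regional maximum
    return sparse

fm = np.random.rand(8, 8).astype(np.float32)
print(np.count_nonzero(region_max_sparsify(fm)))  # 4 non-zeros (one per 4x4 region)
```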

Medical Image Classification using Pre-trained Convolutional Neural Networks and Support Vector Machine

  • Ahmed, Ali
    • International Journal of Computer Science & Network Security / v.21 no.6 / pp.1-6 / 2021
  • Recently, pre-trained convolutional neural network (CNN) models have been widely applied to medical image classification. These models can be utilised in three different ways: for feature extraction, by reusing the architecture of the pre-trained model, or by training some layers while freezing others. In this study, the pre-trained ResNet18 CNN model is used for feature extraction, followed by a multi-class support vector machine, which serves as the main classifier, to classify the medical images. Our proposed classification method was implemented on the Kvasir and PH2 medical image datasets. The overall accuracy was 93.38% and 91.67% for the Kvasir and PH2 datasets, respectively. The classification results and performance of our proposed method outperformed some related methods in this area of study.
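
A minimal sketch of the general pipeline described here, a pre-trained ResNet18 used as a fixed feature extractor feeding a multi-class SVM, is shown below with torchvision and scikit-learn; the toy data, preprocessing, and SVM hyperparameters are placeholders rather than the paper's settings.

```python
# Hedged sketch: extract features with a pre-trained ResNet18 (classifier head
# removed) and train a multi-class SVM on those features.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = nn.Identity()          # drop the 1000-class head; output is a 512-d feature
resnet.eval()

def extract_features(images):      # images: (N, 3, 224, 224) tensor, already normalized
    with torch.no_grad():
        return resnet(images).numpy()

# Toy stand-in data; in practice use the Kvasir / PH2 images and real labels.
X_train = extract_features(torch.randn(16, 3, 224, 224))
y_train = [i % 4 for i in range(16)]            # 4 placeholder classes
svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print(svm.predict(extract_features(torch.randn(2, 3, 224, 224))))
```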

Optimizing Image Size of Convolutional Neural Networks for Producing Remote Sensing-based Thematic Map

  • Jo, Hyun-Woo;Kim, Ji-Won;Lim, Chul-Hee;Song, Chol-Ho;Lee, Woo-Kyun
    • Korean Journal of Remote Sensing / v.34 no.4 / pp.661-670 / 2018
  • This study aims to develop a methodology for convolutional neural networks (CNNs) to produce thematic maps from remote sensing data. Optimizing the image size for CNNs was studied, since the image size affects accuracy and acts as a hyper-parameter. The selected study area is Mt. Ung, located in Dangjin-si, Chungcheongnam-do, South Korea, consisting of both coniferous and deciduous forest. Spatial structure analysis and classification of forest type using CNNs were carried out in the study area at a diverse range of scales. As a result of the spatial structure analysis, it was found that the local variance (LV) was high in the range of 7.65 m to 18.87 m, meaning that the size of objects in the image is likely to lie within this range. As a result of the classification, the image size of 15.81 m, belonging to the range with the highest LV values, gave the highest classification accuracy of 85.09%. Also, there was a positive correlation between LV and accuracy in the range under 15.81 m, which was judged to be the optimal image size. Therefore, trial-and-error selection of the optimum image size can be minimized by using the result of the spatial structure analysis as the starting point. This study estimated the optimal image size for CNNs using spatial structure analysis and found that this approach can promote the application of deep learning in remote sensing.
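
As a hedged illustration of the local variance analysis, the sketch below computes the mean local variance of an image band for several window sizes and prints the resulting curve, from which a peak could be read off as a candidate image size; the window sizes and random data are placeholders, and the paper's exact LV procedure may differ.

```python
# Hedged sketch: mean local variance of an image band for several window sizes;
# the window size where the curve peaks suggests a candidate CNN input size.
import numpy as np

def mean_local_variance(band, win):
    h, w = band.shape
    variances = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            variances.append(band[i:i + win, j:j + win].var())
    return float(np.mean(variances))

band = np.random.rand(256, 256)              # stand-in for one remote-sensing band
for win in (3, 5, 9, 17, 33):                # candidate window sizes in pixels
    print(win, round(mean_local_variance(band, win), 5))
```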

A Study on the Vehicle License Plate Recognition Using Convolutional Neural Networks(CNNs) (CNN 기법을 이용한 자동차 번호판 인식법 연구)

  • Nkundwanayo Seth;Gyoo-Soo Chae
    • Journal of Advanced Technology Convergence / v.2 no.4 / pp.7-11 / 2023
  • In this study, we present a method to recognize vehicle license plates using CNN techniques. A vehicle plate is normally used for official identification purposes by the authorities. Most regular Optical Character Recognition (OCR) techniques perform well in recognizing printed characters on documents but cannot make out the registration number on number plates. Besides, existing approaches to plate number detection require that the vehicle be stationary rather than in motion. To address these challenges in number plate detection, we make the following contributions. We create a database of captured vehicle number plate images and recognize the number plate characters using Convolutional Neural Networks. The results of this study can be usefully applied in parking management systems and enforcement cameras.
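
As a generic illustration of the recognition step, a small CNN that classifies segmented plate-character crops could look like the sketch below; the input size, the 36-class alphabet, and the layer widths are assumptions, not the system built in the paper.

```python
# Hedged sketch: a small CNN classifying 32x32 grayscale crops of segmented
# licence-plate characters into 36 classes (digits 0-9 and letters A-Z).
import torch
import torch.nn as nn

char_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, 36),
)

logits = char_cnn(torch.randn(4, 1, 32, 32))   # batch of 4 character crops
print(logits.argmax(dim=1))                    # predicted character indices
```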

Analysis of Reduced-Width Truncated Mitchell Multiplication for Inferences Using CNNs

  • Kim, HyunJin
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.5 / pp.235-242 / 2020
  • This paper analyzes the effect of the reduced output width of truncated logarithmic multiplication and its application to inference with convolutional neural networks (CNNs). For small hardware overhead, the output width is reduced in the truncated Mitchell multiplier, so that the fractional bits in the multiplication output are minimized in error-resilient applications. The analysis shows that when reducing the output width in the truncated Mitchell multiplier, the average relative error can be kept small even though the worst-case relative error increases. When adopting 8 fractional bits in the multiplication output in the evaluations, there is no significant performance degradation in the target CNNs compared to the existing exact and original Mitchell multipliers.
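
For readers unfamiliar with Mitchell multiplication, the sketch below shows the logarithmic approximation with the summed fraction truncated to a limited number of bits, which is the width/accuracy trade-off the paper studies; the abstract does not specify exactly where the truncation is applied in the hardware design, so this placement is an assumption.

```python
# Hedged sketch of Mitchell's logarithmic multiplication with the fraction
# truncated to `frac_bits` bits (the "reduced output width" idea); the bit
# widths are illustrative, not the paper's hardware configuration.
def mitchell_multiply(a, b, frac_bits=8):
    """Approximate a*b for positive integers using Mitchell's method."""
    ka, kb = a.bit_length() - 1, b.bit_length() - 1      # leading-one positions
    xa = (a - (1 << ka)) / (1 << ka)                     # fractional parts in [0, 1)
    xb = (b - (1 << kb)) / (1 << kb)
    frac = xa + xb
    # Truncate the summed fraction to `frac_bits` bits, as a narrow word would.
    frac = int(frac * (1 << frac_bits)) / (1 << frac_bits)
    if frac < 1.0:                                       # antilogarithm step
        return (1 << (ka + kb)) * (1.0 + frac)
    return (1 << (ka + kb + 1)) * frac

for a, b in [(100, 200), (123, 45), (7, 7)]:
    approx = mitchell_multiply(a, b)
    print(a * b, approx, f"{abs(a * b - approx) / (a * b):.2%}")  # relative error
```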

A novel MobileNet with selective depth multiplier to compromise complexity and accuracy

  • Chan Yung Kim;Kwi Seob Um;Seo Weon Heo
    • ETRI Journal / v.45 no.4 / pp.666-677 / 2023
  • In the last few years, convolutional neural networks (CNNs) have demonstrated good performance in solving various computer vision problems. However, since CNNs exhibit high computational complexity, signal processing is performed on the server side. To reduce the computational complexity of CNNs for edge computing, lightweight algorithms such as MobileNet have been proposed. Although MobileNet is lighter than other CNN models, it commonly achieves lower classification accuracy. Hence, to find a balance between complexity and accuracy, additional hyperparameters for adjusting the size of the model have recently been proposed. However, significantly increasing the number of parameters makes models dense and unsuitable for devices with limited computational resources. In this study, we propose a novel MobileNet architecture in which the number of parameters is adaptively increased according to the importance of the feature maps. We show that the proposed network achieves better classification accuracy with fewer parameters than the conventional MobileNet.
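
To make the width-multiplier idea concrete, the sketch below shows a MobileNet-style depthwise separable block whose output channel count is scaled by a per-layer multiplier; selectively giving larger multipliers to more important layers is the flavor of the proposed scheme, but the block structure and multiplier values here are generic assumptions.

```python
# Hedged sketch: a MobileNet-style depthwise separable block whose output width
# is scaled by a per-layer multiplier, so "important" layers can be made wider.
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    def __init__(self, in_ch, base_out_ch, multiplier=1.0, stride=1):
        super().__init__()
        out_ch = max(8, int(base_out_ch * multiplier))   # scaled output width
        self.block = nn.Sequential(
            # Depthwise: one 3x3 filter per input channel (groups=in_ch).
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            # Pointwise: 1x1 convolution mixing channels up to the scaled width.
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Wider multiplier for a (hypothetically) more important layer, narrower elsewhere.
layer1 = DepthwiseSeparable(32, 64, multiplier=1.5)
layer2 = DepthwiseSeparable(96, 128, multiplier=0.75, stride=2)
x = torch.randn(1, 32, 56, 56)
print(layer2(layer1(x)).shape)   # torch.Size([1, 96, 28, 28])
```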