• Title/Summary/Keyword: CNN (Convolutional Neural Network)

Deep Learning-Based Real-Time Pedestrian Detection on Embedded GPUs (임베디드 GPU에서의 딥러닝 기반 실시간 보행자 탐지 기법)

  • Vien, An Gia; Lee, Chul
    • Journal of Broadcast Engineering, v.24 no.2, pp.357-360, 2019
  • We propose an efficient single convolutional neural network (CNN) for pedestrian detection on embedded GPUs. We first determine the optimal number of convolutional layers and the hyper-parameters for a lightweight CNN. Then, we employ a multi-scale approach to make the network robust to the sizes of pedestrians in images. Experimental results demonstrate that the proposed algorithm is capable of real-time operation while providing higher detection performance than conventional algorithms.
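
One common way to realize the multi-scale approach mentioned above is to run a single lightweight CNN over an image pyramid; the sketch below illustrates that idea only. The layer sizes, channel counts, and pyramid scales are assumptions for illustration, not the configuration used in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyPedNet(nn.Module):
        """Illustrative lightweight CNN; depth and channel counts are assumptions."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.score = nn.Conv2d(32, 1, 1)   # per-location pedestrian score map

        def forward(self, x):
            return self.score(self.features(x))

    def multiscale_scores(net, image, scales=(1.0, 0.75, 0.5)):
        """Run the same network on an image pyramid (multi-scale detection)."""
        outputs = []
        for s in scales:
            scaled = F.interpolate(image, scale_factor=s, mode='bilinear',
                                   align_corners=False)
            outputs.append(torch.sigmoid(net(scaled)))
        return outputs   # one score map per scale

    net = TinyPedNet().eval()
    with torch.no_grad():
        maps = multiscale_scores(net, torch.rand(1, 3, 240, 320))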

Learning of Large-Scale Korean Character Data through the Convolutional Neural Network (Convolutional Neural Network를 통한 대규모 한글 데이터 학습)

  • Kim, Yeon-gyu; Cha, Eui-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2016.05a, pp.97-100, 2016
  • Deep learning based on the CNN (Convolutional Neural Network) is being developed for a variety of fields and shows a significantly high level of performance in image recognition. In this paper, we report the test accuracy obtained by training on large-scale data of over 5,000,000 Korean characters. The CNN architecture used in this paper is KCR (Korean Character Recognition)-AlexNet, newly created based on AlexNet. KCR-AlexNet ultimately achieved over 98% test accuracy. The experimental data is the large-scale Korean character database PHD08, which contains 2,187 samples for each of 2,350 Korean characters, for a total of 5,139,450 samples. Through this study, we demonstrate the suitability of the KCR-AlexNet architecture for learning PHD08.
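
A minimal sketch of the kind of adaptation described above: an AlexNet-style classifier whose output layer is widened to the 2,350 Korean character classes of PHD08. The stock torchvision AlexNet is used here only as a stand-in, since the abstract does not give the exact KCR-AlexNet layer configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 2350   # Korean character classes in PHD08

    # Stand-in for KCR-AlexNet: stock AlexNet with its final layer replaced
    # by a 2,350-way classifier (the real KCR-AlexNet architecture differs).
    model = models.alexnet()
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

    # AlexNet expects 3 x 224 x 224 inputs, so character glyphs would need
    # resizing and channel replication before training.
    logits = model(torch.rand(1, 3, 224, 224))               # shape: (1, 2350)
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([0]))  # dummy target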

The application of convolutional neural networks for automatic detection of underwater object in side scan sonar images (사이드 스캔 소나 영상에서 수중물체 자동 탐지를 위한 컨볼루션 신경망 기법 적용)

  • Kim, Jungmoon; Choi, Jee Woong; Kwon, Hyuckjong; Oh, Raegeun; Son, Su-Uk
    • The Journal of the Acoustical Society of Korea, v.37 no.2, pp.118-128, 2018
  • In this paper, we study how to detect underwater objects by training a convolutional neural network on images generated by a side scan sonar. Compared with human analysis of side scan images, a convolutional neural network algorithm can enhance the efficiency of the analysis. The side scan sonar image data used in the experiment are public data from the NSWC (Naval Surface Warfare Center) and consist of four kinds of synthetic underwater objects. The convolutional neural network algorithm is based on Faster R-CNN (Faster Region-based Convolutional Neural Networks), which learns from regions of interest, and the details of the network are tuned to fit our data. The results were evaluated with precision-recall curves, and we investigated the applicability of convolutional neural networks to underwater object detection by examining how changing the regions of interest assigned to the sonar image data affects detection performance.
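
The evaluation above is based on precision-recall curves. As a generic illustration, the snippet below computes such a curve from per-detection confidence scores with scikit-learn; the scores and labels are made up, and the real values would come from the trained detector.

    import numpy as np
    from sklearn.metrics import precision_recall_curve, auc

    # Hypothetical detector confidences and ground-truth labels
    # (1 = true underwater object, 0 = false alarm).
    scores = np.array([0.95, 0.90, 0.80, 0.60, 0.55, 0.40, 0.30, 0.10])
    labels = np.array([1, 1, 0, 1, 0, 1, 0, 0])

    precision, recall, _ = precision_recall_curve(labels, scores)
    print("Area under the PR curve:", auc(recall, precision))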

Image Classification using Deep Learning Algorithm and 2D Lidar Sensor (딥러닝 알고리즘과 2D Lidar 센서를 이용한 이미지 분류)

  • Lee, Junho; Chang, Hyuk-Jun
    • Journal of IKEEE, v.23 no.4, pp.1302-1308, 2019
  • This paper presents an approach for classifying images constructed from position data acquired by a 2D Lidar sensor using a convolutional neural network (CNN). Lidar sensors have been widely used in unmanned devices owing to their advantages in terms of data accuracy and robustness against geometric distortion and light variations. A CNN consists of one or more convolutional and pooling layers and has shown satisfactory performance for image classification. In this paper, CNN architectures based on two different training methods, Gradient Descent (GD) and Levenberg-Marquardt (LM), are implemented. The LM method has two variants based on how frequently the Hessian matrix, one of the factors used to update the training parameters, is approximated. Simulation results show that the LM algorithms classify the image data better than the GD algorithm. In addition, the LM variant with more frequent Hessian approximation shows a smaller error than the other variant.
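
The abstract contrasts Gradient Descent with Levenberg-Marquardt training. The sketch below shows the two generic parameter updates for a least-squares loss; the damping value and array shapes are illustrative assumptions, not the paper's settings.

    import numpy as np

    def gd_step(params, gradient, lr=0.01):
        # Plain gradient descent update.
        return params - lr * gradient

    def lm_step(params, jacobian, residuals, damping=1e-3):
        # Levenberg-Marquardt update for a least-squares loss:
        #   delta = (J^T J + mu I)^(-1) J^T r
        # The Gauss-Newton term J^T J acts as an approximate Hessian; how often
        # it is recomputed distinguishes the two LM variants compared above.
        JTJ = jacobian.T @ jacobian
        rhs = jacobian.T @ residuals
        delta = np.linalg.solve(JTJ + damping * np.eye(JTJ.shape[0]), rhs)
        return params - delta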

CNN-based damage identification method of tied-arch bridge using spatial-spectral information

  • Duan, Yuanfeng; Chen, Qianyi; Zhang, Hongmei; Yun, Chung Bang; Wu, Sikai; Zhu, Qi
    • Smart Structures and Systems, v.23 no.5, pp.507-520, 2019
  • In the structural health monitoring field, damage detection has commonly been carried out based on a structural model and engineering features related to that model. However, the extracted features are often subject to various errors, which makes pattern recognition for damage detection still challenging. In this study, an automated damage identification method is presented for hanger cables in a tied-arch bridge using a convolutional neural network (CNN). Raw measurement data in the form of Fourier amplitude spectra (FAS) of acceleration responses are used without complex data pre-processing for modal identification. A CNN is a kind of deep neural network that typically consists of convolution, pooling, and fully-connected layers. A numerical simulation study was performed for multiple damage detection in the hangers using ambient wind vibration data on the bridge deck. The results show that the present CNN using FAS data performs better under various damage states than a CNN using time-history data and a traditional neural network using FAS. The robustness of the present CNN has been demonstrated under various observational noise levels and wind speeds.
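
A minimal sketch of preparing the Fourier amplitude spectrum (FAS) input described above from an acceleration time history. The sampling rate and the synthetic signal are placeholders; real inputs would be the measured ambient-vibration records.

    import numpy as np

    def fourier_amplitude_spectrum(acceleration, fs):
        """One-sided Fourier amplitude spectrum of an acceleration record."""
        n = len(acceleration)
        amplitude = np.abs(np.fft.rfft(acceleration)) / n
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)   # frequency axis in Hz
        return freqs, amplitude

    # Placeholder ambient-vibration record: 60 s sampled at 100 Hz.
    fs = 100.0
    t = np.arange(0, 60, 1.0 / fs)
    acc = np.sin(2 * np.pi * 2.3 * t) + 0.1 * np.random.randn(t.size)
    freqs, fas = fourier_amplitude_spectrum(acc, fs)   # FAS vector fed to the CNN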

Comparison of Spatial and Frequency Images for Character Recognition (문자인식을 위한 공간 및 주파수 도메인 영상의 비교)

  • Abdurakhmon, Abduraimjonov; Choi, Hyeon-yeong; Ko, Jaepil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2019.05a, pp.439-441, 2019
  • Deep learning has become a powerful and robust approach in artificial intelligence, and one of its most impressive tools is the Convolutional Neural Network (CNN). CNNs are a state-of-the-art solution for object recognition. For instance, when we apply a CNN to the MNIST handwritten digit dataset, the results are mostly good because all digits in MNIST are centered. Unfortunately, the real world is different from our imagination: if digits are shifted away from the center, it becomes a serious issue for the CNN to recognize them as it did before. To solve that issue, we created frequency images from spatial images using a Fast Fourier Transform (FFT).
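
A minimal sketch of turning a spatial image into a frequency-domain image with a 2-D FFT, as described above. The log scaling and centering of the magnitude spectrum are common conventions assumed here; the paper's exact normalization is not given in the abstract.

    import numpy as np

    def to_frequency_image(spatial_image):
        """2-D FFT magnitude spectrum, shifted so low frequencies sit at the center."""
        spectrum = np.fft.fftshift(np.fft.fft2(spatial_image))
        return np.log1p(np.abs(spectrum))   # log scaling for a usable dynamic range

    # A centered digit and a shifted copy yield nearly identical magnitude
    # spectra, which is the translation-robustness argument made above.
    digit = np.zeros((28, 28))
    digit[10:18, 12:16] = 1.0
    centered_freq = to_frequency_image(digit)
    shifted_freq = to_frequency_image(np.roll(digit, 6, axis=1))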

Facial Expression Classification Using Deep Convolutional Neural Network

  • Choi, In-kyu; Ahn, Ha-eun; Yoo, Jisang
    • Journal of Electrical Engineering and Technology, v.13 no.1, pp.485-492, 2018
  • In this paper, we propose facial expression recognition using a CNN (Convolutional Neural Network), one of the deep learning technologies. The proposed structure provides general classification performance for any environment or subject. For this purpose, we collect a variety of databases and organize the data into six expression classes: 'expressionless', 'happy', 'sad', 'angry', 'surprised', and 'disgusted'. Pre-processing and data augmentation techniques are applied to improve training efficiency and classification performance. Starting from an existing CNN structure, the structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully-connected layer. The experimental results show good classification performance compared to state-of-the-art methods in cross-validation and cross-database experiments. Compared to other conventional models, it is also confirmed that the proposed structure achieves superior classification performance with less execution time.
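
A minimal sketch of the kind of structure search described above: a small CNN whose convolutional feature-map counts and fully-connected width are exposed as tunable parameters. The specific values, the grayscale 48 x 48 input, and the layer pattern are placeholders, not the paper's final architecture.

    import torch
    import torch.nn as nn

    EXPRESSIONS = ['expressionless', 'happy', 'sad', 'angry', 'surprised', 'disgusted']

    def build_expression_cnn(feature_maps=(32, 64), fc_nodes=256, input_size=48):
        layers, in_ch = [], 1                        # grayscale face crops assumed
        for out_ch in feature_maps:                  # tunable feature-map counts
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.ReLU(), nn.MaxPool2d(2)]
            in_ch = out_ch
        spatial = input_size // (2 ** len(feature_maps))
        return nn.Sequential(*layers, nn.Flatten(),
                             nn.Linear(in_ch * spatial * spatial, fc_nodes),  # tunable FC width
                             nn.ReLU(),
                             nn.Linear(fc_nodes, len(EXPRESSIONS)))

    model = build_expression_cnn()
    logits = model(torch.rand(1, 1, 48, 48))         # shape: (1, 6)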

Real-Time License Plate Detection Based on Faster R-CNN (Faster R-CNN 기반의 실시간 번호판 검출)

  • Lee, Dongsuk; Yoon, Sook; Lee, Jaehwan; Park, Dong Sun
    • KIPS Transactions on Software and Data Engineering, v.5 no.11, pp.511-520, 2016
  • Automatic License Plate Detection (ALPD) is a key technology for efficient traffic control. It is used to improve work efficiency in many applications such as toll payment systems and parking and traffic management. Until recently, most studies used hand-crafted image-processing features to detect license plates. These features have an advantage in speed but can suffer degraded detection rates under various environmental changes. In this paper, we propose a method that utilizes a Faster Region-based Convolutional Neural Network (Faster R-CNN) together with a conventional Convolutional Neural Network (CNN), which improves computational speed and is robust against changing environments. The module based on Faster R-CNN detects license plate candidate regions in images and is followed by the CNN-based module, which removes false positives from the candidates. As a result, we achieved a detection rate of 99.94% on images captured under various environments, with an average operating speed of 80 ms per image. We implemented a fast and robust real-time license plate detection system.
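
A rough sketch of the two-stage pipeline described above: a Faster R-CNN module proposes license plate candidate regions, and a small CNN verifier rejects false positives among them. The generic torchvision Faster R-CNN and the tiny verifier network below are stand-ins only; the paper trains both stages on license plate data.

    import torch
    import torch.nn as nn
    import torchvision
    import torchvision.transforms.functional as TF

    # Stage 1 stand-in: generic torchvision Faster R-CNN (in the paper, trained for plates).
    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn().eval()

    # Stage 2 stand-in: small binary CNN that accepts or rejects a candidate crop.
    verifier = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
        nn.Flatten(), nn.Linear(16 * 8 * 8, 2)).eval()

    def detect_plates(image, score_thr=0.5):
        with torch.no_grad():
            candidates = detector([image])[0]             # dict: boxes, labels, scores
            kept = []
            for box, score in zip(candidates["boxes"], candidates["scores"]):
                x1, y1, x2, y2 = box.int().tolist()
                if score < score_thr or x2 <= x1 or y2 <= y1:
                    continue
                crop = TF.resize(image[:, y1:y2, x1:x2], [64, 128])
                if verifier(crop.unsqueeze(0)).argmax(dim=1).item() == 1:
                    kept.append(box)                       # survives false-positive filter
            return kept

    plates = detect_plates(torch.rand(3, 480, 640))        # dummy image tensor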

Image based Fire Detection using Convolutional Neural Network (CNN을 활용한 영상 기반의 화재 감지)

  • Kim, Young-Jin; Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering, v.20 no.9, pp.1649-1656, 2016
  • The performance of existing sensor-based fire detection systems is limited by factors in the environment surrounding the sensor. A number of image-based fire detection systems have been introduced to solve this problem, but such systems can generate false alarms for objects similar in appearance to fire because their algorithms directly define the characteristics of a flame. Fire detection systems that rely on movement between video frames also cannot operate as intended when the network is unstable. In this paper, we propose an image-based fire detection method using a CNN (Convolutional Neural Network). In this method, we first extract fire candidate regions using color information from input video frames and then detect fire using a trained CNN. We also show that the detection rate and miss rate are significantly improved compared to previous studies.
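
A rough sketch of the pipeline described above: fire-colored candidate regions are first extracted from a frame by color thresholding, and each candidate is then passed to a CNN classifier. The HSV thresholds and the classifier are placeholders; the paper's actual color rule and trained network are not given in the abstract.

    import cv2
    import numpy as np
    import torch
    import torch.nn as nn

    # Placeholder fire-color range in HSV (reddish/orange hues); illustrative only.
    LOWER = np.array([0, 120, 150], dtype=np.uint8)
    UPPER = np.array([35, 255, 255], dtype=np.uint8)

    # Placeholder binary fire / non-fire classifier standing in for the trained CNN.
    classifier = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
        nn.Flatten(), nn.Linear(16 * 8 * 8, 2)).eval()

    def detect_fire(frame_bgr, min_area=200):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)             # fire-colored pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4 signature
        detections = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h < min_area:
                continue                                   # skip tiny color blobs
            crop = cv2.resize(frame_bgr[y:y + h, x:x + w], (64, 64))
            tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                if classifier(tensor).argmax(dim=1).item() == 1:
                    detections.append((x, y, w, h))        # CNN confirms fire
        return detections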

Comparison of CNN and YOLO for Object Detection (객체 검출을 위한 CNN과 YOLO 성능 비교 실험)

  • Lee, Yong-Hwan; Kim, Youngseop
    • Journal of the Semiconductor & Display Technology, v.19 no.1, pp.85-92, 2020
  • Object detection plays a critical role in the field of computer vision, and research has increased rapidly since 2012 with the application of convolutional neural networks and their modified structures. This paper presents two representative algorithm series: detectors based on CNNs, and YOLO, which addresses the bounding-box problem of CNN-based detectors. We compare the performance of the algorithm series in terms of accuracy, speed, and cost. Compared with the latest advanced solutions, YOLO v3 achieves a good trade-off between speed and accuracy.