• Title/Summary/Keyword: Convolutional Neural Network (CNN)

Application of Ground Penetrating Radar (GPR) coupled with Convolutional Neural Network (CNN) for characterizing underground conditions

  • Dae-Hong Min;Hyung-Koo Yoon
    • Geomechanics and Engineering
    • /
    • v.37 no.5
    • /
    • pp.467-474
    • /
    • 2024
  • Monitoring and managing the condition of underground utilities is crucial for ground stability. This study aims to determine whether images obtained using ground penetrating radar (GPR) accurately reflect the characteristics of buried pipelines through image analysis. The investigation focuses on pipelines made from different materials, namely concrete and steel, with concrete pipes tested under various diameters to assess detectability under differing conditions. A total of 400 images are acquired at locations with pipelines, and for comparison, an additional 100 data points are collected from areas without pipelines. The study employs GPR at frequencies of 200 MHz and 600 MHz, and image analysis is performed using machine learning-based convolutional neural network (CNN) techniques. The analysis results demonstrate high classification reliability based on the training data, especially in distinguishing between pipes of the same material but of different diameters. The findings suggest that the integration of GPR and CNN algorithms can offer satisfactory performance in exploring the ground's interior characteristics.
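
To make the image-classification step concrete, here is a minimal PyTorch sketch of a small CNN classifier of the kind the abstract describes; the 64×64 grayscale input size and the four example classes are assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code): a small CNN that classifies
# grayscale GPR B-scan images into pipe/no-pipe categories.
# The 64x64 input size and the number of classes are assumptions.
import torch
import torch.nn as nn

class GPRClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = GPRClassifier(num_classes=4)
dummy_batch = torch.randn(8, 1, 64, 64)   # 8 synthetic B-scan crops
logits = model(dummy_batch)               # shape: (8, 4)
```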

Comparison of Spatial and Frequency Images for Character Recognition (문자인식을 위한 공간 및 주파수 도메인 영상의 비교)

  • Abdurakhmon, Abduraimjonov;Choi, Hyeon-yeong;Ko, Jaepil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2019.05a
    • /
    • pp.439-441
    • /
    • 2019
  • Deep learning has become a powerful and robust branch of Artificial Intelligence, and one of its most impressive tools is the Convolutional Neural Network (CNN), a state-of-the-art solution for object recognition. For instance, when a CNN is applied to the MNIST handwritten digit dataset, the results are generally good because all digits in MNIST are centered. Unfortunately, the real world is different: if the digits are shifted away from the center, it becomes difficult for the CNN to recognize them as reliably as before. To address this issue, we create frequency-domain images from the spatial images using the Fast Fourier Transform (FFT).

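As a concrete illustration of the spatial-to-frequency conversion described in the abstract, here is a minimal NumPy sketch; the 28×28 input size and the toy digit are assumptions. The FFT magnitude is invariant to circular shifts, which is the property that helps with off-center digits.

```python
# Minimal sketch (assumptions: 28x28 grayscale digits as NumPy arrays).
# The FFT magnitude spectrum is invariant to circular shifts of the input,
# which is the property exploited for off-center digits.
import numpy as np

def to_frequency_image(img: np.ndarray) -> np.ndarray:
    """Return the centered log-magnitude spectrum of a 2-D image."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))      # 2-D FFT, DC moved to center
    magnitude = np.log1p(np.abs(spectrum))            # compress dynamic range
    return magnitude / magnitude.max()                # normalize to [0, 1]

# A shifted digit produces (up to numerical error) the same magnitude image.
img = np.zeros((28, 28)); img[10:18, 12:16] = 1.0     # toy "digit"
shifted = np.roll(img, shift=(5, 3), axis=(0, 1))     # move it off-center
print(np.allclose(to_frequency_image(img), to_frequency_image(shifted)))  # True
```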

Image Classification using Deep Learning Algorithm and 2D Lidar Sensor (딥러닝 알고리즘과 2D Lidar 센서를 이용한 이미지 분류)

  • Lee, Junho;Chang, Hyuk-Jun
    • Journal of IKEEE
    • /
    • v.23 no.4
    • /
    • pp.1302-1308
    • /
    • 2019
  • This paper presents an approach for classifying images constructed from position data acquired by a 2D Lidar sensor using a convolutional neural network (CNN). Lidar sensors have been widely used in unmanned devices owing to their advantages in terms of data accuracy and robustness against geometric distortion and light variations. A CNN consists of one or more convolutional and pooling layers and has shown satisfactory performance for image classification. In this paper, CNN architectures based on two training methods, Gradient Descent (GD) and Levenberg-Marquardt (LM), are implemented. The LM method has two variants that differ in how frequently the Hessian matrix, one of the factors used to update the training parameters, is approximated. Simulation results show that the LM algorithms classify the image data better than the GD algorithm. In addition, the LM algorithm with more frequent Hessian approximation shows a smaller error than the other LM variant.
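
A minimal sketch of the kind of preprocessing the abstract implies, namely rasterizing 2D Lidar position data into an image for a CNN; the 64×64 grid and 10 m maximum range are assumptions, and the GD-versus-Levenberg-Marquardt training comparison is not reproduced here.

```python
# Minimal sketch (not the paper's pipeline): rasterize 2-D Lidar returns
# (angle, range) into a binary occupancy image suitable for a CNN classifier.
# The 64x64 grid and 10 m maximum range are assumptions.
import numpy as np

def lidar_to_image(angles_rad: np.ndarray, ranges_m: np.ndarray,
                   grid: int = 64, max_range: float = 10.0) -> np.ndarray:
    img = np.zeros((grid, grid), dtype=np.float32)
    x = ranges_m * np.cos(angles_rad)                 # polar -> Cartesian
    y = ranges_m * np.sin(angles_rad)
    # map [-max_range, max_range] to pixel indices [0, grid-1]
    col = ((x + max_range) / (2 * max_range) * (grid - 1)).astype(int)
    row = ((y + max_range) / (2 * max_range) * (grid - 1)).astype(int)
    valid = (ranges_m > 0) & (ranges_m <= max_range)
    img[row[valid], col[valid]] = 1.0
    return img

angles = np.linspace(-np.pi, np.pi, 360, endpoint=False)
ranges = np.full(360, 4.0)                            # synthetic circular wall
image = lidar_to_image(angles, ranges)                # feed this image to a CNN
```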

Real-Time License Plate Detection Based on Faster R-CNN (Faster R-CNN 기반의 실시간 번호판 검출)

  • Lee, Dongsuk;Yoon, Sook;Lee, Jaehwan;Park, Dong Sun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.511-520
    • /
    • 2016
  • Automatic License Plate Detection (ALPD) is a key technology for efficient traffic control. It is used to improve work efficiency in many applications such as toll payment systems and parking and traffic management. Until recently, most studies detected license plates with hand-crafted image-processing features, which have an advantage in speed but can suffer degraded detection rates under varying environmental conditions. In this paper, we propose a way to combine a Faster Region-based Convolutional Neural Network (Faster R-CNN) with a conventional Convolutional Neural Network (CNN), which improves computational speed and is robust against environmental changes. The module based on Faster R-CNN detects license plate candidate regions in images and is followed by a CNN-based module that removes false positives from the candidates. As a result, we achieved a detection rate of 99.94% on images captured under various environments, with an average processing time of 80 ms per image. We implemented a fast and robust real-time license plate detection system.
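
A hedged sketch of the two-stage idea (candidate detection followed by CNN false-positive filtering); torchvision's COCO-pretrained Faster R-CNN stands in for the authors' plate-specific detector, and the score threshold, crop size, and filter network are assumptions.

```python
# Minimal sketch of the two-stage idea: candidate detection, then CNN filtering.
# The pretrained COCO detector, 0.5 score threshold, and the (untrained) filter
# CNN below are assumptions, not the paper's models.
import torch
import torch.nn as nn
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

plate_filter = nn.Sequential(                     # small binary classifier: plate vs. not
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 2),
)

def detect_plates(image: torch.Tensor) -> list:
    """image: (3, H, W) float tensor in [0, 1]. Returns boxes kept by the filter CNN."""
    with torch.no_grad():
        out = detector([image])[0]
        keep = []
        for box, score in zip(out["boxes"], out["scores"]):
            if score < 0.5:                                   # drop weak candidates
                continue
            x1, y1, x2, y2 = box.int().tolist()
            crop = image[:, y1:y2, x1:x2]
            if crop.numel() == 0:
                continue
            crop = torch.nn.functional.interpolate(crop[None], size=(32, 96))  # plate-shaped crop
            if plate_filter(crop).argmax(dim=1).item() == 1:  # class 1 = "plate" (after training)
                keep.append(box)
    return keep
```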

Image Caption Generation using Recurrent Neural Network (Recurrent Neural Network를 이용한 이미지 캡션 생성)

  • Lee, Changki
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.878-882
    • /
    • 2016
  • Automatic generation of captions for an image is a very difficult task because it requires both computer vision and natural language processing technologies. However, this task has many important applications, such as early childhood education, image retrieval, and navigation for the blind. In this paper, we describe a Recurrent Neural Network (RNN) model for generating image captions, which takes as input image features extracted by a Convolutional Neural Network (CNN). We demonstrate that our models produce state-of-the-art results in image caption generation experiments on the Flickr 8K, Flickr 30K, and MS COCO datasets.
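
A minimal PyTorch skeleton of the CNN-encoder / RNN-decoder pattern the abstract describes; the ResNet-18 backbone, vocabulary size, and hidden sizes are assumptions rather than the paper's configuration.

```python
# Minimal sketch of the CNN-encoder / RNN-decoder captioning pattern.
# The ResNet-18 backbone, vocabulary size, and hidden sizes are assumptions.
import torch
import torch.nn as nn
import torchvision

class CaptionModel(nn.Module):
    def __init__(self, vocab_size: int = 10000, embed: int = 256, hidden: int = 512):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])   # drop the final FC layer
        self.img_proj = nn.Linear(512, embed)                       # CNN feature -> embedding
        self.embed = nn.Embedding(vocab_size, embed)
        self.rnn = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        feats = self.img_proj(self.cnn(images).flatten(1)).unsqueeze(1)  # (B, 1, embed)
        words = self.embed(captions)                                     # (B, T, embed)
        seq = torch.cat([feats, words], dim=1)                           # image acts as the first "word"
        hidden, _ = self.rnn(seq)
        return self.out(hidden)                                          # next-word logits

model = CaptionModel()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
```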

A Real-Time Hardware Design of CNN for Vehicle Detection (차량 검출용 CNN 분류기의 실시간 처리를 위한 하드웨어 설계)

  • Bang, Ji-Won;Jeong, Yong-Jin
    • Journal of IKEEE
    • /
    • v.20 no.4
    • /
    • pp.351-360
    • /
    • 2016
  • Recently, machine learning algorithms, especially deep learning-based algorithms, have been receiving attention due to their high classification performance. Among these algorithms, the Convolutional Neural Network (CNN) is known to be efficient for the image processing tasks used in Advanced Driver Assistance Systems (ADAS). However, it is difficult to achieve real-time CNN processing in a vehicle-embedded software environment due to the repeated operations contained in each layer of the CNN. In this paper, we propose a hardware accelerator that reduces the execution time of the CNN by parallelizing the repeated operations such as convolution. A Xilinx ZC706 evaluation board is used to verify the performance of the proposed accelerator. For 36×36 input images, the hardware execution time of the CNN is 2.812 ms at a 100 MHz clock frequency, which shows that our hardware can run in real time.
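
For reference, the repeated multiply-accumulate operations that such an accelerator parallelizes look like the nested loops below; this is a plain NumPy sketch of a convolution layer, not the paper's hardware design, and the channel counts are assumptions (only the 36×36 input size is taken from the abstract).

```python
# Reference sketch of the nested multiply-accumulate loops in a convolution layer;
# these are the repeated operations the hardware accelerator parallelizes.
# Channel counts are illustrative; only the 36x36 input comes from the abstract.
import numpy as np

def conv2d_naive(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: (C_in, H, W), w: (C_out, C_in, K, K); valid convolution, stride 1."""
    c_in, h, width = x.shape
    c_out, _, k, _ = w.shape
    out = np.zeros((c_out, h - k + 1, width - k + 1))
    for co in range(c_out):            # each output channel ...
        for i in range(h - k + 1):     # ... and each output pixel
            for j in range(width - k + 1):
                # K*K*C_in multiply-accumulates per output value:
                out[co, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[co])
    return out

y = conv2d_naive(np.random.rand(3, 36, 36), np.random.rand(8, 3, 3, 3))  # (8, 34, 34)
```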

Categorization of Korean News Articles Based on Convolutional Neural Network Using Doc2Vec and Word2Vec (Doc2Vec과 Word2Vec을 활용한 Convolutional Neural Network 기반 한국어 신문 기사 분류)

  • Kim, Dowoo;Koo, Myoung-Wan
    • Journal of KIISE
    • /
    • v.44 no.7
    • /
    • pp.742-747
    • /
    • 2017
  • In this paper, we propose a novel approach that improves the performance of a Convolutional Neural Network (CNN) word-embedding model built on word2vec so that it performs like doc2vec in a document classification task. The Word Piece Model (WPM) is empirically shown to outperform other tokenization methods, such as phrase units and a part-of-speech tagger (classification rate: 79.5%). We then conducted an experiment to classify ten categories of news articles written in Korean by feeding the word and document vectors generated with WPM to the baseline and the proposed model. The results show that the proposed model achieves a higher classification rate (89.88%) than its counterpart (86.89%), a 22.80% reduction in error rate. This research demonstrates that applying doc2vec to the document classification task is more effective because doc2vec generates similar document vector representations for documents belonging to the same category.
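
A minimal sketch of a CNN text classifier fed with word embeddings, in the spirit of the models compared above; the WPM tokenization and doc2vec document vectors are not reproduced, and the vocabulary size and filter widths are assumptions.

```python
# Minimal sketch of a CNN text classifier fed with word embeddings.
# WPM tokenization and the doc2vec document vector are not reproduced;
# vocabulary size and filter widths are assumptions. Ten classes follow
# the abstract's ten news categories.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=30000, embed=200, num_classes=10, widths=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)               # would be initialized from word2vec
        self.convs = nn.ModuleList(nn.Conv1d(embed, 100, k) for k in widths)
        self.fc = nn.Linear(100 * len(widths), num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:   # (B, T)
        x = self.embed(token_ids).transpose(1, 2)                 # (B, embed, T)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]  # max-over-time pooling
        return self.fc(torch.cat(pooled, dim=1))                  # (B, num_classes)

model = TextCNN()
logits = model(torch.randint(0, 30000, (4, 50)))                  # 4 articles, 50 tokens each
```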

Deep Learning based Frame Synchronization Using Convolutional Neural Network (합성곱 신경망을 이용한 딥러닝 기반의 프레임 동기 기법)

  • Lee, Eui-Soo;Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.4
    • /
    • pp.501-507
    • /
    • 2020
  • This paper proposes a new frame synchronization technique based on a convolutional neural network (CNN). Conventional frame synchronizers usually find the frame timing through correlation between the received signal and the preamble. The proposed method converts the 1-dimensional correlator output into a 2-dimensional matrix, which is fed to a convolutional neural network that finds the frame arrival time. Specifically, in additive white Gaussian noise (AWGN) environments, received signals are generated with random arrival times and used as training data for the CNN. Through computer simulation, the false detection probabilities at various signal-to-noise ratios are investigated and compared between the proposed CNN-based technique and the conventional one. According to the results, the proposed technique performs about 2 dB better than the conventional method.
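
A minimal NumPy sketch of the preprocessing described in the abstract: correlate the received signal with the preamble and reshape the 1-D correlator output into a 2-D matrix for the CNN; the preamble length, window length, and 32×32 matrix size are assumptions.

```python
# Minimal sketch of the preprocessing described above: correlate the received
# signal with the preamble, then reshape the 1-D correlator output into a
# 2-D matrix for a CNN. Lengths (preamble 64, window 1024, 32x32) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
preamble = rng.choice([-1.0, 1.0], size=64)                  # known training sequence
arrival = 300                                                # true frame start (unknown to the receiver)
rx = rng.normal(scale=1.0, size=1024)                        # AWGN background
rx[arrival:arrival + 64] += preamble                         # frame embedded in noise

corr = np.abs(np.correlate(rx, preamble, mode="valid"))      # 1-D correlator output, length 961
corr = np.pad(corr, (0, 1024 - corr.size))                   # pad to a square-friendly length
corr_2d = (corr / corr.max()).reshape(32, 32)                # 2-D input "image" for the CNN

print(int(np.argmax(corr)))                                  # peak near the true arrival time (300)
```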

Image based Fire Detection using Convolutional Neural Network (CNN을 활용한 영상 기반의 화재 감지)

  • Kim, Young-Jin;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.9
    • /
    • pp.1649-1656
    • /
    • 2016
  • The performance of existing sensor-based fire detection systems is limited by factors in the environment surrounding the sensor. A number of image-based fire detection systems have been introduced to solve this problem, but such systems can raise false alarms for objects that look similar to fire because their algorithms directly define the characteristics of a flame. Also, fire detection systems that rely on movement between video frames cannot operate as intended in an environment where the network is unstable. In this paper, we propose an image-based fire detection method using a CNN (Convolutional Neural Network). In this method, we first extract fire candidate regions using color information from the input video frames and then detect fire with a trained CNN. We also show that the performance is significantly improved over the detection and missing rates reported in previous studies.
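
A minimal sketch of the two-step idea (color-based candidate extraction followed by CNN classification) using OpenCV; the HSV thresholds, crop size, and the fire_cnn classifier are placeholders, not the paper's trained model.

```python
# Minimal sketch of the two-step idea above: color-based candidate extraction
# followed by CNN classification. The HSV thresholds, 64x64 crop size, and the
# `fire_cnn` classifier are placeholders, not the paper's trained model.
import cv2  # OpenCV 4.x
import numpy as np

def fire_candidates(frame_bgr: np.ndarray, min_area: int = 100) -> list:
    """Return bounding boxes of fire-colored regions in a BGR video frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 120, 150]), np.array([35, 255, 255]))  # bright reddish/orange
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def frame_has_fire(frame_bgr: np.ndarray, fire_cnn) -> bool:
    """True if the (hypothetical) trained CNN labels any candidate crop as fire."""
    for x, y, w, h in fire_candidates(frame_bgr):
        crop = cv2.resize(frame_bgr[y:y+h, x:x+w], (64, 64))
        if fire_cnn(crop):        # fire_cnn: crop -> bool, trained elsewhere
            return True
    return False
```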

Cross-Domain Text Sentiment Classification Method Based on the CNN-BiLSTM-TE Model

  • Zeng, Yuyang;Zhang, Ruirui;Yang, Liang;Song, Sujuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.818-833
    • /
    • 2021
  • To address the problems of low precision, insufficient feature extraction, and poor handling of context in existing text sentiment analysis methods, a hybrid CNN-BiLSTM-TE (convolutional neural network, bidirectional long short-term memory, and topic extraction) model is proposed. First, Chinese text data is converted into vectors via transfer learning with Word2Vec. Second, local features are extracted by the CNN. Then, contextual information is extracted by the BiLSTM network and the sentiment polarity is obtained using softmax. Finally, topics are extracted using term frequency-inverse document frequency and K-means. Compared with the CNN, BiLSTM, and gated recurrent unit (GRU) models, the CNN-BiLSTM-TE model's F1-score was higher by 0.0147, 0.006, and 0.0052, respectively. Compared with the CNN-LSTM, LSTM-CNN, and BiLSTM-CNN models, the F1-score was higher by 0.0071, 0.0038, and 0.0049, respectively. The experimental results show that the CNN-BiLSTM-TE model effectively improves these indicators in application. Lastly, scalability was verified on a takeaway (food-delivery) dataset, which gives the model great value in practical applications.
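
A minimal PyTorch sketch of the CNN + BiLSTM + softmax portion of the pipeline; the Word2Vec transfer learning and the TF-IDF / K-means topic-extraction step are not reproduced, and the vocabulary and layer sizes are assumptions.

```python
# Minimal sketch of the CNN + BiLSTM + softmax portion of the pipeline above.
# Word2Vec transfer learning and the TF-IDF / K-means topic extraction are not
# reproduced; vocabulary and layer sizes are assumptions.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, vocab_size=20000, embed=300, conv_ch=128, hidden=128, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)           # would be initialized from Word2Vec
        self.conv = nn.Conv1d(embed, conv_ch, kernel_size=3, padding=1)   # local feature extraction
        self.bilstm = nn.LSTM(conv_ch, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids).transpose(1, 2)              # (B, embed, T)
        x = self.conv(x).relu().transpose(1, 2)                # (B, T, conv_ch)
        _, (h, _) = self.bilstm(x)                             # final hidden states, both directions
        h = torch.cat([h[0], h[1]], dim=1)                     # (B, 2*hidden)
        return self.fc(h).softmax(dim=1)                       # sentiment probabilities

model = CNNBiLSTM()
probs = model(torch.randint(0, 20000, (4, 40)))                # 4 sentences, 40 tokens each
```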