• Title/Summary/Keyword: 1D Convolutional Neural Networks


Deep Learning based Frame Synchronization Using Convolutional Neural Network (합성곱 신경망을 이용한 딥러닝 기반의 프레임 동기 기법)

  • Lee, Eui-Soo;Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.4
    • /
    • pp.501-507
    • /
    • 2020
  • This paper proposes a new frame synchronization technique based on a convolutional neural network (CNN). Conventional frame synchronizers usually find the matching instant through correlation between the received signal and the preamble. The proposed method converts the 1-dimensional correlator output into a 2-dimensional matrix, which is fed to a convolutional neural network that estimates the frame arrival time. Specifically, in additive white Gaussian noise (AWGN) environments, received signals are generated with random arrival times and used as training data for the CNN. Through computer simulation, the false detection probabilities at various signal-to-noise ratios are investigated and compared between the proposed CNN-based technique and the conventional one. According to the results, the proposed technique performs about 2 dB better than the conventional method.
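
The central trick described above, folding the 1-D correlator output into a 2-D matrix and letting a CNN locate the frame arrival time, can be sketched roughly as follows. This is a minimal PyTorch illustration; the window length of 256, the 16*16 folding, and the layer sizes are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class FrameSyncCNN(nn.Module):
    """Toy CNN that picks the frame arrival index from a 2-D rearrangement
    of the 1-D correlator output (all sizes here are assumptions)."""
    def __init__(self, window_len: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 4 * 4, window_len)

    def forward(self, corr_1d: torch.Tensor) -> torch.Tensor:
        # corr_1d: (batch, 256) correlator magnitudes for one observation window
        x = corr_1d.view(-1, 1, 16, 16)       # fold the 1-D output into a 2-D matrix
        x = self.features(x)
        return self.classifier(x.flatten(1))  # logits over candidate arrival times

corr = torch.randn(8, 256).abs()              # stand-in for |correlator output|
arrival = FrameSyncCNN()(corr).argmax(dim=1)  # estimated frame arrival index
```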

Connection stiffness reduction analysis in steel bridge via deep CNN and modal experimental data

  • Dang, Hung V.;Raza, Mohsin;Tran-Ngoc, H.;Bui-Tien, T.;Nguyen, Huan X.
    • Structural Engineering and Mechanics
    • /
    • v.77 no.4
    • /
    • pp.495-508
    • /
    • 2021
  • This study devises a novel approach, namely a quadruple 1D convolutional neural network, for detecting connection stiffness reduction in a steel truss bridge structure using experimental and numerical modal data. The method is developed based on expertise in two domains: first, in structural health monitoring, mode shapes and their high-order derivatives, including the second, third, and fourth derivatives, are accurate indicators for assessing damage; second, in the machine learning literature, deep convolutional neural networks are able to extract relevant features from input data and then perform classification tasks with high accuracy and reduced time complexity. The efficacy and effectiveness of the present method are supported through an extensive case study of the Nam O railway bridge. It delivers highly accurate results in assessing damage localization and damage severity for single as well as multiple damage scenarios. In addition, the robustness of this method is tested in the presence of white noise reflecting the unavoidable uncertainties of signal processing and modeling in reality. The proposed approach is able to provide stable results with data corrupted by up to 10% noise.
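
A minimal sketch of the "quadruple 1D CNN" idea: four parallel 1-D convolutional branches, one each for the mode shape and its second, third, and fourth derivatives, whose features are concatenated before classification. Layer sizes, the number of sensor points, and the damage-class count below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class QuadBranch1DCNN(nn.Module):
    """Four parallel 1-D CNN branches (mode shape + 2nd/3rd/4th derivatives),
    concatenated and classified; all sizes here are assumptions."""
    def __init__(self, num_points: int = 64, num_classes: int = 10):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
        self.branches = nn.ModuleList([branch() for _ in range(4)])
        self.head = nn.Linear(4 * 16, num_classes)

    def forward(self, inputs):
        # inputs: list of 4 tensors, each (batch, num_points)
        feats = [b(x.unsqueeze(1)).flatten(1) for b, x in zip(self.branches, inputs)]
        return self.head(torch.cat(feats, dim=1))

mode_shape = torch.randn(2, 64)
derivs = [torch.randn(2, 64) for _ in range(3)]      # 2nd, 3rd, 4th derivatives
logits = QuadBranch1DCNN()([mode_shape, *derivs])    # damage-class logits
```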

Improvement of Vocal Detection Accuracy Using Convolutional Neural Networks

  • You, Shingchern D.;Liu, Chien-Hung;Lin, Jia-Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.2
    • /
    • pp.729-748
    • /
    • 2021
  • Vocal detection is one of the fundamental steps in music information retrieval. Typically, the detection process consists of feature extraction and classification steps. Recently, neural networks have been shown to outperform traditional classifiers. In this paper, we report our study on how to further improve detection accuracy by carefully choosing the parameters of the deep network model. Through experiments, we conclude that a feature-classifier model is still better than an end-to-end model. The recommended model uses a spectrogram as the input plane, and the classifier is an 18-layer convolutional neural network (CNN). With this arrangement, compared with the existing literature, the proposed model improves the accuracy from 91.8% to 94.1% on the Jamendo dataset. Since the baseline accuracy already exceeds 90%, the 2.3% improvement is hard-won and valuable. If even higher accuracy is required, ensemble learning may be used; the recommended setting is a majority vote over seven of the proposed models. Doing so increases the accuracy by about a further 1.1% on the Jamendo dataset.
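
The ensemble step mentioned at the end, a majority vote over seven trained models, amounts to the short helper below. It is a sketch under the assumption that each model returns vocal/non-vocal logits of shape (batch, 2); the model objects themselves would be the trained CNN classifiers.

```python
import torch

def majority_vote(models, spectrograms: torch.Tensor) -> torch.Tensor:
    """Majority vote over an ensemble of binary vocal/non-vocal classifiers.
    Each model is assumed to return logits of shape (batch, 2)."""
    with torch.no_grad():
        votes = torch.stack([m(spectrograms).argmax(dim=1) for m in models])
    # A segment is labelled "vocal" (1) when more than half of the models agree.
    return (votes.float().mean(dim=0) > 0.5).long()

# Usage (hypothetical): labels = majority_vote(seven_trained_cnns, batch_of_spectrograms)
```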

Deep Unsupervised Learning for Rain Streak Removal using Time-varying Rain Streak Scene (시간에 따라 변화하는 빗줄기 장면을 이용한 딥러닝 기반 비지도 학습 빗줄기 제거 기법)

  • Cho, Jaehoon;Jang, Hyunsung;Ha, Namkoo;Lee, Seungha;Park, Sungsoon;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.1
    • /
    • pp.1-9
    • /
    • 2019
  • Single-image rain removal is a typical inverse problem that decomposes an image into a background scene and a rain streak layer. Recent works have made substantial progress on the task thanks to the development of convolutional neural networks (CNNs). However, existing CNN-based approaches train the network with synthetically generated training examples, which tend to bias the network toward synthetic scenes. In this paper, we present an unsupervised framework for removing rain streaks from real-world rainy images. We exploit the natural observation that static rainy scenes share a common background but show different rain streaks over time. Based on this observation, we train a Siamese network on pairs of real rain images so that it outputs identical backgrounds for each pair. To train our network, a real rainy dataset is constructed via web crawling. We show that our unsupervised framework outperforms recent CNN-based approaches that are trained in a supervised manner. Experimental results demonstrate the effectiveness of our framework on both synthetic and real-world datasets, showing improved performance over previous approaches.
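
The training signal described above, that two rainy shots of the same static scene should decompose to the same background, can be captured by a simple consistency loss between the two Siamese outputs. The sketch below uses a small encoder of our own choosing; it is not the authors' network, only an illustration of the idea.

```python
import torch
import torch.nn as nn

class DerainNet(nn.Module):
    """Small convolutional net used as one arm of the Siamese setup
    (depth and channel counts are assumptions, not the authors' architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)   # predicted rain-free background

def consistency_loss(model, rainy_a, rainy_b):
    """Two rainy shots of the same static scene should give the same background."""
    bg_a, bg_b = model(rainy_a), model(rainy_b)
    return nn.functional.l1_loss(bg_a, bg_b)

# Usage (dummy tensors in place of a crawled image pair):
pair_a, pair_b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
loss = consistency_loss(DerainNet(), pair_a, pair_b)
```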

Feature Visualization and Error Rate Using Feature Map by Convolutional Neural Networks (CNN 기반 특징맵 사용에 따른 특징점 가시화와 에러율)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.24 no.1
    • /
    • pp.1-7
    • /
    • 2021
  • In this paper, we present an experimental basis for the theoretical background and robustness of convolutional neural networks for object recognition based on artificial intelligence. Experiments were performed to visualize the weight filters and feature maps of each layer in order to determine what features the CNN generates automatically. We then examined how the weight filters and feature maps relate to the learning error and the identification error rate, and present the resulting error trends together with the learned weight filters and feature maps themselves. Finally, using the automatically generated features, we report error rates under translation and rotation to assess robustness to geometric changes.
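
Layer-by-layer feature-map visualization of the kind described here is commonly done with forward hooks. The sketch below uses an off-the-shelf ResNet-18 purely as a stand-in backbone; the paper's own network is not specified here, so this is only a generic illustration.

```python
import torch
import torchvision

# Capture intermediate feature maps with forward hooks so they can be
# inspected or plotted layer by layer (backbone choice is an assumption).
model = torchvision.models.resnet18(weights=None).eval()
feature_maps = {}

def save_output(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(save_output(name))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))           # dummy input image

for name, fmap in feature_maps.items():
    print(name, tuple(fmap.shape))               # e.g. conv1 (1, 64, 112, 112)
```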

Synthetic data augmentation for pixel-wise steel fatigue crack identification using fully convolutional networks

  • Zhai, Guanghao;Narazaki, Yasutaka;Wang, Shuo;Shajihan, Shaik Althaf V.;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.237-250
    • /
    • 2022
  • Structural health monitoring (SHM) plays an important role in ensuring the safety and functionality of critical civil infrastructure. In recent years, numerous researchers have conducted studies to develop computer vision and machine learning techniques for SHM purposes, offering the potential to reduce the laborious nature and improve the effectiveness of field inspections. However, high-quality vision data from various types of damaged structures is relatively difficult to obtain because damaged structures occur rarely. The lack of data is particularly acute for fatigue cracks in steel bridge girders. As a result, the lack of training data is one of the main issues that hinders wider application of these powerful techniques for SHM. To address this problem, this article proposes the use of synthetic data to augment real-world datasets used for training neural networks that identify fatigue cracks in steel structures. First, random textures representing the surface of steel structures with fatigue cracks are created and mapped onto a 3D graphics model. Subsequently, this model is used to generate synthetic images for various lighting conditions and camera angles. A fully convolutional network is then trained for two cases: (1) using only real-world data, and (2) using both synthetic and real-world data. By employing synthetic data augmentation in the training process, the crack identification performance of the neural network on the test dataset improves from 35% to 40% for intersection over union (IoU) and from 49% to 62% for precision, demonstrating the efficacy of the proposed approach.
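
The two reported metrics, intersection over union and precision for pixel-wise crack masks, can be computed as in the minimal sketch below. This is a generic implementation, not code from the paper.

```python
import numpy as np

def iou_and_precision(pred_mask: np.ndarray, true_mask: np.ndarray):
    """Pixel-wise IoU and precision for binary crack masks (1 = crack pixel)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()    # crack pixels correctly predicted
    fp = np.logical_and(pred, ~true).sum()   # background predicted as crack
    fn = np.logical_and(~pred, true).sum()   # crack pixels missed
    iou = tp / (tp + fp + fn + 1e-9)
    precision = tp / (tp + fp + 1e-9)
    return iou, precision
```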

Permeability Prediction of Gas Diffusion Layers for PEMFC Using Three-Dimensional Convolutional Neural Networks and Morphological Features Extracted from X-ray Tomography Images (삼차원 합성곱 신경망과 X선 단층 영상에서 추출한 형태학적 특징을 이용한 PEMFC용 가스확산층의 투과도 예측)

  • Hangil You;Gun Jin Yun
    • Composites Research
    • /
    • v.37 no.1
    • /
    • pp.40-45
    • /
    • 2024
  • In this research, we introduce a novel approach that employs a 3D convolutional neural network (CNN) model to predict the permeability of gas diffusion layers (GDLs). To train the model, we create an artificial dataset of GDL representative volume elements (RVEs) by extracting morphological characteristics from actual GDL images obtained through X-ray tomography. These morphological attributes include statistical distributions of porosity, fiber orientation, and fiber diameter. Subsequently, a permeability analysis using the lattice Boltzmann method (LBM) is conducted on a collection of 10,800 RVEs. The 3D CNN model, trained on this artificial dataset, predicts the permeability of actual GDLs well.
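
A minimal sketch of a 3D CNN regressor of the kind described, mapping a voxelized RVE to a scalar permeability value. The 64^3 grid, channel counts, and the porosity threshold used for the dummy input are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class Permeability3DCNN(nn.Module):
    """3-D CNN regressor mapping a binary voxel RVE to a scalar permeability
    (all sizes are assumptions)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(16, 1)

    def forward(self, rve):
        # rve: (batch, 1, D, H, W) voxelized fiber/pore structure
        return self.regressor(self.features(rve).flatten(1))

rve = (torch.rand(2, 1, 64, 64, 64) > 0.7).float()   # dummy porous structure
k_pred = Permeability3DCNN()(rve)                    # predicted permeability
```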

S-PRESENT Cryptanalysis through Known-Plaintext Attack Based on Deep Learning (딥러닝 기반의 알려진 평문 공격을 통한 S-PRESENT 분석)

  • Se-jin Lim;Hyun-Ji Kim;Kyung-Bae Jang;Yea-jun Kang;Won-Woong Kim;Yu-Jin Yang;Hwa-Jeong Seo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.2
    • /
    • pp.193-200
    • /
    • 2023
  • Cryptanalysis can be performed by various techniques such as known-plaintext attacks, differential attacks, side-channel analysis, and the like. Recently, many studies have been conducted on cryptanalysis using deep learning. A known-plaintext attack is a technique that uses known plaintext-ciphertext pairs to find the key. In this paper, we use deep learning to perform a known-plaintext attack against S-PRESENT, a reduced version of the lightweight block cipher PRESENT. This paper is significant in that it is the first deep-learning-based known-plaintext attack performed on a reduced lightweight block cipher. For the cryptanalysis, MLP (multi-layer perceptron), 1D CNN, and 2D CNN (convolutional neural network) models are used and optimized, and the performance of the three models is compared. The 2D convolutional neural network shows the highest performance, but only part of the key space could be attacked. From this, it can be seen that known-plaintext attacks through MLP and convolutional neural network models are limited in the number of key bits they can recover.
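
As a rough picture of what a 1D CNN looks like in this setting, the sketch below maps a concatenated plaintext/ciphertext bit vector to per-bit key probabilities. The 64-bit block and 80-bit key follow PRESENT's parameters, but the layer sizes and the exact input encoding are assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

class KeyBitCNN(nn.Module):
    """1-D CNN that maps a concatenated plaintext/ciphertext bit vector to
    per-bit key probabilities; sizes and encoding are assumptions."""
    def __init__(self, block_bits: int = 64, key_bits: int = 80):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, key_bits)

    def forward(self, pt_ct_bits):
        # pt_ct_bits: (batch, 2 * block_bits) plaintext||ciphertext as 0/1 floats
        x = self.features(pt_ct_bits.unsqueeze(1)).flatten(1)
        return torch.sigmoid(self.head(x))   # probability of each key bit being 1

pairs = torch.randint(0, 2, (4, 128)).float()   # dummy plaintext/ciphertext pairs
key_bit_probs = KeyBitCNN()(pairs)
```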

DeepAct: A Deep Neural Network Model for Activity Detection in Untrimmed Videos

  • Song, Yeongtaek;Kim, Incheol
    • Journal of Information Processing Systems
    • /
    • v.14 no.1
    • /
    • pp.150-161
    • /
    • 2018
  • We propose a novel deep neural network model for detecting human activities in untrimmed videos. The process of human activity detection in a video involves two steps: a step to extract features that are effective for recognizing human activities in a long untrimmed video, followed by a step to detect human activities from those extracted features. To extract rich features from video segments that express the unique patterns of each activity, we employ two different convolutional neural network models, C3D and I-ResNet. To detect human activities from the sequence of extracted feature vectors, we use a BLSTM, a bi-directional recurrent neural network model. Through experiments on ActivityNet 200, a large-scale benchmark dataset, we show the high performance of the proposed DeepAct model.
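
The second stage described above, a bi-directional LSTM over per-segment feature vectors, can be sketched as follows. The feature dimension, hidden size, and per-segment classification head are assumptions made for illustration; the actual DeepAct detection head may differ.

```python
import torch
import torch.nn as nn

class ActivityBLSTM(nn.Module):
    """Bi-directional LSTM over per-segment feature vectors (e.g. from C3D /
    I-ResNet); feature size, hidden size, and class count are assumptions."""
    def __init__(self, feat_dim: int = 2048, hidden: int = 256, num_classes: int = 200):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, segment_feats):
        # segment_feats: (batch, num_segments, feat_dim)
        out, _ = self.blstm(segment_feats)
        return self.head(out)              # per-segment activity logits

feats = torch.randn(2, 30, 2048)           # dummy features for 30 video segments
per_segment_logits = ActivityBLSTM()(feats)
```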

CNN Based 2D and 2.5D Face Recognition For Home Security System (홈보안 시스템을 위한 CNN 기반 2D와 2.5D 얼굴 인식)

  • MaYing, MaYing;Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.6
    • /
    • pp.1207-1214
    • /
    • 2019
  • Technologies of the 4th industrial revolution have been quietly seeping into our lives. Many IoT-based home security systems use convolutional neural networks (CNNs) as a biometric tool to recognize faces and protect homes and families from intruders, since CNNs have demonstrated excellent ability in image recognition. In this paper, three CNN layouts for 2D and 2.5D images from a small dataset are explored with various input image sizes and filter sizes. The simulation results show that the layout with a 50*50 2.5D input image, two convolution and max-pooling layers, and a 3*3 filter size is optimal for the small 2.5D dataset, achieving a recognition accuracy of 0.966. In addition, the longest CPU time for one input image is 0.057 s. The proposed CNN layout for face recognition is suitable for controlling the actuators in a home security system because such a system requires good face recognition and a short recognition time.
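
The layout reported as optimal, a 50*50 2.5D input with two convolution plus max-pooling stages and 3*3 filters, can be written down directly; the channel counts and the number of enrolled identities below are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class FaceCNN(nn.Module):
    """CNN following the layout reported as optimal in the abstract: 50*50
    input, two convolution + max-pooling stages, 3*3 filters. Channel counts
    and the number of identities are assumptions."""
    def __init__(self, num_identities: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 50x50 -> 25x25
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 25x25 -> 12x12
        )
        self.classifier = nn.Linear(32 * 12 * 12, num_identities)

    def forward(self, depth_image):
        # depth_image: (batch, 1, 50, 50) 2.5D (depth) face image
        return self.classifier(self.features(depth_image).flatten(1))

faces = torch.randn(4, 1, 50, 50)
identity_logits = FaceCNN()(faces)
```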