• Title/Summary/Keyword: one-dimensional convolutional neural networks

Deep Learning based Frame Synchronization Using Convolutional Neural Network (합성곱 신경망을 이용한 딥러닝 기반의 프레임 동기 기법)

  • Lee, Eui-Soo;Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.4 / pp.501-507 / 2020
  • This paper proposes a new frame synchronization technique based on a convolutional neural network (CNN). Conventional frame synchronizers usually find the matching instant through correlation between the received signal and the preamble. The proposed method converts the one-dimensional correlator output into a two-dimensional matrix, which is fed to a convolutional neural network that finds the frame arrival time. Specifically, in additive white Gaussian noise (AWGN) environments, received signals are generated with random arrival times and used as training data for the CNN. Through computer simulation, the false detection probabilities at various signal-to-noise ratios are investigated and compared between the proposed CNN-based technique and the conventional one. According to the results, the proposed technique performs about 2 dB better than the conventional method.
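
A minimal sketch of the conversion this abstract describes, under assumed parameters (a hypothetical 64-sample preamble, a 32x32 matrix, and a small PyTorch CNN with one output class per candidate arrival time); the paper's actual dimensions and network are not specified here.

# Sketch only: matrix shape, preamble length and network depth are assumptions.
import numpy as np
import torch
import torch.nn as nn

def correlate_with_preamble(rx: np.ndarray, preamble: np.ndarray) -> np.ndarray:
    """Sliding correlation magnitude between received samples and the preamble."""
    return np.abs(np.correlate(rx, preamble, mode="valid"))

class FrameSyncCNN(nn.Module):
    def __init__(self, grid: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # One output class per candidate arrival time within the grid.
        self.head = nn.Linear(32 * (grid // 4) ** 2, grid * grid)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Usage: reshape 1024 correlator samples into a 32x32 "image" and classify the peak position.
rx = np.random.randn(1024 + 63)          # stand-in received signal (AWGN only)
preamble = np.random.randn(64)           # stand-in preamble
corr = correlate_with_preamble(rx, preamble)[:1024].reshape(32, 32)
x = torch.tensor(corr, dtype=torch.float32)[None, None]  # (batch, channel, H, W)
logits = FrameSyncCNN()(x)               # argmax -> estimated frame arrival index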

Neural Networks-Based Method for Electrocardiogram Classification

  • Maksym Kovalchuk;Viktoriia Kharchenko;Andrii Yavorskyi;Igor Bieda;Taras Panchenko
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.186-191 / 2023
  • Neural networks are widely used to solve a huge variety of tasks. Machine learning methods are also applied to signal and time-series analysis, including electrocardiograms. Contemporary wearable devices, both medical and non-medical such as smart watches, can gather data continuously in real time. This allows the data to be transferred for analysis, or analyzed on the device itself, to provide a preliminary diagnosis or at least flag serious deviations. Different methods are used for this kind of analysis, ranging from medically oriented approaches that rely on distinctive features of the signal to machine learning and deep learning approaches. Here we demonstrate a neural network-based approach to this task by building an ensemble of 1D CNN classifiers with a final selection classifier using logistic regression, random forest, or support vector machine, and draw conclusions from a comparison with other approaches.
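
A rough sketch of the stacking idea, assuming a hypothetical 187-sample heartbeat segment, 5 classes, three small PyTorch 1D CNN members and a scikit-learn logistic-regression meta-classifier; the authors' actual architectures and training procedure are not reproduced.

# Sketch only: layer sizes, segment length and class count are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class EcgCNN1D(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, samples)
        return torch.softmax(self.net(x), dim=1)

# Base models (assumed already trained); their class probabilities are stacked
# and handed to a logistic-regression meta-classifier.
beats = torch.randn(200, 1, 187)               # stand-in heartbeat segments
labels = np.random.randint(0, 5, size=200)     # stand-in labels
members = [EcgCNN1D() for _ in range(3)]
with torch.no_grad():
    stacked = np.hstack([m(beats).numpy() for m in members])   # (200, 3*5)
meta = LogisticRegression(max_iter=1000).fit(stacked, labels)
pred = meta.predict(stacked)                   # final ensemble prediction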

EPS Gesture Signal Recognition using Deep Learning Model (심층 학습 모델을 이용한 EPS 동작 신호의 인식)

  • Lee, Yu ra;Kim, Soo Hyung;Kim, Young Chul;Na, In Seop
    • Smart Media Journal / v.5 no.3 / pp.35-41 / 2016
  • In this paper, we propose hand-gesture signal recognition based on an EPS (Electronic Potential Sensor) using a deep learning model. Signals extracted from the electric-field-based sensor (EPS) contain much noise, which must be removed in pre-processing. After the noise is removed with a filter that exploits frequency features, the signals, which originally carry only a one-dimensional voltage-value feature, are reconstructed through a dimensional transformation so that convolution operations can be applied. The reconstructed signal data are then classified and recognized using a deep learning model with multiple learning layers. Since a probability-based statistical model is sensitive to initial parameters, its results can change after training in the modeling phase; a deep learning model can overcome this problem thanks to its several layers in the training phase. In the experiments, we used two different deep learning structures, convolutional neural networks and recurrent neural networks, and compared them with a statistical model algorithm on four kinds of gestures. The method using a convolutional neural network outperforms the other algorithms in EPS gesture signal recognition.
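
A hedged sketch of the pre-processing step described above, with an assumed band-pass range and 32x32 target shape; the paper's actual filter design and reconstruction scheme may differ.

# Sketch only: cut-off frequencies, sampling rate and image size are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eps(signal: np.ndarray, fs: float = 1024.0) -> np.ndarray:
    # Band-pass filter keeping the band where gesture energy is assumed to lie.
    b, a = butter(4, [1.0, 50.0], btype="bandpass", fs=fs)
    clean = filtfilt(b, a, signal)
    # Dimensional transformation: 1D voltage sequence -> 2D array for a CNN.
    clean = (clean - clean.min()) / (clean.max() - clean.min() + 1e-8)
    return clean[: 32 * 32].reshape(32, 32)

gesture = np.random.randn(1200)          # stand-in EPS voltage trace
image = preprocess_eps(gesture)          # (32, 32) input for the CNN/RNN classifiers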

Classification Method of Multi-State Appliances in Non-intrusive Load Monitoring Environment based on Gramian Angular Field (Gramian angular field 기반 비간섭 부하 모니터링 환경에서의 다중 상태 가전기기 분류 기법)

  • Seon, Joon-Ho;Sun, Young-Ghyu;Kim, Soo-Hyun;Kyeong, Chanuk;Sim, Issac;Lee, Heung-Jae;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.3 / pp.183-191 / 2021
  • Non-intrusive load monitoring is a technology for predicting and classifying the types of appliances through real-time monitoring of user power consumption, and it has recently gained interest as a means of energy saving. In this paper, we propose a system for classifying appliances from user consumption data by combining the GAF (Gramian angular field) technique, which converts one-dimensional data into a two-dimensional matrix, with convolutional neural networks. We use REDD (residential energy disaggregation dataset), a public appliance power dataset, and compare the classification accuracy of GASF (Gramian angular summation field) and GADF (Gramian angular difference field). Simulation results show that both models achieve 94% accuracy on appliances with binary states (on/off), and that GASF achieves 93.5% accuracy on multi-state appliances, 3% higher than GADF. In later studies, we plan to enlarge the dataset and optimize the model to improve accuracy and speed.
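
A minimal sketch of the GASF/GADF transforms themselves, which are standard: the 1D power window is rescaled to [-1, 1], mapped to angles, and expanded into a 2D Gramian matrix. The 64-sample window length is an assumption.

# GASF uses the cosine of angle sums, GADF the sine of angle differences.
import numpy as np

def gramian_angular_field(x: np.ndarray, kind: str = "summation") -> np.ndarray:
    x = np.asarray(x, dtype=float)
    # Min-max rescale to [-1, 1] so arccos is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1, 1))
    if kind == "summation":                            # GASF: cos(phi_i + phi_j)
        return np.cos(phi[:, None] + phi[None, :])
    return np.sin(phi[:, None] - phi[None, :])         # GADF: sin(phi_i - phi_j)

power = np.abs(np.random.randn(64))                    # stand-in appliance power window
gasf = gramian_angular_field(power, "summation")       # (64, 64) CNN input
gadf = gramian_angular_field(power, "difference")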

Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks (국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구)

  • Yang, Hunmin
    • Journal of the Korea Institute of Military Science and Technology / v.22 no.1 / pp.49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional and complex data distributions implicitly and to generate new data samples from the model distribution. This paper investigates the training methodology, architecture, and various applications of generative adversarial networks. An experimental evaluation is also conducted on generating synthetic image datasets for defense using two types of GANs. The first is military image generation using the deep convolutional generative adversarial network (DCGAN). The other is visible-to-infrared image translation using the cycle-consistent generative adversarial network (CycleGAN). Each model can yield a great diversity of high-fidelity synthetic images compared to the training images. This result opens up the possibility of using inexpensive synthetic images for training neural networks while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
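
A hypothetical DCGAN-style generator of the kind the paper trains for image synthesis, using the common DCGAN defaults (100-dimensional latent vector, 64x64 RGB output) rather than the paper's reported configuration.

# Sketch only: latent size, channel widths and output resolution are assumptions.
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    def __init__(self, z_dim: int = 100, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                     # images in [-1, 1]
        )

    def forward(self, z):                  # z: (batch, z_dim, 1, 1)
        return self.net(z)

fake = DCGANGenerator()(torch.randn(8, 100, 1, 1))   # (8, 3, 64, 64) synthetic images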

Deep Learning based Raw Audio Signal Bandwidth Extension System (딥러닝 기반 음향 신호 대역 확장 시스템)

  • Kim, Yun-Su;Seok, Jong-Won
    • Journal of IKEEE / v.24 no.4 / pp.1122-1128 / 2020
  • Bandwidth extension refers to restoring a narrowband signal (NB) that has been degraded in the encoding and decoding process, due to limited channel capacity or the characteristics of the codec installed in a mobile communication device, and converting it into a wideband signal (WB). Bandwidth extension research has mainly focused on voice signals and on frequency-domain approaches to the high band, such as SBR (Spectral Band Replication) and IGF (Intelligent Gap Filling), which restore the lost or damaged high band through complex feature extraction processes. In this paper, we propose a model that outputs a bandwidth-extended signal based on an autoencoder, one of the deep learning models, using residual connections of one-dimensional convolutional neural networks (CNN); the bandwidth is extended by feeding in a time-domain signal of fixed length without complicated pre-processing. In addition, it is confirmed that the damaged high band can be restored even when training on a dataset containing various types of sound sources, including music, not limited to speech.
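
A hedged sketch of a 1D convolutional autoencoder with residual connections operating on a fixed-length time-domain frame, as the abstract outlines; frame length, channel counts and depth are assumptions, not the authors' model.

# Sketch only: 2048-sample frames and a 4-block residual body are assumptions.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(ch, ch, kernel_size=9, padding=4),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))    # residual connection

class BWEAutoencoder(nn.Module):
    def __init__(self, ch: int = 32, n_blocks: int = 4):
        super().__init__()
        self.encode = nn.Conv1d(1, ch, kernel_size=9, padding=4)
        self.blocks = nn.Sequential(*[ResBlock1D(ch) for _ in range(n_blocks)])
        self.decode = nn.Conv1d(ch, 1, kernel_size=9, padding=4)

    def forward(self, nb):                      # nb: (batch, 1, samples) narrowband frame
        return self.decode(self.blocks(self.encode(nb)))   # wideband estimate

wb_hat = BWEAutoencoder()(torch.randn(4, 1, 2048))   # (4, 1, 2048) extended signal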

Noise-Robust Porcine Respiratory Diseases Classification Using Texture Analysis and CNN (질감 분석과 CNN을 이용한 잡음에 강인한 돼지 호흡기 질병 식별)

  • Choi, Yongju;Lee, Jonguk;Park, Daihee;Chung, Yongwha
    • KIPS Transactions on Software and Data Engineering / v.7 no.3 / pp.91-98 / 2018
  • Automatic detection of pig wasting diseases is an important issue in the management of group-housed pigs. In particular, porcine respiratory diseases are one of the main causes of mortality among pigs and of lost productivity in intensive pig farming. In this paper, we propose a noise-robust system for the early detection and recognition of pig wasting diseases using sound data. In this method, we first convert one-dimensional sound signals into two-dimensional gray-level images by normalization, and extract texture images by means of the dominant neighborhood structure technique. The texture features are then used as inputs to convolutional neural networks acting as an early anomaly detector and a respiratory disease classifier. Our experimental results show that this new method can detect pig wasting diseases both economically (with a low-cost sound sensor) and accurately (over 96% accuracy) even under noisy environmental conditions, either as a standalone solution or as a complement to known methods for a more accurate solution.
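
A minimal sketch of the first step only (1D sound to normalized 2D gray-level image), assuming a 64x64 image; the dominant-neighborhood-structure texture extraction and the CNN stages are omitted.

# Sketch only: image size and segment length are assumptions.
import numpy as np

def sound_to_gray_image(signal: np.ndarray, size: int = 64) -> np.ndarray:
    seg = signal[: size * size].astype(float)
    seg = (seg - seg.min()) / (seg.max() - seg.min() + 1e-12)   # normalize to [0, 1]
    return (seg * 255).astype(np.uint8).reshape(size, size)     # gray-level image

cough = np.random.randn(8000)            # stand-in sound segment
image = sound_to_gray_image(cough)       # (64, 64) input to the texture/CNN pipeline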

Real-Time Hand Gesture Recognition Based on Deep Learning (딥러닝 기반 실시간 손 제스처 인식)

  • Kim, Gyu-Min;Baek, Joong-Hwan
    • Journal of Korea Multimedia Society / v.22 no.4 / pp.424-431 / 2019
  • In this paper, we propose a real-time hand gesture recognition algorithm to eliminate the inconvenience of using hand controllers in VR applications. The user's 3D hand coordinate information is detected by a Leap Motion sensor, and the coordinates are converted into a two-dimensional image. We classify hand gestures in real time by learning the imaged 3D hand coordinate information with the SSD (Single Shot multibox Detector) model, one of the CNN (Convolutional Neural Network) models. We propose to use all three channels rather than only one. A sliding-window technique is also proposed to recognize a gesture in real time while the user is actually making it. An experiment was conducted to measure the recognition rate and learning performance of the proposed model. Our proposed model showed 99.88% recognition accuracy and higher usability than the existing algorithm.
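
A hypothetical illustration of packing 3D hand coordinates into a three-channel image, one channel per axis, as the abstract proposes; the joint count, frame count and value range are assumptions, and the SSD detector itself is not shown.

# Sketch only: 21 joints over 21 frames is an assumed layout.
import numpy as np

def coords_to_image(seq: np.ndarray) -> np.ndarray:
    """seq: (frames, joints, 3) -> uint8 image with one channel per coordinate axis."""
    img = np.empty_like(seq, dtype=np.uint8)
    for c in range(3):                                  # x, y, z each become one channel
        ch = seq[..., c]
        ch = (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)
        img[..., c] = (ch * 255).astype(np.uint8)
    return img

frames = np.random.uniform(-200, 200, size=(21, 21, 3))   # stand-in hand trajectory
image = coords_to_image(frames)                            # RGB-like input for the SSD model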

Deep learning of sweep signal for damage detection on the surface of concrete

  • Gao Shanga;Jun Chen
    • Computers and Concrete / v.32 no.5 / pp.475-486 / 2023
  • Nondestructive evaluation (NDE) is an important task in civil engineering structure monitoring and inspection, but minor damage such as small cracks in local parts of a structure is difficult to observe. If cracks continue to expand, they may cause partial or even overall damage to the structure. Therefore, monitoring and detecting the structure in the early stage of crack propagation is important. Crack detection technology based on machine vision has been widely studied, but problems remain, such as poor recognition of small cracks. In this paper, we propose a deep learning method based on sweep signals to evaluate concrete surface cracks with a width of less than 1 mm. Two convolutional neural networks (CNNs) are used to analyze the one-dimensional (1D) frequency sweep signal and the two-dimensional (2D) time-frequency image, respectively, and the probability value of average damage (ADPV) is proposed to evaluate minor damage of the structure. Finally, we use the standard deviation of energy ratio change (ERVSD) and infrared thermography (IRT) as comparisons with ADPV to verify the effectiveness of the proposed method. The experiment results show that the proposed method can effectively predict whether the concrete surface is damaged and the severity of the damage.
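
A rough sketch of the two-branch idea, with assumed sampling rate, spectrogram settings and toy networks: one CNN scores the raw 1D sweep signal, another its 2D time-frequency image, and the two damage probabilities are averaged (in the spirit of ADPV).

# Sketch only: all parameters and network sizes are assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

sweep = np.random.randn(16384)                        # stand-in frequency-sweep response
f, t, sxx = spectrogram(sweep, fs=8000, nperseg=256)  # 2D time-frequency image

cnn1d = nn.Sequential(nn.Conv1d(1, 8, 9, padding=4), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid())
cnn2d = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid())

x1 = torch.tensor(sweep, dtype=torch.float32)[None, None]   # (1, 1, samples)
x2 = torch.tensor(sxx, dtype=torch.float32)[None, None]     # (1, 1, freq, time)
damage_prob = 0.5 * (cnn1d(x1) + cnn2d(x2))   # averaged damage probability per specimen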

A Vision Transformer Based Recommender System Using Side Information (부가 정보를 활용한 비전 트랜스포머 기반의 추천시스템)

  • Kwon, Yujin;Choi, Minseok;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.119-137 / 2022
  • Recent recommendation system studies apply various deep learning models to better represent user and item interactions. One noteworthy study is ONCF (Outer product-based Neural Collaborative Filtering), which builds a two-dimensional interaction map via the outer product and employs CNNs (Convolutional Neural Networks) to learn high-order correlations from the map. However, ONCF has limited recommendation performance due to problems with CNNs and the absence of side information. ONCF using a CNN has an inductive bias problem that causes poor performance on data whose distribution does not appear in the training data. This paper proposes to employ a Vision Transformer (ViT) instead of the vanilla CNN used in ONCF, because ViT has shown better results than state-of-the-art CNNs in many image classification cases. In addition, we propose a new architecture to reflect side information, which ONCF did not consider. Unlike previous studies that reflect side information in a neural network through simple input combination methods, this study uses an independent auxiliary classifier to reflect side information more effectively in the recommender system. ONCF used a single latent vector for each user and item, but in this study, a channel is constructed from multiple vectors so that the model can learn more diverse representations and obtain an ensemble effect. The experiments showed that our deep learning model improved recommendation performance compared to ONCF.
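
A minimal sketch of the ONCF-style outer-product interaction map and its split into patch tokens for a Vision Transformer; the embedding size and patch size are assumptions, and the auxiliary side-information classifier is omitted.

# Sketch only: 64-dimensional embeddings and 16x16 patches are assumptions.
import torch

emb_dim, patch = 64, 16
user = torch.randn(8, emb_dim)                    # batch of user embeddings
item = torch.randn(8, emb_dim)                    # batch of item embeddings

interaction_map = torch.einsum("bi,bj->bij", user, item)      # (8, 64, 64) outer product
patches = interaction_map.unfold(1, patch, patch).unfold(2, patch, patch)  # (8, 4, 4, 16, 16)
tokens = patches.reshape(8, -1, patch * patch)    # (8, 16, 256) patch tokens for the ViT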