• Title/Summary/Keyword: 1D Convolutional Neural Network


Recognition of Virtual Written Characters Based on Convolutional Neural Network

  • Leem, Seungmin;Kim, Sungyoung
    • Journal of Platform Technology / v.6 no.1 / pp.3-8 / 2018
  • This paper proposes a technique, based on a convolutional neural network (CNN), for recognizing online handwritten cursive data obtained by tracing a user's motion trajectory in 3D space. Recognizing virtual characters written in 3D space is difficult because the input includes both character strokes and movement strokes. In this paper, we divide each syllable into consonant and vowel units by applying a labeling technique to the letter-stroke and movement-stroke localization results of our previous study. The coordinate information of the separated consonants and vowels is converted into image data, and Korean handwriting recognition is performed using a convolutional neural network. After training the network on 1,680 syllables written by five writers, accuracy is measured on data from new writers who did not contribute to the training set. The CNN achieves 98.9% phoneme-based recognition accuracy. The proposed method has the advantage of drastically reducing the amount of training data compared to syllable-based learning.
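
The abstract above converts separated consonant and vowel coordinates into image data before CNN classification. Below is a minimal, illustrative sketch of such a coordinate-to-image rasterization step; the 28x28 resolution and the function name are assumptions, not details taken from the paper.

```python
import numpy as np

def strokes_to_image(points, size=28):
    """Rasterize a sequence of 2D stroke coordinates into a binary image.

    points: array-like of shape (N, 2) holding the x, y coordinates of one
            consonant or vowel stroke (size and layout are illustrative).
    """
    pts = np.asarray(points, dtype=float)
    # Normalize coordinates into [0, 1], guarding against zero extent.
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    span = np.maximum(maxs - mins, 1e-6)
    norm = (pts - mins) / span
    # Map to pixel indices and mark the corresponding pixels.
    idx = np.clip((norm * (size - 1)).round().astype(int), 0, size - 1)
    img = np.zeros((size, size), dtype=np.float32)
    img[idx[:, 1], idx[:, 0]] = 1.0  # row = y, column = x
    return img

# Example: a rough diagonal stroke.
demo = strokes_to_image([(0.0, 0.0), (0.5, 0.4), (1.0, 1.0)])
print(demo.shape)  # (28, 28)
```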

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun;Bae, Han Byeol;Lee, HeanSung;Lee, Sangyoun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.4 / pp.314-322 / 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods employ spatiotemporal graph convolutional networks (GCNs) and achieve remarkable performance, but most of them incur heavy computational complexity for robust action recognition. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight graph convolutional network that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. Our SGCN is composed of a spatial GCN and a temporal GCN. The spatial shuffle GCN contains pointwise group convolution and a part-shuffle module that enhances the exchange of local and global information between correlated joints. In addition, the temporal shuffle GCN contains depthwise convolution to maintain a large receptive field. Our model achieves comparable performance at the lowest computational cost and exceeds the baseline by 0.3% and 1.2% on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
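
As a rough illustration of the lightweight building blocks this abstract describes, the PyTorch sketch below combines a pointwise group convolution with a ShuffleNet-style channel shuffle on a skeleton tensor. The block structure, channel counts, and group count are assumptions, and the graph adjacency aggregation over joints is omitted.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Shuffle channels so information mixes across groups (ShuffleNet-style)."""
    n, c, t, v = x.shape            # batch, channels, frames, joints
    x = x.view(n, groups, c // groups, t, v)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, t, v)

class PointwiseGroupBlock(nn.Module):
    """Illustrative block: pointwise *group* convolution instead of a full
    pointwise convolution, followed by a channel shuffle."""
    def __init__(self, in_ch, out_ch, groups=4):
        super().__init__()
        self.pw_group = nn.Conv2d(in_ch, out_ch, kernel_size=1, groups=groups)
        self.bn = nn.BatchNorm2d(out_ch)
        self.groups = groups

    def forward(self, x):            # x: (N, C, T, V) skeleton tensor
        x = torch.relu(self.bn(self.pw_group(x)))
        return channel_shuffle(x, self.groups)

x = torch.randn(2, 64, 30, 25)       # 2 clips, 64 channels, 30 frames, 25 joints
print(PointwiseGroupBlock(64, 128)(x).shape)  # torch.Size([2, 128, 30, 25])
```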

LiDAR Image Segmentation using Convolutional Neural Network Model with Refinement Modules (정제 모듈을 포함한 컨볼루셔널 뉴럴 네트워크 모델을 이용한 라이다 영상의 분할)

  • Park, Byungjae;Seo, Beom-Su;Lee, Sejin
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.8-15 / 2018
  • This paper proposes a convolutional neural network model for distinguishing areas occupied by obstacles in a LiDAR image converted from a 3D point cloud. The channels of the input LiDAR image consist of the distances to 3D points, the reflectivities of 3D points, and the heights of 3D points above the ground. The proposed model takes a LiDAR image as input and outputs a segmented LiDAR image. It adopts refinement modules with skip connections to segment the LiDAR image; these modules make it possible to construct a complex structure with fewer parameters than a convolutional neural network model with a linear structure. Using the proposed model, it is possible to distinguish areas in a LiDAR image occupied by obstacles such as vehicles, pedestrians, and bicyclists. The proposed model can be applied to recognize surrounding obstacles and to search for safe paths.
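
The sketch below illustrates the general idea of a refinement module with a skip connection applied to a 3-channel LiDAR image (distance, reflectivity, height). The layer sizes and module layout are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RefinementModule(nn.Module):
    """Upsample coarse features and refine them with an encoder skip connection
    (an illustrative reading of the refinement-module idea)."""
    def __init__(self, coarse_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(coarse_ch, out_ch, kernel_size=2, stride=2)
        self.fuse = nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, coarse, skip):
        x = self.up(coarse)
        x = torch.cat([x, skip], dim=1)      # skip connection from the encoder
        return torch.relu(self.fuse(x))

class TinyLidarSegNet(nn.Module):
    """3-channel LiDAR image (distance, reflectivity, height) -> per-pixel classes."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.refine = RefinementModule(32, 16, 16)
        self.head = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        return self.head(self.refine(f2, f1))

x = torch.randn(1, 3, 64, 512)               # one LiDAR range image
print(TinyLidarSegNet()(x).shape)             # torch.Size([1, 4, 64, 512])
```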

Effective Hand Gesture Recognition by Key Frame Selection and 3D Neural Network

  • Hoang, Nguyen Ngoc;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • Smart Media Journal / v.9 no.1 / pp.23-29 / 2020
  • This paper presents an approach to dynamic hand gesture recognition that uses an algorithm based on a 3D Convolutional Neural Network (3D_CNN), later extended to 3D Residual Networks (3D_ResNet), together with neural-network-based key frame selection. Typically, a 3D deep neural network classifies gestures from image frames randomly sampled from video data. In this work, to improve classification performance, we instead feed the classification network key frames that represent the overall video. The key frames are extracted by SegNet rather than by conventional clustering algorithms for video summarization (VSUMM), which require heavy computation; using a deep neural network allows key frame selection to run in a real-time system. Experiments are conducted with 3D convolutional kernels in 3D_CNN, Inflated 3D_CNN (I3D), and 3D_ResNet for gesture classification. Our algorithm achieves up to 97.8% classification accuracy on the Cambridge gesture dataset. The experimental results show that the proposed approach is efficient and outperforms existing methods.
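
As a toy illustration of classifying a gesture from a stack of selected key frames with 3D convolutions, the sketch below builds a tiny 3D CNN. The layer sizes, frame count, and 64x64 resolution are assumptions; the 9-class output matches the Cambridge gesture dataset.

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Classify a gesture from a stack of key frames using 3D convolutions."""
    def __init__(self, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # halve frames, height, width
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # global spatiotemporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip):                      # clip: (N, 3, T, H, W)
        return self.classifier(self.features(clip).flatten(1))

# 16 RGB key frames of size 64x64 per clip.
clip = torch.randn(2, 3, 16, 64, 64)
print(Tiny3DCNN()(clip).shape)                    # torch.Size([2, 9])
```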

Power-Efficient DCNN Accelerator Mapping Convolutional Operation with 1-D PE Array (1-D PE 어레이로 컨볼루션 연산을 수행하는 저전력 DCNN 가속기)

  • Lee, Jeonghyeok;Han, Sangwook;Choi, Seungwon
    • Journal of Korea Society of Digital Industry and Information Management / v.18 no.2 / pp.17-26 / 2022
  • In this paper, we propose a novel method of performing convolutional operations on a 1-D Processing Element (PE) array. The conventional method [1] of mapping the convolutional operation onto a 2-D PE array lacks flexibility and yields low PE utilization. By instead mapping the convolutional operation onto a 1-D PE array, the proposed method increases the number and utilization of active PEs, so the throughput of the proposed Deep Convolutional Neural Network (DCNN) accelerator can be increased significantly. Furthermore, the power consumed in transmitting weights between PEs can be saved. Simulation results show that, compared to the conventional method on a DCNN accelerator with a (weights size) x (output data size) 2-D PE array, the proposed method provides approximately 4.55%, 13.7%, and 2.27% throughput gains for the convolutional layers of AlexNet, VGG16, and ResNet50, respectively, along with approximately 63.21%, 52.46%, and 39.23% power savings.
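
The sketch below is a purely software-level illustration of distributing a 2-D convolution's output computations over a 1-D array of processing elements and tallying the per-PE workload. It does not model the paper's dataflow, pipelining, or weight-transmission scheme; the round-robin assignment and PE count are assumptions.

```python
import numpy as np

def conv_on_1d_pe_array(feature_map, kernel, num_pes=8):
    """Assign each output pixel of a 2-D convolution to a PE in a 1-D array
    (round-robin) and count the MAC operations performed by each PE."""
    kh, kw = kernel.shape
    oh = feature_map.shape[0] - kh + 1
    ow = feature_map.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=float)
    macs_per_pe = np.zeros(num_pes, dtype=int)
    flat_kernel = kernel.reshape(-1)
    for idx in range(oh * ow):
        pe = idx % num_pes                       # PE assigned to this output pixel
        r, c = divmod(idx, ow)
        patch = feature_map[r:r + kh, c:c + kw].reshape(-1)
        out[r, c] = patch @ flat_kernel          # MAC chain executed by PE `pe`
        macs_per_pe[pe] += kh * kw
    return out, macs_per_pe

fm = np.arange(36, dtype=float).reshape(6, 6)
out, load = conv_on_1d_pe_array(fm, np.ones((3, 3)))
print(out.shape, load)                           # (4, 4) and the per-PE MAC counts
```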

Convolutional Neural Network Based Multi-feature Fusion for Non-rigid 3D Model Retrieval

  • Zeng, Hui;Liu, Yanrong;Li, Siqi;Che, JianYong;Wang, Xiuqing
    • Journal of Information Processing Systems / v.14 no.1 / pp.176-190 / 2018
  • This paper presents a novel convolutional neural network based multi-feature fusion learning method for non-rigid 3D model retrieval, which exploits the discriminative information of the heat kernel signature (HKS) descriptor and the wave kernel signature (WKS) descriptor. First, we compute the 2D shape distributions of the two descriptors to represent the 3D model and use them as network inputs. We then construct separate convolutional neural networks for the HKS distribution and the WKS distribution and connect them with a multi-feature fusion layer. The fusion layer not only exploits more discriminative characteristics of the two descriptors but also complements the correlated information between them. Furthermore, to further improve the descriptive ability, a cross-connected layer is built to combine low-level features with high-level features. Extensive experiments validate the effectiveness of the designed multi-feature fusion learning method.
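
As an illustration of the two-branch fusion idea, the sketch below runs separate small CNNs over HKS and WKS shape distributions and concatenates their features in a fusion layer. The input resolution, channel counts, and class count are assumptions, and the cross-connected layer is omitted for brevity.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small CNN over a single-channel 2-D shape distribution (HKS or WKS)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.net(x).flatten(1)             # (N, 32) feature vector

class FusionNet(nn.Module):
    """Concatenate HKS and WKS branch features in a fusion layer, then classify."""
    def __init__(self, num_classes=30):
        super().__init__()
        self.hks_branch, self.wks_branch = Branch(), Branch()
        self.fusion = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.out = nn.Linear(64, num_classes)

    def forward(self, hks_img, wks_img):
        fused = torch.cat([self.hks_branch(hks_img), self.wks_branch(wks_img)], dim=1)
        return self.out(self.fusion(fused))

hks = torch.randn(4, 1, 32, 32)                   # illustrative 32x32 distributions
wks = torch.randn(4, 1, 32, 32)
print(FusionNet()(hks, wks).shape)                # torch.Size([4, 30])
```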

1D CNN and Machine Learning Methods for Fall Detection (1D CNN과 기계 학습을 사용한 낙상 검출)

  • Kim, Inkyung;Kim, Daehee;Noh, Song;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.3 / pp.85-90 / 2021
  • In this paper, fall detection for older people using individual wearable devices is considered. To design a low-cost wearable device for reliable fall detection, we present a comprehensive analysis of two representative models. One is a machine learning model composed of a decision tree, a random forest, and a Support Vector Machine (SVM). The other is a deep learning model relying on a one-dimensional (1D) Convolutional Neural Network (CNN). We also evaluate the validity of the considered models with respect to the data segmentation, preprocessing, and feature extraction methods applied to the input data. Simulation results verify the efficacy of the deep learning model, which shows improved overall performance.
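
As a small illustration of the data segmentation step mentioned above, the sketch below slices a 3-axis accelerometer stream into standardized overlapping windows; the window length and stride are assumptions, not the paper's settings.

```python
import numpy as np

def segment_windows(signal, window=128, stride=64):
    """Slice a (T, 3) accelerometer stream into overlapping windows suitable
    for either a 1D-CNN or a classical-ML feature pipeline."""
    windows = []
    for start in range(0, len(signal) - window + 1, stride):
        seg = signal[start:start + window]
        # Simple per-window standardization as a preprocessing step.
        seg = (seg - seg.mean(axis=0)) / (seg.std(axis=0) + 1e-8)
        windows.append(seg)
    return np.stack(windows)                      # (num_windows, window, 3)

stream = np.random.randn(1000, 3)                 # x, y, z acceleration samples
print(segment_windows(stream).shape)              # (14, 128, 3)
```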

Deep Learning based Frame Synchronization Using Convolutional Neural Network (합성곱 신경망을 이용한 딥러닝 기반의 프레임 동기 기법)

  • Lee, Eui-Soo;Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.4 / pp.501-507 / 2020
  • This paper proposes a new frame synchronization technique based on a convolutional neural network (CNN). Conventional frame synchronizers usually find the matching instant through correlation between the received signal and the preamble. The proposed method converts the 1-dimensional correlator output into a 2-dimensional matrix, which is fed to a convolutional neural network that finds the frame arrival time. Specifically, in additive white Gaussian noise (AWGN) environments, received signals are generated with random arrival times and used as training data for the CNN. Through computer simulation, the false detection probabilities at various signal-to-noise ratios are investigated and compared between the proposed CNN-based technique and the conventional one. According to the results, the proposed technique performs about 2 dB better than the conventional method.
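
The sketch below illustrates the core preprocessing idea: correlate the received samples with the known preamble and fold the 1-D correlator output into a 2-D matrix suitable as CNN input. The 16-row folding and the signal parameters are assumptions chosen for illustration.

```python
import numpy as np

def correlator_output_as_matrix(received, preamble, rows=16):
    """Correlate the received samples with the known preamble and fold the
    1-D correlator output into a 2-D matrix for a CNN."""
    corr = np.abs(np.correlate(received, preamble, mode="valid"))
    usable = (len(corr) // rows) * rows           # trim so it reshapes cleanly
    return corr[:usable].reshape(rows, -1)

preamble = np.sign(np.random.randn(32))           # known +/-1 preamble
arrival = 200                                     # true frame arrival time
rx = 0.5 * np.random.randn(1024)                  # AWGN background
rx[arrival:arrival + 32] += preamble              # frame starts at `arrival`
matrix = correlator_output_as_matrix(rx, preamble)
print(matrix.shape)                               # (16, 62)
print(np.unravel_index(matrix.argmax(), matrix.shape))  # peak near index 200 folded into 2-D
```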

Design of Arrhythmia Classification System Based on 1-D Convolutional Neural Networks (1차원 합성곱 신경망에 기반한 부정맥 분류 시스템의 설계)

  • Kim, Seong-Woo;Kim, In-Ju;Shin, Seung-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.37-43 / 2020
  • Recently, much research has been actively conducted to diagnose symptoms of heart disease using the ECG signal, an electrical signal that measures heart status. In particular, the electrocardiogram signal can be used to monitor and diagnose arrhythmias, which indicate an abnormal heart status. In this paper, we propose a 1-D convolutional neural network for an arrhythmia classification system. The proposed model consists of 11 layers that learn to extract features and classify 5 types of arrhythmia. Simulation results on the MIT-BIH arrhythmia database show that the trained network achieves more than 99% classification accuracy. The analysis indicates that the more convolutional kernels the network has, the more detailed ECG signal characteristics it captures, resulting in better performance. Moreover, we implemented a practical application based on the proposed model to classify arrhythmias in real time.
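
As a rough illustration of a 1-D CNN for beat-level arrhythmia classification into 5 classes, the sketch below uses a much smaller stack than the paper's 11-layer design; all layer sizes and the input length are assumptions (360 samples corresponds to one second at the MIT-BIH sampling rate).

```python
import torch
import torch.nn as nn

class ECG1DCNN(nn.Module):
    """Compact 1-D CNN mapping a single-lead ECG segment to one of 5 arrhythmia classes."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # global pooling over time
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                         # x: (N, 1, samples)
        return self.classifier(self.features(x).flatten(1))

beats = torch.randn(8, 1, 360)                    # 8 one-second beats at 360 Hz
print(ECG1DCNN()(beats).shape)                    # torch.Size([8, 5])
```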

Customized AI Exercise Recommendation Service for the Balanced Physical Activity (균형적인 신체활동을 위한 맞춤형 AI 운동 추천 서비스)

  • Chang-Min Kim;Woo-Beom Lee
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.234-240 / 2022
  • This paper proposes a customized AI exercise recommendation service that balances the relative amount of exercise according to the working environment of each occupation. The WISDM database, collected using acceleration and gyro sensors, is a dataset that classifies physical activities into 18 categories. Our system recommends an adaptive exercise based on the analyzed activity type after grouping the 18 physical activities into 3 activity types: whole body, upper body, and lower body. A one-dimensional convolutional neural network is used to classify physical activities. The proposed model is composed of convolution blocks in which 1D convolution layers with various kernel sizes are connected in parallel. By applying multiple 1D convolution layers to the input pattern, these convolution blocks can effectively extract detailed local features of the kind captured by deep neural network models. Compared with a previous recurrent neural network, the proposed model achieved a remarkable 98.4% accuracy.
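
The sketch below illustrates a convolution block with parallel 1D convolutions of different kernel sizes whose outputs are concatenated, as the abstract describes; the kernel sizes, channel counts, and 6-channel (accelerometer plus gyroscope) input layout are assumptions.

```python
import torch
import torch.nn as nn

class MultiKernelConvBlock(nn.Module):
    """Convolution block with parallel 1D convolutions of different kernel sizes,
    concatenated along the channel dimension."""
    def __init__(self, in_ch, out_ch_per_branch=16, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch_per_branch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                         # x: (N, C, T) sensor window
        return torch.relu(torch.cat([b(x) for b in self.branches], dim=1))

# 6-channel input: 3-axis accelerometer + 3-axis gyroscope, 200-sample window.
window = torch.randn(4, 6, 200)
block = MultiKernelConvBlock(in_ch=6)
print(block(window).shape)                        # torch.Size([4, 48, 200])
```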