• Title/Summary/Keyword: Convolutional Neural Networks

Active pulse classification algorithm using convolutional neural networks (콘볼루션 신경회로망을 이용한 능동펄스 식별 알고리즘)

  • Kim, Geunhwan; Choi, Seung-Ryul; Yoon, Kyung-Sik; Lee, Kyun-Kyung; Lee, Donghwa
    • The Journal of the Acoustical Society of Korea / v.38 no.1 / pp.106-113 / 2019
  • In this paper, we propose an algorithm to classify received active pulses when the active sonar system is operated in non-cooperative mode. The proposed algorithm uses a CNN (Convolutional Neural Network), which shows good performance in various fields. As the CNN input, time-frequency data obtained by applying an STFT (Short Time Fourier Transform) to the received signal are used. The CNN used in this paper consists of two convolution and pooling layers. Depending on the output layer design, we built a database-based neural network and a pulse-feature-based neural network. To verify the performance of the algorithm, 3110 CW (Continuous Wave) and LFM (Linear Frequency Modulated) pulses received in the actual ocean were processed to construct training and test data. In simulation, the database-based network showed 99.9 % accuracy, and the feature-based network showed about 96 % accuracy when a 2-pixel error is allowed.
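
A minimal PyTorch sketch of the kind of network described above: two convolution + pooling blocks over a single-channel STFT magnitude spectrogram. The layer sizes, input shape, and two-class output (CW vs. LFM) are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class PulseCNN(nn.Module):
    """Two convolution + pooling blocks over an STFT spectrogram (illustrative sizes)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # first convolution + pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # second convolution + pooling layer
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),             # e.g. CW vs. LFM pulse classes
        )

    def forward(self, spectrogram):               # (batch, 1, freq_bins, time_frames)
        return self.classifier(self.features(spectrogram))

# Example: a batch of 4 spectrograms with 128 frequency bins and 64 time frames.
logits = PulseCNN()(torch.randn(4, 1, 128, 64))
```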

Performance comparison of lung sound classification using various convolutional neural networks (다양한 합성곱 신경망 방식을 이용한 폐음 분류 방식의 성능 비교)

  • Kim, Gee Yeun; Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.38 no.5 / pp.568-573 / 2019
  • In the diagnosis of pulmonary diseases, auscultation is simpler than other methods, and lung sounds can be used both to identify patients with pulmonary diseases and to predict the type of disease. Therefore, in this paper, we identify patients with pulmonary diseases and classify lung sounds according to their acoustic characteristics using various convolutional neural networks, and compare the classification performance of each method. First, lung sounds over the affected areas of the chest are collected with a single-channel lung sound recording device, and spectral features are extracted from the collected time-domain signals and fed to each network. As classification methods, we use general, parallel, and residual convolutional neural networks, and compare their lung sound classification performance through experiments.
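
As a rough illustration of one of the compared variants, the sketch below shows a "parallel" convolutional block in PyTorch: two branches with different kernel sizes whose outputs are concatenated. The channel counts and input shape are assumptions for illustration, not the authors' networks.

```python
import torch
import torch.nn as nn

class ParallelConvBlock(nn.Module):
    """Two parallel convolution branches with different kernel sizes, concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch_small = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())
        self.branch_large = nn.Sequential(nn.Conv2d(in_ch, out_ch, 5, padding=2), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.branch_small(x), self.branch_large(x)], dim=1)

# A lung-sound feature batch: (batch, 1, spectral_bins, frames)
features = ParallelConvBlock(1, 8)(torch.randn(2, 1, 40, 100))   # -> (2, 16, 40, 100)
```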

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun; Bae, Han Byeol; Lee, HeanSung; Lee, Sangyoun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.4 / pp.314-322 / 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods employ spatiotemporal graph convolutional networks (GCNs) and achieve remarkable performance, but most of them carry heavy computational complexity. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight graph convolutional network that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. Our SGCN is composed of spatial and temporal GCN modules. The spatial shuffle GCN contains pointwise group convolution and a part shuffle module which enhances local and global information exchange between correlated joints. In addition, the temporal shuffle GCN contains depthwise convolution to maintain a large receptive field. Our model achieves comparable performance with the lowest computational cost and exceeds the baseline by 0.3 % and 1.2 % on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
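
The two ingredients named above, pointwise group convolution and channel shuffling, can be sketched in a few lines of PyTorch. The group count, channel count, and tensor layout below are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Interleave channels across groups so information mixes between grouped convolutions."""
    n, c, *rest = x.shape
    return x.view(n, groups, c // groups, *rest).transpose(1, 2).reshape(n, c, *rest)

# Pointwise (1x1) group convolution: far fewer multiply-adds than a full pointwise convolution.
pw_group = nn.Conv2d(64, 64, kernel_size=1, groups=4)

x = torch.randn(8, 64, 25, 300)          # (batch, channels, joints, frames) for a skeleton sequence
y = channel_shuffle(pw_group(x), groups=4)
```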

Analysis of normalization effect for earthquake events classification (지진 이벤트 분류를 위한 정규화 기법 분석)

  • Zhang, Shou; Ku, Bonhwa; Ko, Hansoek
    • The Journal of the Acoustical Society of Korea / v.40 no.2 / pp.130-138 / 2021
  • This paper presents an effective structure obtained by applying various normalization techniques to Convolutional Neural Networks (CNNs) for seismic event classification. Normalization techniques can not only improve the learning speed of neural networks but also provide robustness to noise. In this paper, we analyze the effect of input data normalization and hidden-layer normalization on the deep learning model for seismic event classification. In addition, an effective model is derived through various experiments on the structure of the hidden layers. The model that applied input data normalization and weight normalization to the first hidden layer showed the most stable performance improvement.
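
A minimal sketch of the best-performing combination reported above: z-score normalization of the input and weight normalization applied only to the first hidden layer. The three-channel waveform input, layer sizes, and three output classes are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

# Z-score normalization of the input waveform (per sample), one common form of input normalization.
def zscore(x, eps=1e-8):
    return (x - x.mean(dim=-1, keepdim=True)) / (x.std(dim=-1, keepdim=True) + eps)

# Weight normalization applied only to the first hidden (convolutional) layer.
first_layer = weight_norm(nn.Conv1d(3, 32, kernel_size=7, padding=3))   # 3-component seismogram (assumed)
rest = nn.Sequential(nn.ReLU(),
                     nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
                     nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                     nn.Linear(64, 3))                                   # assumed number of event classes

waveform = torch.randn(4, 3, 2000)                                       # (batch, channels, samples)
logits = rest(first_layer(zscore(waveform)))
```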

Convolutional Neural Networks Using Log Mel-Spectrogram Separation for Audio Event Classification with Unknown Devices

  • Soonshin Seo; Changmin Kim; Ji-Hwan Kim
    • Journal of Web Engineering / v.21 no.2 / pp.497-522 / 2021
  • Audio event classification refers to the detection and classification of non-verbal signals, such as dog and horn sounds included in audio data, by a computer. Recently, deep neural network technology has been applied to audio event classification, exhibiting higher performance than existing models. Among these, CNN (convolutional neural network)-based training methods that receive audio in the form of a spectrogram, a two-dimensional image, have been widely used. However, audio event classification performs poorly on test data recorded by a device (unknown device) different from the one used to record the training data (known device). This is because each recording device emphasizes a different frequency range, so the spectrograms generated by known and unknown devices differ in shape. In this study, to improve the performance of the event classification system, a CNN based on a log mel-spectrogram separation technique was applied to the system, and performance on unknown devices was evaluated. The system classifies 16 types of audio signals. It receives audio clips of 0.4 s length, and the accuracy on test data from unknown devices is measured with a model trained on data from known devices. The experiment showed a relative improvement over the baseline of up to 37.33 %, from 63.63 % to 73.33 % on the Google Pixel, and from 47.42 % to 65.12 % on the LG V50.
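
The front end can be sketched as follows: compute the log mel-spectrogram of a 0.4 s clip, then split the mel bands into separate input channels. Splitting into low/high halves is only one plausible reading of "log mel-spectrogram separation" and is an assumption here, as are the STFT and mel parameters.

```python
import torch
import torchaudio

sample_rate = 16000
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_fft=400,
                                            hop_length=160, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

waveform = torch.randn(1, int(0.4 * sample_rate))         # 0.4 s mono clip
log_mel = to_db(mel(waveform))                             # (1, 64, frames)
low, high = log_mel[:, :32, :], log_mel[:, 32:, :]         # separate low / high mel bands
cnn_input = torch.stack([low, high], dim=1)                # (1, 2, 32, frames) for a 2-channel CNN
```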

A Implementation of Simple Convolution Decoder Using a Temporal Neural Networks

  • Chung, Hee-Tae; Kim, Kyung-Hun
    • Journal of information and communication convergence engineering / v.1 no.4 / pp.177-182 / 2003
  • Conventional multilayer feedforward artificial neural networks are very effective for spatial problems. To deal with problems that have time dependency, some kind of memory has to be built into the processing algorithm. In this paper we show how the newly proposed Serial Input Neuron (SIN) convolutional decoders can be derived. As an example, we derive the SIN decoder for a rate code with constraint length 3. The SIN is tested in a Gaussian channel and the results are compared to those of the optimal Viterbi decoder. A SIN approach to decoding convolutional codes is presented; no supervision is required. The decoder lends itself to appealing hardware implementations and can process codes at high speed. However, the speed of current circuits may set limits on the codes that can be used. With increasing circuit speeds in the future, the proposed technique may become a tempting choice for decoding convolutional codes with long constraint lengths.
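
For readers unfamiliar with the coding side, the sketch below shows a constraint-length-3 convolutional encoder in plain Python. The abstract does not state the code rate or generator polynomials, so the classic rate-1/2 pair (7, 5) in octal is assumed purely for illustration; the SIN decoder itself is not reproduced here.

```python
# Constraint-length-3 convolutional encoder; generators (7, 5) octal are an assumption.
G = [0b111, 0b101]          # generator polynomials, constraint length K = 3

def conv_encode(bits):
    """Encode a bit sequence; each input bit produces len(G) output bits."""
    state = 0                                    # two memory bits
    out = []
    for b in bits:
        reg = (b << 2) | state                   # current bit + previous two bits
        out.extend([bin(reg & g).count("1") & 1 for g in G])
        state = reg >> 1                         # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))                 # -> [1, 1, 1, 0, 0, 0, 0, 1]
```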

Deep Adversarial Residual Convolutional Neural Network for Image Generation and Classification

  • Haque, Md Foysal; Kang, Dae-Seong
    • Journal of Advanced Information Technology and Convergence / v.10 no.1 / pp.111-120 / 2020
  • Generative adversarial networks (GANs) have achieved impressive performance in image generation and visual classification applications. However, adversarial networks face difficulties in combining the generative model with a stable training process. To overcome this problem, we combined a deep residual network with upsampling convolutional layers to construct the generative network. Moreover, the study shows that image generation and classification performance become more prominent when residual layers are included in the generator. The proposed network empirically shows that it can generate images with higher visual accuracy at the cost of a certain amount of additional complexity, using proper regularization techniques. Experimental evaluation shows that the proposed method is superior on image generation and classification tasks.
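
A minimal sketch of a generator built from residual blocks followed by upsampling convolutions, in the spirit of the description above. The latent size, channel counts, and 32x32 output resolution are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class UpsampleResBlock(nn.Module):
    """Residual block followed by nearest-neighbour upsampling + convolution (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.res = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.up = nn.Sequential(nn.Upsample(scale_factor=2),
                                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.up(x + self.res(x))          # skip connection, then upsample

# Generator stem: project a latent vector to a 4x4 feature map, then grow it to 32x32.
latent = torch.randn(8, 128)
stem = nn.Linear(128, 64 * 4 * 4)
blocks = nn.Sequential(UpsampleResBlock(64), UpsampleResBlock(64), UpsampleResBlock(64),
                       nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
image = blocks(stem(latent).view(8, 64, 4, 4))   # -> (8, 3, 32, 32)
```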

Correcting Misclassified Image Features with Convolutional Coding

  • Mun, Ye-Ji; Kim, Nayoung; Lee, Jieun; Kang, Je-Won
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.11-14 / 2018
  • The aim of this study is to rectify misclassified image features and enhance the performance of image classification by incorporating a channel-coding technique widely used in telecommunication. Specifically, the proposed algorithm employs the error-correcting mechanism of convolutional coding combined with convolutional neural networks (CNNs), which are state-of-the-art image classifiers. We develop an encoder and a decoder to exploit the error-correcting capability of convolutional coding. In the encoder, the label values of the image data are converted to convolutional codes that are used as target outputs of the CNN, and the network is trained to minimize the Euclidean distance between the target output codes and the actual output codes. In order to correct misclassified features, the outputs of the network are decoded through the trellis structure with the Viterbi algorithm before determining the final prediction. This paper demonstrates that the proposed architecture improves the performance of the neural networks compared to the traditional one-hot encoding method.
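
A hypothetical sketch of the training objective described above: class labels are replaced by codewords and the network regresses them under a Euclidean (MSE) loss. The codeword table, tiny model, and nearest-codeword decision at the end are stand-ins; the paper derives the codewords from an actual convolutional encoder and decodes with the Viterbi algorithm.

```python
import torch
import torch.nn as nn

# Made-up 6-bit codewords for 4 classes, used as regression targets instead of one-hot vectors.
codebook = torch.tensor([[0., 0., 0., 0., 0., 0.],
                         [1., 1., 0., 1., 0., 1.],
                         [0., 1., 1., 1., 1., 0.],
                         [1., 0., 1., 0., 1., 1.]])

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 64), nn.ReLU(),
                      nn.Linear(64, codebook.shape[1]))
images, labels = torch.randn(16, 3, 32, 32), torch.randint(0, 4, (16,))

loss = nn.MSELoss()(model(images), codebook[labels])     # Euclidean-distance training objective
loss.backward()

# Simple nearest-codeword decision shown here in place of trellis/Viterbi decoding.
pred = torch.cdist(model(images).detach(), codebook).argmin(dim=1)
```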

Human Gait Recognition Based on Spatio-Temporal Deep Convolutional Neural Network for Identification

  • Zhang, Ning; Park, Jin-ho; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.927-939 / 2020
  • Gait recognition can identify people from a long distance, which is very important for improving the intelligence of monitoring systems. Among many human features, gait features have the advantages of being remotely observable, robust, and secure. Traditional gait feature extraction, influenced by the development of behavior recognition, relies on manual feature design, which cannot meet the needs of fine-grained gait recognition. The emergence of deep convolutional neural networks has freed researchers from complex feature design engineering, since usable features can be learned automatically from data, and it has been widely adopted. In this paper, we conduct feature metric learning in three-dimensional space by combining the three-dimensional convolution features of the gait sequence with a Siamese structure. This method can capture information along both the spatial and temporal dimensions of the continuous periodic gait sequence, and further improves the accuracy and practicability of gait recognition.
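
A minimal sketch of the spatio-temporal idea: a 3D-convolutional encoder applied to two gait clips in a Siamese fashion, with a distance between the embeddings as the identity metric. Layer sizes, clip dimensions, and the omission of the contrastive/triplet loss are simplifications, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Gait3DEncoder(nn.Module):
    """3D convolutions over a gait silhouette sequence; used twice in a Siamese setup."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, clip):                      # (batch, 1, frames, height, width)
        return F.normalize(self.fc(self.conv(clip)), dim=1)

encoder = Gait3DEncoder()                         # shared weights for both branches
clip_a = torch.randn(2, 1, 16, 64, 44)            # two gait sequences of 16 frames each
clip_b = torch.randn(2, 1, 16, 64, 44)
distance = (encoder(clip_a) - encoder(clip_b)).pow(2).sum(dim=1)   # same/different identity metric
```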

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / v.17 no.2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, thereby avoiding vanishing or exploding gradients as the network deepens; consequently, model optimisation becomes easier. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with a densely connected feature-reuse network, the proposed algorithm is optimised in feature multiplexing to avoid performance fluctuations caused by feature redundancy. Experimental results on the ISOGD gesture dataset and the Gesture dataset prove that the proposed algorithm affords a fast convergence speed and high accuracy.
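
A minimal sketch of the ingredients named above: small 3x3 kernels, a shallow residual block with an identity skip connection, and a softmax classifier head. The channel counts, input resolution, and 10 gesture classes are illustrative assumptions, not the author's configuration.

```python
import torch
import torch.nn as nn

class ShallowResBlock(nn.Module):
    """Shallow residual block with small 3x3 kernels and an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))         # skip connection eases gradient flow

gesture_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),    # small kernels extract local gesture details
    ShallowResBlock(32), nn.MaxPool2d(2),
    ShallowResBlock(32),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),                            # softmax over an assumed 10 gesture classes
)
probs = gesture_net(torch.randn(4, 3, 112, 112)).softmax(dim=1)
```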