• Title/Abstract/Keywords: 2D Convolutional Neural Network


Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / Vol. 19, No. 3 / pp. 148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) recognizes a speaker's emotions from speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning the model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) model with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, an examination of the distribution of emotion recognition accuracies shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
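For a concrete picture of this pipeline, a minimal PyTorch sketch of a 2D-CNN over an MFCC matrix follows; the layer widths, the 40 MFCC coefficients, and the 128-frame window are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: a small 2D-CNN over an MFCC "image" for speech
# emotion recognition. All layer sizes are assumptions.
import torch
import torch.nn as nn

class MfccCnn(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, n_classes),  # anger, happiness, calm, fear, sadness
        )

    def forward(self, x):  # x: (batch, 1, n_mfcc, n_frames)
        return self.classifier(self.features(x))

# A 40-coefficient MFCC matrix over 128 frames, treated as a 1-channel image.
logits = MfccCnn()(torch.randn(8, 1, 40, 128))
print(logits.shape)  # torch.Size([8, 5])
```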

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun;Bae, Han Byeol;Lee, HeanSung;Lee, Sangyoun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / Vol. 14, No. 4 / pp. 314-322 / 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods for skeleton-based action recognition employ spatiotemporal graph convolutional networks (GCNs) and achieve remarkable performance. However, most of them incur heavy computational complexity for robust action recognition. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight graph convolutional network that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. Our SGCN is composed of a spatial and a temporal GCN. The spatial shuffle GCN contains pointwise group convolution and a part-shuffle module that enhances local and global information between correlated joints. In addition, the temporal shuffle GCN contains depthwise convolution to maintain a large receptive field. Our model achieves comparable performance at the lowest computational cost and exceeds the baseline by 0.3% and 1.2% on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
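The two lightweight ingredients the abstract names, pointwise group convolution and a channel shuffle, can be sketched as below; the channel count, group count, and the (N, C, T, V) skeleton tensor layout are ShuffleNet-style assumptions, not the authors' exact SGCN.

```python
# Sketch of a shuffle unit: a pointwise *group* convolution followed by a
# channel shuffle that mixes information across the groups.
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    # (N, C, T, V) -> interleave channels across groups, ShuffleNet-style.
    n, c, t, v = x.shape
    return x.view(n, groups, c // groups, t, v).transpose(1, 2).reshape(n, c, t, v)

class ShuffleGcnUnit(nn.Module):
    def __init__(self, channels: int = 64, groups: int = 4):
        super().__init__()
        # Pointwise group conv: 1x1 kernel, channels split into `groups`.
        self.pw_group = nn.Conv2d(channels, channels, kernel_size=1, groups=groups)
        self.groups = groups

    def forward(self, x):  # x: (N, C, T frames, V joints)
        x = self.pw_group(x)
        return channel_shuffle(x, self.groups)

out = ShuffleGcnUnit()(torch.randn(2, 64, 30, 25))  # 25 NTU skeleton joints
print(out.shape)  # torch.Size([2, 64, 30, 25])
```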

Power-Efficient DCNN Accelerator Mapping Convolutional Operation with 1-D PE Array

  • 이정혁;한상욱;최승원
    • Journal of the Korea Society of Digital Industry and Information Management / Vol. 18, No. 2 / pp. 17-26 / 2022
  • In this paper, we propose a novel method of performing convolutional operations on a 1-D Processing Element (PE) array. The conventional method [1] of mapping the convolutional operation onto a 2-D PE array lacks flexibility and yields low utilization of PEs. By mapping a convolutional operation from a 2-D PE array to a 1-D PE array, the proposed method can increase the number and utilization of active PEs, so the throughput of the proposed Deep Convolutional Neural Network (DCNN) accelerator can be increased significantly. Furthermore, the power consumed transmitting weights between PEs can be saved. Based on the simulation results, compared to the conventional method using a DCNN accelerator with a (weight size) x (output data size) 2-D PE array, the proposed method provides approximately 4.55%, 13.7%, and 2.27% throughput gains for the convolutional layers of AlexNet, VGG16, and ResNet50, respectively. Additionally, the proposed method provides approximately 63.21%, 52.46%, and 39.23% power savings.
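As a rough software analogue of the idea (not the paper's hardware design), the sketch below models a 1-D array of multiply-accumulate PEs in which each PE holds one weight and a partial sum is passed along the array, producing one output element per "cycle".

```python
# Illustrative software model of a 1-D PE array computing one row of a
# convolution. This is an assumption-level analogue, not the paper's RTL.
import numpy as np

def conv_row_1d_pe(inputs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    k = len(weights)                      # each of the k PEs holds one weight
    out = np.zeros(len(inputs) - k + 1)
    for t in range(len(out)):             # one output element per "cycle"
        psum = 0.0
        for pe in range(k):               # partial sum passed from PE to PE
            psum += weights[pe] * inputs[t + pe]
        out[t] = psum
    return out

x = np.arange(8, dtype=float)
w = np.array([1.0, 0.0, -1.0])
print(conv_row_1d_pe(x, w))  # matches np.convolve(x, w[::-1], 'valid')
```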

Convolutional Neural Network Based Multi-feature Fusion for Non-rigid 3D Model Retrieval

  • Zeng, Hui;Liu, Yanrong;Li, Siqi;Che, JianYong;Wang, Xiuqing
    • Journal of Information Processing Systems / Vol. 14, No. 1 / pp. 176-190 / 2018
  • This paper presents a novel convolutional neural network based multi-feature fusion learning method for non-rigid 3D model retrieval, which can exploit the discriminative information of the heat kernel signature (HKS) descriptor and the wave kernel signature (WKS) descriptor. First, we compute the 2D shape distributions of the two descriptors to represent the 3D model and use them as the input to the networks. Then we construct two convolutional neural networks, one for the HKS distribution and one for the WKS distribution, and connect them with a multi-feature fusion layer. The fusion layer can not only exploit more discriminative characteristics of the two descriptors but also complement the correlated information between them. Furthermore, to further improve the descriptive ability, a cross-connected layer is built to combine low-level features with high-level features. Extensive experiments have validated the effectiveness of the designed multi-feature fusion learning method.
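A hedged sketch of the two-branch fusion idea follows: one small CNN per descriptor distribution, a fusion layer joining their high-level features, and a cross-connection carrying a low-level summary of each input forward. All layer sizes and the 32x32 distribution resolution are assumptions.

```python
# Two-branch CNN with a fusion layer and a cross-connection, as a sketch.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_classes: int = 30):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
        self.hks, self.wks = branch(), branch()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.low = nn.AdaptiveAvgPool2d(1)   # cross-connected low-level summary
        # Fusion layer sees high-level features of both branches plus one
        # low-level statistic per input (16 + 16 + 1 + 1 features).
        self.fuse = nn.Linear(34, n_classes)

    def forward(self, h, w):  # h, w: (N, 1, 32, 32) shape distributions
        hi = self.pool(self.hks(h)).flatten(1)
        wi = self.pool(self.wks(w)).flatten(1)
        lo = torch.cat([self.low(h).flatten(1), self.low(w).flatten(1)], dim=1)
        return self.fuse(torch.cat([hi, wi, lo], dim=1))

print(FusionNet()(torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)).shape)
```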

Deep Learning based Frame Synchronization Using Convolutional Neural Network

  • 이의수;정의림
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 24, No. 4 / pp. 501-507 / 2020
  • In this paper, we propose a frame synchronization scheme based on a convolutional neural network (CNN). The conventional frame synchronization scheme finds the point where the received signal matches the preamble through correlation between the preamble and the received signal. The proposed scheme reshapes the correlator output, a one-dimensional vector, into a two-dimensional matrix; this matrix is fed into a convolutional neural network, which estimates the frame arrival point. Specifically, training data are generated from received signals arriving at random times in an additive white Gaussian noise (AWGN) environment, and the convolutional neural network is trained on these data. Computer simulations compare the frame synchronization error probabilities of the conventional and proposed schemes over various signal-to-noise ratios (SNRs). The simulation results show that the proposed CNN-based frame synchronization scheme outperforms the conventional scheme by about 2 dB.
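A minimal sketch of the described pipeline, with placeholder dimensions and a random preamble: correlate the received signal with the preamble, reshape the 1-D correlator output into a 2-D matrix, and let a small CNN produce one logit per candidate arrival point.

```python
# Correlator output reshaped to 2-D and classified by a CNN, as a sketch.
import numpy as np
import torch
import torch.nn as nn

preamble = np.random.choice([-1.0, 1.0], size=32)       # assumed preamble
rx = np.concatenate([np.random.randn(100), preamble, np.random.randn(124)])
rx += 0.5 * np.random.randn(len(rx))                     # AWGN channel

corr = np.correlate(rx, preamble, mode="valid")          # 1-D correlator output
grid = torch.tensor(corr, dtype=torch.float32).reshape(1, 1, 15, 15)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Flatten(),
    nn.Linear(8 * 15 * 15, 225),  # one logit per candidate arrival point
)
print(int(cnn(grid).argmax()))    # untrained, so this is only a shape check
```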

Visual Model of Pattern Design Based on Deep Convolutional Neural Network

  • Jingjing Ye;Jun Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 2 / pp. 311-326 / 2024
  • The rapid development of neural network technology is pushing big-data-driven neural network models to overcome the texture effects of complex objects. Owing to the limitations of complex scenes, it is necessary to establish custom template matching and apply it to many fields of computer vision research. Dependence on high-quality, small labeled-sample databases is weak, and machine learning systems that rely on deep feature connections to infer texture effects perform relatively poorly. The neural-network-based style transfer algorithm collects and preserves pattern data and extracts and modernizes pattern features; through this algorithm model, the texture and color of patterns can be more easily rendered and displayed digitally. In this paper, following the texture-effect reasoning of custom template matching, the 3D visualization of the target is transformed into a 3D model. The similarity between the scene to be inferred and the user-defined template is computed from a user-defined template of multi-dimensional external feature labels. A convolutional neural network is adopted to optimize the external area of the object, improving the sampling quality and computational performance of the sample pyramid structure. The results indicate that the proposed algorithm can accurately capture salient targets, suppress more noise, and improve visualization results. The proposed deep convolutional neural network optimization algorithm offers good speed, data accuracy, and robustness. It can adapt to a wider range of task scenes, display the redundant vision-related information of image conversion, and further improve the computational efficiency and accuracy of convolutional networks, which gives it high significance for research on image information conversion.
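The abstract leans on neural style transfer; as a point of reference only, the snippet below shows the classic Gram-matrix texture statistic that style transfer methods match, not the authors' full pipeline.

```python
# Gram matrix of CNN feature maps: the texture/color statistic matched by
# classic neural style transfer.
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # features: (N, C, H, W) activations from some CNN layer.
    n, c, h, w = features.shape
    f = features.view(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # (N, C, C) texture statistic

style = gram_matrix(torch.randn(1, 64, 32, 32))
print(style.shape)  # torch.Size([1, 64, 64])
```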

2D Game Image Color Synthesis System Using Convolutional Neural Network

  • 홍승진;강신진;조성현
    • Journal of Korea Game Society / Vol. 18, No. 2 / pp. 89-98 / 2018
  • Recent neural network techniques have moved beyond traditional classification and clustering problems and are showing good performance in content generation, such as image generation. In this study, we propose an image generation technique using a neural network as a next-generation content creation method. The proposed neural network model takes two images as input and combines them into a new image, taking the color from one image and the shape from the other. The model is built as a convolutional neural network and consists of two encoders, which extract the color and the shape from each image respectively, and one decoder, which receives the outputs of both encoders and generates a single combined image. The results of this study can be applied, at low cost, to generating and retouching various 2D images in the game development process.
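A hedged sketch of the described architecture follows: a color encoder and a shape encoder feeding a single decoder that renders the combined image. The channel counts and the 64x64 image size are assumptions.

```python
# Dual-encoder, single-decoder CNN: color from one input, shape from the other.
import torch
import torch.nn as nn

class ColorShapeNet(nn.Module):
    def __init__(self):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
        self.color_enc, self.shape_enc = encoder(), encoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, color_img, shape_img):
        z = torch.cat([self.color_enc(color_img), self.shape_enc(shape_img)], dim=1)
        return self.decoder(z)  # combined image

out = ColorShapeNet()(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```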

Image Classification using Deep Learning Algorithm and 2D Lidar Sensor

  • 이준호;장혁준
    • Journal of IKEEE / Vol. 23, No. 4 / pp. 1302-1308 / 2019
  • This paper presents a method for classifying images using a convolutional neural network (CNN) and position data acquired from a 2D Lidar sensor. Lidar sensors have been widely used in unmanned systems owing to their advantages in data accuracy, robustness against shape distortion, and immunity to light variation. A CNN algorithm, which consists of one or more convolutional and pooling layers, has shown satisfactory performance in image classification. In this paper, CNN architectures with different training methods, Gradient Descent (GD) and Levenberg-Marquardt (LM), are implemented. The LM method comes in two variants depending on how frequently the Hessian matrix, one of the factors used to update the training parameters, is approximated. Simulation results show that the LM algorithms classify the image data better than the GD algorithm, and the LM algorithm with the more frequent Hessian approximation shows a smaller error than the other LM variant.
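To make the comparison concrete, the toy example below contrasts the two update rules on a small least-squares problem (not the authors' Lidar CNN): a plain gradient descent step versus a Levenberg-Marquardt step built from the Jacobian, where the damped J^T J term plays the role of the approximated Hessian.

```python
# Gradient descent vs. one Levenberg-Marquardt step on a toy line fit.
import numpy as np

def residuals(p, x, y):            # fit y ~ p0 * x + p1
    return p[0] * x + p[1] - y

def jacobian(p, x):
    return np.stack([x, np.ones_like(x)], axis=1)   # d r / d p

x = np.linspace(0, 1, 20)
y = 2.0 * x + 0.5 + 0.01 * np.random.randn(20)
p0 = np.zeros(2)

r, J = residuals(p0, x, y), jacobian(p0, x)
p_gd = p0 - 0.1 * (J.T @ r)                          # gradient descent step
lam = 1e-2                                           # LM damping factor
p_lm = p0 - np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
print(p_gd, p_lm)  # the LM step lands much closer to (2.0, 0.5)
```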

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System

  • 조형기;조해민;이성원;김은태
    • The Journal of Korea Robotics Society / Vol. 14, No. 2 / pp. 87-93 / 2019
  • This paper presents a 6-DOF relocalization method using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of the sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After the training step, the relocalization system outputs the sensor pose corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the output of the CNN is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
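A minimal sketch of the fused-input pose regressor follows: RGB and a range channel packed into one 4-channel tensor, with a CNN regressing translation and a unit-quaternion orientation. The 4-channel packing, layer sizes, and quaternion parameterization are assumptions; in the paper's setting, such a regressed pose would then serve as the measurement for the particle filter.

```python
# CNN pose regression from a fused RGB + range input, as a sketch.
import torch
import torch.nn as nn

class PoseNetLite(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),  # RGB + range
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.xyz = nn.Linear(64, 3)    # translation
        self.quat = nn.Linear(64, 4)   # orientation as a quaternion

    def forward(self, x):
        f = self.backbone(x)
        q = self.quat(f)
        return self.xyz(f), q / q.norm(dim=1, keepdim=True)  # unit quaternion

rgb_range = torch.randn(2, 4, 120, 160)
t, q = PoseNetLite()(rgb_range)
print(t.shape, q.shape)  # torch.Size([2, 3]) torch.Size([2, 4])
```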

Development of Combined Architecture of Multiple Deep Convolutional Neural Networks for Improving Video Face Identification

  • 김경태;최재영
    • Journal of Korea Multimedia Society / Vol. 22, No. 6 / pp. 655-664 / 2019
  • In this paper, we propose a novel way of combining multiple deep convolutional neural network (DCNN) architectures for accurate video face identification by adopting a serial combination of 3D and 2D DCNNs. The proposed method first divides an input video sequence (to be recognized) into a number of sub-video sequences. The resulting sub-video sequences are used as input to a 3D DCNN to obtain class-confidence scores for the input video sequence, considering both the temporal and the spatial facial feature characteristics of the sequence. The class-confidence scores obtained from the corresponding sub-video sequences are combined into our proposed class-confidence matrix. The resulting class-confidence matrix is then used as input for training a 2D DCNN that is serially linked to the 3D DCNN. Finally, the fine-tuned, serially combined DCNN framework is applied to recognize the identity present in a given test video sequence. To verify the effectiveness of our proposed method, extensive comparative experiments were conducted on the COX face database with its standard face identification protocols. Experimental results show that our method achieves an identification rate that is better than or comparable to other state-of-the-art video face recognition methods.
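A hedged sketch of the serial 3D-to-2D combination follows: a 3D CNN scores each sub-video clip, the per-clip score vectors are stacked into a class-confidence matrix, and a 2D CNN reads that matrix. The clip count, class count, and layer sizes are assumptions.

```python
# Serial 3D -> 2D DCNN combination over a class-confidence matrix, as a sketch.
import torch
import torch.nn as nn

n_classes, n_subseq = 10, 8

cnn3d = nn.Sequential(
    nn.Conv3d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, n_classes),
)
cnn2d = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, n_classes),
)

# 8 sub-video clips of 16 frames each: (clips, C, T, H, W)
clips = torch.randn(n_subseq, 3, 16, 32, 32)
conf_matrix = cnn3d(clips)                      # (8, n_classes) confidences
logits = cnn2d(conf_matrix.view(1, 1, n_subseq, n_classes))
print(logits.shape)                             # torch.Size([1, 10])
```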