Title/Summary/Keyword: Spatial convolution


A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems, v.17 no.3, pp.556-570, 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolutional neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolutional neural networks are used to extract spatiotemporal expression features: a spatial convolutional neural network extracts the spatial features of each static expression image, and a temporal convolutional neural network extracts dynamic features from the optical flow of multiple expression images. The spatiotemporal features learned by the two networks are then fused by multiplication. Finally, the fused features are fed into a support vector machine for facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, outperforming the other comparison methods.
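
A minimal sketch of the fusion-and-classification stage this abstract describes, assuming each of the two CNNs yields one fixed-length feature vector per clip; the 512-d feature size, the 6 expression classes, and the RBF kernel are illustrative assumptions, not details from the paper:

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in features: in the paper these come from a spatial CNN over
# static frames and a temporal CNN over optical flow. Values here are
# random placeholders; 512 dimensions and 6 classes are assumptions.
n_clips, feat_dim = 200, 512
spatial_feats = np.random.randn(n_clips, feat_dim)
temporal_feats = np.random.randn(n_clips, feat_dim)
labels = np.random.randint(0, 6, size=n_clips)

# Multiplicative fusion: element-wise product of the two feature vectors.
fused = spatial_feats * temporal_feats

# The fused spatiotemporal features are classified by an SVM.
clf = SVC(kernel="rbf")
clf.fit(fused, labels)
predictions = clf.predict(fused)
```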

Decomposed "Spatial and Temporal" Convolution for Human Action Recognition in Videos

  • Sediqi, Khwaja Monib;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference, 2019.05a, pp.455-457, 2019
  • In this paper, we study the effect of decomposed spatiotemporal convolutions for action recognition in videos. Our motivation stems from the empirical observation that spatial convolution applied to individual frames of a video already provides good action recognition performance. We empirically evaluate the accuracy of factorized convolution on individual video frames for action classification. We take 3D ResNet-18 as the baseline model for our experiments and factorize its 3D convolutions into 2D (spatial) and 1D (temporal) convolutions. We train the model from scratch on the Kinetics video dataset, then fine-tune it on the UCF-101 dataset and evaluate its performance. Our results show accuracy comparable to that of state-of-the-art algorithms on the Kinetics and UCF-101 datasets.
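
The factorization the authors apply to 3D ResNet-18 can be sketched as follows; the channel sizes, the intermediate width, and the ReLU between the two convolutions are illustrative choices in the spirit of (2+1)D decompositions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SpatioTemporalConv(nn.Module):
    """A 3D convolution factorized into a 2D spatial convolution
    (1 x k x k) followed by a 1D temporal convolution (k x 1 x 1).
    Layer sizes are illustrative, not the paper's configuration."""
    def __init__(self, in_ch, out_ch, k=3, mid_ch=None):
        super().__init__()
        mid_ch = mid_ch or out_ch
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, k, k),
                                 padding=(0, k // 2, k // 2))
        self.relu = nn.ReLU(inplace=True)
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(k, 1, 1),
                                  padding=(k // 2, 0, 0))

    def forward(self, x):          # x: (batch, channels, time, H, W)
        return self.temporal(self.relu(self.spatial(x)))

# One clip: batch of 2, 3 input channels, 16 frames of 112x112 pixels.
clip = torch.randn(2, 3, 16, 112, 112)
out = SpatioTemporalConv(3, 64)(clip)   # -> (2, 64, 16, 112, 112)
```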

Calculation of the Mutual Radiation Impedance by the Spatial Convolution in the Cylindrical Structure (원통 구조에서 공간 콘볼루션을 이용한 상호 방사 임피던스 계산)

  • Bok, Tae-Hoon;Li, Ying;Paeng, Dong-Guk;Lee, Jong-Kil;Shin, Ku-Kyun;Joh, Chee-Yong
    • The Journal of the Acoustical Society of Korea, v.29 no.1, pp.1-9, 2010
  • The mutual radiation impedance was calculated using spatial convolution in a cylindrical structure. For the cylindrical array structure, Cartesian coordinates were transformed into cylindrical coordinates so that the spatial convolution could be applied. This method cannot account for the cylindrical baffle, but it greatly reduces computation time. The error introduced by neglecting the cylindrical baffle was analyzed by comparing the spatial convolution method with the quadruple integration method in the cylindrical structure, and the mutual radiation resistance in the cylindrical structure was compared with that in a planar baffle. Based on these two comparisons, we quantify the error of the proposed method and confirm that the spatial convolution method can be applied to compute the mutual radiation impedance in a cylindrical structure under certain conditions.
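
The general pattern of trading a surface integral for a spatial convolution can be illustrated with a Rayleigh-integral-style computation; this sketch assumes a planar grid, a free-space Green's function, and made-up element geometry, and does not reproduce the paper's cylindrical coordinate transform:

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative only: the wavenumber, grid, and element positions are
# invented, and this planar setup stands in for the paper's
# cylindrical formulation.
k = 2 * np.pi / 0.01            # wavenumber for a 10 mm wavelength
dx = 0.001                      # 1 mm grid spacing
x = np.arange(-0.05, 0.05, dx)  # 100-point grid per axis
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)
r[r == 0] = dx / 2              # avoid the singularity at r = 0

# Free-space Green's function sampled on the grid.
green = np.exp(1j * k * r) / (2 * np.pi * r)

# Uniform velocity over a small square piston element.
piston = np.zeros_like(X)
piston[45:55, 45:55] = 1.0

# Radiated pressure field as a 2D spatial convolution (FFT-based),
# replacing the inner double integral.
pressure = fftconvolve(piston, green, mode="same") * dx**2

# Mutual term: integrate that pressure over a second, offset element.
mutual = np.sum(pressure[45:55, 60:70]) * dx**2
```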

Crime amount prediction based on 2D convolution and long short-term memory neural network

  • Dong, Qifen;Ye, Ruihui;Li, Guojun
    • ETRI Journal, v.44 no.2, pp.208-219, 2022
  • Crime amount prediction is crucial for optimizing the arrangement of police patrols across the regions of a city. First, we analyzed the spatiotemporal correlations in the crime data and the relationships between crime and related auxiliary data, including points of interest (POI), public service complaints, and demographics. We then proposed a crime amount prediction model based on 2D convolution and a long short-term memory neural network (2DCONV-LSTM). The proposed model captures the spatiotemporal correlations in the crime data, while the crime-related auxiliary data are used to enhance the regional spatial features. Extensive experiments on real-world datasets show that capturing both temporal and spatial correlations in the crime data and using auxiliary data to extract regional spatial features improve prediction performance. In the best case, the proposed model reduces the prediction error by at least 17.8% and 8.2% compared with support vector regression (SVR) and LSTM, respectively. Moreover, excessive auxiliary data reduce model performance because of the redundant information they introduce.
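
A sketch of the 2D-convolution-plus-LSTM pattern named in the abstract, assuming crime counts rasterized onto a 16x16 city grid over 12 time steps; all layer sizes are illustrative and the auxiliary-data branch is omitted:

```python
import torch
import torch.nn as nn

class Conv2DLSTM(nn.Module):
    """Per-time-step 2D convolutions extract regional spatial
    features; an LSTM models their temporal evolution. All sizes
    are illustrative assumptions, not the paper's architecture."""
    def __init__(self, in_ch=1, hidden=64, grid=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32 * grid * grid, hidden, batch_first=True)
        self.head = nn.Linear(hidden, grid * grid)  # next-step crime map

    def forward(self, x):              # x: (batch, time, ch, H, W)
        b, t = x.shape[:2]
        f = self.conv(x.flatten(0, 1)) # convolve each time step
        f = f.view(b, t, -1)
        out, _ = self.lstm(f)          # temporal correlations
        return self.head(out[:, -1])   # predict from the last state

# 8 samples, 12 time steps, one channel on a 16x16 city grid.
y = Conv2DLSTM()(torch.randn(8, 12, 1, 16, 16))   # -> (8, 256)
```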

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun;Bae, Han Byeol;Lee, HeanSung;Lee, Sangyoun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.14 no.4, pp.314-322, 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods employ spatiotemporal graph convolutional networks (GCNs) and achieve remarkable performance, but most of them carry a heavy computational cost. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight GCN that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. The SGCN is composed of a spatial and a temporal GCN. The spatial shuffle GCN contains pointwise group convolution and a part shuffle module that enhances local and global information exchange between correlated joints, while the temporal shuffle GCN uses depthwise convolution to maintain a large receptive field. Our model achieves comparable performance at the lowest computational cost and exceeds the baseline by 0.3% and 1.2% on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
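
The two cost-saving ingredients named here, pointwise group convolution and channel shuffling, can be sketched on a skeleton tensor as follows; the 64 channels, 4 groups, and 25 joints are illustrative assumptions:

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Interleave channels so information mixes across groups
    (the ShuffleNet-style operation such lightweight GCNs rely on)."""
    b, c, t, v = x.shape                     # batch, channels, time, joints
    x = x.view(b, groups, c // groups, t, v)
    return x.transpose(1, 2).reshape(b, c, t, v)

# Pointwise *group* convolution: cheaper than full pointwise
# convolution because each group only mixes its own channels.
pw_group = nn.Conv2d(64, 64, kernel_size=1, groups=4)

x = torch.randn(8, 64, 32, 25)   # e.g., 32 frames, 25 skeleton joints
x = pw_group(x)
x = channel_shuffle(x, groups=4) # restore cross-group information flow
```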

Crack detection based on ResNet with spatial attention

  • Yang, Qiaoning;Jiang, Si;Chen, Juan;Lin, Weiguo
    • Computers and Concrete, v.26 no.5, pp.411-420, 2020
  • Deep convolutional neural networks (DCNNs) have been widely used in the health monitoring of civil infrastructure, and using them to improve crack detection performance has attracted many researchers' attention. In this paper, a lightweight spatial attention network module is proposed to strengthen the representation capability of ResNet and improve crack detection performance. It uses an attention mechanism to emphasize the objects of interest within the global receptive field of the ResNet convolution layers. Global average spatial information over all channels is used to construct an attention scalar, which is combined with an adaptively weighted sigmoid function to gate the output of each channel's feature maps, refining the salient objects in the feature maps. The proposed spatial attention module is stacked into ResNet50 to detect cracks. Experimental results show that the module yields a significant performance improvement in crack detection.
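
A lightweight spatial attention block of the general kind described can be sketched as follows; the channel-averaged map, the 7x7 convolution, and the plain sigmoid gate are common choices standing in for the paper's attention scalar and adaptively weighted sigmoid:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial statistics pooled across channels pass through a
    sigmoid gate that reweights the feature maps. The paper's exact
    adaptive weighting may differ; this is a generic stand-in."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):                        # x: (b, c, h, w)
        avg = x.mean(dim=1, keepdim=True)        # average over channels
        attn = self.gate(self.conv(avg))         # (b, 1, h, w) in [0, 1]
        return x * attn                          # refine salient regions

# Drop-in after a ResNet stage, e.g. on a (2, 256, 56, 56) feature map.
refined = SpatialAttention()(torch.randn(2, 256, 56, 56))
```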

Traffic Flow Prediction Model Based on Spatio-Temporal Dilated Graph Convolution

  • Sun, Xiufang;Li, Jianbo;Lv, Zhiqiang;Dong, Chuanhao
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.9, pp.3598-3614, 2020
  • With the growth of motor vehicle use and tourism demand, traffic problems such as congestion, safety accidents, and insufficient allocation of traffic resources have gradually appeared. Facing these challenges, a Spatio-Temporal Dilated Graph Convolutional Network (STDGCN) is proposed to extract the highly nonlinear and complex characteristics needed to accurately predict future traffic flow. In particular, we model the traffic network as an undirected graph, on which graph convolutions are built to extract spatial features, and we deploy dilated convolution inside the graph convolution to capture multi-scale contextual information. The proposed STDGCN thus extracts the spatial and temporal characteristics of traffic flow data as well as road occupancy features. To assess the proposed model, we compare it with four rival models using four evaluation indicators. The experimental results show STDGCN's effectiveness: prediction accuracy improves by 17% over traditional prediction methods on several real-world traffic datasets.
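
One way to sketch a spatio-temporal block that places dilated convolution alongside a graph convolution, as the abstract describes; the adjacency normalization, layer sizes, and dilation rate are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DilatedGraphConvBlock(nn.Module):
    """A graph convolution over the road network mixes spatial
    information; a dilated 1D convolution over time captures
    multi-scale temporal context. Sizes are illustrative."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.theta = nn.Linear(in_ch, out_ch)
        self.tconv = nn.Conv2d(out_ch, out_ch, kernel_size=(1, 3),
                               dilation=(1, dilation),
                               padding=(0, dilation))
        self.relu = nn.ReLU()

    def forward(self, x, a_hat):
        # x: (batch, nodes, time, features); a_hat: normalized adjacency
        x = torch.einsum("nm,bmtf->bntf", a_hat, x)  # spatial mixing
        x = self.theta(x)                            # feature transform
        x = x.permute(0, 3, 1, 2)                    # -> (b, ch, nodes, t)
        x = self.relu(self.tconv(x))                 # dilated temporal conv
        return x.permute(0, 2, 3, 1)                 # back to (b, n, t, f)

n = 30                                    # 30 sensors in the road graph
a_hat = torch.eye(n)                      # placeholder adjacency matrix
y = DilatedGraphConvBlock(2, 16)(torch.randn(4, n, 12, 2), a_hat)
```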

A Study on the Optimization of Convolution Operation Speed through FFT Algorithm (FFT 적용을 통한 Convolution 연산속도 향상에 관한 연구)

  • Lim, Su-Chang;Kim, Jong-Chan
    • Journal of Korea Multimedia Society, v.24 no.11, pp.1552-1559, 2021
  • Convolutional neural networks (CNNs) show notable performance in image processing and are used as representative core models. CNNs extract and learn features from large training datasets and generally consist of stacked convolution layers followed by fully connected layers. The core of a CNN is the convolution layer: the kernel size used for feature extraction and the number of kernels, which determines the depth of the feature maps, set the number of learnable weight parameters, and these parameters are the main cause of the computational complexity and memory usage of the entire network. The most computationally expensive components in CNNs are the fully connected and spatial convolution computations. In this paper, we propose a Fourier convolutional neural network that performs the convolution layer's operation in the Fourier domain, reducing the amount of computation by applying the fast Fourier transform (FFT). On the MNIST dataset, accuracy was similar to that of an ordinary CNN, while the operation speed was 7.2% faster; in experiments using 1024x1024 images and kernels of various sizes, the speed was 19% faster on average.
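
The identity such a Fourier convolution layer exploits is the convolution theorem: a spatial (circular) convolution equals a pointwise product of spectra. A self-contained check, with arbitrary array sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
kernel = rng.standard_normal((5, 5))

# FFT route: zero-pad the kernel to the image size, multiply the
# spectra pointwise, and transform back. Costs O(N^2 log N) versus
# O(N^2 K^2) for direct spatial summation.
size = image.shape
spectrum = np.fft.fft2(image) * np.fft.fft2(kernel, s=size)
fft_conv = np.real(np.fft.ifft2(spectrum))

# Reference: circular convolution computed directly in the spatial
# domain by shifting and accumulating.
direct = np.zeros(size)
for i in range(5):
    for j in range(5):
        direct += kernel[i, j] * np.roll(image, (i, j), axis=(0, 1))

assert np.allclose(fft_conv, direct)   # identical up to rounding
```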

A Pansharpening Algorithm of KOMPSAT-3A Satellite Imagery by Using Dilated Residual Convolutional Neural Network (팽창된 잔차 합성곱신경망을 이용한 KOMPSAT-3A 위성영상의 융합 기법)

  • Choi, Hoseong;Seo, Doochun;Choi, Jaewan
    • Korean Journal of Remote Sensing, v.36 no.5_2, pp.961-973, 2020
  • In this manuscript, a new pansharpening model based on a convolutional neural network (CNN) was developed. Dilated convolution, one of the representative convolution technologies in CNNs, was applied to make the model deeper and more expressive and thus improve the performance of the deep learning architecture. A residual structure is built on the dilated convolutions to make the training process more efficient, and the loss function combines the traditional L1 norm with a spatial correlation coefficient term. We experimented with the Dilated Residual Network (DRNet) in two configurations: one using only the panchromatic (PAN) image and one using both the PAN and multispectral (MS) images. In experiments on KOMPSAT-3A imagery, the DRNet using both PAN and MS images tended to overfit the spectral characteristics, whereas the DRNet using only the PAN image improved spatial resolution over existing CNN-based models.
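
A dilated residual block of the kind DRNet stacks can be sketched as follows; the channel count, dilation schedule, and PAN-only stem are illustrative assumptions, and the paper's L1-plus-spatial-correlation loss is not shown:

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Dilated convolutions widen the receptive field without
    shrinking the feature map; the skip connection eases training.
    Channel counts and dilation rates are illustrative."""
    def __init__(self, ch, dilation):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # residual connection

# PAN-only configuration: one high-resolution panchromatic band in;
# spatial sizes here are arbitrary.
pan = torch.randn(1, 1, 128, 128)
stem = nn.Conv2d(1, 32, 3, padding=1)
blocks = nn.Sequential(*[DilatedResidualBlock(32, d) for d in (1, 2, 4)])
out = blocks(stem(pan))                       # -> (1, 32, 128, 128)
```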

A Reconsideration of the Causality Requirement in Proving the z-Transform of a Discrete Convolution Sum (이산 Convolution 적산의 z변환의 증명을 위한 인과성의 필요에 대한 재고)

  • Chung, Tae-Sang;Lee, Jae Seok
    • The Transactions of the Korean Institute of Electrical Engineers D, v.52 no.1, pp.51-54, 2003
  • The z-transform is a basic mathematical tool for analyzing and designing digital signal processing systems with discrete input and output signals. In many cases, the output signal takes the form of a discrete convolution sum of an input function and a designed digital processing algorithm function. It is well known that the z-transform of this convolution sum is the product of the z-transforms of the input function and the digital processing function; however, in almost all available references, the proof requires the digital processing function to be causal. Not all convolution sums involve causal functions: many digital signal processing systems, such as image processing systems, operate on spatial rather than temporal information, for which a causality requirement is meaningless, so the causality-based convolution theorem cannot be applied to them without difficulty. This paper proves the z-transform theorem for the discrete convolution sum without the causality requirement, making the theorem usable in analysis and design for all such cases.
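
The bilateral derivation the paper argues for fits in a few lines; the only requirement is absolute convergence on the intersection of the two regions of convergence, which justifies reordering the double sum, with no causality assumption:

```latex
% Bilateral z-transform of a convolution sum, no causality assumed:
% both sums run over all integers, not just n >= 0.
\begin{align*}
\mathcal{Z}\{x * h\}(z)
  &= \sum_{n=-\infty}^{\infty}
     \Big( \sum_{k=-\infty}^{\infty} x[k]\, h[n-k] \Big) z^{-n} \\
  &= \sum_{k=-\infty}^{\infty} x[k]\, z^{-k}
     \sum_{m=-\infty}^{\infty} h[m]\, z^{-m}
     \qquad (m = n - k) \\
  &= X(z)\, H(z),
\end{align*}
% valid for z in the intersection of the two regions of convergence,
% where the double sum converges absolutely and may be reordered.
```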