• Title/Summary/Keyword: convolution layer


Image Translation using Pseudo-Morphological Operator (의사 형태학적 연산을 사용한 이미지 변환)

  • Jo, Janghun; Lee, HoYeon; Shin, MyeongWoo; Kim, Kyungsup
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.799-802 / 2017
  • We attempt to combine the concepts of the Morphological Operator (MO) and Convolutional Neural Networks (CNN) to improve image-to-image translation. To do this, we propose an operation that approximates morphological operations. We also propose S-Convolution, which extends this operation to use multiple filters like a CNN. The experimental results show that multiple S-convolution layers with small filters can learn an MO with a large filter. To validate the effectiveness of the proposed layer in image-to-image translation, we experiment with a GAN to which S-convolution is applied. The results show that a GAN with S-convolution can achieve results distinct from those of a GAN with standard convolution.
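
As a rough illustration of the idea (the paper's exact S-convolution formula is not given in the abstract), the sketch below approximates grayscale dilation max(x + w) with a smooth log-sum-exp and, like a CNN layer, learns several structuring elements at once; the log-sum-exp smoothing, the `beta` sharpness parameter, and all sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PseudoDilation2d(nn.Module):
    """Smooth approximation of grayscale dilation with learnable structuring elements."""
    def __init__(self, in_channels, out_channels, kernel_size, beta=10.0):
        super().__init__()
        # One learnable "structuring element" per (output, input) channel pair.
        self.weight = nn.Parameter(
            torch.zeros(out_channels, in_channels, kernel_size, kernel_size))
        self.kernel_size = kernel_size
        self.beta = beta  # larger beta -> closer to a hard max (true dilation)

    def forward(self, x):
        k = self.kernel_size
        patches = F.unfold(x, k, padding=k // 2)              # (N, C*k*k, L) sliding windows
        n, _, l = patches.shape
        patches = patches.view(n, 1, -1, l)                   # (N, 1, C*k*k, L)
        w = self.weight.view(1, self.weight.size(0), -1, 1)   # (1, O, C*k*k, 1)
        # Soft maximum over each window of (patch + structuring element) ~ dilation.
        out = torch.logsumexp(self.beta * (patches + w), dim=2) / self.beta
        side = int(l ** 0.5)                                  # square input assumed for brevity
        return out.view(n, -1, side, side)

x = torch.rand(1, 1, 32, 32)
layer = PseudoDilation2d(1, 8, kernel_size=3)
print(layer(x).shape)  # torch.Size([1, 8, 32, 32])
```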

CNN Applied Modified Residual Block Structure (변형된 잔차블록을 적용한 CNN)

  • Kwak, Nae-Joung; Shin, Hyeon-Jun; Yang, Jong-Seop; Song, Teuk-Seob
    • Journal of Korea Multimedia Society / v.23 no.7 / pp.803-811 / 2020
  • This paper proposes an image classification algorithm that modifies the number of convolution layers in the residual block of ResNet, a representative CNN method. The proposed method modifies the 34-layer and 50-layer ResNet structures. First, we analyzed the performance with fewer and with more convolution layers for the structure consisting of only a shortcut and 3×3 convolution layers, for both 34 and 50 layers. The performance was then analyzed with fewer and more convolution layers for the bottleneck structure of the 50-layer model. Applying these results, the best-performing residual block was used to construct a 34-layer simple-structure and a 50-layer bottleneck image classification model. To evaluate the performance of the proposed models, the results were analyzed on the CIFAR-10 dataset. The proposed 34-layer simple structure and 50-layer bottleneck showed improved performance over the ResNet-110 and DenseNet-40 models.
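
For reference, the two residual block shapes compared in the abstract can be sketched in PyTorch as below: a simple block built from a shortcut plus 3×3 convolutions, and a bottleneck block (1×1, 3×3, 1×1). The number of convolutions per block and the channel widths are placeholders, since the abstract does not state the final configuration.

```python
import torch
import torch.nn as nn

class SimpleResidualBlock(nn.Module):
    """Shortcut plus a stack of 3x3 convolutions (the 'simple' structure)."""
    def __init__(self, channels, num_convs=2):
        super().__init__()
        layers = []
        for _ in range(num_convs):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers[:-1])  # last ReLU is applied after the addition
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)       # shortcut connection

class BottleneckBlock(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand, plus shortcut (the bottleneck structure)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1, bias=False), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1, bias=False), nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)

x = torch.rand(2, 64, 32, 32)
print(SimpleResidualBlock(64)(x).shape, BottleneckBlock(64)(x).shape)
```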

Human Action Recognition Based on 3D Convolutional Neural Network from Hybrid Feature

  • Wu, Tingting; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.22 no.12 / pp.1457-1465 / 2019
  • 3D convolution stacks multiple consecutive frames into a cube and then applies a 3D convolution kernel within that cube. In this structure, each feature map of the convolutional layer is connected to multiple adjacent sequential frames in the previous layer, thus capturing motion information. However, because of changes in pedestrian posture, motion, and position, convolution at the same location is inappropriate, and when the 3D convolution kernel is convolved in the time domain, only the temporal features of three consecutive frames can be extracted, which is not enough to capture the action information. This paper proposes an action recognition method based on feature fusion for a 3D convolutional neural network. A pre-computed optical flow image is fed to a VGG16-based network to learn temporal features, which are then combined with the features extracted by the 3D convolutional neural network. Finally, behavior classification is performed by an SVM classifier.
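
A minimal sketch of the two feature streams described above: a 3D convolution over a cube of stacked consecutive frames (spatio-temporal features) and a small 2D network standing in for the VGG16-based optical-flow branch (temporal features); the fused vector would then go to an SVM. All layer sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

frames = torch.rand(1, 3, 16, 112, 112)      # (N, C, T=16 stacked frames, H, W)
flow = torch.rand(1, 2, 112, 112)            # optical-flow image (dx, dy channels)

conv3d_stream = nn.Sequential(
    nn.Conv3d(3, 32, kernel_size=(3, 3, 3), padding=1),  # kernel spans 3 consecutive frames
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten())

flow_stream = nn.Sequential(                 # stand-in for the VGG16-based branch
    nn.Conv2d(2, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())

# Concatenate the spatio-temporal and temporal feature vectors before classification.
fused = torch.cat([conv3d_stream(frames), flow_stream(flow)], dim=1)
print(fused.shape)  # torch.Size([1, 64]) -> would be fed to an SVM classifier
```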

Deep Learning Based Gray Image Generation from 3D LiDAR Reflection Intensity (딥러닝 기반 3차원 라이다의 반사율 세기 신호를 이용한 흑백 영상 생성 기법)

  • Kim, Hyun-Koo; Yoo, Kook-Yeol; Park, Ju H.; Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications / v.14 no.1 / pp.1-9 / 2019
  • In this paper, we propose a method for generating a 2D gray image from 3D LiDAR reflection intensity. The proposed method uses a Fully Convolutional Network (FCN) to generate the gray image from the 2D reflection intensity projected from the LiDAR 3D intensity. Both the encoder and decoder of the FCN are configured with several convolution blocks in a symmetric fashion. Each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer, and an activation function. The performance of the proposed architecture is empirically evaluated by varying the depth of the convolution blocks. The well-known KITTI dataset, covering various scenarios, is used for training and performance evaluation. The simulation results show that the proposed method yields improvements of 8.56 dB in peak signal-to-noise ratio and 0.33 in structural similarity index measure compared with conventional interpolation methods such as inverse distance weighting and nearest neighbor. The proposed method could be used as an assistance tool in night-time driving systems for autonomous vehicles.
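
A hedged sketch of the convolution block described in the abstract (3×3 convolution, batch normalization, activation) and a small symmetric encoder-decoder assembled from it; the `depth` argument mirrors the block count the authors vary, while the channel widths and input size are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """3x3 convolution + batch normalization + activation, as in the abstract."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True))

class IntensityToGrayFCN(nn.Module):
    def __init__(self, depth=3, base=32):
        super().__init__()
        enc, dec = [], []
        ch = 1
        for d in range(depth):                       # encoder: downsample by 2 per block
            enc += [conv_block(ch, base * 2 ** d), nn.MaxPool2d(2)]
            ch = base * 2 ** d
        for d in reversed(range(depth)):             # decoder: mirror of the encoder
            dec += [nn.Upsample(scale_factor=2, mode='nearest'),
                    conv_block(ch, base * 2 ** d)]
            ch = base * 2 ** d
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)
        self.head = nn.Conv2d(ch, 1, kernel_size=1)  # single-channel gray image output

    def forward(self, x):
        return self.head(self.decoder(self.encoder(x)))

x = torch.rand(1, 1, 64, 256)   # projected 2D reflection-intensity map (example size)
print(IntensityToGrayFCN(depth=3)(x).shape)   # torch.Size([1, 1, 64, 256])
```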

Design of new CNN structure with internal FC layer (내부 FC층을 갖는 새로운 CNN 구조의 설계)

  • Park, Hee-mun; Park, Sung-chan; Hwang, Kwang-bok; Choi, Young-kiu; Park, Jin-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.466-467 / 2018
  • Recently, artificial intelligence has been applied to various fields such as image recognition, speech recognition, and natural language processing, and interest in deep learning technology is increasing. The Convolutional Neural Network (CNN), one of the most representative deep learning algorithms, has strong advantages in image recognition and classification and is widely used in various fields. In this paper, we propose a new network structure that modifies the general CNN structure. A typical CNN structure consists of convolution layers, ReLU layers, and pooling layers. In this paper, we construct a new network by adding a fully connected layer inside the general CNN structure. This modification is intended to improve learning and accuracy on the convolved features by including the generalization capability that is an advantage of neural networks.
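
A minimal sketch of the proposed idea, assuming the internal fully connected layer is applied to the flattened intermediate feature map and reshaped back before the convolution stack continues; the MNIST-like input size and channel counts are illustrative only.

```python
import torch
import torch.nn as nn

class CNNWithInternalFC(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.front = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))   # 28x28 -> 14x14
        # Internal FC layer applied to the flattened intermediate feature map.
        self.internal_fc = nn.Linear(16 * 14 * 14, 16 * 14 * 14)
        self.back = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))  # 14x14 -> 7x7
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        h = self.front(x)
        n, c, hh, ww = h.shape
        h = self.internal_fc(h.flatten(1)).view(n, c, hh, ww)  # FC inside the conv stack
        h = self.back(h)
        return self.classifier(h.flatten(1))

print(CNNWithInternalFC()(torch.rand(2, 1, 28, 28)).shape)  # torch.Size([2, 10])
```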

Development and Speed Comparison of Convolutional Neural Network Using CUDA (CUDA를 이용한 Convolutional Neural Network의 구현 및 속도 비교)

  • Ki, Cheol-min; Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.335-338 / 2017
  • Artificial intelligence and deep learning are currently major topics, and these technologies are applied to various fields. One effective method among the various algorithms in artificial intelligence is the Convolutional Neural Network. A Convolutional Neural Network adds convolution layers, which extract features by convolution operations, to a general neural network. If you use a Convolutional Neural Network with a small amount of data, or if the layer structure is not complicated, you do not have to pay attention to speed. But the learning time becomes long as the size of the training data grows and the layer structure becomes complicated, so GPU-based parallel processing is widely used. In this paper, we implemented a Convolutional Neural Network using CUDA, and its learning speed is faster and more efficient than the method using the CPU.
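
The paper implements its own CUDA kernels; as a stand-in for the CPU-versus-GPU comparison it reports, the sketch below simply times the same convolutional forward pass on the CPU and, if available, on a CUDA device with PyTorch.

```python
import time
import torch
import torch.nn as nn

def time_forward(device, iters=20):
    net = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()).to(device)
    x = torch.rand(16, 3, 128, 128, device=device)
    if device.type == 'cuda':
        torch.cuda.synchronize()          # make sure setup kernels have finished
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(iters):
            net(x)
    if device.type == 'cuda':
        torch.cuda.synchronize()          # wait for asynchronous GPU kernels
    return (time.perf_counter() - start) / iters

print('CPU :', time_forward(torch.device('cpu')))
if torch.cuda.is_available():
    print('CUDA:', time_forward(torch.device('cuda')))
```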

Image Label Prediction Algorithm based on Convolution Neural Network with Collaborative Layer (협업 계층을 적용한 합성곱 신경망 기반의 이미지 라벨 예측 알고리즘)

  • Lee, Hyun-ho; Lee, Won-jin
    • Journal of Korea Multimedia Society / v.23 no.6 / pp.756-764 / 2020
  • A typical algorithm used for image analysis is the Convolutional Neural Network (CNN). R-CNN, Fast R-CNN, Faster R-CNN, etc. have been studied to improve the performance of the CNN, but they essentially require large amounts of data and have high algorithmic complexity, making them inappropriate for small and medium-sized services. Therefore, this paper proposes an image label prediction algorithm based on a CNN with a collaborative layer, offering low complexity, high accuracy, and a small data requirement. The proposed algorithm replaces the part of the network that predicts the final label in existing deep learning algorithms by implementing collaborative filtering as a layer. The proposed algorithm is expected to contribute greatly to small and medium-sized content services for which existing deep learning algorithms with high complexity and high server cost are unsuitable.
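
The abstract does not spell out the collaborative layer itself; the sketch below assumes a matrix-factorization-style reading in which labels, like items in collaborative filtering, receive learnable embeddings, and a label score is the dot product between the CNN image embedding and each label embedding. Treat it as one possible interpretation, not the authors' design.

```python
import torch
import torch.nn as nn

class CollaborativeLabelHead(nn.Module):
    """Replaces a softmax classifier with image-label embedding dot products."""
    def __init__(self, feature_dim, num_labels, embed_dim=64):
        super().__init__()
        self.project = nn.Linear(feature_dim, embed_dim)         # image -> embedding
        self.label_embed = nn.Embedding(num_labels, embed_dim)   # one vector per label

    def forward(self, features):
        img = self.project(features)                             # (N, E)
        return img @ self.label_embed.weight.t()                 # (N, num_labels) scores

backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = CollaborativeLabelHead(feature_dim=16, num_labels=100)
print(head(backbone(torch.rand(4, 3, 64, 64))).shape)  # torch.Size([4, 100])
```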

Implementation of MNIST classification CNN with zero-skipping (Zero-skipping을 적용한 MNIST 분류 CNN 구현)

  • Han, Seong-hyeon; Jung, Jun-mo
    • Journal of IKEEE / v.22 no.4 / pp.1238-1241 / 2018
  • In this paper, an MNIST classification CNN with zero-skipping is implemented. The activations of a CNN contain 30% to 40% zeros. Since a zero does not affect the MAC operation, skipping zeros through a branch can improve performance. However, in the convolution layer, skipping via a branch causes performance degradation; accordingly, in the convolution layer the operation is skipped by issuing a NOP that does not affect the result, while the fully connected layer skips zeros through a branch. We observed a performance improvement of about 1.5 times over the existing CNN.
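
A software analogy (not the authors' hardware implementation) of the fully connected case: because a zero activation contributes nothing to the multiply-accumulate, a branch can skip its whole column of MACs, which is where the reported savings come from.

```python
import numpy as np

def fc_forward_zero_skip(x, weight, bias):
    """x: (in,), weight: (out, in), bias: (out,) -> (output, MACs actually executed)."""
    out = bias.copy()
    macs = 0
    for i, xi in enumerate(x):
        if xi == 0.0:          # branch: skip the entire column of MACs for a zero activation
            continue
        out += weight[:, i] * xi
        macs += weight.shape[0]
    return out, macs

rng = np.random.default_rng(0)
x = np.maximum(rng.standard_normal(256), 0)   # ReLU-like output: roughly half zeros
w = rng.standard_normal((128, 256))
b = rng.standard_normal(128)
y, macs = fc_forward_zero_skip(x, w, b)
print(macs, "MACs instead of", 128 * 256)
assert np.allclose(y, w @ x + b)              # skipping zeros changes nothing in the result
```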

Hybrid All-Reduce Strategy with Layer Overlapping for Reducing Communication Overhead in Distributed Deep Learning (분산 딥러닝에서 통신 오버헤드를 줄이기 위해 레이어를 오버래핑하는 하이브리드 올-리듀스 기법)

  • Kim, Daehyun; Yeo, Sangho; Oh, Sangyoon
    • KIPS Transactions on Computer and Communication Systems / v.10 no.7 / pp.191-198 / 2021
  • Since training datasets have become large and models are getting deeper to achieve high accuracy in deep learning, training a deep neural network requires a lot of computation and takes too much time on a single node. Therefore, distributed deep learning is used to reduce training time by distributing the computation across multiple nodes. In this study, we propose a hybrid all-reduce strategy that considers the characteristics of each layer, together with a communication-computation overlapping technique for the synchronization of distributed deep learning. Since the convolution layers have fewer parameters than the fully connected layers and are located earlier in the network, only a short overlapping time is available, so butterfly all-reduce is used to synchronize the convolution layers. The fully connected layers, on the other hand, are synchronized using ring all-reduce. Empirical experiments on PyTorch with the proposed scheme show that the proposed method reduces training time by up to 33% compared to the PyTorch baseline.
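
As a self-contained reference for one of the two collectives named above, the sketch below simulates the classic ring all-reduce on in-memory "workers": each gradient is split into p chunks and, after 2(p-1) neighbor exchanges, every worker holds the full sum. The butterfly variant and the overlap with backward computation are not modeled here.

```python
import numpy as np

def ring_allreduce(grads):
    """grads: list of p equal-length 1-D arrays, one per worker -> list of summed arrays."""
    p = len(grads)
    chunks = [np.array_split(g, p) for g in grads]
    # Reduce-scatter phase: after p-1 steps, worker i owns the full sum of chunk (i+1) % p.
    for step in range(p - 1):
        for i in range(p):
            src, dst = i, (i + 1) % p
            c = (i - step) % p
            chunks[dst][c] = chunks[dst][c] + chunks[src][c]
    # All-gather phase: circulate the completed chunks once more around the ring.
    for step in range(p - 1):
        for i in range(p):
            src, dst = i, (i + 1) % p
            c = (i + 1 - step) % p
            chunks[dst][c] = chunks[src][c]
    return [np.concatenate(ch) for ch in chunks]

workers = [np.random.rand(8) for _ in range(4)]         # 4 simulated workers
reduced = ring_allreduce(workers)
assert all(np.allclose(r, sum(workers)) for r in reduced)
print(reduced[0])                                       # every worker now holds the full sum
```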

Efficient Thread Allocation Method of Convolutional Neural Network based on GPGPU (GPGPU 기반 Convolutional Neural Network의 효율적인 스레드 할당 기법)

  • Kim, Mincheol; Lee, Kwangyeob
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.10 / pp.935-943 / 2017
  • The CNN (Convolutional Neural Network), used for image classification and speech recognition among neural networks trained on labeled data, has been continuously developed into high-performance structures. However, it is difficult to use in an embedded system with limited resources. Therefore, we use GPGPU (General-Purpose computing on Graphics Processing Units) with pre-trained weights to address this, but limitations remain. Since a CNN performs simple and repetitive operations, the computation speed varies greatly depending on how threads are allocated and utilized on a Single Instruction Multiple Thread (SIMT) based GPGPU. When convolution and pooling operations are performed, some threads become idle; the proposed method increases the operation speed by assigning the remaining threads to the computations of the following feature maps and kernels.
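
The abstract's argument is essentially about how output elements map onto SIMT threads. The plain-Python sketch below only reproduces the counting: with one thread per output and one feature map per launch, part of every last thread block is idle, whereas packing the following feature maps' outputs onto those leftover threads (as the paper proposes) leaves far fewer threads idle. The block size and map dimensions are arbitrary examples.

```python
def thread_utilization(out_h, out_w, num_maps, block_size=256):
    per_map = out_h * out_w                                   # output elements per feature map
    # Naive mapping: each feature map is rounded up to whole thread blocks on its own.
    naive_blocks = num_maps * -(-per_map // block_size)       # ceil division
    naive_idle = naive_blocks * block_size - num_maps * per_map
    # Packed mapping: leftover threads of one map take the first outputs of the next map.
    packed_blocks = -(-(num_maps * per_map) // block_size)
    packed_idle = packed_blocks * block_size - num_maps * per_map
    return naive_idle, packed_idle

naive, packed = thread_utilization(out_h=13, out_w=13, num_maps=64)
print("idle threads, one map per launch :", naive)    # 5568 idle threads
print("idle threads, maps packed        :", packed)   # 192 idle threads
```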