• Title/Summary/Keyword: convolutional network

1,569 search results

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping;Qin, Zheng;Wang, Guolong;Zhang, Huidi;Huang, Kai;Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.5, pp.2253-2272, 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, which operate either in the spatial domain or in the transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model is efficient and robust while demonstrating state-of-the-art fusion quality. We explore different parameter settings to achieve trade-offs between performance and speed. Moreover, the experimental results on our training dataset show that our network achieves good performance under both subjective visual perception and objective assessment metrics.
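
To make the described pipeline concrete, here is a minimal PyTorch sketch of a fully convolutional two-stream fusion network in the spirit of the abstract: two convolutional streams, a fusion layer, and deconvolutional layers. Layer counts, channel widths, and the class name are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwoStreamFusionNet(nn.Module):  # hypothetical name, for illustration
    def __init__(self):
        super().__init__()
        # Each differently focused input image gets its own conv stream.
        def stream():
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
        self.stream_a = stream()
        self.stream_b = stream()
        # Fusion layer: merge the two feature maps along the channel axis.
        self.fuse = nn.Conv2d(128, 64, 1)
        # Deconvolutional (transposed-conv) layers map features back to
        # a single all-in-focus image.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, padding=1),
        )

    def forward(self, near_focus, far_focus):
        fa = self.stream_a(near_focus)
        fb = self.stream_b(far_focus)
        fused = torch.relu(self.fuse(torch.cat([fa, fb], dim=1)))
        return self.decode(fused)

net = TwoStreamFusionNet()
a = torch.rand(1, 1, 64, 64)   # two views of a scene with different focus
b = torch.rand(1, 1, 64, 64)
print(net(a, b).shape)         # torch.Size([1, 1, 64, 64])
```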

Implementation of Handwriting Number Recognition using Convolutional Neural Network (콘볼루션 신경망을 이용한 손글씨 숫자 인식 구현)

  • Park, Tae-Ju;Song, Teuk-Seob
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.05a, pp.561-562, 2021
  • CNN (Convolutional Neural Network) is widely used to recognize various images. In this presentation, single digits handwritten by humans were recognized by applying the CNN technique of deep learning. The deep learning network consists of a convolutional layer, a pooling layer, and a flatten layer; finally, we set the optimization method, learning rate, and loss function.

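As a concrete illustration of the network this abstract describes, a minimal PyTorch sketch follows, assuming the usual 28x28 grayscale digit setup with 10 classes; the layer sizes, optimizer choice, and learning rate are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),  # convolutional layer
    nn.MaxPool2d(2),                            # pooling layer: 28x28 -> 14x14
    nn.Flatten(),                               # flatten layer
    nn.Linear(16 * 14 * 14, 10),                # scores for digits 0-9
)

# Optimization method, learning rate, and loss function, as the abstract
# mentions; Adam at 1e-3 is an illustrative choice.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(8, 1, 28, 28)        # a batch of dummy digit images
y = torch.randint(0, 10, (8,))      # dummy labels
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```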

Facial Expression Classification Using Deep Convolutional Neural Network (깊은 Convolutional Neural Network를 이용한 얼굴표정 분류 기법)

  • Choi, In-kyu;Song, Hyok;Lee, Sangyong;Yoo, Jisang
    • Journal of Broadcast Engineering, v.22 no.2, pp.162-172, 2017
  • In this paper, we propose facial expression recognition using a CNN (Convolutional Neural Network), one of the deep learning technologies. To overcome the shortcomings of existing facial expression databases, several databases are used together. In the proposed technique, we construct data sets for six facial expressions: 'expressionless', 'happiness', 'sadness', 'anger', 'surprise', and 'disgust'. Pre-processing and data augmentation techniques are also applied to improve learning efficiency and classification performance. Starting from an existing CNN structure, the optimal structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully-connected layers. Experimental results show that the proposed scheme achieves the highest classification performance, 96.88%, while taking the least time to pass through the CNN structure compared to other models.
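
The structure search described above can be pictured as a small grid search over the number of feature maps and fully-connected nodes. The following sketch is hypothetical (the paper does not publish its code or its candidate grid) and only shows the mechanics:

```python
import torch.nn as nn

def build_cnn(feature_maps: int, fc_nodes: int, n_classes: int = 6) -> nn.Module:
    """Candidate CNN for 48x48 face crops; 6 expression classes as above.
    The 48x48 input size and two-stage layout are assumptions."""
    return nn.Sequential(
        nn.Conv2d(1, feature_maps, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                                          # 48 -> 24
        nn.Conv2d(feature_maps, feature_maps * 2, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                                          # 24 -> 12
        nn.Flatten(),
        nn.Linear(feature_maps * 2 * 12 * 12, fc_nodes), nn.ReLU(),
        nn.Linear(fc_nodes, n_classes),
    )

# Illustrative candidate grid; each model would be trained and its
# validation accuracy and inference time compared.
for fm in (16, 32, 64):
    for fc in (128, 256):
        model = build_cnn(fm, fc)
        n_params = sum(p.numel() for p in model.parameters())
        print(f"feature_maps={fm:3d} fc_nodes={fc:3d} params={n_params}")
```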

Convolutional neural network based amphibian sound classification using covariance and modulogram (공분산과 모듈로그램을 이용한 콘볼루션 신경망 기반 양서류 울음소리 구별)

  • Ko, Kyungdeuk;Park, Sangwook;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea, v.37 no.1, pp.60-65, 2018
  • In this paper, a covariance matrix and a modulogram are proposed for amphibian sound classification using a CNN (Convolutional Neural Network). First, a database is established by collecting amphibian sounds, including those of endangered species, in the natural environment. To apply the database to a CNN, acoustic signals with different lengths must be standardized. To this end, a covariance matrix, which captures distribution information, and a modulogram, which captures change over time, are extracted and used as input to the CNN. The experiment is conducted by varying the number of convolutional layers and fully-connected layers. For performance assessment, several conventional methods representing various feature extraction and classification approaches are considered. The results confirm that the convolutional layers have a greater impact on performance than the fully-connected layers, and that the CNN-based approach attains the highest recognition rate, 99.07 %, among the considered methods.
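
The key trick, standardizing variable-length signals, is easy to see with the covariance feature alone: however long the clip, the covariance of per-frame features is a fixed-size matrix the CNN can accept. A numpy-only sketch under assumed frame settings (the modulogram is omitted for brevity):

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160, n_bands=20):
    """Toy per-frame features: log energies of FFT bands, standing in
    for whatever spectral features the real system extracts."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1))
    bands = np.array_split(spec, n_bands, axis=1)
    return np.stack([np.log(b.sum(axis=1) + 1e-8) for b in bands], axis=1)

rng = np.random.default_rng(0)
short_call = rng.standard_normal(8000)    # 0.5 s of audio at 16 kHz
long_call = rng.standard_normal(48000)    # 3.0 s of audio at 16 kHz

for sig in (short_call, long_call):
    feats = frame_features(sig)           # (n_frames, 20); n_frames varies
    cov = np.cov(feats, rowvar=False)     # (20, 20): fixed-size CNN input
    print(feats.shape, "->", cov.shape)
```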

Image Caption Generation using Recurrent Neural Network (Recurrent Neural Network를 이용한 이미지 캡션 생성)

  • Lee, Changki
    • Journal of KIISE, v.43 no.8, pp.878-882, 2016
  • Automatic generation of captions for an image is a very difficult task, because it requires both computer vision and natural language processing technologies. However, this task has many important applications, such as early childhood education, image retrieval, and navigation for the blind. In this paper, we describe a Recurrent Neural Network (RNN) model for generating image captions, which takes image features extracted by a Convolutional Neural Network (CNN). We demonstrate that our models produce state-of-the-art results in image caption generation experiments on the Flickr 8K, Flickr 30K, and MS COCO datasets.
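
Schematically, such a captioner conditions a recurrent decoder on CNN image features. The sketch below uses a GRU rather than whatever RNN cell the paper used, and all dimensions and names are assumptions:

```python
import torch
import torch.nn as nn

class CaptionRNN(nn.Module):  # hypothetical name
    def __init__(self, feat_dim=512, vocab=1000, emb=128, hid=256):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hid)  # image feature -> h0
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, img_feat, captions):
        h0 = torch.tanh(self.init_h(img_feat)).unsqueeze(0)  # (1, B, hid)
        x = self.embed(captions)                             # (B, T, emb)
        y, _ = self.rnn(x, h0)
        return self.out(y)                                   # next-word scores

model = CaptionRNN()
img_feat = torch.rand(2, 512)          # e.g., CNN features for two images
caps = torch.randint(0, 1000, (2, 7))  # token ids of partial captions
print(model(img_feat, caps).shape)     # torch.Size([2, 7, 1000])
```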

Deep Adversarial Residual Convolutional Neural Network for Image Generation and Classification

  • Haque, Md Foysal;Kang, Dae-Seong
    • Journal of Advanced Information Technology and Convergence, v.10 no.1, pp.111-120, 2020
  • Generative adversarial networks (GANs) have achieved impressive performance in image generation and visual classification applications. However, adversarial networks have difficulty combining generative models and suffer from an unstable training process. To overcome this problem, we combined a deep residual network with upsampling convolutional layers to construct the generative network. Moreover, the study shows that image generation and classification performance become more prominent when residual layers are included in the generator. Empirically, the proposed network gains the ability to generate images with higher visual accuracy at the cost of a certain amount of additional complexity, given proper regularization techniques. Experimental evaluation shows that the proposed method is superior on image generation and classification tasks.
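
A sketch of the generator design the abstract describes, combining residual blocks with upsampling convolutional layers; the depths, widths, and 32x32 output size are assumptions for brevity:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):              # identity skip: x + F(x)
        return torch.relu(x + self.body(x))

class Generator(nn.Module):
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.ch = ch
        self.fc = nn.Linear(z_dim, ch * 8 * 8)  # project noise to an 8x8 map
        self.blocks = nn.Sequential(ResBlock(ch), ResBlock(ch))
        self.upsample = nn.Sequential(           # 8x8 -> 16x16 -> 32x32
            nn.Upsample(scale_factor=2), nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, self.ch, 8, 8)
        return self.upsample(self.blocks(x))

g = Generator()
print(g(torch.randn(2, 100)).shape)   # torch.Size([2, 3, 32, 32])
```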

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems, v.22 no.2, pp.127-142, 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields such as computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because they have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well-adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; what the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
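
The three convolutional ideas in this abstract (local receptive fields, shared weights, pooling) can be seen numerically in a few lines. This is a generic PyTorch illustration, not tied to any paper in this list:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, kernel_size=3)  # one shared 3x3 filter
pool = nn.MaxPool2d(2)

img = torch.rand(1, 1, 28, 28)
fmap = conv(img)     # each output neuron sees only a 3x3 local region
pooled = pool(fmap)  # pooling simplifies the convolutional output

# 3*3 weights + 1 bias = 10 parameters, reused at every image location;
# a fully connected layer with the same output would need 28*28 * 26*26
# weights. Weight sharing is why convolutional networks are fast to train.
print(sum(p.numel() for p in conv.parameters()))  # 10
print(fmap.shape, pooled.shape)  # (1,1,26,26) -> (1,1,13,13)
```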

Explainable radionuclide identification algorithm based on the convolutional neural network and class activation mapping

  • Yu Wang;Qingxu Yao;Quanhu Zhang;He Zhang;Yunfeng Lu;Qimeng Fan;Nan Jiang;Wangtao Yu
    • Nuclear Engineering and Technology, v.54 no.12, pp.4684-4692, 2022
  • Radionuclide identification is an important part of the nuclear material identification system. The development of artificial intelligence and machine learning has made nuclide identification rapid and automatic. However, many methods directly use existing deep learning models to analyze the gamma-ray spectrum, which lacks interpretability for researchers. This study proposes an explainable radionuclide identification algorithm based on a convolutional neural network and class activation mapping. The method shows the area of the gamma-ray spectrum that the neural network attends to by generating a class activation map. We analyzed the class activation maps of gamma-ray spectra of different types, gross counts, and signal-to-noise ratios. The results show that the convolutional neural network attempted to learn the relationship between the input gamma-ray spectrum and the nuclide type, and could identify the nuclide based on the photoelectric peak and Compton edge. Furthermore, the results explain why the neural network could identify gamma-ray spectra with low counts and low signal-to-noise ratios. These findings improve researchers' confidence in the ability of neural networks to identify nuclides and promote the application of artificial intelligence methods in the field of nuclide identification.
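
Class activation mapping itself is simple to sketch. Following the standard CAM recipe (global average pooling followed by a linear classifier) rather than this paper's exact network, a 1-D version for spectra might look like this; all sizes and names are assumptions:

```python
import torch
import torch.nn as nn

class SpectrumNet(nn.Module):  # hypothetical model, for illustration
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, padding=3), nn.ReLU(),
        )
        self.classifier = nn.Linear(32, n_classes)  # applied after GAP

    def forward(self, x):
        f = self.features(x)                      # (B, 32, L)
        return self.classifier(f.mean(dim=2)), f  # global average pool, then linear

net = SpectrumNet()
spectrum = torch.rand(1, 1, 1024)           # dummy gamma-ray spectrum
scores, fmaps = net(spectrum)
c = scores.argmax(dim=1)                    # predicted nuclide class
w = net.classifier.weight[c]                # (1, 32) weights for that class
cam = torch.einsum("bk,bkl->bl", w, fmaps)  # (1, 1024) class activation map
print(cam.shape)  # spectrum regions with large CAM values drove the decision
```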

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of Information and Communication Convergence Engineering, v.19 no.3, pp.148-154, 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these attempts, Speech Emotion Recognition (SER) is a method of recognizing a speaker's emotions from speech information. SER succeeds by selecting distinctive 'features' and 'classifying' them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. For the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) model with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, examining the distribution of emotion recognition accuracies across neural network models shows that the 2D-CNN with MFCC can be expected to reach an overall accuracy of 75% or more.
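
The MFCC-to-2D-CNN pipeline can be sketched briefly; using librosa for the MFCCs is an assumption (the paper does not name its toolchain), and the tiny network here is a toy stand-in for the tuned 2D-CNN:

```python
import librosa
import torch
import torch.nn as nn

# Bundled librosa example clip as a stand-in for a RAVDESS utterance.
y, sr = librosa.load(librosa.ex("trumpet"))
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)       # (40, n_frames)

x = torch.tensor(mfcc, dtype=torch.float32)[None, None]  # (1, 1, 40, T) "image"
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 5),  # 5 emotions: anger, happiness, calm, fear, sadness
)
print(cnn(x).shape)    # torch.Size([1, 5])
```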

Comparison of Convolutional Neural Network Models for Image Super Resolution

  • Jian, Chen;Yu, Songhyun;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2018.06a, pp.63-66, 2018
  • Recently, convolutional neural network (CNN) models for single-image super-resolution have been very successful, and residual learning improves training stability and network performance in CNNs. In this paper, we compare four convolutional neural network models for super-resolution (SR) that learn a nonlinear mapping from a low-resolution (LR) input image to a high-resolution (HR) target image. The four models are: a general CNN model, a global residual learning CNN model, a local residual learning CNN model, and a CNN model with both global and local residual learning. Experimental results show that performance is greatly affected by how skip connections are added to the basic CNN network, and that the network trained with only global residual learning achieves the highest performance among the four models in both objective and subjective evaluations.
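
The global-residual-learning variant that performed best is the easiest to sketch: the network predicts only the residual between the interpolated LR input and the HR target, which a global skip connection adds back. The depth, width, and class name below are assumptions:

```python
import torch
import torch.nn as nn

class GlobalResidualSR(nn.Module):  # hypothetical name
    def __init__(self, ch=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(ch, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):        # x: bicubic-upscaled LR image
        return x + self.body(x)  # global skip: learn only the residual

lr_up = torch.rand(1, 1, 64, 64)        # interpolated low-resolution input
print(GlobalResidualSR()(lr_up).shape)  # torch.Size([1, 1, 64, 64])
```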
