• Title/Summary/Keyword: Convolutional Recurrent Neural Network


Image Caption Generation using Recurrent Neural Network (Recurrent Neural Network를 이용한 이미지 캡션 생성)

  • Lee, Changki
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.878-882
    • /
    • 2016
  • Automatic generation of captions for an image is a very difficult task, due to the necessity of computer vision and natural language processing technologies. However, this task has many important applications, such as early childhood education, image retrieval, and navigation for the blind. In this paper, we describe a Recurrent Neural Network (RNN) model for generating image captions, which takes as input image features extracted by a Convolutional Neural Network (CNN). We demonstrate that our model produces state-of-the-art results in image caption generation experiments on the Flickr 8K, Flickr 30K, and MS COCO datasets.
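
The pipeline this abstract describes, CNN image features fed into an RNN language model, can be illustrated with a minimal encoder-decoder sketch. This is not the authors' code; the backbone choice, layer sizes, vocabulary size, and the use of an LSTM decoder are illustrative assumptions.

```python
# Illustrative CNN-encoder / RNN-decoder captioner (not the paper's implementation).
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN encoder: a backbone whose final classifier is replaced by a
        # linear projection into the decoder's embedding space.
        backbone = models.resnet18(weights=None)  # pretrained weights assumed in practice
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.encoder = backbone
        # RNN decoder: word embeddings + LSTM + projection to vocabulary logits.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img_feat = self.encoder(images).unsqueeze(1)   # (B, 1, embed_dim)
        word_emb = self.embed(captions)                # (B, T, embed_dim)
        # Prepend the image feature as the first "token" seen by the LSTM.
        seq = torch.cat([img_feat, word_emb], dim=1)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                        # logits for next words

model = CaptionModel(vocab_size=10000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 13, 10000])
```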

Graph Convolutional Network Architecture Search: Network Architecture Search Using Graph Convolutional Neural Networks (그래프 합성곱-신경망 구조 탐색 : 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Su-Youn Choi;Jong-Youel Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.1
    • /
    • pp.649-654
    • /
    • 2023
  • This paper proposes a neural architecture search model built on graph convolutional neural networks. Because deep learning behaves as a black box, it is difficult to verify whether a designed model has a structure with optimized performance. A neural architecture search model consists of a controller that creates candidate models and the convolutional neural network that is generated; conventional search models use a recurrent neural network as the controller. In this paper, we propose GC-NAS, which replaces the recurrent controller with a graph convolutional neural network to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth and a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) in parallel, conditioned on the depth information. Because depth information is reflected, the search space is wider, and the depth-conditioned parallel search keeps the purpose of each search region clear, so GC-NAS is judged to be structurally superior to conventional RNN-based search models. Through its graph convolutional network block and graph generation algorithm, GC-NAS is expected to resolve the high-dimensional time axis and limited spatial search range of the recurrent neural networks used in existing architecture search models. We also hope that the GC-NAS proposed in this paper will encourage active research on applying graph convolutional neural networks to neural architecture search.
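
The graph-convolution operation such a controller relies on can be sketched with the standard propagation rule H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W). The code below is a generic illustration of that rule only; it is not the paper's Layer Extraction or Hyper Parameter Prediction blocks, and the node counts and feature sizes are assumptions.

```python
# Generic graph-convolution layer (Kipf & Welling style); illustrative only,
# not the paper's GC-NAS blocks.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features, adj: (N, N) adjacency matrix.
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
        return torch.relu(norm_adj @ self.weight(x))  # propagate and transform

# Toy usage: 5 candidate-architecture nodes with 8-dimensional encodings (assumed).
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
print(GraphConv(8, 16)(x, adj).shape)  # torch.Size([5, 16])
```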

Earthquake events classification using convolutional recurrent neural network (합성곱 순환 신경망 구조를 이용한 지진 이벤트 분류 기법)

  • Ku, Bonhwa;Kim, Gwantae;Jang, Su;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.6
    • /
    • pp.592-599
    • /
    • 2020
  • This paper proposes a Convolutional Recurrent Neural Network (CRNN) structure that can simultaneously reflect both static and dynamic characteristics of seismic waveforms for classifying various earthquake events. Addressing various earthquake events, including not only micro-earthquakes and artificial earthquakes but also macro-earthquakes, requires both effective feature extraction and a classifier that can discriminate seismic waveforms in noisy environments. First, we extract the static characteristics of the seismic waveform through an attention-based convolution layer. The extracted feature map is then fed sequentially into a multi-input single-output Long Short-Term Memory (LSTM) network to extract the dynamic characteristics for classifying various seismic events. Finally, we perform earthquake event classification through two fully connected layers and a softmax function. Representative experimental results on domestic and foreign earthquake databases show that the proposed model provides an effective structure for classifying various earthquake events.
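
The staged structure described here (attention-weighted convolution for static features, an LSTM for dynamic features, then two fully connected layers and softmax) can be sketched as follows. This is a minimal illustration, not the authors' model: the three-component waveform input, layer sizes, and the simple channel attention are assumptions.

```python
# Illustrative CNN -> LSTM -> FC earthquake-event classifier.
import torch
import torch.nn as nn

class CRNNClassifier(nn.Module):
    def __init__(self, n_classes=3, n_channels=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
        )
        # Lightweight attention over the convolutional feature map.
        self.attn = nn.Sequential(nn.Conv1d(64, 1, kernel_size=1), nn.Sigmoid())
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):                 # x: (B, channels, time)
        feat = self.conv(x)               # (B, 64, T) static features
        feat = feat * self.attn(feat)     # attention-weighted feature map
        _, (h, _) = self.lstm(feat.transpose(1, 2))   # dynamic features over time
        return self.fc(h[-1])             # class logits; softmax is applied in the loss

model = CRNNClassifier()
print(model(torch.randn(4, 3, 1000)).shape)  # torch.Size([4, 3])
```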

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train. This, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithm enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
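
The three ideas highlighted in this abstract, local receptive fields, shared weights, and pooling, correspond directly to a convolutional layer followed by a pooling layer, and training by backpropagation is the gradient computation the abstract describes. The tiny classifier below is a generic illustration (an MNIST-sized 28x28 input is assumed), not code from the paper.

```python
# Minimal CNN illustrating local receptive fields, shared weights, and pooling.
import torch
import torch.nn as nn

model = nn.Sequential(
    # Each output unit sees only a 5x5 local receptive field of the input,
    # and the same 5x5 kernel (shared weights + bias) is reused at every location.
    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
    # Pooling condenses the convolutional output (here: max over 2x2 regions).
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 12 * 12, 10),   # assumes 28x28 inputs (e.g. MNIST-sized images)
)

x = torch.randn(16, 1, 28, 28)
print(model(x).shape)             # torch.Size([16, 10])

# Training uses backpropagation: the gradient of the error with respect to every
# weight is computed and handed to an optimizer such as gradient descent.
loss = nn.functional.cross_entropy(model(x), torch.randint(0, 10, (16,)))
loss.backward()
```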

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video image. In addition, two deep convolutional neural networks are used to extract the temporal and spatial facial features in the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video. The temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images in the video. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to realize the facial expression classification task. The experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
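
The fusion step, element-wise multiplication of the spatial and temporal deep features followed by an SVM classifier, can be sketched as below. The feature dimensions, the six-class label set, and the scikit-learn SVM are illustrative assumptions, and the two CNN feature extractors are represented here by precomputed arrays rather than actual networks.

```python
# Illustrative multiplicative fusion of spatial and temporal CNN features
# followed by an SVM classifier; dimensions and labels are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, feat_dim = 200, 512

spatial_feat = rng.normal(size=(n_clips, feat_dim))   # from the spatial CNN (static frames)
temporal_feat = rng.normal(size=(n_clips, feat_dim))  # from the temporal CNN (optical flow)
labels = rng.integers(0, 6, size=n_clips)             # e.g. six basic expressions (assumed)

fused = spatial_feat * temporal_feat                   # element-wise (multiplicative) fusion

clf = SVC(kernel="rbf").fit(fused[:150], labels[:150])
print("toy accuracy:", clf.score(fused[150:], labels[150:]))
```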

An Intrusion Detection Model based on a Convolutional Neural Network

  • Kim, Jiyeon;Shin, Yulim;Choi, Eunjung
    • Journal of Multimedia Information System
    • /
    • v.6 no.4
    • /
    • pp.165-172
    • /
    • 2019
  • Machine-learning techniques have been actively applied to information security in recent years. Traditional rule-based security solutions are vulnerable to advanced attacks because of unpredictable behaviors and unknown vulnerabilities. By employing ML techniques, we are able to develop intrusion detection systems (IDS) based on anomaly detection instead of misuse detection. Moreover, threshold issues in anomaly detection can also be resolved through machine learning. There are very few datasets for network intrusion detection compared to datasets for malicious code. KDD CUP 99 (KDD) is the most widely used dataset for the evaluation of IDS, and numerous studies on ML-based IDS have used KDD or upgraded versions of it. In this work, we develop an IDS model using CSE-CIC-IDS 2018, a dataset containing the most up-to-date common network attacks. We employ deep-learning techniques and develop a convolutional neural network (CNN) model for CSE-CIC-IDS 2018. We then evaluate its performance by comparing it with a recurrent neural network (RNN) model. Our experimental results show that the performance of our CNN model is higher than that of the RNN model when applied to the CSE-CIC-IDS 2018 dataset. Furthermore, we suggest a way of improving the performance of our model.
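
A CNN over tabular network-flow features (as in CSE-CIC-IDS 2018) is commonly realized as a 1-D convolution over the ordered feature vector; the sketch below is a generic illustration of that idea with an assumed feature count and a binary benign/attack output, not the authors' model.

```python
# Illustrative 1-D CNN intrusion-detection classifier; the 78 flow features and
# the binary output are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

n_features, n_classes = 78, 2

model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),
    nn.Linear(64, n_classes),
)

x = torch.randn(8, 1, n_features)    # a batch of normalized flow records
print(model(x).shape)                # torch.Size([8, 2])
```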

Multi-channel EEG classification method according to music tempo stimuli using 3D convolutional bidirectional gated recurrent neural network (3차원 합성곱 양방향 게이트 순환 신경망을 이용한 음악 템포 자극에 따른 다채널 뇌파 분류 방식)

  • Kim, Min-Soo;Lee, Gi Yong;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.3
    • /
    • pp.228-233
    • /
    • 2021
  • In this paper, we propose a method to extract and classify features of multi-channel electroencephalography (EEG) that change according to various music tempo stimuli. In the proposed method, a 3D convolutional bidirectional gated recurrent neural network extracts spatiotemporal features and long-term temporal dependencies from the 3D EEG input representation obtained through preprocessing. The experimental results show that the proposed tempo-stimulus classification method is superior to the existing method and demonstrate the possibility of constructing a music-based brain-computer interface.
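
The architecture named here, a 3-D convolution over a spatially arranged multi-channel EEG representation followed by a bidirectional GRU over time, can be sketched as below. The 9x9 electrode mesh, segment length, and layer sizes are assumptions made for the sketch, not the paper's settings.

```python
# Illustrative 3-D CNN + bidirectional GRU EEG classifier.
import torch
import torch.nn as nn

class Conv3dBiGRU(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # 3-D convolution over (time, height, width) of the EEG representation.
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 3, 3)),
        )
        self.bigru = nn.GRU(16 * 3 * 3, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                          # x: (B, 1, T, 9, 9)
        feat = self.conv(x)                        # (B, 16, T, 3, 3)
        b, c, t, h, w = feat.shape
        seq = feat.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        _, hidden = self.bigru(seq)                # hidden: (2, B, 64)
        return self.fc(torch.cat([hidden[0], hidden[1]], dim=1))

model = Conv3dBiGRU()
print(model(torch.randn(2, 1, 32, 9, 9)).shape)   # torch.Size([2, 4])
```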

Forecasting realized volatility using data normalization and recurrent neural network

  • Yoonjoo Lee;Dong Wan Shin;Ji Eun Choi
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.1
    • /
    • pp.105-127
    • /
    • 2024
  • We propose recurrent neural network (RNN) methods for forecasting realized volatility (RV). The data are RVs of ten major stock price indices, four from the US and six from the EU. Forecasts are made for the relative ratio of adjacent RVs instead of the RV itself in order to avoid the out-of-scale issue. Forecasts of the RV ratio are constructed first, from which forecasts of the RVs are computed; these are shown to be better than forecasts constructed directly from the RV. The apparent asymmetry of the RV ratio is addressed by Piecewise Min-max (PM) normalization. The serial dependence of the ratio data leads us to consider two architectures, long short-term memory (LSTM) and the gated recurrent unit (GRU). The hyperparameters of the LSTM and GRU are tuned by nested cross-validation. The RNN forecast with PM normalization and the ratio transformation is shown to outperform forecasts by other RNN models and by the benchmark models: the AR model, the support vector machine (SVM), the deep neural network (DNN), and the convolutional neural network (CNN).
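
The ratio transformation (forecasting adjacent-RV ratios rather than levels) combined with a recurrent forecaster can be sketched as below. A plain min-max scaling stands in for the paper's Piecewise Min-max normalization, and the window length, GRU size, and toy series are assumptions.

```python
# Illustrative RV-ratio GRU forecaster; plain min-max scaling stands in for the
# paper's Piecewise Min-max normalization, and all sizes are assumptions.
import torch
import torch.nn as nn

rv = torch.rand(500) * 0.02 + 1e-4                 # toy realized-volatility series
ratio = rv[1:] / rv[:-1]                           # adjacent-RV ratios (scale-free)
r_min, r_max = ratio.min(), ratio.max()
norm = (ratio - r_min) / (r_max - r_min)           # min-max normalization to [0, 1]

window = 22                                        # ~ one trading month of lags (assumed)
X = torch.stack([norm[i:i + window] for i in range(len(norm) - window)]).unsqueeze(-1)
y = norm[window:]                                  # training target: next normalized ratio

class GRUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        _, h = self.gru(x)
        return self.out(h[-1]).squeeze(-1)

model = GRUForecaster()
pred_norm = model(X)                               # predicted normalized ratio
pred_ratio = pred_norm * (r_max - r_min) + r_min   # undo the normalization
pred_rv = pred_ratio * rv[window:-1]               # RV forecast = ratio * previous RV
print(pred_rv.shape)
```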

Extraction and classification of tempo stimuli from electroencephalography recordings using convolutional recurrent attention model

  • Lee, Gi Yong;Kim, Min-Soo;Kim, Hyoung-Gook
    • ETRI Journal
    • /
    • v.43 no.6
    • /
    • pp.1081-1092
    • /
    • 2021
  • Electroencephalography (EEG) recordings taken during the perception of music tempo contain information from which the tempo of a music piece can be estimated. If this tempo-stimulus information can be extracted from EEG recordings and classified, it can be effectively used to construct a music-based brain-computer interface. This study proposes a novel convolutional recurrent attention model (CRAM) to extract and classify features corresponding to tempo stimuli from EEG recordings of listeners who concentrated on the tempo of the music they heard. The proposed CRAM is composed of six modules, namely, network inputs, a two-dimensional convolutional bidirectional gated recurrent unit-based sample encoder, sample-level intuitive attention, a segment encoder, segment-level intuitive attention, and a softmax layer, to effectively model spatiotemporal features and improve the classification accuracy of tempo stimuli. To evaluate the proposed method's performance, we conducted experiments on two benchmark datasets. The proposed method achieves promising results, outperforming recent methods.
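
The attention modules in CRAM weight time steps (and segments) by learned importance before classification. The generic attention-pooling layer below illustrates that idea over bidirectional GRU outputs; it is not the paper's six-module implementation, and the feature sizes and five-class output are assumptions.

```python
# Generic attention pooling over recurrent outputs; illustrative of the idea,
# not the paper's sample/segment-level intuitive attention modules.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, seq):                               # seq: (B, T, dim)
        weights = torch.softmax(self.score(seq), dim=1)   # importance of each time step
        return (weights * seq).sum(dim=1)                 # weighted summary vector

gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True, bidirectional=True)
pool = AttentionPool(128)
classifier = nn.Linear(128, 5)                            # e.g. five tempo classes (assumed)

x = torch.randn(8, 100, 32)                               # (batch, time steps, features)
out, _ = gru(x)                                           # (8, 100, 128)
print(classifier(pool(out)).shape)                        # torch.Size([8, 5])
```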

A SE Approach for Machine Learning Prediction of the Response of an NPP Undergoing CEA Ejection Accident

  • Ditsietsi Malale;Aya Diab
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.19 no.2
    • /
    • pp.18-31
    • /
    • 2023
  • Exploring artificial intelligence and machine learning for nuclear safety has attracted increased interest in recent years. To contribute to this area of research, a machine learning model capable of accurately predicting the nuclear power plant response with minimal computational cost is proposed. To develop a robust machine learning model, the Best Estimate Plus Uncertainty (BEPU) approach was used to generate a database to train three models and select the best of the three. The BEPU analysis was performed by coupling the Dakota platform with the best-estimate thermal-hydraulics code RELAP/SCDAPSIM/MOD 3.4. The Code Scaling, Applicability and Uncertainty (CSAU) approach was adopted, along with Wilks' theorem, to obtain a statistically representative sample that satisfies the USNRC 95/95 rule (95% probability at the 95% confidence level). The generated database was used to train three models based on recurrent neural networks: Long Short-Term Memory, the Gated Recurrent Unit, and a hybrid model coupling Long Short-Term Memory with a Convolutional Neural Network. In this paper, the systems engineering approach was utilized to identify the requirements, stakeholders, and functional and physical architecture of the project and to ensure success in the verification and validation activities necessary for the efficient development of ML meta-models capable of predicting the nuclear power plant response.
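
Wilks' theorem, cited here for sizing the BEPU sample, gives the minimum number of code runs for a first-order, one-sided tolerance limit: the smallest N with 1 - γ^N ≥ β, which for the 95/95 rule is N = 59. The snippet below just evaluates that condition; it is not tied to the Dakota/RELAP setup used in the paper.

```python
# Wilks' first-order, one-sided sample size: smallest N with 1 - gamma**N >= beta.
# Illustrative of the 95/95 rule cited in the abstract; not the authors' code.
import math

def wilks_sample_size(coverage=0.95, confidence=0.95):
    # Solve coverage**N <= 1 - confidence for the smallest integer N.
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_sample_size())  # 59 runs satisfy the USNRC 95/95 criterion (first order)
```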