• Title/Summary/Keyword: RNN(Recurrent Neural Network)

Search Result 228, Processing Time 0.027 seconds

A Study on Person Re-Identification System using Enhanced RNN (확장된 RNN을 활용한 사람재인식 시스템에 관한 연구)

  • Choi, Seok-Gyu;Xu, Wenjie
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.2 / pp.15-23 / 2017
  • Person re-identification is one of the most challenging problems in computer vision because of significant changes in human pose and background clutter with occlusions. Pictures taken from non-overlapping cameras make it even harder to distinguish one person from another. To achieve better matching performance, most methods perform feature selection and distance-metric learning separately to obtain discriminative representations and a proper distance for describing the similarity between persons, which tends to ignore some significant features. This situation encouraged us to consider a novel method for this problem. In this paper, we propose an enhanced recurrent neural network with a three-tier hierarchical network for person re-identification. Specifically, the proposed recurrent neural network (RNN) model contains an iterative expectation-maximization (EM) algorithm and a three-tier hierarchical network to jointly learn both discriminative features and the distance metric. The iterative EM algorithm can make full use of the feature-extraction ability of the convolutional neural network (CNN) placed in series before the RNN. Through unsupervised learning, the EM framework can relabel the patches and train on larger datasets. Through the three-tier hierarchical network, the convolutional layers, recurrent layers, and pooling layer jointly act as a feature extractor to train the network better. The experimental results show that, compared with other approaches in this field, this method achieves competitive accuracy. The influence of the different components of this method will be analyzed and evaluated in future research.
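The three-tier idea above (convolution for local patterns, recurrence for temporal context, pooling for a fixed-length descriptor) can be sketched in a few lines of NumPy. The layer sizes, the weights, and the plain tanh recurrence are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

def rnn_forward(x_seq, Wx, Wh, b):
    """Plain tanh recurrence over a feature sequence (illustrative RNN)."""
    h = np.zeros(Wh.shape[0])
    hs = []
    for x in x_seq:
        h = np.tanh(Wx @ x + Wh @ h + b)
        hs.append(h)
    return np.stack(hs)

def three_tier_features(x_seq, conv_w, Wx, Wh, b):
    """Tier 1: temporal convolution; tier 2: RNN; tier 3: mean pooling."""
    k = len(conv_w)
    # valid 1-D convolution along time, applied to every feature dimension
    conv_out = np.stack([(conv_w[:, None] * x_seq[t:t + k]).sum(axis=0)
                         for t in range(len(x_seq) - k + 1)])
    hs = rnn_forward(conv_out, Wx, Wh, b)
    return hs.mean(axis=0)  # one fixed-length descriptor per sequence
```

The pooling at the end is what lets sequences of different lengths be compared with a single distance metric.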

A Study on Performance Improvement of Recurrent Neural Networks Algorithm using Word Group Expansion Technique (단어그룹 확장 기법을 활용한 순환신경망 알고리즘 성능개선 연구)

  • Park, Dae Seung;Sung, Yeol Woo;Kim, Cheong Ghil
    • Journal of Industrial Convergence / v.20 no.4 / pp.23-30 / 2022
  • Recently, with the development of artificial intelligence (AI) and deep learning, the importance of conversational AI chatbots has been highlighted, and chatbot research is being conducted in various fields. Chatbots are typically built on open-source or commercial platforms for ease of development, and these platforms mainly use the RNN and its derived algorithms. The RNN algorithm has the advantages of fast learning, ease of monitoring and verification, and good inference performance. In this paper, a method for improving the inference performance of RNNs and their derived algorithms was studied. The proposed method applies a word-group expansion learning technique to the key words of each sentence when training the RNN and its derived algorithms. As a result, the three recurrent algorithms RNN, GRU, and LSTM achieved inference-performance improvements of at least 0.37% and at most 1.25%. These results can accelerate the adoption of AI chatbots in related industries and contribute to the use of various RNN-derived algorithms. Future research should study the effect of various activation functions on the performance of artificial neural network algorithms.
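The word-group expansion idea can be sketched as a simple data-augmentation step: each key word in a sentence is swapped with the other members of its group to generate extra training sentences. The groups below are hypothetical examples, not the paper's actual lexicon:

```python
# Hypothetical word groups (illustrative, not the paper's lexicon)
WORD_GROUPS = {
    "price": ["price", "cost", "fee"],
    "refund": ["refund", "return", "reimbursement"],
}

def expand_sentence(tokens, groups):
    """Generate training variants by swapping each key word with
    the other members of its word group."""
    variants = [list(tokens)]
    for i, tok in enumerate(tokens):
        for members in groups.values():
            if tok in members:
                for alt in members:
                    if alt != tok:
                        v = list(tokens)
                        v[i] = alt
                        variants.append(v)
    return [" ".join(v) for v in variants]
```

Feeding both the original sentence and its variants to the RNN exposes the model to more phrasings of the same intent, which is where the reported inference gains would come from.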

Parameter Estimation of Recurrent Neural Networks Using A Unscented Kalman Filter Training Algorithm and Its Applications to Nonlinear Channel Equalization (언센티드 칼만필터 훈련 알고리즘에 의한 순환신경망의 파라미터 추정 및 비선형 채널 등화에의 응용)

  • Kwon Oh-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.552-559 / 2005
  • Recurrent neural networks (RNNs) trained with gradient-based algorithms such as real-time recurrent learning (RTRL) suffer from a slow convergence rate. These algorithms also require derivative calculations that are not trivial in the error back-propagation process. In this paper, a derivative-free Kalman filter, the so-called unscented Kalman filter (UKF), for training a fully connected RNN is presented in a state-space formulation of the system. The derivative-free Kalman filter learning algorithm gives the RNN fast convergence and good tracking performance without derivative computation. The performance of RNNs trained with the derivative-free Kalman filter learning algorithm is evaluated through experiments on nonlinear channel equalization.
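The derivative-free property comes from the unscented transform at the heart of the UKF: instead of differentiating the RNN, the filter propagates 2n+1 deterministically chosen sigma points through it. A minimal sketch of the sigma-point construction, using the simple kappa-only scaling (the full UKF adds alpha/beta parameters):

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Return 2n+1 sigma points and weights whose weighted sample mean
    and covariance exactly reproduce (mean, cov)."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)  # matrix square root
    pts = [mean]
    for i in range(n):
        pts.append(mean + S[:, i])
        pts.append(mean - S[:, i])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w
```

In UKF training, `mean` and `cov` would be the current estimate of the RNN weight vector and its covariance; each sigma point is run through the network, and the filter update is computed from the weighted outputs, with no backpropagation.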

Artificial neural network for classifying with epilepsy MEG data (뇌전증 환자의 MEG 데이터에 대한 분류를 위한 인공신경망 적용 연구)

  • Yujin Han;Junsik Kim;Jaehee Kim
    • The Korean Journal of Applied Statistics / v.37 no.2 / pp.139-155 / 2024
  • This study performed a multi-class classification task to distinguish mesial temporal lobe epilepsy with left hippocampal sclerosis (left mTLE), mesial temporal lobe epilepsy with right hippocampal sclerosis (right mTLE), and healthy controls (HC) using magnetoencephalography (MEG) data. We applied various artificial neural networks and compared the results. Modeling with convolutional neural networks (CNN), recurrent neural networks (RNN), and graph neural networks (GNN) showed that average k-fold accuracy was best for the CNN-based model, followed by the GNN-based and RNN-based models, while wall time was best for the RNN-based model, followed by the GNN-based and CNN-based models. The graph neural network, which performs well in both accuracy and time and scales well to network data, is the most suitable model for future brain research.

Predicting the number of disease occurrence using recurrent neural network (순환신경망을 이용한 질병발생건수 예측)

  • Lee, Seunghyeon;Yeo, In-Kwon
    • The Korean Journal of Applied Statistics / v.33 no.5 / pp.627-637 / 2020
  • In this paper, medical records of 1.24 million elderly patients (HIRA-APS-2014-0053) provided by the Health Insurance Review and Assessment Service and weather data are analyzed with a generalized estimating equation (GEE) model and a long short-term memory (LSTM)-based recurrent neural network (RNN) model to predict the number of disease occurrences. To this end, each patient's residence is estimated as the area of the medical institution visited, and local weather data are merged with the medical data. Disease status during a week is divided into three categories (occurrence of the disease of interest, occurrence of another disease, no occurrence). The category probabilities are estimated by the GEE model and the RNN model, and the number of cases in each category is predicted by summing these probabilities. The comparison shows that the predictions of the RNN model are more accurate than those of the GEE model.
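The step of turning the estimated category probabilities into predicted case counts is just a sum over patients. A small sketch with made-up probabilities (the three columns follow the three statuses above):

```python
import numpy as np

# Hypothetical per-patient weekly probabilities over the three statuses.
# Columns: [disease of interest, other disease, no occurrence]
probs = np.array([
    [0.10, 0.30, 0.60],
    [0.05, 0.25, 0.70],
    [0.20, 0.40, 0.40],
])
assert np.allclose(probs.sum(axis=1), 1.0)  # each row is a distribution

# Predicted number of cases per category = sum of per-patient probabilities
expected_counts = probs.sum(axis=0)
```

Whether the probabilities come from the GEE model or the RNN model, the aggregation step is identical, which is what makes the two predictions directly comparable.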

S2-Net: Machine reading comprehension with SRU-based self-matching networks

  • Park, Cheoneum;Lee, Changki;Hong, Lynn;Hwang, Yigyu;Yoo, Taejoon;Jang, Jaeyong;Hong, Yunki;Bae, Kyung-Hoon;Kim, Hyun-Ki
    • ETRI Journal / v.41 no.3 / pp.371-382 / 2019
  • Machine reading comprehension is the task of understanding a given context and finding the correct response within it. A simple recurrent unit (SRU) is a model that solves the vanishing-gradient problem of a recurrent neural network (RNN) using neural gates, as the gated recurrent unit (GRU) and long short-term memory (LSTM) do; moreover, it removes the previous hidden state from the gate inputs to improve speed over GRU and LSTM. A self-matching network, as used in R-Net, can have an effect similar to coreference resolution, because it can obtain context information of similar meaning by computing attention weights over its own RNN sequence. In this paper, we construct a dataset for Korean machine reading comprehension and propose an $S^2$-Net model that adds a self-matching layer to an encoder RNN built from multilayer SRUs. The experimental results show that the proposed $S^2$-Net model achieves 68.82% EM and 81.25% F1 as a single model and 70.81% EM and 82.48% F1 as an ensemble on the Korean machine reading comprehension test dataset, and 71.30% EM and 80.37% F1 (single) and 73.29% EM and 81.54% F1 (ensemble) on the SQuAD dev dataset.
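The SRU speed trick described above — gates that depend only on the current input, leaving just a cheap elementwise recurrence — can be sketched in NumPy. The single-layer setup and the weights are illustrative, and the highway connection assumes the input and hidden sizes match:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru_forward(xs, W, Wf, bf, Wr, br):
    """Minimal SRU layer: because the gates use only x_t (no h_{t-1}),
    all matrix multiplies are batched over time; only the elementwise
    recurrence on the cell state c is sequential."""
    d = W.shape[0]
    xt = xs @ W.T                 # candidate values, all steps at once
    f = sigmoid(xs @ Wf.T + bf)   # forget gates, all steps at once
    r = sigmoid(xs @ Wr.T + br)   # reset gates, all steps at once
    c = np.zeros(d)
    hs = []
    for t in range(len(xs)):
        c = f[t] * c + (1 - f[t]) * xt[t]          # cheap recurrence
        hs.append(r[t] * np.tanh(c) + (1 - r[t]) * xs[t])  # highway output
    return np.stack(hs)
```

In a GRU or LSTM the gate computations themselves depend on the previous hidden state, so every matrix multiply must wait for the previous time step; here only the elementwise line does.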

Analyzing Performance and Dynamics of Echo State Networks Given Various Structures of Hidden Neuron Connections (Echo State Network 모델의 은닉 뉴런 간 연결구조에 따른 성능과 동역학적 특성 분석)

  • Yoon, Sangwoong;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.21 no.4 / pp.338-342 / 2015
  • A recurrent neural network (RNN), a machine learning model that can handle time-series data, can have more varied structures than a feed-forward neural network because an RNN allows hidden-to-hidden connections. This research focuses on the network structure among hidden neurons and discusses the information-processing capability of RNNs. The time-series learning potential and dynamics of RNNs are investigated for several well-established network-structure models. The hidden-neuron network structure is found to have a significant impact on model performance, and the performance variations are generally correlated with the criticality of the network dynamics. In particular, the preferential-attachment network model showed interesting behavior. These findings provide clues for improving the performance of RNNs.
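Since an echo state network's behavior hinges on the hidden-to-hidden connection structure, a sketch of the common setup may help: build a random sparse reservoir, then rescale it to a target spectral radius, which is the usual knob for placing the dynamics near criticality. The density and sizes below are illustrative choices, not values from the paper:

```python
import numpy as np

def make_reservoir(n, density=0.1, spectral_radius=0.9, seed=0):
    """Random sparse reservoir rescaled to a target spectral radius."""
    rng = np.random.default_rng(seed)
    # random weights, kept with probability `density`
    W = rng.normal(size=(n, n)) * (rng.random((n, n)) < density)
    rho = max(abs(np.linalg.eigvals(W)))  # current spectral radius
    return W * (spectral_radius / rho)

def run_reservoir(W, W_in, inputs):
    """Drive the fixed reservoir with an input sequence; in an ESN,
    only a linear readout on these states is trained."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.stack(states)
```

Replacing the random mask with, say, a preferential-attachment adjacency matrix changes only the construction of `W`, which is exactly the kind of structural variation the study compares.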

Load Variation Compensated Neural Network Speed Controller for Induction Motor Drives

  • Oh, Won-Seok;Cho, Kyu-Min;Kim, Young-Tae;Kim, Hee-Jun
    • KIEE International Transaction on Electrical Machinery and Energy Conversion Systems / v.3B no.2 / pp.97-102 / 2003
  • In this paper, a recurrent artificial neural network (RNN)-based self-tuning speed controller is proposed for high-performance induction motor drives. The RNN provides a nonlinear model of the motor drive system and can supply the controller with information on load variation, system noise, and parameter variation of the induction motor through the on-line estimated weights of the corresponding RNN. Thus, the proposed self-tuning controller can change its gains according to system conditions; the gains are composed of the weights of the RNN. For on-line estimation of the RNN weights, an extended Kalman filter (EKF) algorithm is used, and a self-tuning controller adequate for the speed control of the induction motor is designed. The effectiveness of the proposed controller is verified through MATLAB simulations and compared with a conventional PI controller.

Identification of Finite Automata Using Recurrent Neural Networks

  • Won, Sung-Hwan;Park, Cheol-Hoon
    • Proceedings of the IEEK Conference / 2008.06a / pp.667-668 / 2008
  • This paper demonstrates that recurrent neural networks can be used successfully for the identification of finite automata (FAs). A new type of recurrent neural network (RNN) is proposed, and an offline training algorithm for it, a regulated Levenberg-Marquardt (LM) algorithm, is developed. Simulation results show that the identification and extraction of FAs are practically achievable.

Imputation of Missing SST Observation Data Using Multivariate Bidirectional RNN (다변수 Bidirectional RNN을 이용한 표층수온 결측 데이터 보간)

  • Shin, YongTak;Kim, Dong-Hoon;Kim, Hyeon-Jae;Lim, Chaewook;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers / v.34 no.4 / pp.109-118 / 2022
  • Missing sections of fixed-station sea surface temperature observation data were imputed using a bidirectional recurrent neural network (BiRNN). Among artificial intelligence techniques, recurrent neural networks (RNNs), commonly used for time-series data, estimate only in the direction of time flow (or only in the reverse direction) toward the missing position, so estimation performance is poor over long missing sections. In this study, by contrast, estimation performance is improved even for long-term gaps by estimating in both directions, from before and after the missing section. In addition, by using all available data around the observation point (sea surface temperature, air temperature, wind field, atmospheric pressure, humidity), imputation performance was further improved by exploiting these correlations. For performance verification, a statistical model, multivariate imputation by chained equations (MICE), a machine-learning-based random forest model, and an RNN model using long short-term memory (LSTM) were compared. For imputation of 7-day long-term gaps, the average accuracy of the BiRNN and statistical models is 70.8% and 61.2%, respectively, and the average error is 0.28 degrees and 0.44 degrees, respectively, so the BiRNN model outperforms the other models. By applying a temporal decay factor representing the missing pattern, the BiRNN technique is judged to have better imputation performance than existing methods as the missing section becomes longer.
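The bidirectional idea with a temporal decay factor can be illustrated with a toy blending rule: given a forward estimate and a backward estimate across a gap, weight each side by how far the point is from that side's last observation. This is a simplified stand-in for the paper's BiRNN, with an exponential decay chosen purely for illustration:

```python
import numpy as np

def blend_bidirectional(fwd, bwd, decay=0.3):
    """Blend forward and backward estimates across a gap of length L:
    each side's weight decays exponentially with distance from its
    observed edge (a toy version of a temporal decay factor)."""
    L = len(fwd)
    d_fwd = np.arange(1, L + 1)   # distance from the left (past) edge
    d_bwd = d_fwd[::-1]           # distance from the right (future) edge
    w_fwd = np.exp(-decay * d_fwd)
    w_bwd = np.exp(-decay * d_bwd)
    return (w_fwd * fwd + w_bwd * bwd) / (w_fwd + w_bwd)
```

Near the left edge the forward estimate dominates, near the right edge the backward estimate does, which is why a bidirectional scheme degrades much more gracefully on long gaps than a one-directional RNN.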