• Title/Summary/Keyword: Sequential neural networks


Psalm Text Generator Comparison Between English and Korean Using LSTM Blocks in a Recurrent Neural Network (순환 신경망에서 LSTM 블록을 사용한 영어와 한국어의 시편 생성기 비교)

  • Snowberger, Aaron Daniel;Lee, Choong Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.269-271 / 2022
  • In recent years, RNN networks with LSTM blocks have been used extensively in machine learning tasks that process sequential data. These networks have proven to be particularly good at sequential language processing tasks because they can predict the next most likely word in a given sequence more accurately than traditional neural networks. This study trained an RNN/LSTM neural network on three different translations of the 150 biblical Psalms, in both English and Korean. The resulting model was then fed an input word and a length number, from which it automatically generated a new Psalm of the desired length based on the patterns it recognized during training. The results of training the network on English text and on Korean text are compared and discussed.

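To make the word-plus-length interface concrete, here is a minimal word-level generator in the same RNN/LSTM spirit. It is a sketch only: the tiny corpus, layer sizes, and greedy decoding are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

corpus = "blessed is the man that walketh not in the counsel of the ungodly".split()
vocab = sorted(set(corpus))
word2idx = {w: i for i, w in enumerate(vocab)}

seq_len = 3  # words of context used to predict the next word
X = np.array([[word2idx[w] for w in corpus[i:i + seq_len]]
              for i in range(len(corpus) - seq_len)])
y = np.array([word2idx[corpus[i + seq_len]]
              for i in range(len(corpus) - seq_len)])

model = models.Sequential([
    layers.Embedding(len(vocab), 32),
    layers.LSTM(64),
    layers.Dense(len(vocab), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=100, verbose=0)

def generate(seed_words, length):
    # Greedily extend the seed, mirroring the paper's word + length interface.
    words = list(seed_words)
    for _ in range(length):
        x = np.array([[word2idx[w] for w in words[-seq_len:]]])
        words.append(vocab[int(model.predict(x, verbose=0).argmax())])
    return " ".join(words)

print(generate(["blessed", "is", "the"], 5))
```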

Emotion Detection Model based on Sequential Neural Networks in Smart Exhibition Environment (스마트 전시환경에서 순차적 인공신경망에 기반한 감정인식 모델)

  • Jung, Min Kyu;Choi, Il Young;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.109-126 / 2017
  • Across various kinds of intelligent services, many studies on emotion detection are in progress. In particular, in the exhibition field, studies on recognizing emotion at a given moment have been conducted to provide personalized experiences to the audience, even though facial expressions change as time passes. The aim of this paper is therefore to build a model that predicts the audience's emotion from the changes in facial expressions while watching an exhibit. The proposed model is based on both a sequential neural network and the Valence-Arousal model. To validate the usefulness of the proposed model, we performed an experiment comparing it with a standard neural-network-based model. The results confirmed that the proposed model, which considers the time sequence, had better prediction accuracy.
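
Since the abstract names the architecture only at a high level, the sketch below shows one plausible reading: an LSTM over per-frame facial features regressing the two Valence-Arousal scores. Every dimension and all data here are invented placeholders.

```python
import numpy as np
from tensorflow.keras import layers, models

T, D = 30, 68  # frames per clip and facial features per frame (assumed values)
X = np.random.rand(100, T, D).astype("float32")           # stand-in feature sequences
y = np.random.uniform(-1, 1, (100, 2)).astype("float32")  # (valence, arousal) targets

model = models.Sequential([
    layers.Input(shape=(T, D)),
    layers.LSTM(32),                     # sequential model over the frames
    layers.Dense(2, activation="tanh"),  # valence and arousal in [-1, 1]
])
model.compile(loss="mse", optimizer="adam")
model.fit(X, y, epochs=5, verbose=0)
```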

Analysis of Statistical Neurodynamics for the Effects of the Hysteretic Property on the Performance of Sequential Associative Neural Nets (히스테리시스 특성이 계열연상에 미치는 영향에 대한 통계 신경역학적 해석)

  • Kim, Eung-Su;O, Chun-Seok
    • The Transactions of the Korea Information Processing Society / v.4 no.4 / pp.1035-1045 / 1997
  • It is important to understand how we can deal with elements for the modeling of neural networks when we are investigating their dynamical performance and information processing capabilities. The information processing capabilities of model neural networks change with different responses, synaptic weights, or learning rules. Using the statistical neurodynamics method, we evaluate the capabilities of neural networks in order to understand the basic concept of parallel distributed processing. In this paper, we explain the results of a theoretical analysis of the effects of the hysteretic property on the performance of sequential associative neural networks.

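As a concrete illustration of the kind of model analyzed above, here is a toy sequential associative memory with a hysteresis (self-feedback) term; the update rule and the coefficient h are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                            # neurons and patterns in the stored cycle
xi = rng.choice([-1, 1], size=(P, N))    # random bipolar pattern sequence

# Asymmetric Hebbian weights storing the cycle xi[0] -> xi[1] -> ... -> xi[0]
W = sum(np.outer(xi[(mu + 1) % P], xi[mu]) for mu in range(P)) / N

h = 0.3                                  # hysteresis (self-coupling) strength, assumed
s = xi[0].copy()
for t in range(P + 1):
    overlap = s @ xi[t % P] / N          # closeness of the state to pattern t
    print(f"step {t}: overlap with pattern {t % P} = {overlap:+.2f}")
    s = np.where(W @ s + h * s >= 0, 1, -1)  # hysteretic synchronous update
```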

Generalized Binary Second-order Recurrent Neural Networks Equivalent to Regular Grammars (정규문법과 동등한 일반화된 이진 이차 재귀 신경망)

  • Jung Soon-Ho
    • Journal of Intelligence and Information Systems / v.12 no.1 / pp.107-123 / 2006
  • We propose Generalized Binary Second-order Recurrent Neural Networks (GBSRNN), which are equivalent to regular grammars, and show the implementation of a lexical analyzer recognizing regular languages by using them. All equivalent representations of regular grammars can be implemented in circuits by using GBSRNN, since it has binary-valued components and preserves the structural relationships of a regular grammar. For a regular grammar with the number of symbols m, the number of terminals p, the number of nonterminals q, and the length of the input string k, the size of the corresponding GBSRNN is $O(m(p+q)^2)$, its parallel processing time is $O(k)$, and its sequential processing time is $O(k(p+q)^2)$.

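The construction behind such networks can be seen in miniature below: binary second-order weights encode a DFA's transition table (here, for strings with an even number of 1s), so one multiplicative step per symbol simulates the automaton. This is the standard encoding, not necessarily the paper's exact GBSRNN layout.

```python
import numpy as np

n_states, n_symbols = 2, 2   # DFA: q0 = even number of 1s, q1 = odd; alphabet {0, 1}
W = np.zeros((n_states, n_states, n_symbols))
# A transition delta(q, a) = q' becomes the binary weight W[q', q, a] = 1.
W[0, 0, 0] = W[1, 0, 1] = 1  # from q0: '0' stays in q0, '1' moves to q1
W[1, 1, 0] = W[0, 1, 1] = 1  # from q1: '0' stays in q1, '1' moves back to q0

def accepts(string):
    s = np.eye(n_states)[0]              # one-hot start state q0
    for ch in string:
        x = np.eye(n_symbols)[int(ch)]   # one-hot input symbol
        # Second-order step: next state j gets sum_{i,k} W[j,i,k] * s[i] * x[k]
        s = (np.einsum("jik,i,k->j", W, s, x) > 0.5).astype(float)
    return bool(s[0])                    # accept iff the run ends in q0

print(accepts("110"), accepts("1101"))   # True False
```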

Understanding recurrent neural network for texts using English-Korean corpora

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods / v.27 no.3 / pp.313-326 / 2020
  • Deep learning is the most important key to the development of Artificial Intelligence (AI). There are several distinguishable architectures of neural networks, such as MLP, CNN, and RNN. Among them, we try to understand one of the main architectures, the Recurrent Neural Network (RNN), which differs from other networks in handling sequential data, including time series and texts. As one of the main recent tasks in Natural Language Processing (NLP), we consider Neural Machine Translation (NMT) using RNNs. We also summarize the fundamental structures of recurrent networks and some topics in representing natural words as reasonable numeric vectors. We organize these topics to explain the estimation procedure, from representing input source sequences to predicting target translated sequences. In addition, we apply multiple translation models with Gated Recurrent Units (GRUs) in Keras to English-Korean sentences containing about 26,000 pairwise sequences in total from two different corpora, colloquial and news. We verified some crucial factors that influence the quality of training. We found that loss decreases with more recurrent dimensions, and that using a bidirectional RNN in the encoder helps when dealing with short sequences. We also computed BLEU scores, the main measure of translation performance, and compared them with the scores from Google Translate on the same test sentences. We sum up some difficulties in training a proper translation model as well as in dealing with the Korean language. Using Keras in Python for the overall task, from processing raw texts to evaluating the translation model, also allows us to include some useful functions and vocabulary libraries.
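
A minimal Keras encoder-decoder with GRUs, in the spirit of the models described above, is sketched below; the vocabulary sizes, dimensions, and random toy batch are placeholder assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

src_vocab, tgt_vocab, dim = 50, 60, 64       # assumed sizes

enc_in = layers.Input(shape=(None,))
enc_emb = layers.Embedding(src_vocab, dim)(enc_in)
_, enc_state = layers.GRU(dim, return_state=True)(enc_emb)   # context vector

dec_in = layers.Input(shape=(None,))
dec_emb = layers.Embedding(tgt_vocab, dim)(dec_in)
dec_seq = layers.GRU(dim, return_sequences=True)(dec_emb, initial_state=enc_state)
probs = layers.Dense(tgt_vocab, activation="softmax")(dec_seq)

model = models.Model([enc_in, dec_in], probs)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

# Teacher forcing: the decoder sees the target shifted right by one position.
src = np.random.randint(1, src_vocab, (32, 10))
tgt = np.random.randint(1, tgt_vocab, (32, 12))
model.fit([src, tgt[:, :-1]], tgt[:, 1:], epochs=1, verbose=0)
```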

A Discourse-based Compositional Approach to Overcome Drawbacks of Sequence-based Composition in Text Modeling via Neural Networks (신경망 기반 텍스트 모델링에 있어 순차적 결합 방법의 한계점과 이를 극복하기 위한 담화 기반의 결합 방법)

  • Lee, Kangwook;Han, Sanggyu;Myaeng, Sung-Hyon
    • KIISE Transactions on Computing Practices / v.23 no.12 / pp.698-702 / 2017
  • Since the introduction of deep neural networks to the Natural Language Processing field, two major approaches have been considered for modeling text. One is to learn embeddings, i.e., distributed representations containing the abstract semantics of words or sentences, from textual context. The other is to compose the embeddings trained in this way to obtain embeddings of longer texts. However, most studies of composition methods simply adopt word embeddings, without considering the optimal embedding unit or the optimal method of composition. In this paper, we conducted experiments to analyze the optimal embedding unit and the optimal composition method for modeling longer texts, such as documents. In addition, we suggest a new discourse-based composition to overcome the limitations of sequential composition when composing sentence embeddings.
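
To make the contrast concrete, the toy sketch below composes sentence embeddings sequentially versus within assumed discourse segments; the grouping and the running-average stand-in for an RNN are invented for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
sent_emb = rng.normal(size=(6, 16))      # six sentence embeddings of dimension 16

# Sequence-based composition: an order-sensitive running average (a toy
# stand-in for an RNN), where early sentences fade as the document grows.
doc_seq = sent_emb[0]
for v in sent_emb[1:]:
    doc_seq = 0.5 * doc_seq + 0.5 * v

# Discourse-based composition: pool within each discourse segment first,
# then combine segment vectors, so each discourse unit keeps equal weight.
segments = [[0, 1], [2, 3, 4], [5]]      # assumed discourse segmentation
doc_disc = np.mean([sent_emb[idx].mean(axis=0) for idx in segments], axis=0)
```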

Inference of Context-Free Grammars using Binary Third-order Recurrent Neural Networks with Genetic Algorithm (이진 삼차 재귀 신경망과 유전자 알고리즘을 이용한 문맥-자유 문법의 추론)

  • Jung, Soon-Ho
    • Journal of the Korea Society of Computer and Information / v.17 no.3 / pp.11-25 / 2012
  • We present a method to infer Context-Free Grammars by applying a genetic algorithm to Binary Third-order Recurrent Neural Networks (BTRNN). BTRNN is a multiple-layered architecture of recurrent neural networks, each corresponding to an input symbol, combined with an external stack. All parameters of the BTRNN are represented as binary numbers, and each state transition is performed simultaneously with a stack operation. We apply the genetic algorithm to BTRNN chromosomes and obtain the optimal BTRNN inferring the context-free grammar of positive and negative input patterns. The proposed method infers a BTRNN whose number of states is equal to or less than that of existing discrete recurrent neural network methods, using fewer examples and fewer learning trials. BTRNN is also superior to the recent method of chromosomes representing grammars in recognition time complexity, because it performs deterministic state transitions and stack operations during parsing. If the number of nonterminals is p, the number of terminals q, the length of an input string k, and the maximum number of BTRNN states m, the parallel processing time is O(k) and the sequential processing time is O(km).
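
The genetic-algorithm side of this approach can be illustrated with the toy loop below, which evolves binary chromosomes scored on positive and negative strings. For brevity it encodes a two-state finite recognizer rather than a full BTRNN with an external stack; the operators are standard GA machinery and everything else is an assumption.

```python
import random

random.seed(0)
POS = ["", "11", "1010", "0110"]   # strings with an even number of 1s (target)
NEG = ["1", "10", "111", "0100"]   # strings with an odd number of 1s

def accepts(chrom, s):
    # chrom[2*q + a] is the next state from state q on input symbol a.
    q = 0
    for ch in s:
        q = chrom[2 * q + int(ch)]
    return q == 0                  # accept iff the run ends in state 0

def fitness(chrom):
    return (sum(accepts(chrom, s) for s in POS)
            + sum(not accepts(chrom, s) for s in NEG))

pop = [[random.randint(0, 1) for _ in range(4)] for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]             # truncation selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randint(1, 3)
        child = a[:cut] + b[cut:]  # one-point crossover
        if random.random() < 0.2:  # mutation: flip one transition bit
            i = random.randrange(4)
            child[i] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best), "/", len(POS) + len(NEG))
```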

Comparing the Performance of Artificial Neural Networks and Long Short-Term Memory Networks for Rainfall-runoff Analysis (인공신경망과 장단기메모리 모형의 유출량 모의 성능 분석)

  • Kim, JiHye;Kang, Moon Seong;Kim, Seok Hyeon
    • Proceedings of the Korea Water Resources Association Conference / 2019.05a / pp.320-320 / 2019
  • Accurately analyzing a watershed's hydrological data is an important element in operating hydraulic structures efficiently. The Artificial Neural Network (ANN) model can interpret nonlinear relationships between input and output data and has been applied in various ways in the hydrological field, including rainfall-runoff analysis. Subsequently, the Recurrent Neural Network (RNN) model, which improves the conventional ANN to better suit the analysis of sequential data, and the Long Short-Term Memory network (LSTM), which addresses the RNN's "long-term dependency problem", were proposed in turn. LSTM is one of the recently prominent deep learning techniques and is expected to perform well in analyzing time-series data such as hydrological records, so an assessment of its applicability in the hydrological field is required. In this study, we simulated runoff with an ANN model and an LSTM model to compare the performance of the two models and to examine the future applicability of LSTM. The models' input and output data were constructed from the water-level records of the Naju gauging station and the rainfall records of an adjacent weather station, and hourly runoff was simulated for rainfall events. The results showed that both models simulated runoff one hour ahead very well, but as the lead time grew longer, the accuracy of LSTM was maintained while the accuracy of the ANN model gradually declined. If future studies additionally consider the inflows and outflows of the various hydraulic structures within the watershed, the applicability of the LSTM model can be extended further.

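A compact version of the comparison described above might look like the following, with a synthetic rainfall series standing in for the Naju records; the data generator, window length, and layer sizes are all assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 1.0, 500)                    # synthetic hourly rainfall
runoff = np.convolve(rain, [0.5, 0.3, 0.2])[:500]  # delayed watershed response

T = 24                                             # hours of rainfall per input window
X = np.array([rain[i:i + T] for i in range(500 - T)])
y = runoff[T:]

ann = models.Sequential([layers.Input(shape=(T,)),
                         layers.Dense(32, activation="relu"),
                         layers.Dense(1)])
lstm = models.Sequential([layers.Input(shape=(T, 1)),
                          layers.LSTM(32),
                          layers.Dense(1)])
for name, net, inputs in [("ANN", ann, X), ("LSTM", lstm, X[..., None])]:
    net.compile(loss="mse", optimizer="adam")
    history = net.fit(inputs, y, epochs=5, verbose=0)
    print(name, "final training MSE:", history.history["loss"][-1])
```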

Anomaly Detection Performance Analysis of Neural Networks using Soundex Algorithm and N-gram Techniques based on System Calls (시스템 호출 기반의 사운덱스 알고리즘을 이용한 신경망과 N-gram 기법에 대한 이상 탐지 성능 분석)

  • Park, Bong-Goo
    • Journal of Internet Computing and Services / v.6 no.5 / pp.45-56 / 2005
  • The weak foundation of the computing environment has made information leakage and hacking uncontrollable. Therefore, dynamic control of security threats and real-time reaction to identical or similar types of accidents after an intrusion are considered important. As one of the solutions to this problem, studies on intrusion detection systems are actively being conducted. To improve anomaly IDS based on system calls, this study focuses on neural network learning using the Soundex algorithm, which is designed to turn feature selection and variable-length data into a fixed-length learning pattern. That is, by changing variable-length sequential system call data into a fixed-length behavior pattern using the Soundex algorithm, this study conducted neural network learning with a backpropagation algorithm. The backpropagation neural network technique is applied to anomaly detection of system calls using the Sendmail data of UNM to demonstrate its performance.

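The core idea, reducing a variable-length system-call sequence to a fixed-length pattern before feeding a backpropagation network, can be sketched as below; the grouping table, pattern length, and tiny data set are illustrative assumptions, not the paper's encoding.

```python
import numpy as np
from tensorflow.keras import layers, models

GROUP = {"open": 1, "close": 1, "read": 2, "write": 2,
         "fork": 3, "exec": 3, "chmod": 4, "chown": 4}   # assumed call groups
PAT_LEN = 8

def encode(calls):
    # Collapse adjacent repeats (as Soundex does) and pad/truncate to PAT_LEN.
    codes = []
    for c in calls:
        g = GROUP.get(c, 0)
        if not codes or codes[-1] != g:
            codes.append(g)
    codes = (codes + [0] * PAT_LEN)[:PAT_LEN]
    return np.array(codes, dtype="float32") / max(GROUP.values())

normal = [["open", "read", "read", "close"], ["open", "write", "close"]]
attack = [["fork", "exec", "chmod"], ["exec", "chown", "chown", "exec"]]
X = np.stack([encode(s) for s in normal + attack])
y = np.array([0, 0, 1, 1], dtype="float32")              # 1 marks anomalous traces

mlp = models.Sequential([layers.Input(shape=(PAT_LEN,)),
                         layers.Dense(16, activation="relu"),
                         layers.Dense(1, activation="sigmoid")])
mlp.compile(loss="binary_crossentropy", optimizer="adam")
mlp.fit(X, y, epochs=50, verbose=0)
```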

Large-Scale Text Classification with Deep Neural Networks (깊은 신경망 기반 대용량 텍스트 데이터 분류 기술)

  • Jo, Hwiyeol;Kim, Jin-Hwa;Kim, Kyung-Min;Chang, Jeong-Ho;Eom, Jae-Hong;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.5 / pp.322-327 / 2017
  • The classification problem in the field of Natural Language Processing has been studied for a long time. Continuing our previous research, which classified large-scale text using Convolutional Neural Networks (CNN), we implemented Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). The experimental results revealed that the classification algorithms ranked, in increasing order of performance, Multinomial Naïve Bayes Classifier < Support Vector Machine (SVM) < LSTM < CNN < GRU. The result can be interpreted as follows. First, CNN performed better than LSTM, so the text classification problem may be related more to feature extraction than to natural language understanding. Second, judging from the results, GRU extracted features better than LSTM. Finally, the result that GRU outperformed CNN implies that text classification algorithms should consider both feature extraction and sequential information. We present the results of fine-tuning deep neural networks to provide future researchers with some intuition regarding natural language processing.
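
A minimal version of the best-performing configuration, a GRU classifier over token sequences, is sketched below; the vocabulary size, dimensions, and random batch are placeholders.

```python
import numpy as np
from tensorflow.keras import layers, models

vocab, maxlen, n_classes = 1000, 50, 4            # assumed sizes
X = np.random.randint(1, vocab, (256, maxlen))    # stand-in token id sequences
y = np.random.randint(0, n_classes, 256)

clf = models.Sequential([
    layers.Embedding(vocab, 64),
    layers.GRU(64),                               # sequential feature extractor
    layers.Dense(n_classes, activation="softmax"),
])
clf.compile(loss="sparse_categorical_crossentropy", optimizer="adam",
            metrics=["accuracy"])
clf.fit(X, y, epochs=2, verbose=0)
```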