• Title/Summary/Keyword: sequence learning

Motivation based Behavior Sequence Learning for an Autonomous Agent in Virtual Reality

  • Song, Wei;Cho, Kyung-Eun;Um, Ky-Hyun
    • Journal of Korea Multimedia Society / v.12 no.12 / pp.1819-1826 / 2009
  • To improve on existing prediction and planning algorithms that require predefined state-transition probabilities, this paper proposes a multiple-sequence generation system. When interacting with an unknown environment, a virtual agent needs to decide which action, or order of actions, leads to a good state, and to determine the transition probability from the current state and the action taken. We describe a sequential behavior generation method motivated by changes in the agent's state, which helps the virtual agent learn how to adapt to unknown environments. During sequence learning, the sensed states are grouped by a set of proposed motivation filters to reduce the learning computation over the large state space. To accomplish a goal with a high payoff, the learning agent makes decisions based on the observed state transitions. The proposed multiple-sequence behavior generation system increases the behavioral complexity and the degree of automatic planning of the virtual agent interacting with a dynamic, unknown environment. The model was tested in a virtual library to illustrate how the system works.
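
As a rough illustration of learning transition probabilities from observed transitions after a motivation-based grouping of states, the following Python sketch is hypothetical and not taken from the paper; `motivation_filter`, the state tuples, and the counting scheme are all assumptions.

```python
# Sketch (not from the paper): estimate state-transition probabilities from
# observed (state, action, next_state) triples, after a simple "motivation
# filter" groups raw sensed states to shrink the state space.
from collections import defaultdict

def motivation_filter(raw_state):
    """Hypothetical grouping: keep only the features assumed relevant to the
    agent's current motivation (here, the first two features)."""
    return raw_state[:2]

class TransitionModel:
    def __init__(self):
        # counts[(s, a)][s'] = number of times s' followed (s, a)
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state, action, next_state):
        s, s2 = motivation_filter(state), motivation_filter(next_state)
        self.counts[(s, action)][s2] += 1

    def probability(self, state, action, next_state):
        s, s2 = motivation_filter(state), motivation_filter(next_state)
        total = sum(self.counts[(s, action)].values())
        return self.counts[(s, action)][s2] / total if total else 0.0

model = TransitionModel()
model.observe(("door", "closed", "noise"), "open", ("door", "open", "quiet"))
print(model.probability(("door", "closed", "x"), "open", ("door", "open", "y")))
```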

An Improved Reinforcement Learning Technique for Mission Completion (임무수행을 위한 개선된 강화학습 방법)

  • 권우영;이상훈;서일홍
    • The Transactions of the Korean Institute of Electrical Engineers D / v.52 no.9 / pp.533-539 / 2003
  • Reinforcement learning (RL) has been widely used as the learning mechanism of artificial life systems. However, RL usually suffers from slow convergence to the optimal state-action sequence, or sequence of stimulus-response (SR) behaviors, and may not work correctly in non-Markov processes. In this paper, first, to cope with the slow-convergence problem, state-action pairs that act as disturbances to the optimal sequence are eliminated from long-term memory (LTM); such disturbances are found by a shortest path-finding algorithm. This process is shown to improve the learning speed. Second, to partly solve the non-Markov problem, a stimulus that is frequently encountered during the search process is classified as a sequential percept for a non-Markov hidden state, so that a correct behavior for the hidden state can be learned as in a Markov environment. Several simulation results are presented to show the validity of the proposed learning techniques.
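
The disturbance-elimination step can be pictured roughly as follows; this Python sketch is not the authors' implementation, and the graph/LTM structures, `shortest_path`, and `prune_ltm` are illustrative assumptions.

```python
# Sketch (assumed): prune "disturbance" state-action pairs from a long-term
# memory (LTM) of observed transitions. A BFS shortest path from start to goal
# is computed on the transition graph, and pairs whose source state is off
# that path are removed.
from collections import deque

def shortest_path(graph, start, goal):
    """BFS over graph[state] -> {action: next_state}."""
    queue, parents = deque([start]), {start: None}
    while queue:
        s = queue.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parents[s]
            return list(reversed(path))
        for nxt in graph.get(s, {}).values():
            if nxt not in parents:
                parents[nxt] = s
                queue.append(nxt)
    return []

def prune_ltm(ltm, graph, start, goal):
    on_path = set(shortest_path(graph, start, goal))
    return {(s, a): v for (s, a), v in ltm.items() if s in on_path}

graph = {"s0": {"go": "s1", "loop": "s0"}, "s1": {"go": "s2"}, "s2": {}}
ltm = {("s0", "go"): 0.9, ("s0", "loop"): 0.1, ("s1", "go"): 0.8, ("s3", "go"): 0.2}
print(prune_ltm(ltm, graph, "s0", "s2"))   # ("s3", "go") is dropped
```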

Two-Agent Scheduling with Sequence-Dependent Exponential Learning Effects Consideration (처리순서기반 지수함수 학습효과를 고려한 2-에이전트 스케줄링)

  • Choi, Jin Young
    • Journal of Korean Society of Industrial and Systems Engineering / v.36 no.4 / pp.130-137 / 2013
  • In this paper, we consider a two-agent scheduling problem with sequence-dependent exponential learning effects, where two agents A and B share a single machine for processing their jobs. The objective is to minimize the total completion time of agent A's jobs subject to a given upper bound on the objective of agent B, namely the makespan of agent B's jobs. Assuming that the learning ratios of all jobs are identical, we suggest an enumeration-based backward allocation scheduling method for finding an optimal solution and illustrate it with a small numerical example. This problem has various applications in production systems as well as in operations management.
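
The problem setting can be illustrated with a tiny brute-force search; this sketch is not the paper's enumeration-based backward allocation algorithm, and the exponential learning-effect model `p * exp(-alpha * elapsed)` is an assumption chosen only for illustration.

```python
# Sketch (assumed): one machine, agents A and B; minimise total completion
# time of A's jobs subject to an upper bound Q on the makespan of B's jobs.
# Assumed learning effect: actual time = p * exp(-alpha * normal time already processed).
import math
from itertools import permutations

def evaluate(seq, alpha):
    """seq is a list of (agent, normal_time); returns (agent, completion_time) pairs."""
    elapsed_normal, clock, completions = 0.0, 0.0, []
    for agent, p in seq:
        actual = p * math.exp(-alpha * elapsed_normal)
        clock += actual
        elapsed_normal += p
        completions.append((agent, clock))
    return completions

def best_schedule(jobs_a, jobs_b, bound_q, alpha=0.05):
    jobs = [("A", p) for p in jobs_a] + [("B", p) for p in jobs_b]
    best, best_seq = float("inf"), None
    for seq in permutations(jobs):
        comps = evaluate(seq, alpha)
        makespan_b = max((c for ag, c in comps if ag == "B"), default=0.0)
        total_a = sum(c for ag, c in comps if ag == "A")
        if makespan_b <= bound_q and total_a < best:
            best, best_seq = total_a, seq
    return best, best_seq

print(best_schedule(jobs_a=[3, 5], jobs_b=[2, 4], bound_q=10.0))
```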

Roman-to-Korean Conversion System for Korean Company Names Based on Sequence-to-sequence learning (Sequence-to-sequence 모델을 이용한 로마자-한글 상호(商號) 표기 변환 시스템)

  • Kim, Tae-Hyun;Jung, Hyun-Guen;Kim, Jae-Hwa;Kim, Jeong-Gil
    • Annual Conference on Human and Language Technology / 2017.10a / pp.67-70 / 2017
  • A trade name (商號) is the name a merchant or company uses to identify itself in business activities. Korean company names are commonly written in a mixture of Hangul and Roman characters, which causes word-mismatch problems in company-name search systems. To resolve this mismatch, this study proposes a system that uses a sequence-to-sequence model to convert a Romanized company name into the corresponding Hangul name and to generate candidate conversions. In experiments, the system achieved a word accuracy of 57.82% and a grapheme (jaso) accuracy of 90.73%.
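
A character-level encoder-decoder of the kind described might look roughly like the following PyTorch sketch; the paper publishes no code, so the architecture, vocabulary sizes, and dimensions here are illustrative assumptions.

```python
# Sketch (assumed): character-level encoder-decoder for Roman-to-Hangul
# company-name conversion, trained with teacher forcing.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))            # encode Roman chars
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)   # teacher forcing
        return self.out(dec_out)                                  # logits per Hangul unit

model = Seq2Seq(src_vocab=30, tgt_vocab=70)
src = torch.randint(0, 30, (8, 12))    # batch of Romanized names (token ids)
tgt = torch.randint(0, 70, (8, 10))    # corresponding Hangul token ids
logits = model(src, tgt)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 70), tgt.reshape(-1))
print(logits.shape, loss.item())
```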

A Study on Characteristics of Serious Game User through Implementation of Mobile Sequence Game (모바일 수열 게임 개발을 통한 기능성 게임 사용자의 특성에 관한 연구)

  • Hong, Min;Lee, Hwa-Min
    • The KIPS Transactions: Part A / v.19A no.3 / pp.155-160 / 2012
  • This paper presents a smartphone application based on number-sequence problems through which users can improve their learning ability. The application is implemented as a serious game, designed for educational purposes while providing game-like fun that can be enjoyed anytime and anywhere in spare moments. To examine the learning effects of the number-sequence application in today's ubiquitous environment, the proposed serious game, which contains various types of sequence questions, is implemented for both the iPhone and Android platforms. User characteristics and learning effects, derived from the game records of the application, are analyzed according to socio-demographic characteristics.
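
The kind of number-sequence item such a game might present can be pictured with the small generator below; it is purely hypothetical and not taken from the paper.

```python
# Sketch (assumed): generate a number-sequence quiz item of the kind a
# serious game could present, asking the player for the next term.
import random

def make_question(kind=None):
    kind = kind or random.choice(["arithmetic", "geometric"])
    start, step = random.randint(1, 9), random.randint(2, 5)
    if kind == "arithmetic":
        terms = [start + i * step for i in range(4)]
        answer = terms[-1] + step
    else:
        terms = [start * step ** i for i in range(4)]
        answer = terms[-1] * step
    return terms, answer

terms, answer = make_question()
print(f"What comes next? {terms} -> {answer}")
```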

Feature Selection with Ensemble Learning for Prostate Cancer Prediction from Gene Expression

  • Abass, Yusuf Aleshinloye;Adeshina, Steve A.
    • International Journal of Computer Science & Network Security / v.21 no.12spc / pp.526-538 / 2021
  • Machine learning and deep learning-based models are emerging techniques used to address prediction problems in biomedical data analysis. DNA sequence prediction is a critical problem that has attracted a great deal of attention in the biomedical domain, and machine and deep learning-based models have been shown to provide more accurate results than conventional regression-based models. Predicting the gene sequences that lead to cancerous diseases, such as prostate cancer, is crucial, and identifying the most important features in a gene sequence is a challenging task. Extracting the components of the gene sequence that give insight into the types of mutation in the gene is of great importance, as it will lead to effective drug design and promote the new concept of personalised medicine. In this work, we extracted the exons of the prostate gene sequences used in the experiment. We built a Deep Neural Network (DNN) and a Bi-directional Long Short-Term Memory (Bi-LSTM) model using k-mer encoding for the DNA sequence and one-hot encoding for the class label. The models were evaluated using different classification metrics. Our experimental results show that the DNN model offers a training accuracy of 99 percent and a validation accuracy of 96 percent, while the Bi-LSTM model achieves a training accuracy of 95 percent and a validation accuracy of 91 percent.
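
The Bi-LSTM branch with k-mer encoding might be set up roughly as in the following PyTorch sketch; it is not the authors' code, and `K`, the embedding and hidden sizes, and the use of the last time step for classification are assumptions.

```python
# Sketch (assumed): a DNA sequence is turned into overlapping k-mer ids,
# embedded, passed through a bidirectional LSTM, and classified into two
# classes (matching one-hot labels).
import torch
import torch.nn as nn
from itertools import product

K = 3
KMER_IDS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def kmer_encode(seq):
    return torch.tensor([KMER_IDS[seq[i:i + K]] for i in range(len(seq) - K + 1)])

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab=len(KMER_IDS), emb=32, hidden=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, kmer_ids):                 # (batch, seq_len)
        out, _ = self.lstm(self.emb(kmer_ids))
        return self.fc(out[:, -1, :])            # logits from the last time step

x = kmer_encode("ACGTACGTGGCA").unsqueeze(0)     # toy exon fragment
print(BiLSTMClassifier()(x).shape)               # torch.Size([1, 2])
```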

LSTM based sequence-to-sequence Model for Korean Automatic Word-spacing (LSTM 기반의 sequence-to-sequence 모델을 이용한 한글 자동 띄어쓰기)

  • Lee, Tae Seok;Kang, Seung Shik
    • Smart Media Journal / v.7 no.4 / pp.17-23 / 2018
  • We propose an LSTM-based RNN model that effectively performs automatic word spacing. For long or noisy sentences, which are known to be difficult for neural network learning, we defined appropriate input and decoding data formats and added dropout, bidirectional multi-layer LSTM, layer normalization, and an attention mechanism to improve performance. Although the Sejong corpus contains some spacing errors, the noise-robust model developed in this study, trained without overfitting thanks to dropout, learned meaningful Korean word-spacing patterns. The experimental results show that the LSTM sequence-to-sequence model achieves an F1-measure of 0.94, which is better than the rule-based GRU-CRF deep-learning method.
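
The input and decoding data formats for the spacing task can be pictured with a simple tagging scheme; this is an assumption about the format, not the paper's exact seq2seq encoding, and `encode_example`/`decode_tags` are hypothetical helpers.

```python
# Sketch (assumed): turn a correctly spaced Korean sentence into a
# (character sequence, space-tag sequence) training pair, and decode the
# predicted tags back into spaces.
def encode_example(spaced_sentence):
    chars, tags = [], []
    for word in spaced_sentence.split():
        for i, ch in enumerate(word):
            chars.append(ch)
            tags.append(1 if i == len(word) - 1 else 0)   # 1 = space follows
    return chars, tags

def decode_tags(chars, tags):
    out = []
    for ch, tag in zip(chars, tags):
        out.append(ch)
        if tag:
            out.append(" ")
    return "".join(out).strip()

chars, tags = encode_example("한글 자동 띄어쓰기")
print(chars, tags)
print(decode_tags(chars, tags))   # "한글 자동 띄어쓰기"
```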

Predictive Convolutional Networks for Learning Stream Data (스트림 데이터 학습을 위한 예측적 컨볼루션 신경망)

  • Heo, Min-Oh;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.22 no.11 / pp.614-618 / 2016
  • As information on the Internet and data from smart devices grow, the amount of stream data in the real world is also increasing. Stream data, which can be very large, require models and algorithms capable of online learning. In this paper, we propose a novel class of models, predictive convolutional neural networks, that can perform online learning. By stacking convolutional operations, detection and max-pooling on the time axis, higher layers of these models deal with longer patterns. As a preliminary check of the concept, we used a GPS data sequence gathered over two months as the observation sequence. After training the proposed model on it, we compared the original sequence with the sequence regenerated from the model's abstract representation. The results show that the models can encode long-range patterns and can regenerate the raw observation sequence with low error.
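
Stacking temporal convolution and max-pooling on the time axis, plus a decoder that regenerates the sequence, might look roughly like the following PyTorch sketch; it is not the authors' model, and the layer sizes, kernel widths, and the autoencoder-style decoder are assumptions.

```python
# Sketch (assumed): temporal convolution ("detection") and max-pooling on the
# time axis abstract a stream; a transposed-convolution decoder regenerates it.
import torch
import torch.nn as nn

class PredictiveConvSketch(nn.Module):
    def __init__(self, channels=2, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                      # higher layers see longer patterns
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(hidden, channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):                         # x: (batch, channels, time)
        return self.decoder(self.encoder(x))

stream = torch.randn(1, 2, 64)                    # e.g. a (lat, lon) GPS window
recon = PredictiveConvSketch()(stream)
print(recon.shape, nn.MSELoss()(recon, stream).item())
```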

Automatic Document Title Generation with RNN and Reinforcement Learning (RNN과 강화 학습을 이용한 자동 문서 제목 생성)

  • Cho, Sung-Min;Kim, Wooseng
    • Journal of Information Technology Applications and Management / v.27 no.1 / pp.49-58 / 2020
  • Recently, a large amount of textual data has been pouring out of the Internet, and technology to refine it is needed. Most of these data are long texts, often without a title. In this paper, we therefore propose a technique that combines an RNN sequence-to-sequence model with the REINFORCE algorithm to generate titles for long texts automatically. In addition, the TextRank algorithm is applied to extract a summary with minimal information loss, compensating for the weakness of the sequence-to-sequence model, which loses information when long texts are used as input. Experiments show that the proposed techniques outperform existing ones.
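
The REINFORCE part of such a setup can be sketched as a loss over a sampled title; this is not the paper's training code, and the reward value, baseline, and toy decoder outputs below are placeholders.

```python
# Sketch (assumed): REINFORCE on top of a seq2seq title generator. A title is
# sampled, scored against the reference by some reward, and the negative
# log-likelihood of the sampled tokens is scaled by (reward - baseline).
import torch

def reinforce_loss(step_logits, sampled_ids, reward, baseline=0.0):
    """step_logits: (seq_len, vocab) decoder outputs for the sampled sequence."""
    log_probs = torch.log_softmax(step_logits, dim=-1)
    chosen = log_probs.gather(1, sampled_ids.unsqueeze(1)).squeeze(1)
    return -(reward - baseline) * chosen.sum()

logits = torch.randn(6, 1000, requires_grad=True)    # toy decoder outputs
sampled = torch.randint(0, 1000, (6,))               # sampled title token ids
loss = reinforce_loss(logits, sampled, reward=0.42, baseline=0.30)
loss.backward()                                       # gradients flow to the decoder
print(loss.item())
```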