• Title/Summary/Keyword: Markov Decision Process

27 search results (processing time: 0.02 seconds)

A Study on the Emotional Text Generation using Generative Adversarial Network (Generative Adversarial Network 학습을 통한 감정 텍스트 생성에 관한 연구)

  • Kim, Woo-seong;Kim, Hyeoncheol
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.380-382 / 2019
  • A GAN (Generative Adversarial Network) is a machine-learning approach in which, over a fixed training dataset, a generator and a discriminator are trained together in an adversarial yet mutually productive relationship, each pushing the other to improve. Traditional sentence generation is trained using Markov decision processes based on the statistical distribution of words, and recurrent neural networks (RNNs); these methods have become the standard models for tasks over sequential data such as sentence generation. GANs are being tried in various ways as a new model in this field where standard models already exist, but despite these attempts, several problems remain unsolved. This paper focuses on two of them. First, GANs have difficulty processing sequential data. Second, because generation is random, the output does not necessarily match what the user wants. To address these problems, we design a conditional generative adversarial network that is given partial ground-truth answers. First, by introducing a sequence-to-sequence model, the network can process sequential data and generate raw text. Second, by providing partial answers, the generation conditions of a sentence are distinguished. As a result, the proposed techniques were able to generate raw emotional text.
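The abstract above contrasts GAN-based generation with the traditional statistical approach based on word distributions. As a point of reference, a minimal word-level Markov-chain sentence generator (a toy sketch, not the paper's model; the corpus and names are illustrative assumptions) looks like this:

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Count word-to-word transitions over a list of sentences."""
    chain = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for cur, nxt in zip(words, words[1:]):
            chain[cur].append(nxt)   # duplicates encode the empirical distribution
    return chain

def generate(chain, start, max_len=10, seed=0):
    """Sample a sentence by following the observed transition counts."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len and out[-1] in chain:
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

corpus = ["i am happy today", "i am sad today", "today i am happy"]
chain = build_chain(corpus)
print(generate(chain, "i"))
```

Such a chain can only reproduce locally plausible word sequences; the seq2seq GAN with partial answers described above is aimed precisely at the longer-range, condition-controlled structure this baseline cannot capture.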

Evaluating SR-Based Reinforcement Learning Algorithm Under the Highly Uncertain Decision Task (불확실성이 높은 의사결정 환경에서 SR 기반 강화학습 알고리즘의 성능 분석)

  • Kim, So Hyeon;Lee, Jee Hang
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.331-338 / 2022
  • Successor representation (SR) is a model of human reinforcement learning (RL) that mimics the mechanism by which hippocampal cells construct cognitive maps, and it uses the learned features to respond adaptively to frequent reward changes. In this paper, we evaluated the performance of SR in contexts where changes in the latent variables of an environment trigger changes in the reward structure. As a benchmark, we adopted SR-Dyna, an integration of SR into the goal-driven Dyna RL algorithm, in a 2-stage Markov Decision Task (MDT) in which the latent variables - state-transition uncertainty and goal condition - can be manipulated intentionally. To investigate the characteristics of SR precisely, we conducted the experiments while controlling each latent variable that affects the reward structure. Evaluation results showed that SR-Dyna could learn to respond to reward changes driven by the latent variables, but could not learn rapidly in that situation. This points to the need for more robust RL models that can quickly learn to respond to frequent changes in environments where the latent variables and the reward structure change at the same time.
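The reason SR adapts quickly to reward changes is that it caches expected discounted state occupancies rather than values, so values can be recomputed instantly when rewards move. A minimal TD-learning sketch of the SR matrix on a toy 3-state chain (a hand-rolled illustration, not the SR-Dyna algorithm evaluated in the paper; all constants are assumptions):

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.3, gamma=0.95):
    """One TD update of the successor representation matrix M.
    M[s, j] estimates the expected discounted future occupancy of state j
    when starting from state s."""
    onehot = np.zeros(M.shape[1])
    onehot[s] = 1.0
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    return M

n_states = 3
M = np.zeros((n_states, n_states))
# deterministic toy chain 0 -> 1 -> 2, with 2 treated as absorbing
for _ in range(500):
    M = sr_td_update(M, 0, 1)
    M = sr_td_update(M, 1, 2)
    M = sr_td_update(M, 2, 2)

R = np.array([0.0, 0.0, 1.0])   # reward only at state 2
V = M @ R                        # if R changes, V recombines with no relearning
print(V)
```

Changing `R` re-prices every state immediately via `M @ R`; what SR cannot absorb for free - and what the paper probes - is a change in the transition structure itself, which requires relearning `M`.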

A Contrast Enhancement Method using the Contrast Measure in the Laplacian Pyramid for Digital Mammogram (디지털 맘모그램을 위한 라플라시안 피라미드에서 대비 척도를 이용한 대비 향상 방법)

  • Jeon, Geum-Sang;Lee, Won-Chang;Kim, Sang-Hee
    • Journal of the Institute of Convergence Signal Processing / v.15 no.2 / pp.24-29 / 2014
  • Digital mammography is the most common technique for early detection of breast cancer, and many image enhancement methods have been developed to support early diagnosis and efficient treatment. This paper presents a multi-scale contrast enhancement method for digital mammograms based on the Laplacian pyramid. The proposed method decomposes the image with Gaussian and Laplacian pyramids, and the pyramid coefficients of the decomposed multi-resolution image are defined as frequency-limited local contrast measures: the ratio of high-frequency to low-frequency components. The decomposed pyramid coefficients are modified by the contrast measure to enhance contrast, and the final enhanced image is obtained by recomposing the pyramid from the modified coefficients. The proposed method is compared with existing methods and shown quantitatively to perform well in terms of the contrast measure.
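The core idea - define local contrast as a ratio of high- to low-frequency bands, amplify it, and recompose - can be sketched at a single scale (the paper's method is multi-scale over a full pyramid; `gain`, `sigma`, and the random test image below are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_contrast(img, gain=2.0, sigma=2.0, eps=1e-6):
    """Single-scale analogue of ratio-based Laplacian contrast enhancement:
    split the image into a low-pass band and a Laplacian (detail) band,
    form the local contrast measure as high-frequency / low-frequency,
    boost it, and recompose."""
    low = gaussian_filter(img, sigma)      # low-frequency component
    lap = img - low                        # Laplacian (high-frequency) band
    contrast = lap / (low + eps)           # frequency-limited local contrast
    return low * (1.0 + gain * contrast)   # recompose with boosted contrast

rng = np.random.default_rng(0)
img = rng.random((64, 64))                 # stand-in for a mammogram patch
out = enhance_contrast(img)
```

With `gain=1` the recomposition returns (approximately) the input; the pyramid version applies the same ratio-and-boost step per resolution level, which is what lets it strengthen faint small-scale structure without over-driving large-scale brightness.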

Long-Term Arrival Time Estimation Model Based on Service Time (버스의 정차시간을 고려한 장기 도착시간 예측 모델)

  • Park, Chul Young;Kim, Hong Geun;Shin, Chang Sun;Cho, Yong Yun;Park, Jang Woo
    • KIPS Transactions on Computer and Communication Systems / v.6 no.7 / pp.297-306 / 2017
  • Citizens want more accurate forecasts from the Bus Information System (BIS). However, most bus information systems use an average-based short-term prediction algorithm and therefore contain many errors, because they do not consider the effects of traffic flow, signal period, and halting time. In this paper, we try to improve the precision of forecasts, and thereby the convenience of citizens, by analyzing the factors that contribute to the error. Analysis of BIS data shows that temporal characteristics and geographical conditions are intertwined, and that their effects on halting time and passing speed differ. Therefore, halting time is modeled with a Generalized Additive Model using explanatory variables such as hour, GPS coordinates, and number of routes, and a Hidden Markov Model is used to build travel-time patterns that reflect the influence of traffic flow on each unit section. With these patterns, accurate real-time forecasting and long-term prediction of route travel time became possible. Finally, statistical tests between observed and predicted data show that the model is suitable for travel-time prediction. As a result, we can provide more precise forecasts to citizens, and long-term forecasting can play an important role in decision-making such as route scheduling.
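Whatever models produce the per-stop dwell times and per-section travel times (a GAM and an HMM in the paper), the long-term arrival estimate is their cumulative sum along the route. A minimal sketch of that accounting step (the route and times below are hypothetical):

```python
def predict_arrival(segment_times, dwell_times):
    """Long-term arrival-time estimate for each stop along a route.

    segment_times[i]: predicted travel time (s) from stop i to stop i+1
                      (patterned per unit section, e.g. via an HMM)
    dwell_times[i]:   predicted halting/service time (s) at stop i
                      (e.g. from a GAM on hour, GPS coordinates, route count)
    Returns cumulative seconds-from-now at which the bus reaches each stop.
    """
    arrivals, t = [], 0.0
    for seg, dwell in zip(segment_times, dwell_times):
        t += dwell + seg        # serve the current stop, then drive the segment
        arrivals.append(t)
    return arrivals

# hypothetical 3-segment route
print(predict_arrival([120, 90, 150], [20, 15, 25]))
# -> [140.0, 245.0, 420.0]
```

Because errors accumulate additively along the route, improving the dwell-time term (the paper's focus) pays off most on long-horizon predictions, exactly where average-based methods degrade.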

Bayesian Network-Based Analysis on Clinical Data of Infertility Patients (베이지안 망에 기초한 불임환자 임상데이터의 분석)

  • Jung, Yong-Gyu;Kim, In-Cheol
    • The KIPS Transactions:PartB / v.9B no.5 / pp.625-634 / 2002
  • In this paper, we conducted various experiments with Bayesian networks to analyze clinical data of infertility patients. Through these experiments, we tried to find inter-dependencies among the important factors that play a key role in clinical pregnancy, and to compare three kinds of Bayesian network classifiers (NBN, BAN, GBN) in terms of classification performance. The experiments showed that the most important features for clinical pregnancy (Clin) are indication (IND), stimulation, age of female partner (FA), number of ova (ICT), and use of Wallace (ETM), and revealed inter-dependencies among these features. We also confirmed that BAN and GBN, the more general Bayesian network classifiers that permit inter-dependencies among features, show higher performance than NBN. Comparing these Bayesian classifiers, which are based on probabilistic representation and reasoning, with classifiers such as decision trees and k-nearest neighbor methods, we found that the former perform better, owing to inherent characteristics of the clinical domain. Finally, we suggested a feature-reduction method in which all features are removed except those within the Markov blanket of the class node, and investigated experimentally whether such feature reduction can increase the performance of Bayesian classifiers.
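The feature-reduction idea rests on a standard graph property: given its Markov blanket (parents, children, and the children's other parents), the class node is conditionally independent of every other variable, so all other features can be dropped. A minimal sketch over a hypothetical toy structure (the edges below are illustrative, not the learned network from the paper):

```python
def markov_blanket(dag, node):
    """Markov blanket of `node` in a DAG given as {child: set(parents)}:
    its parents, its children, and the children's other parents (spouses)."""
    parents = set(dag.get(node, set()))
    children = {c for c, ps in dag.items() if node in ps}
    spouses = {p for c in children for p in dag[c]} - {node}
    return parents | children | spouses

# hypothetical structure around the class node 'Clin'
dag = {
    'Clin': {'FA', 'IND'},    # Clin's parents: FA, IND
    'ICT':  {'Clin', 'ETM'},  # ICT is a child of Clin; ETM is a spouse
}
print(markov_blanket(dag, 'Clin'))
```

Keeping only the blanket features preserves the class posterior exactly under the learned structure, which is why such pruning can speed up the classifier without (in principle) hurting accuracy.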

Reinforcement Learning for Minimizing Tardiness and Set-Up Change in Parallel Machine Scheduling Problems for Profile Shops in Shipyard (조선소 병렬 기계 공정에서의 납기 지연 및 셋업 변경 최소화를 위한 강화학습 기반의 생산라인 투입순서 결정)

  • So-Hyun Nam;Young-In Cho;Jong Hun Woo
    • Journal of the Society of Naval Architects of Korea / v.60 no.3 / pp.202-211 / 2023
  • The profile shops in shipyards produce the section steels required for block production of ships. Due to limitations of the shipyard's production capacity, a considerable amount of work is already outsourced, and the need to improve the productivity of the profile shops is growing because production volume is expected to increase with the recent boom in the shipbuilding industry. In this study, scheduling optimization was conducted for a parallel welding line of the profile process, with minimizing tardiness and the number of set-up changes as the objective functions. In particular, a dynamic scheduling method was applied to determine the job sequence while considering the variability of processing times. A Markov decision process model was proposed for the job-sequencing problem, considering the trade-off between the two objective functions, and deep reinforcement learning was used to learn the optimal scheduling policy. The developed algorithm was evaluated by comparing its performance with priority rules (SSPT, ATCS, MDD, COVERT) on test scenarios constructed from sampled data. As a result, the proposed scheduling algorithm outperformed the priority rules in terms of set-up ratio, tardiness, and makespan.
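The priority-rule baselines share one skeleton: sort jobs by some key, dispatch each to the earliest-free machine, and score tardiness, set-up changes, and makespan. A minimal sketch of that benchmark harness with two simple stand-in rules (SPT and EDD here; the SSPT/ATCS/MDD/COVERT rules in the paper use more elaborate keys, and the jobs below are hypothetical):

```python
def dispatch(jobs, n_machines, rule):
    """Greedy list scheduling on identical parallel machines.

    jobs: list of dicts with 'p' (processing time), 'due' (due date), and
          'family' (a set-up change is counted when consecutive jobs on a
          machine belong to different families).
    rule: key function ordering the jobs before dispatch.
    Returns (total_tardiness, setup_changes, makespan).
    """
    free_at = [0.0] * n_machines
    last_family = [None] * n_machines
    tardiness, setups = 0.0, 0
    for job in sorted(jobs, key=rule):
        m = min(range(n_machines), key=lambda i: free_at[i])  # earliest-free machine
        if last_family[m] is not None and last_family[m] != job['family']:
            setups += 1
        free_at[m] += job['p']
        last_family[m] = job['family']
        tardiness += max(0.0, free_at[m] - job['due'])
    return tardiness, setups, max(free_at)

jobs = [
    {'p': 4, 'due': 5,  'family': 'A'},
    {'p': 2, 'due': 4,  'family': 'B'},
    {'p': 3, 'due': 10, 'family': 'A'},
    {'p': 5, 'due': 9,  'family': 'B'},
]
spt = lambda j: j['p']     # shortest processing time first
edd = lambda j: j['due']   # earliest due date first
print(dispatch(jobs, 2, spt))
print(dispatch(jobs, 2, edd))
```

Even on this tiny instance the two rules trade off differently on tardiness and set-ups, which is exactly the tension the RL agent in the paper is trained to balance dynamically rather than with one fixed key.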

Task offloading scheme based on the DRL of Connected Home using MEC (MEC를 활용한 커넥티드 홈의 DRL 기반 태스크 오프로딩 기법)

  • Ducsun Lim;Kyu-Seek Sohn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.61-67 / 2023
  • The rise of 5G and the proliferation of smart devices have underscored the significance of multi-access edge computing (MEC). Amidst this trend, interest in effectively processing computation-intensive and latency-sensitive applications has increased. This study investigated a novel task offloading strategy in a probabilistic MEC environment to address these challenges. Initially, we considered the frequency of dynamic task requests and the unstable conditions of wireless channels to propose a method for minimizing vehicle power consumption and latency. Subsequently, our research delved into a deep reinforcement learning (DRL) based offloading technique, offering a way to achieve equilibrium between local computation and offloading transmission power. We analyzed the power consumption and queuing latency of vehicles using the deep deterministic policy gradient (DDPG) and deep Q-network (DQN) techniques. Finally, we derived and validated the optimal performance enhancement strategy in a vehicle-based MEC environment.
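Underneath the DRL policies, each offloading decision trades a time cost against an energy cost: executing locally costs CPU time and dynamic power, offloading costs transmission time and radio power. A hand-rolled cost model illustrating that trade-off (a static sketch standing in for the learned DDPG/DQN policies; every constant below is an illustrative assumption):

```python
def offload_decision(cycles, bits, f_local=1e9, kappa=1e-27,
                     rate=5e6, p_tx=0.5, w_time=1.0, w_energy=1.0):
    """Compare local execution against offloading for one task.

    Local:   latency = cycles / f_local,  energy = kappa * f_local**2 * cycles
    Offload: latency = bits / rate,       energy = p_tx * (bits / rate)
    Returns ('local' or 'offload', cost) under a weighted time+energy cost.
    """
    t_loc = cycles / f_local
    e_loc = kappa * f_local**2 * cycles
    t_off = bits / rate
    e_off = p_tx * t_off
    c_loc = w_time * t_loc + w_energy * e_loc
    c_off = w_time * t_off + w_energy * e_off
    return ('local', c_loc) if c_loc <= c_off else ('offload', c_off)

# compute-heavy task with a small upload vs. the same task with a large upload
print(offload_decision(cycles=2e8, bits=1e5))
print(offload_decision(cycles=2e8, bits=1e7))
```

A static rule like this breaks down once `rate` fluctuates and tasks queue up stochastically, which is the motivation for learning the decision with DRL instead.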