• Title/Summary/Keyword: Agent Simulation (에이전트 시뮬레이션)


Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity (오프 폴리시 강화학습에서 몬테 칼로와 시간차 학습의 균형을 사용한 적은 샘플 복잡도)

  • Kim, Chayoung; Park, Seohee; Lee, Woosik
    • Journal of Internet Computing and Services / v.21 no.5 / pp.1-7 / 2020
  • Deep neural networks (DNN) used as function approximators in reinforcement learning (RL) can, in theory, yield strong results. In empirical benchmarks, temporal difference learning (TD) usually shows better results than Monte-Carlo learning (MC). However, some previous works show that MC is better than TD when the reward is very sparse or delayed, and other recent research indicates that MC prediction is superior to TD-based methods on complex control tasks in which the agent observes the environment only partially. Most of these environments can be regarded as 5-step or 20-step Q-learning, where the experiment proceeds without long roll-outs in order to alleviate performance degradation. In other words, for networks with noise, regardless of the controlled roll-outs, it is better to learn with MC, which is robust to noisy rewards, than with TD, or with something close to MC. These studies break with the assumption that TD is always better than MC, and suggest that combining MC and TD can outperform either one alone. Therefore, in this study, building on those results, we exploit a random balance mixing TD and MC in RL, without the complicated reward-based formulas used in earlier work. Comparing a DQN that uses the random MC/TD mixture with the well-known DQN that uses only TD-based learning, we demonstrate through experiments in OpenAI Gym that even well-performing TD learning benefits from the mixture of TD and MC.
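
The abstract does not give the exact random-balance rule, but the idea of mixing a one-step TD bootstrap target with a Monte-Carlo return can be shown in a minimal sketch; the function names, the uniform mixing weight, and the discount value below are assumptions for illustration, not the paper's implementation.

    import random

    GAMMA = 0.99  # assumed discount factor

    def monte_carlo_returns(rewards, gamma=GAMMA):
        # Discounted return G_t for every step of a finished episode (the MC target).
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.append(g)
        return list(reversed(returns))

    def random_balance_target(reward, next_q_max, mc_return, gamma=GAMMA):
        # Blend the one-step TD(0) bootstrap target with the Monte-Carlo return
        # using a uniformly random weight; the uniform draw stands in for the
        # paper's unspecified random-balance rule.
        td_target = reward + gamma * next_q_max
        beta = random.random()
        return beta * td_target + (1.0 - beta) * mc_return

In a DQN-style learner, next_q_max would come from the target network and mc_return from the stored episode, so the mixture needs no extra reward-shaping formula, which matches the abstract's claim of avoiding complicated reward-based formulas.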

Energy Efficient Distributed Intrusion Detection Architecture using mHEED on Sensor Networks (센서 네트워크에서 mHEED를 이용한 에너지 효율적인 분산 침입탐지 구조)

  • Kim, Mi-Hui; Kim, Ji-Sun; Chae, Ki-Joon
    • The KIPS Transactions: Part C / v.16C no.2 / pp.151-164 / 2009
  • The importance of sensor networks as a basis for realizing ubiquitous computing is increasingly highlighted, and security in particular is recognized as an important research issue because of their characteristics. Several efforts are underway to provide security services in sensor networks, but most are preventive approaches based on cryptography. However, sensor nodes are extremely vulnerable to capture or key compromise. To secure the network, it is critical to develop an Intrusion Detection System (IDS) that can survive malicious attacks from "insiders" who hold keying materials or full control of some nodes, taking the nodes' characteristics into consideration. In this paper, we design a distributed and adaptive IDS architecture for sensor networks that respects both energy efficiency and IDS efficiency. Using a modified HEED clustering algorithm, distributed IDS nodes (dIDS) are selected according to each node's residual energy and degree. The monitoring results of the dIDSs, together with detection codes, are then transferred to the dIDSs of the next round so that a consecutive and integrated IDS process is performed, and urgent reports are sent through high-priority messages. Through simulation we show the superiority of our architecture in terms of efficiency, overhead, and detection capability, compared with a recent adaptive IDS.
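
The abstract says dIDS nodes are chosen by a modified HEED clustering algorithm according to residual energy and degree, but does not spell out the rule. A rough HEED-style selection sketch follows; the names (Node, select_dids, c_prob, e_max) and the exact probability formula are assumptions for illustration only, not the paper's mHEED procedure.

    import random
    from dataclasses import dataclass

    @dataclass
    class Node:
        node_id: int
        residual_energy: float  # remaining energy (assumed normalised or in joules)
        degree: int             # number of one-hop neighbours

    def select_dids(nodes, e_max, c_prob=0.05, p_min=1e-4):
        # HEED-style probabilistic selection of distributed IDS nodes (dIDS):
        # each node volunteers with a probability proportional to its residual
        # energy; among the volunteers, higher degree is preferred as the
        # secondary criterion. Illustrative only, not the exact mHEED algorithm.
        candidates = []
        for n in nodes:
            ch_prob = max(c_prob * n.residual_energy / e_max, p_min)
            if random.random() < ch_prob:
                candidates.append(n)
        # Prefer well-connected nodes so monitoring covers more neighbours.
        return sorted(candidates, key=lambda n: n.degree, reverse=True)

Re-running such a selection every round rotates the monitoring burden toward nodes with more residual energy, which fits the abstract's stated goal of balancing energy efficiency against detection coverage.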