
Path selection algorithm for multi-path system based on deep Q learning


  • Received : 2020.10.30
  • Accepted : 2020.11.23
  • Published : 2021.01.31

Abstract

A multi-path system is a system that utilizes multiple networks simultaneously. Such a system is expected to enhance the communication speed, reliability, and security of a network. In this paper, we focus on path selection in a multi-path system. To select the optimal path, we propose a deep reinforcement learning algorithm whose reward is based on the round-trip time (RTT) of each network. Unlike a multi-armed bandit model, deep Q learning is applied so that rapidly changing network conditions can be taken into account. Because RTT measurements arrive only after a delay, we also propose a compensation algorithm for the delayed reward. Moreover, we implement a testbed learning server to evaluate the performance of the proposed algorithm. The learning server contains a distributed database and a TensorFlow module to operate the deep learning algorithm efficiently. Simulation results show that the proposed algorithm performs about 20% better than a scheme that selects the path with the lowest RTT.

A multi-path system transmits data by simultaneously using multiple networks such as wired, LTE, and satellite networks, and it has been proposed to improve the transmission speed, reliability, and security of communication networks. In this paper, we propose a reinforcement-learning-based path selection scheme for such a system, in which the latency of each network serves as the reward. Unlike conventional reinforcement learning models, the algorithm is designed with deep Q learning so that it can respond immediately to changing network conditions. Because reward information in a network environment becomes available only after a certain delay, we also propose a method to compensate for this delay. To evaluate performance, we developed a testbed learning server that includes a distributed database and a TensorFlow module. Simulation results confirm that, in terms of RTT reduction, the proposed algorithm performs about 20% better than a scheme that always selects the lowest-latency path.
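As a rough illustration of the approach described in the abstract, the sketch below shows deep-Q-learning-based path selection in which the reward is the negative RTT of the chosen path, with a simple queue standing in for the delayed-reward handling. This is not the authors' implementation: the class name PathSelector, the number of paths, the network architecture, and the hyperparameters (GAMMA, EPSILON, learning rate) are assumptions made purely for illustration.

```python
# Minimal sketch (NOT the paper's implementation) of deep-Q-learning-based
# path selection rewarded by negative RTT, with a queue for delayed rewards.
# All names and hyperparameters here are illustrative assumptions.
import collections
import random

import numpy as np
import tensorflow as tf

NUM_PATHS = 3          # e.g. wired / LTE / satellite (assumed)
STATE_DIM = NUM_PATHS  # state: most recent RTT observed on each path (assumed)
GAMMA = 0.9            # discount factor (assumed)
EPSILON = 0.1          # exploration rate (assumed)


def build_q_network():
    """Small fully connected Q-network: state -> one Q-value per path."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(STATE_DIM,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(NUM_PATHS),
    ])


class PathSelector:
    def __init__(self):
        self.q_net = build_q_network()
        self.optimizer = tf.keras.optimizers.Adam(1e-3)
        # RTT feedback arrives after a delay, so each (state, action) pair
        # is kept until its measurement comes back.
        self.pending = collections.deque()

    def select_path(self, state):
        """Epsilon-greedy choice of path based on current Q-values."""
        if random.random() < EPSILON:
            action = random.randrange(NUM_PATHS)
        else:
            q_values = self.q_net(state[None, :], training=False).numpy()[0]
            action = int(np.argmax(q_values))
        self.pending.append((state, action))
        return action

    def reward_arrived(self, rtt_ms, next_state):
        """Called once the delayed RTT measurement is reported."""
        state, action = self.pending.popleft()
        reward = -rtt_ms  # lower RTT -> higher reward
        self._train_step(state, action, reward, next_state)

    def _train_step(self, state, action, reward, next_state):
        # One-step Q-learning target: r + gamma * max_a' Q(s', a')
        target_q = reward + GAMMA * float(
            tf.reduce_max(self.q_net(next_state[None, :], training=False)))
        with tf.GradientTape() as tape:
            q = self.q_net(state[None, :], training=True)[0, action]
            loss = tf.square(target_q - q)
        grads = tape.gradient(loss, self.q_net.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.q_net.trainable_variables))
```

A controller could, for example, call select_path(np.array(latest_rtts, dtype=np.float32)) before forwarding traffic and reward_arrived(measured_rtt, np.array(new_rtts, dtype=np.float32)) once the delayed RTT report returns; the paper's delayed-reward compensation and learning-server architecture are more elaborate than this single queue.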

