http://dx.doi.org/10.22156/CS4SMB.2021.11.01.012

Learning-Backoff based Wireless Channel Access for Tactical Airborne Networks  

Byun, JungHun (Department of Computer Science, Chungbuk National University)
Park, Sangjun (Department of Electrical Engineering, Korea Military Academy)
Yoon, Joonhyeok (Department of Electrical Engineering, Korea Military Academy)
Kim, Yongchul (Department of Electrical Engineering, Korea Military Academy)
Lee, Wonwoo (Department of Electrical Engineering, Korea Military Academy)
Jo, Ohyun (Department of Computer Science, Chungbuk National University)
Joo, Taehwan (Agency for Defense Development (ADD))
Publication Information
Journal of Convergence for Information Technology / v.11, no.1, 2021, pp. 12-19
Abstract
For strengthening national defense, the function of a tactical network is essential. Tactics and strategies in wartime situations are based on a large amount of information. Therefore, various reconnaissance devices and resources are used to collect this information, and they transmit it through tactical networks. In tactical networks that use a contention-based channel access scheme, high-speed nodes such as reconnaissance aircraft may suffer performance degradation due to unnecessary channel occupation. In this paper, we propose a learning-backoff method, which empirically learns the size of the contention window to determine channel access time. The proposed method shows that network throughput can be increased by up to 25% as the number of high-speed mobility nodes increases.
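As a rough illustration of the idea described in the abstract, the sketch below shows how a Q-learning agent could select a contention-window (CW) size from success/collision feedback. This is a minimal sketch, not the authors' implementation: the CW candidates, learning parameters, reward shaping, and the toy channel model are all assumptions made here for illustration.

```python
# Minimal, illustrative Q-learning backoff sketch (assumptions, not the paper's method).
# A node picks a contention-window size, observes success or collision,
# and updates a Q-table so that unnecessary channel occupation is reduced.
import random

CW_CHOICES = [16, 32, 64, 128, 256]     # candidate contention-window sizes (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration (assumed)

q_table = {cw: 0.0 for cw in CW_CHOICES}  # stateless Q-table: one value per CW action

def choose_cw() -> int:
    """Epsilon-greedy selection of the contention-window size."""
    if random.random() < EPSILON:
        return random.choice(CW_CHOICES)
    return max(q_table, key=q_table.get)

def update(cw: int, reward: float) -> None:
    """One-step Q-learning update for the chosen CW."""
    best_next = max(q_table.values())
    q_table[cw] += ALPHA * (reward + GAMMA * best_next - q_table[cw])

def simulate_attempt(cw: int, contenders: int) -> float:
    """Toy channel model: a larger CW lowers collision probability but adds delay."""
    backoff = random.randint(0, cw - 1)
    collided = any(random.randint(0, cw - 1) == backoff for _ in range(contenders))
    return -1.0 if collided else 1.0 - backoff / cw  # penalize collisions and long waits

if __name__ == "__main__":
    for _ in range(10_000):
        cw = choose_cw()
        update(cw, simulate_attempt(cw, contenders=8))
    print("learned CW preference:", max(q_table, key=q_table.get))
```

Under these assumptions the agent converges toward the CW size that balances collision avoidance against idle backoff time; in the paper's setting the same trade-off would be driven by high-speed airborne nodes occupying the channel unnecessarily.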
Keywords
Reinforcement learning; Q-learning; Tactical Airborne Networks; Learning-Backoff Communication; CSMA/CA;