Acknowledgement
This work was supported by the Technological Innovation R&D Programs (S3264239 and S3154675) funded by the Ministry of SMEs and Startups (MSS, Korea).
References
- C. Kim, H. Cho, T. Yun, H. Shin and H. Park, "RFID-based Shortest Time Algorithm linetracer," J. of the Korea Institute of Electronic Communication Sciences, vol. 17, no. 6, Dec. 2022, pp. 1221-1228.
- S. Zhou, X. Liu, Y. Xu and J. Guo, "A Deep Q-network (DQN) Based Path Planning Method for Mobile Robots," 2018 IEEE International Conference on Information and Automation (ICIA), Wuyishan, China, 2018.
- Q. Liu, L. Shi, L. Sun, J. Li, M. Ding and F. Shu, "Path Planning for UAV-Mounted Mobile Edge Computing With Deep Reinforcement Learning," IEEE Transactions on Vehicular Technology, vol. 69, no. 5, May 2020, pp. 5723-5728. https://doi.org/10.1109/TVT.2020.2982508
- A. Villanueva and A. Fajardo, "Deep Reinforcement Learning with Noise Injection for UAV Path Planning," 2019 IEEE 6th International Conference on Engineering Technologies and Applied Sciences (ICETAS), Kuala Lumpur, Malaysia, 2019.
- T. Ribeiro, F. Goncalves, I. Garcia, G. Lopes and A. F. Ribeiro, "Q-Learning for Autonomous Mobile Robot Obstacle Avoidance," 2019 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Porto, Portugal, 2019.
- C. C. Chang, Y. H. Juan, C. L. Huang and H. J. Chen, "Scenario Analysis for Road Following Using JetBot," 2020 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE), Yunlin, Taiwan, 2020.
- S. Yong, H. Park, Y. You and I. Moon, "Q-Learning Policy and Reward Design for Efficient Path Selection," Journal of Advanced Navigation Technology, vol. 26, no. 2, Apr. 2022, pp. 72-77. https://doi.org/10.12673/JANT.2022.26.2.72
- L. Pinto, M. Andrychowicz, P. Welinder, W. Zaremba and P. Abbeel, "Asymmetric Actor Critic for Image-Based Robot Learning," arXiv:1710.06542v1, 2017.
- A. Singla, S. Padakandla and S. Bhatnagar, "Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV With Limited Environment Knowledge," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 1, Jan. 2021, pp. 107-118. https://doi.org/10.1109/TITS.2019.2954952
- Y. F. Chen, M. Liu, M. Everett and J. P. How, "Decentralized Non-communicating Multiagent Collision Avoidance with Deep Reinforcement Learning," 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 2017.
- B. Q. Huang, G. Y. Cao and M. Guo, "Reinforcement Learning Neural Network to the Problem of Autonomous Mobile Robot Obstacle Avoidance," 2005 International Conference on Machine Learning and Cybernetics (ICMLC), Guangzhou, China, 2005.
- D. Kim, S. Park and D. Kim, "The Classification Scheme of ADHD for children based on the CNN Model," J. of the Korea Institute of Electronic Communication Sciences, vol. 17, no. 5, Oct. 2022, pp. 809-814.