References
- Core Technologies and Trends of Indoor Autonomous Flight Drone, https://www.kari.re.kr/cop/bbs/BBSMSTR_000000000063/selectBoardArticle.do?nttId=5490&kind=&mno=sitemap_02&pageIndex=1&searchCnd=0&searchWrd=, (accessed Jan. 4, 2018).
- H. Ha and B.-Y. Hwang, "Machine Learning Model of Gyro Sensor Data for Drone Flight Control," Journal of Korea Multimedia Society, Vol. 20, No. 6, pp. 927-934, 2017. https://doi.org/10.9717/KMMS.2017.20.6.927
- A. Giusti, J. Guzzi, D.C. Ciresan, F.L. He, J.P. Rodriguez, J. Schmidhuber, et al., “A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots,” Institute of Electrical and Electronics Engineers Robotics and Automation Letters, Vol. 1, No. 2, pp. 661-667, 2016.
- S. Ross, N. Melik-Barkhudarov, K.S. Shankar, A. Wendel, D. Dey, J.A. Bagnell, et al., "Learning Monocular Reactive UAV Control in Cluttered Natural Environments," Proceedings of the Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation, pp. 1765-1772, 2013.
- J. Hwangbo, I. Sa, R. Siegwart, and M. Hutter, “Control of a Quadrotor with Reinforcement Learning,” Institute of Electrical and Electronics Engineers Robotics and Automation Letters, Vol. 2, No. 4, pp. 2096-2103, 2017.
- W.W. Lee, H.R. Yang, G.W. Kim, Y.M. Lee, and U.R. Lee, Reinforcement Learning with Python and Keras, Wikibooks, Paju-si, Gyeonggi-do, 2017.
- R.J. Williams, “Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning,” Machine Learning, Vol. 8, No. 3-4, pp. 229-256, 1992.
- T. Nakashima, M.C. Chang, and S.K. Hong, "Design and Performance of a Complementary Filter for Inverted Pendulum Control with Inertial Sensors," Proceedings of the Korean Institute of Electrical Engineers Conference, pp. 544-546, 2004.
- J.H. Jang, E.T. Jeung, and S.H. Kwon, "A Study on Control for the Two-Rotor System Using Inertial Sensors," Journal of Institute of Control Robotics and Systems, Vol. 19, No. 3, pp. 190-194, 2013. https://doi.org/10.5302/J.ICROS.2013.12.1811
- P. Roan, N. Deshpande, Y. Wang, and B. Pitzer, "Manipulator State Estimation with Low Cost Accelerometers and Gyroscopes," Proceedings of the Institute of Electrical and Electronics Engineers/Robotics Society of Japan International Conference on Intelligent Robots and Systems, pp. 4822-4827, 2012.
- R.S. Sutton and A.G. Barto, Reinforcement Learning: An Introduction, MIT Press, Cambridge, 1998.
- R.S. Sutton, D. McAllester, S. Singh, and Y. Mansour, "Policy Gradient Methods for Reinforcement Learning with Function Approximation," Advances in Neural Information Processing Systems, pp. 1057-1063, 2000.
- J. Peters and S. Schaal, "Policy Gradient Methods for Robotics," Proceedings of the Institute of Electrical and Electronics Engineers/Robotics Society of Japan International Conference on Intelligent Robots and Systems, pp. 2219-2225, 2006.
- N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," Journal of Machine Learning Research, Vol. 15, No. 1, pp. 1929-1958, 2014.