Acknowledgement
This work was supported by the Defense Acquisition Program Administration and the Agency for Defense Development as part of the Specialized Research Laboratory Program for Data-Driven Flow Modeling.
References
- Almeida, J. S., "Predictive Non-linear Modeling of Complex Data by Artificial Neural Networks," Current Opinion in Biotechnology, Vol. 13, No. 1, 2002, pp. 72~76. https://doi.org/10.1016/S0958-1669(02)00288-4
- Sekar, V., Jiang, Q., Shu, C. and Khoo, B. C., "Fast Flow Field Prediction over Airfoils Using Deep Learning Approach," Physics of Fluids, Vol. 31, No. 5, 2019, 057103. https://doi.org/10.1063/1.5094943
- Kang, T. Y., Park, K. K., Kim, J. H. and Ryoo, C. K., "Real-Time Estimation of Missile Debris Predicted Impact Point and Dispersion Using Deep Neural Network," Journal of The Korean Society for Aeronautical and Space Sciences, Vol. 49, No. 3, 2021, pp. 197~204. https://doi.org/10.5139/JKSAS.2021.49.3.197
- Penchalaiah, D., Kumar, G. N. and Ghosh, A. K., "Missile Drag Coefficients Segregation Using Artificial Neural Network," 6th Symposium on Applied Aerodynamics and Design of Aerospace Vehicles (SAROD), November 2013, pp. 21~23.
- Ritz, S. G., Hartfield, R. J., Dahlen, J. A., Burkhalter, J. E. and Woltosz, W. S., "Rapid Calculation of Missile Aerodynamic Coefficients Using Artificial Neural Networks," 2015 IEEE Aerospace Conference, March 2015, pp. 1~19.
- Blake, W. B., Missile DATCOM: User's Manual-1997 FORTRAN 90 Revision, Air Force Research Lab Wright-Patterson AFB OH Air Vehicles Directorate, Ohio, 1998.
- Wang, S. C., Interdisciplinary Computing in Java Programming, Springer, Boston, 2003, pp. 81~100.
- Van Dyke, M. D., "First- and Second-Order Theory of Supersonic Flow Past Bodies of Revolution," Journal of the Aeronautical Sciences, Vol. 18, No. 3, 1951, pp. 161~178. https://doi.org/10.2514/8.1896
- Moore, F. G., Armistead, M. A., Rowles, S. H. and DeJarnette, F. R., "Second-Order Shock-Expansion Theory Extended to Include Real Gas Effects," NAVSWC TR90-683, Naval Surface Warfare Center Dahlgren Div., Dahlgren, 1992.
- Yu, L., Wang, S. and Lai, K. K., "An Integrated Data Preparation Scheme for Neural Network Data Analysis," IEEE Transactions on Knowledge and Data Engineering, Vol. 18, No. 2, 2005, pp. 217~230.
- Feurer, M. and Hutter, F., Automated Machine Learning, Springer, Cham, 2019, pp. 3~33.
- Hawkins, D. M., "The Problem of Overfitting," Journal of Chemical Information and Computer Sciences, Vol. 44, No. 1, 2004, pp. 1~12. https://doi.org/10.1021/ci0342472
- Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," The Journal of Machine Learning Research, Vol. 15, No. 1, 2014, pp. 1929~1958.
- Klambauer, G., Unterthiner, T., Mayr, A. and Hochreiter, S., "Self-normalizing Neural Networks," Proceedings of the 31st International Conference on Neural Information Processing Systems, December 2017, pp. 972~981.
- Douglas, S. C. and Yu, J., "Why RELU Units Sometimes Die: Analysis of Single-unit Error Backpropagation in Neural Networks," 52nd Asilomar Conference on Signals, Systems, and Computers, October 2018, pp. 864~868.
- LeCun, Y. A., Bottou, L., Orr, G. B. and Müller, K. R., Neural Networks: Tricks of the Trade, Springer, Berlin, Heidelberg, 2012, pp. 9~48.
- Bock, S. and Weiss, M., "A Proof of Local Convergence for the Adam Optimizer," 2019 International Joint Conference on Neural Networks, July 2019, pp. 1~8.
- Ketkar, N., Deep Learning with Python: A Hands-on Introduction, Apress, Berkeley, 2017, pp. 97~111.