Application of Artificial Neural Network to Predict Aerodynamic Coefficients of the Nose Section of the Missiles

A Study on Artificial Neural Network-Based Prediction of Missile Nose Aerodynamic Coefficients

  • Lee, Jeongyong (Interdisciplinary Program in Space Systems, Seoul National University)
  • Lee, Bok Jik (Department of Aerospace Engineering, Seoul National University)
  • Received : 2021.08.17
  • Accepted : 2021.10.25
  • Published : 2021.11.01

Abstract

The present study introduces an artificial neural network (ANN) that predicts missile aerodynamic coefficients for various missile nose shapes and flow conditions such as Mach number and angle of attack. A semi-empirical missile aerodynamics code is used to generate a dataset consisting of the geometric description of the missile nose section, the flow conditions, and the corresponding aerodynamic coefficients. Data normalization is performed during preprocessing to improve the performance of the ANN, and dropout is applied during training to prevent overfitting. To verify the performance of the ANN, aerodynamic coefficients are predicted for nose shapes and flow conditions not included in the training dataset. The results show that the ANN predictions closely match the aerodynamic coefficients produced by the semi-empirical code, and that the ANN can predict missile aerodynamic coefficients for untrained nose shapes and flow conditions.

This study presents an artificial neural network-based aerodynamic prediction method capable of predicting aerodynamic coefficients for various missile nose shapes and flow conditions. A training dataset consisting of missile nose shapes, flow conditions, and missile aerodynamic coefficients was constructed with Missile DATCOM. To improve the prediction performance of the network, data normalization was applied as a preprocessing step, and dropout was used during training to prevent overfitting. Aerodynamic coefficients were then predicted for missile nose shapes and flow conditions not used in training, and the predictions were compared with Missile DATCOM results to verify the performance of the network. The results confirm that the network constructed in this study can accurately produce missile aerodynamic coefficients for untrained nose shapes and flow conditions.
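The workflow summarized in the abstract, namely geometric and flow-condition inputs, normalization during preprocessing, a dropout-regularized network trained for coefficient regression, and prediction for unseen cases, can be sketched as below. This is only an illustrative sketch, not the authors' implementation: the choice of input features, layer widths, dropout rate, and the Keras framework are assumptions, and the random placeholder arrays merely stand in for the Missile DATCOM dataset.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data standing in for the Missile DATCOM dataset:
# inputs are assumed to be (Mach number, angle of attack [deg], nose fineness ratio),
# targets are two aerodynamic coefficients; the real dataset replaces these arrays.
rng = np.random.default_rng(0)
X = rng.uniform([0.5, 0.0, 2.0], [4.0, 20.0, 5.0], size=(1000, 3))
y = rng.normal(size=(1000, 2))

# Data normalization (the preprocessing step mentioned in the abstract):
# z-score each input feature with statistics computed from the training data.
x_mean, x_std = X.mean(axis=0), X.std(axis=0)
X_norm = (X - x_mean) / x_std

# Fully connected regression network with dropout to limit overfitting.
model = keras.Sequential([
    layers.Input(shape=(X.shape[1],)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.2),              # assumed dropout rate
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(y.shape[1]),         # linear outputs for coefficient regression
])
model.compile(optimizer="adam", loss="mse")

# Hold out part of the data to monitor overfitting during training.
model.fit(X_norm, y, epochs=50, batch_size=64, validation_split=0.2, verbose=0)

# Predict coefficients for an unseen flow condition / nose shape,
# normalized with the same training statistics.
x_new = np.array([[2.5, 6.0, 3.5]])
print(model.predict((x_new - x_mean) / x_std))

The held-out validation split here plays the role that the untrained nose shapes and flow conditions play in the paper: performance on data the network has never seen is what verifies that it generalizes rather than memorizes the training set.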

Acknowledgement

This work was supported by the Defense Acquisition Program Administration and the Agency for Defense Development as part of the Data-Driven Flow Modeling Specialized Research Laboratory program.
