Reproduction of wind speed time series in a two-dimensional numerical multiple-fan wind tunnel using deep reinforcement learning

  • Qingshan Yang (Key Laboratory of New Technology for Construction of Cities in Mountain Area (Chongqing University), Ministry of Education) ;
  • Zhenzhi Luo (School of Civil Engineering, Chongqing University) ;
  • Ke Li (Key Laboratory of New Technology for Construction of Cities in Mountain Area (Chongqing University), Ministry of Education) ;
  • Teng Wu (Department of Civil, Structural & Environmental Engineering, The University at Buffalo)
  • Received : 2024.06.03
  • Accepted : 2024.09.12
  • Published : 2024.10.25

Abstract

The multiple-fan wind tunnel is an important facility for reproducing target wind fields. Existing control methods for the multiple-fan wind tunnel can generate wind speeds that satisfy target statistical characteristics of a wind field (e.g., the power spectrum). However, such frequency-domain features cannot adequately represent the nonstationary winds of extreme storms (e.g., downbursts). Therefore, this study proposes a multiple-fan wind tunnel control scheme based on Deep Reinforcement Learning (DRL), which recasts the task entirely as a data-driven closed-loop control problem, to reproduce the target wind field in the time domain. Specifically, the control scheme adopts the Deep Deterministic Policy Gradient (DDPG) paradigm, exploiting the strong fitting ability of Deep Neural Networks (DNNs) so that the DRL agent can capture the complex relationship between the target wind speed time series and the current control strategy. To address the memory effect of the fluid flow, the system state and control reward are designed to exploit historical data and thereby improve reproduction performance. To validate the model, a simplified flow field governed by the Navier-Stokes equations was established to simulate a two-dimensional numerical multiple-fan wind tunnel environment. Using the policy of the DRL decision maker, wind speed time series were generated with small errors relative to the target under low Reynolds number conditions. This is the first time that AI methods have been used to generate target wind speed time series in a multiple-fan wind tunnel environment. The hyperparameters of the DDPG paradigm are analyzed to identify an optimal set. With these efforts, the trained DRL agent can simultaneously reproduce the wind speed time series at multiple positions.
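The paper itself provides no code; the following is a minimal, self-contained Python sketch of the DDPG control loop described in the abstract, under stated assumptions: a toy surrogate environment (FanTunnelEnv) stands in for the two-dimensional numerical wind tunnel, the state concatenates recent measured wind-speed history with the upcoming target value as one simple way to encode the flow-memory effect, and the reward penalizes squared tracking error. All class names, network sizes, and coefficients are illustrative placeholders, not the authors' implementation.

```python
# Illustrative DDPG-style control loop for tracking a target wind speed time series.
# FanTunnelEnv, the reward form, and all hyperparameters are hypothetical placeholders.
import copy
import numpy as np
import torch
import torch.nn as nn

class FanTunnelEnv:
    """Toy stand-in for the 2-D numerical multiple-fan wind tunnel (hypothetical)."""
    def __init__(self, n_fans=4, n_probes=2, history=8, horizon=200):
        self.n_fans, self.n_probes, self.history, self.horizon = n_fans, n_probes, history, horizon
        rng = np.random.default_rng(0)
        self.gain = rng.uniform(0.5, 1.5, (n_probes, n_fans)) / n_fans  # fan-to-probe gains
        self.reset()

    def target(self, t):
        # Hypothetical nonstationary target, e.g., a downburst-like ramp-up and decay
        return 5.0 + 3.0 * np.exp(-((t - 100.0) / 40.0) ** 2) * np.ones(self.n_probes)

    def _state(self):
        # State = recent measured wind-speed history plus the next target values,
        # so the agent can account for the flow-field memory effect
        return np.concatenate([self.buf.ravel(), self.target(self.t + 1)]).astype(np.float32)

    def reset(self):
        self.t = 0
        self.buf = np.zeros((self.history, self.n_probes))
        return self._state()

    def step(self, action):
        speeds = 5.0 * self.gain @ (action + 1.0) + 0.05 * np.random.randn(self.n_probes)
        self.buf = np.vstack([self.buf[1:], speeds])            # rolling history ("memory")
        self.t += 1
        err = speeds - self.target(self.t)
        reward = -float(np.mean(err ** 2))                      # penalize tracking error
        return self._state(), reward, self.t >= self.horizon

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, out))

env = FanTunnelEnv()
obs_dim, act_dim = env.reset().size, env.n_fans
actor = nn.Sequential(mlp(obs_dim, act_dim), nn.Tanh())         # fan commands in [-1, 1]
critic = mlp(obs_dim + act_dim, 1)
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)  # target networks
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer, gamma, tau = [], 0.99, 0.005

for episode in range(5):
    s, done = env.reset(), False
    while not done:
        with torch.no_grad():
            a = actor(torch.as_tensor(s)).numpy()
        a = np.clip(a + 0.1 * np.random.randn(act_dim), -1.0, 1.0)   # exploration noise
        s2, r, done = env.step(a)
        buffer.append((s, a.astype(np.float32), r, s2, float(done)))
        s = s2
        if len(buffer) < 256:
            continue
        batch = [buffer[i] for i in np.random.randint(len(buffer), size=64)]
        S, A, R, S2, D = (torch.as_tensor(np.array(x)) for x in zip(*batch))
        with torch.no_grad():                                        # DDPG critic target
            q_next = critic_t(torch.cat([S2, actor_t(S2)], dim=1)).squeeze(-1)
            y = R.float() + gamma * (1.0 - D.float()) * q_next
        q = critic(torch.cat([S, A], dim=1)).squeeze(-1)
        loss_c = nn.functional.mse_loss(q, y)
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        loss_a = -critic(torch.cat([S, actor(S)], dim=1)).mean()     # deterministic policy gradient
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()
        for net, tgt in ((actor, actor_t), (critic, critic_t)):      # soft target updates
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.data.mul_(1 - tau).add_(tau * p.data)
    print(f"episode {episode}: mean tracking reward {np.mean([b[2] for b in buffer[-env.horizon:]]):.3f}")
```

In this sketch the history-augmented state and squared-error reward are only one plausible reading of the abstract's "system state and control reward" design; the authors' actual definitions, network architectures, and hyperparameters may differ.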

Keywords

Acknowledgement

This work was supported by the Science Fund for Creative Research Groups of the National Natural Science Foundation of China (52221002), the 111 Project (B18062), the Natural Science Foundation of Chongqing, China (cstc2020jcyj-msxmX0773), the Fundamental Research Key Laboratory for Wind Resistance Technology of Bridges (Shanghai), Ministry of Transport, PRC (KLWRTBMC22-01).

References

  1. Arulkumaran, K., Deisenroth, M.P., Brundage, M. and Bharath, A.A. (2017), "Deep reinforcement learning: A brief survey", IEEE Signal Processing Magazine, 34(6), 26-38.
  2. Bradshaw, V. (2010), The Building Environment: Active and Passive Control Systems. John Wiley and Sons, New York, NY, USA.
  3. Cao, S.Y., Nishi, A., Hirano, K., Ozono, S., Miyagi, H. and Kikugawa, H. (2001), "An actively controlled wind tunnel and its application to the reproduction of the atmospheric boundary layer", Bound. Lay. Meteorol., 101(1), 61-76.
  4. Cao, S.Y., Nishi, A., Kikugawa, H. and Matsuda, Y. (2002), "Reproduction of wind velocity history in a multiple fan wind tunnel", J. Wind Eng. Ind. Aerod., 90(12-15), 1719-1729.
  5. Cui, W., Zhao, L., Cao, S.Y. and Ge, Y.J. (2021), "Generating unconventional wind flow in an actively controlled multi-fan wind tunnel", Wind Struct., 33(2), 115-122.
  6. De Bortoli, M.E., Natalini, B., Paluch, M.J. and Natalini, M.B. (2002), "Part-depth wind tunnel simulations of the atmospheric boundary layer", J. Wind Eng. Ind. Aerod., 90(4), 281-291.
  7. Esteva, A., Robicquet, A., Ramsundar, B., Chou, K. and Dean, J. (2019), "A guide to deep learning in healthcare", Nature Medicine, 25(1), 24-29.
  8. Fujiyoshi, H., Hirakawa, T. and Yamashita, T. (2019), "Deep learning-based image recognition for autonomous driving", IATSS Res., 43(4), 244-252.
  9. Ghia, U., Ghia, K.N. and Shin, C.T. (1982), "High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method", J. Comput. Phys., 48(3), 387-411.
  10. Hambly, B., Xu, R. and Yang, H. (2023), "Recent advances in reinforcement learning in finance", Math. Finance, 33(3), 437-503.
  11. Ibarz, J., Tan, J., Finn, C., Kalakrishnan, M., Pastor, P. and Levine, S. (2021), "How to train your robot with deep reinforcement learning: lessons we have learned", Int. J. Robotics Res., 40(4-5), 698-721.
  12. Kobayashi, H. and Hatanaka, A. (1992), "Active generation of wind gust in a two-dimensional wind tunnel", J. Wind Eng. Ind. Aerod., 42(1-3), 959-970.
  13. Li, S., Snaiki, R. and Wu, T. (2021), "Active simulation of transient wind field in a multiple-fan wind tunnel via deep reinforcement learning", J. Eng. Mech., 147(9).
  14. Ma, T.T., Zhao, L., Cao, S.Y., Ge, Y.J. and Miyagi, H. (2013), "Investigations of aerodynamic effects on streamlined box girder using two-dimensional actively-controlled oncoming flow", J. Wind Eng. Ind. Aerod., 122, 118-129.
  15. Marina, L. and Sandu, A. (2017), "Deep reinforcement learning for autonomous vehicles: state of the art", Bulletin of the Transilvania University of Brasov, Series I: Eng. Sci., 195-202.
  16. Meng, T.L. and Khushi, M. (2019), "Reinforcement learning in financial markets", Data, 4(3), 110.
  17. Nishi, A., Kikugawa, H., Matsuda, Y. and Tashiro, D. (1999), "Active control of turbulence for an atmospheric boundary layer model in a wind tunnel", J. Wind Eng. Ind. Aerod., 83(1-3), 409-419.
  18. Ohya, Y. (2001), "Wind-tunnel study of atmospheric stable boundary layers over a rough surface", Bound. Lay. Meteorol., 98, 57-82.
  19. Pang, J.B., Ge, Y.J. and Lu, Y. (2002), "Methods for analysis of turbulence integral length in atmospheric boundary-layer", J. Tongji Univ., 30(5), 622-626.
  20. Rabault, J., Kuchta, M., Jensen, A., Reglade, U. and Cerardi, N. (2019), "Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control", J. Fluid Mech., 865, 281-302.
  21. Scott, R.C. and Pado, L.E. (2000), "Active control of wind-tunnel model aeroelastic response using neural networks", J. Guidance, Control, Dyn., 23(6), 1100-1108.
  22. Sill, B.L. (1988), "Turbulent boundary layer profiles over uniform rough surfaces", J. Wind Eng. Ind. Aerod., 31(2-3), 147-163.
  23. Silver, D., Huang, A., Maddison, C.J. and Hassabis, D. (2016), "Mastering the game of Go with deep neural networks and tree search", Nature, 529(7587), 484-489.
  24. Snaiki, R. and Wu, T. (2022), "Knowledge-enhanced deep learning for simulation of extratropical cyclone wind risk", Atmosphere, 13(5), 757.
  25. Takakura, S., Suyama, Y., Aoki, T., Yoshimura, N. and Takahashi, S. (1993), "Introduction of boundary-layer wind tunnel", Wind Eng., 1993(54), 31-38.
  26. Teunissen, H.W. (1975), "Simulation of the planetary boundary layer in a multiple-jet wind tunnel", Atmos. Environ., 9(2), 145-174.
  27. Wang, J.Y., Meng, Q.H., Luo, B. and Zeng, M. (2018), "A multiple-fan active control wind tunnel for outdoor wind speed and direction simulation", Rev. Sci. Instrum., 89(3).
  28. Wang, J.Y., Zeng, M. and Meng, Q.H. (2019), "Latticed mode: A new control strategy for wind field simulation in a multiple-fan wind tunnel", Rev. Sci. Instrum., 90(8).
  29. Wu, T., He, J.C. and Li, S.P. (2023), "Active flutter control of long-span bridges via deep reinforcement learning: A proof of concept", Wind Struct., 36(5), 321-331.
  30. Yu, C., Liu, J., Nemati, S. and Yin, G. (2021), "Reinforcement learning in healthcare: A survey", ACM Computing Surveys (CSUR), 55(1), 1-36.
  31. Zhang, M.J., Zhang, J.X., Li, Y.L. and Yu, J.S. (2020), "Wind characteristics in the high-altitude difference at bridge site by wind tunnel tests", Wind Struct., 30(6), 548-557.
  32. Zhang, Z., Zhang, D. and Qiu, R.C. (2019), "Deep reinforcement learning for power system applications: An overview", CSEE J. Power Energy Syst., 6(1), 213-225.
  33. Ziada, S., Ng, H. and Blake, C.E. (2003), "Flow excited resonance of a confined shallow cavity in low Mach number flow and its control", J. Fluids Struct., 18(1), 79-92.