Acknowledgement
This work was supported by the Science Fund for Creative Research Groups of the National Natural Science Foundation of China (52221002), the 111 Project (B18062), the Natural Science Foundation of Chongqing, China (cstc2020jcyj-msxmX0773), and the Fundamental Research Funds of the Key Laboratory for Wind Resistance Technology of Bridges (Shanghai), Ministry of Transport, PRC (KLWRTBMC22-01).
References
- Arulkumaran, K., Deisenroth, M.P., Brundage, M. and Bharath, A.A. (2017), "Deep reinforcement learning: A brief survey", IEEE Signal Process. Mag., 34(6), 26-38.
- Bradshaw, V. (2010), The Building Environment: Active and Passive Control Systems. John Wiley and Sons, New York, NY, USA.
- Cao, S.Y., Nishi, A., Hirano, K., Ozono, S., Miyagi, H. and Kikugawa, H. (2001), "An actively controlled wind tunnel and its application to the reproduction of the atmospheric boundary layer", Bound. Lay. Meteorol., 101(1), 61-76.
- Cao, S.Y., Nishi, A., Kikugawa, H. and Matsuda, Y. (2002), "Reproduction of wind velocity history in a multiple fan wind tunnel", J. Wind Eng. Ind. Aerod., 90(12-15), 1719-1729.
- Cui, W., Zhao, L., Cao, S.Y. and Ge, Y.J. (2021), "Generating unconventional wind flow in an actively controlled multi-fan wind tunnel", Wind Struct., 33(2), 115-122.
- De Bortoli, M.E., Natalini, B., Paluch, M.J. and Natalini, M.B. (2002), "Part-depth wind tunnel simulations of the atmospheric boundary layer", J. Wind Eng. Ind. Aerod., 90(4), 281-291.
- Esteva, A., Robicquet, A., Ramsundar, B., Chou, K. and Dean, J. (2019), "A guide to deep learning in healthcare", Nat. Med., 25(1), 24-29.
- Fujiyoshi, H., Hirakawa, T. and Yamashita, T. (2019), "Deep learning-based image recognition for autonomous driving", IATSS Res., 43(4), 244-252.
- Ghia, U., Ghia, K.N. and Shin, C.T. (1982), "High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method", J. Comput. Phys., 48(3), 387-411.
- Hambly, B., Xu, R. and Yang, H. (2023), "Recent advances in reinforcement learning in finance", Math. Finance, 33(3), 437-503.
- Ibarz, J., Tan, J., Finn, C., Kalakrishnan, M., Pastor, P. and Levine, S. (2021), "How to train your robot with deep reinforcement learning: lessons we have learned", Int. J. Robotics Res., 40(4-5), 698-721.
- Kobayashi, H. and Hatanaka, A. (1992), "Active generation of wind gust in a two-dimensional wind tunnel", J. Wind Eng. Ind. Aerod., 42(1-3), 959-970.
- Li, S., Snaiki, R. and Wu, T. (2021), "Active simulation of transient wind field in a multiple-fan wind tunnel via deep reinforcement learning", J. Eng. Mech., 147(9).
- Ma, T.T., Zhao, L., Cao, S.Y., Ge, Y.J. and Miyagi, H. (2013), "Investigations of aerodynamic effects on streamlined box girder using two-dimensional actively-controlled oncoming flow", J. Wind Eng. Ind. Aerod., 122, 118-129.
- Marina, L. and Sandu, A. (2017), "Deep reinforcement learning for autonomous vehicles: state of the art", Bulletin of the Transilvania University of Brasov, Series I: Eng. Sci., 195-202.
- Meng, T.L. and Khushi, M. (2019), "Reinforcement learning in financial markets", Data, 4(3), 110.
- Nishi, A., Kikugawa, H., Matsuda, Y. and Tashiro, D. (1999), "Active control of turbulence for an atmospheric boundary layer model in a wind tunnel", J. Wind Eng. Ind. Aerod., 83(1-3), 409-419.
- Ohya, Y. (2001), "Wind-tunnel study of atmospheric stable boundary layers over a rough surface", Bound. Lay. Meteorol., 98, 57-82.
- Pang, J.B., Ge, Y.J. and Lu, Y. (2002), "Methods for analysis of turbulence integral length in atmospheric boundary-layer", J. Tongji Univ., 30(5), 622-626.
- Rabault, J., Kuchta, M., Jensen, A., Reglade, U. and Cerardi, N. (2019), "Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control", J. Fluid Mech., 865, 281-302.
- Scott, R.C. and Pado, L.E. (2000), "Active control of wind-tunnel model aeroelastic response using neural networks", J. Guidance, Control, Dyn., 23(6), 1100-1108.
- Sill, B.L. (1988), "Turbulent boundary layer profiles over uniform rough surfaces", J. Wind Eng. Ind. Aerod., 31(2-3), 147-163.
- Silver, D., Huang, A., Maddison, C.J. and Hassabis, D. (2016), "Mastering the game of Go with deep neural networks and tree search", Nature, 529(7587), 484-489.
- Snaiki, R. and Wu, T. (2022), "Knowledge-enhanced deep learning for simulation of extratropical cyclone wind risk", Atmosphere, 13(5), 757.
- Takakura, S., Suyama, Y., Aoki, T., Yoshimura, N. and Takahashi, S. (1993), "Introduction of boundary-layer wind tunnel", Wind Eng., 1993(54), 31-38.
- Teunissen, H.W. (1975), "Simulation of the planetary boundary layer in a multiple-jet wind tunnel", Atmos. Environ., 9(2), 145-174.
- Wang, J.Y., Meng, Q.H., Luo, B. and Zeng, M. (2018), "A multiple-fan active control wind tunnel for outdoor wind speed and direction simulation", Rev. Sci. Instrum., 89(3).
- Wang, J.Y., Zeng, M. and Meng, Q.H. (2019), "Latticed mode: A new control strategy for wind field simulation in a multiple-fan wind tunnel", Rev. Sci. Instrum., 90(8).
- Wu, T., He, J.C. and Li, S.P. (2023), "Active flutter control of long-span bridges via deep reinforcement learning: A proof of concept", Wind Struct., 36(5), 321-331.
- Yu, C., Liu, J., Nemati, S. and Yin, G. (2021), "Reinforcement learning in healthcare: A survey", ACM Comput. Surv., 55(1), 1-36.
- Zhang, M.J., Zhang, J.X., Li, Y.L. and Yu, J.S. (2020), "Wind characteristics in the high-altitude difference at bridge site by wind tunnel tests", Wind Struct., 30(6), 548-557.
- Zhang, Z., Zhang, D. and Qiu, R.C. (2019), "Deep reinforcement learning for power system applications: An overview", CSEE J. Power Energy Syst., 6(1), 213-225.
- Ziada, S., Ng, H. and Blake, C.E. (2003), "Flow excited resonance of a confined shallow cavity in low Mach number flow and its control", J. Fluids Struct., 18(1), 79-92.