A Study on the Stochastic Optimization of Binary-Response Experimentation

  • Received: 2022.12.02
  • Accepted: 2023.02.27
  • Published: 2023.03.31

Abstract

The purpose of this paper is to review global stochastic optimization algorithms (GSOAs) for the case where binary-response experimentation is used, and to compare their performance. GSOAs use the estimator $\hat{p}$ of the probability of success instead of the population probability of success $p$, since $p$ is unknown and is observed only through its estimator, which is stochastic. The hill-climbing algorithm, simple random search, random search with random restart, random optimization, simulated annealing, and particle swarm optimization as a population-based algorithm are considered as global stochastic optimization algorithms. To compare the algorithms, two test functions (one simple and uni-modal, the other complex and multi-modal) are proposed, and a Monte Carlo simulation study is carried out to measure the performance of the algorithms. All algorithms show similar performance on the simple test function. On the complex multi-modal test function, less greedy algorithms such as random optimization with random restart and simulated annealing, as well as the population-based particle swarm optimization (PSO), show much better performance.
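
To make the estimator-based search concrete, the sketch below shows one way such a noisy optimization can be organized: simulated annealing maximizing an estimated success probability $\hat{p}(x)$ computed from repeated binary trials. The test function true_p, the trial count n, and all annealing parameters are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal sketch: simulated annealing on a binary-response objective, where the
# unknown success probability p(x) is replaced by its estimator
# p_hat(x) = (# successes) / n from n Bernoulli trials.
# The test function and all parameters below are illustrative assumptions.
import math
import random

def true_p(x):
    # Hypothetical uni-modal success-probability surface on [0, 10].
    return math.exp(-((x - 6.0) ** 2) / 4.0)

def p_hat(x, n=30):
    # Estimate p(x) from n binary (success/failure) experimental runs.
    successes = sum(random.random() < true_p(x) for _ in range(n))
    return successes / n

def simulated_annealing(lo=0.0, hi=10.0, iters=500, t0=1.0, step=0.5):
    x = random.uniform(lo, hi)
    fx = p_hat(x)
    best_x, best_f = x, fx
    for k in range(1, iters + 1):
        t = t0 / k                                   # simple cooling schedule
        cand = min(hi, max(lo, x + random.gauss(0.0, step)))
        f_cand = p_hat(cand)                         # noisy evaluation via p_hat
        # Accept uphill moves always, downhill moves with Metropolis probability.
        if f_cand > fx or random.random() < math.exp((f_cand - fx) / t):
            x, fx = cand, f_cand
        if fx > best_f:
            best_x, best_f = x, fx
    return best_x, best_f

print(simulated_annealing())
```

The other algorithms compared in the paper (hill climbing, simple random search, random restart variants, PSO) differ mainly in how the candidate point is generated and accepted, while the noisy evaluation through $\hat{p}$ is common to all of them.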

Keywords
