Combining Multiple Strategies for Sleeping Bandits with Stochastic Rewards and Availability

  • Received : 2016.09.19
  • Accepted : 2016.10.29
  • Published : 2017.01.15

Abstract

This paper considers the problem of combining multiple strategies for solving sleeping bandit problems with stochastic rewards and stochastic availability. It proposes an algorithm, called sleepComb($\Phi$), whose idea is to select an appropriate strategy at each time step based on $\epsilon_t$-probabilistic switching, the switching rule used by the well-known parameter-based heuristic $\epsilon_t$-greedy strategy. The algorithm converges to the "best" strategy, properly defined for the sleeping bandit problem. Experimental results show that sleepComb($\Phi$) converges, that it converges to the "best" strategy faster than other combining algorithms, and that it chooses the "best" strategy more frequently.

In this paper, given a set $\Phi$ of strategies for solving a sleeping bandit problem with stochastic rewards and stochastic availability, in which the set of available arms changes at every time step, we consider the problem of combining these strategies and propose a combining algorithm, sleepComb($\Phi$). The proposed algorithm selects an appropriate strategy at each time step based on the probabilistic switching rule of $\epsilon_t$-greedy, a well-known parameter-based heuristic for the stochastic multi-armed bandit problem. Given suitable conditions on the sequence {$\epsilon_t$} and on the strategies, sleepComb($\Phi$) converges to the "best" strategy, properly defined for the sleeping bandit problem. Experiments confirm this convergence and show that the algorithm converges to the "best" strategy faster than existing combining algorithms and selects the "best" strategy at a higher rate.
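The abstract gives no pseudocode for sleepComb($\Phi$), so the following is a minimal, hypothetical sketch of the $\epsilon_t$-probabilistic switching idea it describes: with probability $\epsilon_t$ a candidate strategy is chosen uniformly at random (exploration), otherwise the strategy with the highest empirical mean reward so far is chosen (exploitation), and the chosen strategy then picks an arm from the currently available set. The `ToyBandit` class, the strategy signatures, and the running-average update are illustrative assumptions, not the paper's actual construction.

```python
import random

class ToyBandit:
    """Toy sleeping bandit: arm 1 pays better but is sometimes asleep."""

    def available_arms(self, t):
        # Arm 0 is always available; arm 1 is available 80% of the time.
        return [0, 1] if random.random() < 0.8 else [0]

    def pull(self, arm, t):
        # Bernoulli rewards: arm 1 pays with prob. 0.7, arm 0 with prob. 0.3.
        return 1.0 if random.random() < (0.7 if arm == 1 else 0.3) else 0.0

def sleep_comb(strategies, bandit, epsilons):
    """epsilon_t-probabilistic switching among candidate strategies.

    At step t, with probability epsilons[t] a strategy is chosen uniformly
    at random (explore); otherwise the strategy with the highest empirical
    mean reward so far is chosen (exploit). The chosen strategy then picks
    an arm from the set of currently available arms.
    """
    n = len(strategies)
    counts = [0] * n    # times each strategy was selected
    means = [0.0] * n   # empirical mean reward of each strategy
    for t, eps in enumerate(epsilons):
        available = bandit.available_arms(t)
        if random.random() < eps:
            i = random.randrange(n)                    # explore
        else:
            i = max(range(n), key=lambda j: means[j])  # exploit
        reward = bandit.pull(strategies[i](available), t)
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]    # running average
    return means, counts
```

With a decaying sequence such as $\epsilon_t = \min(1, c/t)$, selection concentrates on the strategy with the highest mean reward, mirroring the convergence behavior described in the abstract.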
