Metalearning Linear Bandits by Prior Update

Amit Peleg1, Naama Pearl2, Ron Meir1
1Viterbi Faculty of ECE, Technion
Israel
2University of Haifa
Israel

AISTATS 2022

Abstract

Fully Bayesian approaches to sequential decision-making assume that problem parameters are generated from a known prior. In practice, such information is often lacking. This problem is exacerbated in setups with partial information, where a misspecified prior may lead to poor exploration and performance. In this work we prove, in the context of stochastic linear bandits and Gaussian priors, that as long as the assumed prior is sufficiently close to the true prior, the performance of the applied algorithm is close to that of the algorithm that uses the true prior. Furthermore, we address the task of learning the prior through metalearning, where a learner updates her estimate of the prior across multiple task instances in order to improve performance on future tasks. We provide an algorithm and regret bounds, demonstrate its effectiveness in comparison to an algorithm that knows the correct prior, and support our theoretical results empirically. Our theoretical results hold for a broad class of algorithms, including Thompson Sampling and Information Directed Sampling.
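The setting described above can be illustrated with a small sketch: Thompson Sampling on a stochastic linear bandit with a Gaussian prior, followed by a simple cross-task prior update. This is a minimal illustration of the general idea, not the paper's exact algorithm or update rule; the running-average prior update, the arm distribution, and all hyperparameters below are assumptions made for the example.

```python
import numpy as np


def thompson_sampling_linear(arms, theta_star, prior_mean, prior_cov,
                             noise_sd=0.1, horizon=100, rng=None):
    """Thompson Sampling on one linear-bandit task with a Gaussian prior.

    arms:        (K, d) array of arm feature vectors.
    theta_star:  true parameter of this task (in the paper's setting,
                 drawn from the true prior).
    prior_mean, prior_cov: the assumed (possibly misspecified) prior N(mu, Sigma).
    Returns the rewards collected and the posterior mean after `horizon` steps.
    """
    rng = np.random.default_rng(rng)
    # With a Gaussian prior and Gaussian noise, the posterior stays Gaussian;
    # track its precision matrix and precision-weighted mean.
    prec = np.linalg.inv(prior_cov)
    b = prec @ prior_mean
    rewards = []
    for _ in range(horizon):
        cov = np.linalg.inv(prec)
        mean = cov @ b
        theta_tilde = rng.multivariate_normal(mean, cov)  # sample from posterior
        k = int(np.argmax(arms @ theta_tilde))            # act greedily on sample
        x = arms[k]
        r = x @ theta_star + noise_sd * rng.normal()      # noisy linear reward
        rewards.append(r)
        prec += np.outer(x, x) / noise_sd**2              # Bayesian update
        b += x * r / noise_sd**2
    return np.array(rewards), np.linalg.inv(prec) @ b


def metalearn_prior_mean(num_tasks=20, d=3, K=10, true_prior_mean=None, rng=0):
    """Hypothetical cross-task prior update: after each task, average the
    per-task posterior means to form the prior mean for the next task."""
    rng = np.random.default_rng(rng)
    if true_prior_mean is None:
        true_prior_mean = np.ones(d)
    prior_mean = np.zeros(d)   # initial misspecified prior mean
    prior_cov = np.eye(d)
    estimates = []
    for _ in range(num_tasks):
        theta_star = rng.multivariate_normal(true_prior_mean, prior_cov)
        arms = rng.normal(size=(K, d))
        _, post_mean = thompson_sampling_linear(
            arms, theta_star, prior_mean, prior_cov, rng=rng)
        estimates.append(post_mean)
        prior_mean = np.mean(estimates, axis=0)  # updated prior estimate
    return prior_mean
```

Running `metalearn_prior_mean()` starts from a prior mean of zero and moves the estimate toward the true prior mean as tasks accumulate, mirroring (in spirit) the paper's claim that a prior close to the true one yields near-optimal performance.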

BibTeX


@inproceedings{peleg2022metalearning,
  title={Metalearning linear bandits by prior update},
  author={Peleg, Amit and Pearl, Naama and Meir, Ron},
  booktitle={International Conference on Artificial Intelligence and Statistics},
  pages={2885--2926},
  year={2022},
  organization={PMLR}
}