Conference paper, 2024

Learning to Explore with Lagrangians for Bandits under Unknown Linear Constraints

Abstract

Pure exploration in bandits models multiple real-world problems, such as tuning hyper-parameters or conducting user studies, where different safety, resource, and fairness constraints on the decision space naturally appear. We study these problems as pure exploration in multi-armed bandits with unknown linear constraints, where the aim is to identify an $r$-good feasible policy. First, we propose a Lagrangian relaxation of the sample complexity lower bound for pure exploration under constraints. We show how this lower bound evolves with the sequential estimation of constraints. Second, we leverage the Lagrangian lower bound and the properties of convex optimisation to propose two computationally efficient extensions of Track-and-Stop and Gamified Explorer, namely LATS and LAGEX. To this end, we propose a constraint-adaptive stopping rule and, while tracking the lower bound, use a pessimistic estimate of the feasible set at each step. We show that these algorithms achieve asymptotically optimal sample complexity upper bounds up to constraint-dependent constants. Finally, we conduct numerical experiments with different reward distributions and constraints that validate the efficient performance of LAGEX and LATS with respect to baselines.
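
For orientation, the "sample complexity lower bound" in the abstract refers to the characteristic-time bound familiar from the Track-and-Stop literature, which in the unconstrained case reads

    $T^*(\mu)^{-1} = \sup_{w \in \Sigma_K} \inf_{\lambda \in \mathrm{Alt}(\mu)} \sum_{a=1}^{K} w_a \, \mathrm{KL}(\mu_a, \lambda_a),$

where $\Sigma_K$ is the simplex of arm allocations and $\mathrm{Alt}(\mu)$ the set of alternative instances. Under unknown linear constraints, this optimisation additionally depends on the unknown feasible region; the paper's Lagrangian relaxation trades that hard dependence for multiplier-weighted penalty terms. The display above is standard background only, not the constrained bound from the paper.

The snippet below is a minimal, self-contained Python sketch of the pessimism mechanism the abstract describes: inflate the estimated constraint vector by a confidence width, so that every policy retained in the estimated feasible set is feasible with high probability under the true, unknown constraint. The Gaussian noise model, the confidence rate, the deterministic candidate policies, and the greedy selection rule are all illustrative assumptions; this is not LAGEX or LATS, and it omits their lower-bound tracking and constraint-adaptive stopping rule.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative instance: K arms, one unknown linear constraint a^T pi <= b.
    K, b, horizon = 4, 0.5, 2000
    mu_true = np.array([0.2, 0.5, 0.8, 0.6])  # unknown mean rewards (hypothetical)
    a_true  = np.array([0.1, 0.4, 0.9, 0.3])  # unknown constraint vector (hypothetical)

    policies = np.eye(K)   # candidate policies: the K deterministic arm-pulls
    counts = np.zeros(K)
    mu_hat = np.zeros(K)
    a_hat  = np.zeros(K)

    for t in range(1, horizon + 1):
        # Per-arm confidence width (an illustrative rate, not the paper's).
        width = np.sqrt(2.0 * np.log(1.0 + t) / np.maximum(counts, 1.0))

        # Pessimistic feasible set: inflate the estimated constraint, so any
        # retained policy is feasible with high probability.
        feasible = [pi for pi in policies if (a_hat + width) @ pi <= b]

        if feasible:
            # Play the empirically best feasible policy (greedy, for illustration).
            pi = max(feasible, key=lambda p: float(mu_hat @ p))
        else:
            # Explore the least-pulled arm until the pessimistic set is non-empty.
            pi = policies[int(np.argmin(counts))]

        arm = int(rng.choice(K, p=pi))
        reward = rng.normal(mu_true[arm], 1.0)
        cost   = rng.normal(a_true[arm], 1.0)  # noisy feedback on the constraint

        # Incremental mean updates for the reward and constraint estimates.
        counts[arm] += 1
        mu_hat[arm] += (reward - mu_hat[arm]) / counts[arm]
        a_hat[arm]  += (cost   - a_hat[arm]) / counts[arm]

    print("pulls per arm:", counts)

In this toy instance the highest-mean arm violates the constraint ($a = 0.9 > b = 0.5$), so the sketch should concentrate its pulls on the best feasible arm (mean $0.6$, constraint value $0.3$) as the confidence widths shrink.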

Dates and versions

hal-04784911 , version 1 (15-11-2024)


Cite

Udvas Das, Debabrota Basu. Learning to Explore with Lagrangians for Bandits under Unknown Linear Constraints. Seventeenth European Workshop on Reinforcement Learning (EWRL), Oct 2024, Toulouse, France. ⟨hal-04784911⟩