[1901.04884] Optimistic optimization of a Brownian motion
We proved that the sample complexity of OOB is of order $O(\log^2(1/\epsilon))$, which improves over the previously best known bound (Calvin et al., 2017). As we are not aware of a lower bound for Brownian motion optimization, we do not know whether $O(\log^2(1/\epsilon))$ is the minimax rate of the sample complexity or if there exists an algorithm that can do even better.
Abstract: We address the problem of optimizing a Brownian motion. We consider a
(random) realization $W$ of a Brownian motion with input space in $[0,1]$.
Given $W$, our goal is to return an $\epsilon$-approximation of its maximum
using the smallest possible number of function evaluations, the sample
complexity of the algorithm. We provide an algorithm with sample complexity of
order $\log^2(1/\epsilon)$. This improves over previous results of Al-Mharmah
and Calvin (1996) and Calvin et al. (2017) which provided only polynomial
rates. Our algorithm is adaptive---each query depends on previous values---and
is an instance of the optimism-in-the-face-of-uncertainty principle.
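The optimism-in-the-face-of-uncertainty idea can be sketched as follows: maintain the queried points, assign each interval between consecutive queries an optimistic upper bound on the Brownian motion's maximum there, and repeatedly split the most promising interval until the gap between the best upper bound and the best observed value falls below $\epsilon$. The sketch below is illustrative only and is not the paper's OOB algorithm: the exploration width `eta` and the termination rule are simplified placeholders, whereas the paper ties these quantities to $\epsilon$ to obtain the $\log^2(1/\epsilon)$ guarantee. It simulates the Brownian motion itself lazily via Brownian-bridge sampling, so each "function evaluation" is a conditional Gaussian draw.

```python
import math
import random


def optimistic_brownian_max(eps, eta=2.0, seed=0):
    """Optimistic search for the max of a simulated Brownian motion on [0, 1].

    eta is a hypothetical exploration width, not the paper's choice.
    Returns (estimated maximum, number of evaluations used).
    """
    rng = random.Random(seed)
    # Observed values: W(0) = 0 and W(1) ~ N(0, 1).
    pts = {0.0: 0.0, 1.0: rng.gauss(0.0, 1.0)}
    n_queries = 2

    def query_midpoint(a, b):
        # Brownian bridge: W(m) | W(a), W(b) is Gaussian with this mean/std.
        m = (a + b) / 2.0
        mean = (pts[a] + pts[b]) / 2.0
        std = math.sqrt((b - a) / 4.0)
        pts[m] = mean + std * rng.gauss(0.0, 1.0)

    while True:
        ts = sorted(pts)
        best_iv, best_ub = None, -math.inf
        for a, b in zip(ts, ts[1:]):
            # Optimistic bound on max over [a, b]: best endpoint plus a
            # width-dependent exploration bonus (shrinks as intervals split).
            ub = max(pts[a], pts[b]) + eta * math.sqrt(b - a)
            if ub > best_ub:
                best_iv, best_ub = (a, b), ub
        cur_max = max(pts.values())
        if best_ub - cur_max <= eps:
            return cur_max, n_queries
        query_midpoint(*best_iv)
        n_queries += 1
```

Because the bonus $\eta\sqrt{b-a}$ vanishes as intervals shrink, only intervals whose endpoints are close to the running maximum keep being refined, which is what keeps the number of evaluations small relative to a uniform grid.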