Last time, we introduced Projected Online Gradient Descent, and we proved the following regret guarantee: Theorem 1 Let $V \subseteq {\mathbb R}^d$ be a closed non-empty convex set with diameter $D$, i.e., $\max_{{\boldsymbol x},{\boldsymbol y}\in V} \|{\boldsymbol x}-{\boldsymbol y}\|_2 \leq D$. Let $\ell_1, \dots, \ell_T$ be an arbitrary sequence of convex functions $\ell_t:{\mathbb R}^d$ …
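The algorithm behind the theorem can be sketched in a few lines: at each round play $x_t$, suffer $\ell_t(x_t)$, take a (sub)gradient step, and project back onto $V$, i.e., $x_{t+1} = \Pi_V(x_t - \eta_t g_t)$ with $g_t \in \partial \ell_t(x_t)$. The sketch below is not the post's own pseudocode (which is cut off in this excerpt); it assumes, for concreteness, that $V$ is the Euclidean ball of radius $R$ (so $D = 2R$), losses $\ell_t(x) = \tfrac12\|x - z_t\|_2^2$ for an arbitrary target sequence $z_t$, and step sizes $\eta_t = \eta/\sqrt{t}$.

```python
# Minimal sketch of Projected Online Gradient Descent under the assumptions
# stated above (Euclidean-ball feasible set, squared-distance losses,
# eta_t = eta / sqrt(t)); these choices are illustrative, not from the post.
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto {v : ||v||_2 <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def projected_ogd(zs, radius, eta):
    """Run projected OGD on losses 0.5*||x - z_t||^2; return (total loss, last iterate)."""
    x = np.zeros_like(zs[0])          # x_1 = 0, the center of V
    total_loss = 0.0
    for t, z in enumerate(zs, start=1):
        total_loss += 0.5 * np.dot(x - z, x - z)              # suffer ell_t(x_t)
        g = x - z                                             # gradient of ell_t at x_t
        x = project_ball(x - (eta / np.sqrt(t)) * g, radius)  # step, then project onto V
    return total_loss, x

# A stationary "adversary": with eta = 1 the first step lands exactly on z,
# so only round 1 incurs loss (0.5 * ||0 - z||^2 = 0.5) and the regret
# against the best fixed point z is constant.
total, x_last = projected_ogd([np.array([1.0, 0.0])] * 50, radius=2.0, eta=1.0)
```

With time-varying losses the projection step is what keeps the iterates inside $V$, which is exactly where the diameter bound $D$ enters the regret guarantee.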
