The talk consists of four stories. The first story is about the relation between Stochastic Approximation (the online approach) and Sample Average Approximation (Empirical Risk Minimization, the offline approach). The second story is about (strongly) convex-concave saddle-point problems and accelerated methods. The third story is about state-of-the-art results in homogeneous federated learning, and the last story describes decentralized optimization from the point of view of standard optimization with affine constraints.
Alexander Gasnikov is a professor at the Moscow Institute of Physics and Technology. He received his Doctor of Science degree (Habilitation) in Mathematics in 2016 from the Faculty of Control and Applied Mathematics of the Moscow Institute of Physics and Technology, and his B.Sc., M.Sc., and Ph.D. in Mathematics in 2004, 2006, and 2007, respectively, from the same university. In 2019, he received a Yahoo Faculty Research Engagement Program award (jointly with Cesar Uribe). In 2020, he received the Yandex Award (Ilya Segalovich Award). In 2021, he received the Award for Young Scientists from the Moscow Government (jointly with Pavel Dvurechensky). His main area of research is optimization algorithms. He regularly publishes in top-tier machine learning conferences such as NeurIPS and ICML, and has papers in Q1 journals such as EJOR, OMS, and JOTA.