
Eduard Gorbunov

Assistant Professor of Statistics and Data Science

Research Interests

Professor Gorbunov’s research lies at the intersection of optimization, statistics, and machine learning, with a particular focus on developing efficient and robust optimization methods for large-scale learning systems. His work addresses theoretical and practical challenges in training models under real-world constraints such as decentralization, noise, and privacy.


Prior to his faculty appointment, Professor Gorbunov was a Postdoctoral Fellow and later a Research Scientist in the Machine Learning Department at MBZUAI. He has also held a research consultancy position at Mila – Quebec AI Institute, collaborating with the group of Professor Gauthier Gidel.

Professor Gorbunov earned his Ph.D. in Computer Science from the Moscow Institute of Physics and Technology (MIPT), where he worked with Professors Alexander Gasnikov and Peter Richtárik on distributed optimization methods, gradient compression, and local update techniques. He also holds Master’s and Bachelor’s degrees in Applied Mathematics and Physics from MIPT.

Throughout his academic career, Professor Gorbunov has held research positions at institutions including MIPT, Huawei, Yandex Research, and the Higher School of Economics (HSE). His research has contributed to key developments in stochastic optimization, gradient compression, and learning under heavy-tailed noise.

He is the recipient of several prestigious honors, including the Ilya Segalovich Award from Yandex and multiple Outstanding Reviewer Awards from top-tier conferences such as NeurIPS, ICML, and ICLR. During his studies, he received several competitive scholarships in recognition of his academic excellence.

Education and Appointments

  • Postdoctoral Fellow, MBZUAI
  • Ph.D. in Computer Science, Moscow Institute of Physics and Technology (MIPT)
  • Master of Science in Applied Mathematics and Physics, MIPT
  • Bachelor of Science in Applied Mathematics and Physics, MIPT

Honors and Awards

  • Ilya Segalovich Award for Scientific Achievements, Yandex (April 2019). One of only nine winners across Russia, Belarus, and Kazakhstan; the award includes research funding, an internship offer at Yandex Research, and international travel support.
  • Outstanding Reviewer Awards: NeurIPS (2020, 2021, 2022), ICML (2021, 2022), ICLR (2021)
  • A. M. Raigorodskii Scholarship for contributions to numerical optimization methods (February – June 2021 and September 2021 – January 2022)
  • Huawei Scholarship for academic excellence (January 2020), awarded to top-performing MIPT students in bachelor’s and master’s programs
  • Increased State Academic Scholarship for Scientific Achievements, MIPT, awarded multiple times between 2017 and 2020 during undergraduate and graduate studies

His key areas of research interest include:

  • Stochastic and distributed optimization
  • Federated learning and personalized model training
  • Robustness to Byzantine failures and heavy-tailed noise
  • Differential privacy and privacy-preserving learning algorithms
  • Adaptive methods and gradient clipping techniques
  • Variational inequalities and game-theoretic approaches in machine learning

Selected Publications

  • ICLR 2025: “Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization,” Y. Demidovich, P. Ostroukhov, G. Malinovsky, S. Horváth, M. Takáč, P. Richtárik, E. Gorbunov
  • ICLR 2025: “Methods for Convex (L₀, L₁)-Smooth Optimization: Clipping, Acceleration, and Adaptivity,” E. Gorbunov, N. Tupitsa, S. Choudhury, A. Aliev, P. Richtárik, S. Horváth, M. Takáč
  • NeurIPS 2024 (Spotlight): “Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities,” A. Agafonov, P. Ostroukhov, R. Mozhaev, K. Yakovlev, E. Gorbunov, M. Takáč, A. Gasnikov, D. Kamzolov
  • NeurIPS 2024: “Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad,” S. Choudhury, N. Tupitsa, N. Loizou, S. Horváth, M. Takáč, E. Gorbunov
  • NeurIPS 2024: “Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences,” G. Malinovsky, P. Richtárik, S. Horváth, E. Gorbunov
  • EMNLP 2024 (Findings): “Low-Resource Machine Translation through the Lens of Personalized Federated Learning,” V. Moskvoretskii, N. Tupitsa, C. Biemann, S. Horváth, E. Gorbunov, I. Nikishina
  • ICML 2024 (Oral): “High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise,” E. Gorbunov, A. Sadiev, M. Danilova, S. Horváth, G. Gidel, P. Dvurechensky, A. Gasnikov, P. Richtárik
