Praneeth Vepakomma

Assistant Professor of Machine Learning 

Research interests

The “big problem” that Professor Vepakomma’s research aims to solve is motivated by this question: “How can one effectively enable individual, organizational, regional, and global collaboration through intelligence sharing across device ecosystems without infringing on privacy, security, safety, trust, and regulation, while incentivizing the entire workflow?” This includes a major focus on responsible and trustworthy AI, distributed and private computation for machine learning, statistical inference, and data science at large.

Prior to joining MBZUAI, Professor Vepakomma gained extensive industry experience at Meta, Apple, Amazon, Motorola Solutions, Corning, and several startups. He earned his Ph.D. from MIT, where he worked on distributed and private computation. He is the president of IntegrityDistributed, a research-based non-profit he co-founded that focuses on anti-corruption and financial compliance; the non-profit won the Financial Times Digital Innovation Award.

Professor Vepakomma's research focuses on developing algorithms for distributed computation in statistics and machine learning under constraints of privacy and efficiency. His technical work is inspired by the foundations of non-asymptotic statistics, randomized algorithms, learning-augmented algorithms, and combinatorics, and at times simply by systems design.

He won the ADIA Lab Fellowship, the Meta PhD Research Fellowship in Applied Statistics, and two SERC Scholarships (for Social and Ethical Responsibilities of Computing) from MIT's Schwarzman College of Computing. He also won a Best Student Paper Award at FL-IJCAI, a Baidu Best Paper Award at NeurIPS-SpicyFL, and a Best Paper Runner-Up Award at FG-2021. He has organized several workshops at ICLR, ICML, IJCAI, CVPR, and NeurIPS.

Education

  • Ph.D., Massachusetts Institute of Technology, Cambridge, MA, U.S.A. (2024)
  • M.Sc. in Mathematical and Applied Statistics, Rutgers University, New Brunswick, NJ, U.S.A. (2009)
 

Selected publications

  • Posthoc privacy guarantees for collaborative inference with modified Propose-Test-Release, @NeurIPS 2023 (Thirty-seventh Conference on Neural Information Processing Systems), Abhishek Singh, Praneeth Vepakomma, Vivek Sharma, Ramesh Raskar. Topic: Differential privacy, formalizing the privacy of informal ML pipelines, distributed/collaborative inference (2023)
  • PrivateMail: Differentially private supervised manifold learning of deep features with privacy, @AAAI 2022 (36th AAAI Conference on Artificial Intelligence, Oral), Praneeth Vepakomma, Julia Balla, Ramesh Raskar. Topic: Differential privacy, privacy-preserving ML, on-device ML (2022)
  • Differentially private Fréchet Mean on the Manifold of Symmetric Positive Definite (SPD) Matrices, @TMLR (Transactions on Machine Learning Research, journal), Saiteja Utpala, Praneeth Vepakomma, Nina Miolane. Topic: Geometric statistics, differential privacy, differential geometry (2023)
  • DISCO: Dynamic and invariant sensitive channel obfuscation for deep neural networks, @CVPR 2021 (IEEE Conference on Computer Vision and Pattern Recognition), A. Singh, A. Chopra, V. Sharma, E. Garza, E. Zhang, P. Vepakomma, R. Raskar. Topic: Preventing reconstruction attacks, distributed inference (2021)
  • Supervised Dimensionality Reduction via Maximization of Distance Correlation, @Electronic Journal of Statistics (journal), P. Vepakomma, C. Tonde, A. Elgammal. Topic: Statistics, optimization, ML (2018)
  • Advances and open problems in federated learning, @Foundations and Trends in Machine Learning, Vol. 14, Issues 1–2, with 58 authors from 25 institutions (2021)
  • Parallel quasi-concave set function optimization for scalability even without submodularity, @IEEE HPEC (High Performance Extreme Computing Conference), Praneeth Vepakomma, Yulia Kempner, Rodmy Paredes Alfaro, Ramesh Raskar. Topic: Parallel combinatorial optimization (2023)
