Efficiently Approximating Equivariance in Unconstrained Models

Monday, January 13, 2025

Equivariance in ML models has emerged as a powerful inductive bias for leveraging symmetries in 3D data, achieving significant success across scientific applications such as molecular modeling and protein generation. Despite these advances, strictly enforcing equivariance can impose prohibitive computational costs. Meanwhile, recent breakthroughs in protein structure prediction—exemplified by AlphaFold 3’s reliance on data augmentation techniques—have demonstrated that unconstrained architectures can still yield outstanding results, culminating in a Nobel Prize for their transformative impact on 3D protein modeling.

In this talk, we will introduce a new training procedure that approximates equivariance via a simple multitask objective. By adding an equivariance loss to unconstrained models, our approach enables these architectures to learn approximate symmetries without incurring the overhead of fully equivariant methods. Crucially, formulating equivariance as a flexible learning objective allows precise control over the extent to which symmetry is enforced, matching the performance of strictly equivariant baselines at a fraction of the training and inference cost.
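To make the idea concrete, below is a minimal PyTorch-style sketch of one way such a multitask objective could be written. This is an illustration under stated assumptions rather than the speaker's actual formulation: the names `multitask_loss`, `random_rotation`, and the weight `lambda_equiv` are hypothetical, and the model is assumed to map 3D point coordinates to 3D vector outputs.

```python
import torch

def random_rotation(device=None):
    """Sample a random 3D rotation matrix via QR decomposition of a Gaussian."""
    A = torch.randn(3, 3, device=device)
    Q, R = torch.linalg.qr(A)
    # Fix column signs so Q is uniform over O(3), then force det(Q) = +1 (SO(3)).
    Q = Q * torch.sign(torch.diagonal(R))
    if torch.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

def multitask_loss(model, x, y, task_loss_fn, lambda_equiv=1.0):
    """Task loss plus a soft rotation-equivariance penalty.

    x: (N, 3) input coordinates; model(x): (N, 3) predicted vectors.
    lambda_equiv controls how strongly the symmetry is enforced.
    """
    pred = model(x)
    loss_task = task_loss_fn(pred, y)

    # Equivariance penalty: f(R x) should agree with R f(x) for a random R.
    R = random_rotation(device=x.device)
    pred_of_rotated = model(x @ R.T)   # f(R x)
    rotated_pred = pred @ R.T          # R f(x)
    loss_equiv = torch.mean((pred_of_rotated - rotated_pred) ** 2)

    return loss_task + lambda_equiv * loss_equiv
```

Here `lambda_equiv` plays the role of the flexible learning objective described above: setting it to zero recovers the fully unconstrained model, while larger values push the network toward strict equivariance.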

We will conclude by surveying key achievements, current challenges, and future directions for equivariant models, emphasizing the balance between rigorous theory and practical scalability in geometric deep learning.

Speaker

Ahmed Elhag is a PhD student in the Department of Computer Science at the University of Oxford, specializing in machine learning, geometric deep learning, and generative models. He is particularly interested in how these approaches can be combined to develop robust ML methods that accelerate progress in drug discovery and design. His research spans multiple scientific problems, including molecule generation, anisotropic diffusion on graphs, and manifold diffusion fields, and his work has been presented at conferences such as ICML and ICLR. He previously interned with the Apple Machine Learning Research team, working on scalable machine learning models for 3D and graph-structured data.
