In this talk, I will present my work on developing machine learning methods for scientific discovery and simulation. In scientific discovery, finding universal, simple laws from multiple dynamical systems is critical across many scientific disciplines. I introduce a paradigm and method for learning theories in which, instead of using one model to learn everything, each theory parsimoniously predicts one aspect of the dynamics together with the domain in which those predictions are accurate. By incorporating four important inductive biases from how physicists model the world (divide-and-conquer, Occam's razor, unification, and lifelong learning), our method achieves significantly better accuracy, interpretability, and sample efficiency.
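The divide-and-conquer idea above can be illustrated with a minimal sketch: each "theory" pairs a simple predictor with the domain where it claims validity, and prediction dispatches to the theory covering the input. The names (`Theory`, `predict_with_theories`) and the piecewise-linear system are illustrative assumptions, not the paper's actual API.

```python
class Theory:
    """A parsimonious predictor plus the domain where it is accurate."""

    def __init__(self, slope, intercept, domain):
        self.slope = slope          # simple (here linear) dynamics model
        self.intercept = intercept
        self.domain = domain        # (lo, hi) region of claimed validity

    def in_domain(self, x):
        lo, hi = self.domain
        return lo <= x < hi

    def predict(self, x):
        return self.slope * x + self.intercept


def predict_with_theories(theories, x):
    """Divide-and-conquer: apply the theory whose domain contains x."""
    for t in theories:
        if t.in_domain(x):
            return t.predict(x)
    raise ValueError("no theory covers this input")


# Two regimes of a toy piecewise-linear dynamical system
theories = [
    Theory(2.0, 0.0, (0.0, 1.0)),   # regime A
    Theory(-1.0, 3.0, (1.0, 2.0)),  # regime B
]

print(predict_with_theories(theories, 0.5))  # regime A: 2 * 0.5 = 1.0
print(predict_with_theories(theories, 1.5))  # regime B: -1.5 + 3 = 1.5
```

In the actual method, both the predictors and their domains are learned jointly rather than fixed by hand.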
Secondly, across most disciplines of science, e.g., physics, biomedicine, materials, mechanical engineering, and energy, one of the most critical challenges is that simulations are typically slow due to the complex, multi-resolution nature of the systems involved. I introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP), the first deep-learning-based surrogate model that jointly learns the evolution model and optimizes spatial resolution, allocating more computation to highly dynamic regions. LAMP comprises a Graph Neural Network (GNN)-based evolution model that learns the forward dynamics and a GNN-based actor-critic, trained via reinforcement learning, that learns a policy over discrete actions for local refinement and coarsening of the spatial mesh. Our experiments on 1D PDEs and 2D mesh-based simulations demonstrate LAMP's superior performance compared to state-of-the-art surrogate models combined with Adaptive Mesh Refinement.
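The refine/coarsen step can be sketched as mapping per-cell dynamics estimates to discrete mesh actions. Here a simple threshold rule stands in for LAMP's learned GNN actor-critic; the function name, thresholds, and action labels are all illustrative assumptions.

```python
def choose_actions(cell_activity, refine_thresh=0.5, coarsen_thresh=0.1):
    """Map per-cell dynamics estimates to discrete mesh actions.

    A hand-written threshold policy standing in for the learned
    actor-critic: refine busy regions, coarsen quiescent ones.
    """
    actions = []
    for a in cell_activity:
        if a > refine_thresh:
            actions.append("refine")    # allocate more compute here
        elif a < coarsen_thresh:
            actions.append("coarsen")   # save compute here
        else:
            actions.append("keep")
    return actions


# One cell per entry: very dynamic, nearly static, moderately dynamic
activity = [0.8, 0.05, 0.3]
print(choose_actions(activity))  # ['refine', 'coarsen', 'keep']
```

In LAMP, the policy is instead learned with reinforcement learning, so the trade-off between error and computational cost is optimized end to end rather than set by fixed thresholds.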
Post Talk Link: Click Here
Passcode: 96lwL$7p
Tailin Wu is a postdoctoral scholar in the Computer Science Department at Stanford University, working with Prof. Jure Leskovec. He received his Ph.D. in Physics from MIT, where his thesis focused on AI for Physics and Physics for AI. His research interests include developing machine learning methods for large-scale scientific simulations, neuro-symbolic methods for scientific discovery, and representation learning, using tools from graph neural networks, information theory, and physics. His work has been published in top machine learning conferences and leading physics journals, and featured in MIT Technology Review. He also serves as a reviewer for high-impact journals such as PNAS, Nature Communications, Nature Machine Intelligence, and Science Advances.