Asymmetry Learning and OOD Robustness

Tuesday, November 22, 2022

Generalizing from observed environments to new, related environments (out-of-distribution) is central to the reliability of classifiers. However, most classifiers fail to predict label Y from input X when the change in environment is due to a (stochastic) input transformation T not observed in training. In this talk I will argue that the current supervised learning paradigm, which seeks invariances in the data in order to learn invariant functions (e.g., so that all dog images receive the same label "dog"), is not out-of-distribution (OOD) robust. I will then introduce Asymmetry Learning, a new learning paradigm that instead focuses on finding evidence of asymmetries in the data. Asymmetry Learning performs a causal structure search that, under certain identifiability conditions, finds classifiers that perform equally well in-distribution and out-of-distribution. Asymmetry Learning is inspired by a defining characteristic of human intelligence, one that sets young children apart from apes: we assume symmetries (invariance to transformations) until we encounter data with evidence that asymmetries are necessary to perform a task. I will also introduce the concept of counterfactual invariance, as well as OOD tasks and robustness in graphs.
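The distinction between symmetry and asymmetry in the abstract can be made concrete with a small sketch (not the speaker's method; the function and transformations below are illustrative assumptions): a classifier f is invariant to a transformation T when f(T(x)) = f(x) for every input x, and T is an asymmetry for the task when it can change the label.

```python
# Minimal sketch: test whether a classifier f is invariant to a
# transformation T, i.e. f(T(x)) == f(x) for all inputs x.
# The classifier, transformations, and data are toy assumptions.

def is_invariant(f, T, inputs):
    """Return True if f assigns the same label to x and T(x) for every x."""
    return all(f(T(x)) == f(x) for x in inputs)

# Toy classifier: label a 2-D point by the sign of its first coordinate.
f = lambda x: 1 if x[0] >= 0 else 0

flip_vertical = lambda x: (x[0], -x[1])    # symmetry: label unchanged
flip_horizontal = lambda x: (-x[0], x[1])  # asymmetry: label can change

points = [(1.0, 2.0), (-3.0, 0.5), (0.2, -1.0)]
print(is_invariant(f, flip_vertical, points))    # True
print(is_invariant(f, flip_horizontal, points))  # False
```

In this toy setting, a learner that assumes invariance to horizontal flips would fail OOD when test inputs are horizontally flipped, which is the kind of unobserved transformation the abstract describes.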

Speaker

Bruno Ribeiro is an Assistant Professor in the Department of Computer Science at Purdue University. He obtained his Ph.D. at the University of Massachusetts Amherst and did his postdoctoral studies at Carnegie Mellon University. His research interests are in invariant and causal representation learning, with a focus on sampling and modeling relational and temporal data. He received an NSF CAREER award in 2020, an Amazon Research Award in 2022, and multiple best paper awards, including the ACM SIGMETRICS 2016 best paper award.
