People can easily distinguish between different instances of the same object: we can identify a mouse whether it is Mickey Mouse or a real mouse. In contrast, machines often struggle with even simple transformations. This illustrates a core challenge in machine learning: domain shift, where the data distribution at test time differs from that of the training data. This talk will focus on techniques developed to mitigate domain shift, broadly categorized into domain adaptation and domain generalization. Domain adaptation methods train with labeled data from the source (training) domain and unlabeled data from the target (test) domain, aiming to improve the performance of the model on the target data. Domain generalization uses labeled data from one or more source domains, aiming to generalize to unseen target domains. The presentation will highlight our research in these areas and continue with our recent work in the related fields of transferability estimation and domain-invariant accuracy prediction.
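To make the distinction between the two settings concrete, here is a minimal sketch (not the speaker's method): both settings minimize a supervised loss on labeled source data, but only domain adaptation can additionally use unlabeled target data at training time, here via an illustrative feature-mean alignment penalty. The toy data, network sizes, and the `distribution_gap` penalty are all assumptions for illustration.

```python
# Hypothetical sketch contrasting domain adaptation and domain generalization.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: a labeled source domain and an unlabeled, shifted target domain.
n, d, num_classes = 256, 32, 4
x_src = torch.randn(n, d)
y_src = torch.randint(0, num_classes, (n,))
x_tgt = torch.randn(n, d) + 0.5          # distribution shift, no labels

feature = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 16))
classifier = nn.Linear(16, num_classes)
opt = torch.optim.Adam(list(feature.parameters()) + list(classifier.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

def distribution_gap(f_src, f_tgt):
    """Illustrative alignment penalty: squared distance between feature means."""
    return (f_src.mean(0) - f_tgt.mean(0)).pow(2).sum()

domain_adaptation = True  # False would correspond to the domain generalization setting

for step in range(100):
    opt.zero_grad()
    f_src = feature(x_src)
    loss = ce(classifier(f_src), y_src)   # supervised source loss (used in both settings)

    if domain_adaptation:
        # Domain adaptation: unlabeled target features are available during
        # training, so the two feature distributions can be explicitly aligned.
        f_tgt = feature(x_tgt)
        loss = loss + 0.1 * distribution_gap(f_src, f_tgt)
    # Domain generalization: the target domain is unseen at training time, so
    # the model relies only on labeled source domain(s), e.g. by learning
    # features that are invariant across several source domains.

    loss.backward()
    opt.step()
```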
Post-Talk Link: Click Here
Passcode: Z@6YjT9N
Mahsa Baktashmotlagh is an ARC Future Fellow and a Senior Lecturer in the data science discipline at the University of Queensland (UQ). Mahsa completed her PhD in 2014, with her thesis contributing original work to the fundamental theory of stationarity learning for visual domain adaptation and video analysis. She has published around 60 articles in high-impact journals and conferences in machine learning and computer vision, collaborating with Australian and leading international researchers. In 2023, Mahsa was awarded a highly competitive ARC Future Fellowship, receiving $960k in funding to conduct research in Safe AI. Recently, she received a prestigious Facebook Research Award for her work on “Learning and Evaluation Under Uncertainty”.