Label noise is ubiquitous in the era of big data. Without properly handling the noise, deep learning algorithms can easily overfit incorrect labels and thus fail to generalize well. In this talk, we will introduce the two typical approaches to dealing with label noise: extracting confident examples (examples whose labels are likely to be correct) and modelling the label noise. The former helps discard incorrect labels, while the latter helps build statistically consistent classifiers. We will illustrate the intuitions behind the state-of-the-art methods. We hope that, through the talk, participants will gain a working understanding of how to learn with noisy labels.
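The two approaches above can be sketched in a few lines of code. This is a minimal illustrative sketch, not a method from the talk: the function names, the small-loss selection ratio, and the toy transition matrix are all assumptions made for illustration.

```python
import numpy as np

def select_confident(losses, keep_ratio):
    """Confident-example extraction via the small-loss trick:
    examples with the smallest training loss are treated as
    likely to have correct labels. `keep_ratio` is an assumed
    hyperparameter (fraction of examples to keep)."""
    k = int(len(losses) * keep_ratio)
    return np.argsort(losses)[:k]  # indices of the k smallest losses

def forward_corrected_probs(clean_probs, T):
    """Label-noise modelling via a noise transition matrix T,
    where T[i, j] = P(noisy label = j | clean label = i).
    Mapping the classifier's clean-class probabilities through T
    yields predictions over noisy labels, which can be trained
    against the observed (noisy) labels to obtain a statistically
    consistent classifier."""
    return clean_probs @ T

# Toy example (values are illustrative assumptions):
losses = np.array([0.10, 2.00, 0.05, 1.50])
confident_idx = select_confident(losses, keep_ratio=0.5)

clean_probs = np.array([[1.0, 0.0]])          # classifier is sure of class 0
T = np.array([[0.9, 0.1],                     # 10% of class-0 labels flip to 1
              [0.2, 0.8]])                    # 20% of class-1 labels flip to 0
noisy_probs = forward_corrected_probs(clean_probs, T)
```

In this sketch, `select_confident` keeps the half of the examples with the smallest losses, and `forward_corrected_probs` maps a confident class-0 prediction to the noisy-label distribution `[0.9, 0.1]`, matching the first row of the transition matrix.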
Tongliang Liu is the Director of the Sydney AI Centre at the University of Sydney. He is broadly interested in trustworthy machine learning and its interdisciplinary applications, with a particular emphasis on learning with noisy labels, adversarial learning, transfer learning, unsupervised learning, and statistical deep learning theory. He is or has been a meta-reviewer for many conferences, including ICML, NeurIPS, ICLR, UAI, AAAI, IJCAI, and KDD. He is a recipient of the Discovery Early Career Researcher Award (DECRA) from the Australian Research Council (ARC) and was named in the Early Achievers Leaderboard of Engineering and Computer Science by The Australian in 2020.