Two-sample tests ask, "given samples from each, are these two populations the same?" For instance, one might wish to know whether a treatment group and a control group differ. With very low-dimensional data and/or strong parametric assumptions, methods such as t-tests and Kolmogorov-Smirnov tests are widespread. Recent work in statistics and machine learning has sought tests that cover situations not well handled by these classic methods, providing tools useful in machine learning for domain adaptation, causal discovery, generative modeling, fairness, adversarial learning, and more. In this talk, I will introduce one advance in the two-sample testing field: two-sample testing for high-dimensional data. I will also present how advanced two-sample tests can be used to defend against adversarial attacks, demonstrating the significance of two-sample testing for AI security.
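To make the idea concrete, the sketch below shows one common nonparametric two-sample test of the kind studied in this line of work: a kernel maximum mean discrepancy (MMD) statistic with a permutation-based p-value. The function names, the fixed Gaussian-kernel bandwidth, and the permutation count are illustrative assumptions, not the specific test presented in the talk.

```python
import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of the squared MMD between samples X and Y,
    using a Gaussian kernel with a fixed bandwidth (illustrative choice)."""
    Z = np.vstack([X, Y])
    sq_dists = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2 * bandwidth ** 2))
    n = len(X)
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def permutation_test(X, Y, n_perms=500, bandwidth=1.0, seed=0):
    """p-value for H0: X and Y are drawn from the same distribution,
    obtained by recomputing the statistic under random relabelings."""
    rng = np.random.default_rng(seed)
    observed = mmd2(X, Y, bandwidth)
    Z = np.vstack([X, Y])
    n = len(X)
    exceed = 0
    for _ in range(n_perms):
        perm = rng.permutation(len(Z))
        exceed += mmd2(Z[perm[:n]], Z[perm[n:]], bandwidth) >= observed
    return (exceed + 1) / (n_perms + 1)

# Example: two 5-dimensional Gaussian samples with shifted means
X = np.random.default_rng(1).normal(0.0, 1.0, size=(100, 5))
Y = np.random.default_rng(2).normal(0.5, 1.0, size=(100, 5))
print("p-value:", permutation_test(X, Y))
```

A small p-value is evidence that the two samples come from different distributions; in high dimensions, the choice of kernel and bandwidth becomes critical, which is one motivation for the learned test statistics discussed in the talk.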
Dr Feng Liu is a machine learning researcher with research interests in hypothesis testing and trustworthy machine learning. He is currently a Lecturer (in Machine Learning) at The University of Melbourne, Australia, and a Visiting Scientist at RIKEN-AIP, Japan. He has served as an Area Chair for ICML, NeurIPS, and ICLR. He also serves as an Editor for ACM Transactions on Probabilistic Machine Learning, an Associate Editor for the International Journal of Machine Learning and Cybernetics, and an Action Editor for Neural Networks. He has received the ARC Discovery Early Career Researcher Award, the NeurIPS Outstanding Paper Award (2022), the NeurIPS Outstanding Reviewer Award (2021), and the ICLR Outstanding Reviewer Award (2021).