As Artificial Intelligence (AI) achieves widespread adoption across diverse sectors, public concern over its safety and dependability has markedly increased, particularly in light of high-profile and sometimes fatal incidents involving autonomous systems. In this context, an active strand of research has emerged to address these concerns.
In this presentation, I will first introduce a comprehensive suite of technologies designed to enhance and ensure the trustworthiness of AI systems across a broad spectrum of criticality. This toolkit comprises methodologies such as formal verification, software testing, and explainable AI, each contributing in a complementary way to building a foundation of trust in AI applications. I will then demonstrate how these approaches can be applied to specific, current challenges that AI systems face, such as copyright protection for AI models. The presentation will conclude with a forward-looking exploration of the challenges and prospects ahead for research in trustworthy AI, setting the stage for a dialogue on the future of AI dependability and integrity.
Post Talk Link: Click Here
Passcode: ?hzDL^1F
Dr. Youcheng Sun is an Assistant Professor at The University of Manchester, UK. Prior to Manchester, he worked at Queen's University Belfast, and before that he was a postdoctoral researcher at the University of Oxford. He obtained his PhD from Scuola Superiore Sant'Anna, a special-statute, highly selective public research university in Pisa, Italy. A recognised expert in the field of Trustworthy AI, Dr. Sun has led the development of innovative techniques for ensuring the safety of AI systems. His research has been funded by leading organisations including Google, the Ethereum Foundation, the UK's Defence Science and Technology Laboratory (Dstl), and BAE Systems. He has also been awarded grants from The Alan Turing Institute for the UK-Italy Trustworthy AI Visiting Researcher Programme. Previously, Dr. Sun led the source code testing and verification work for two UK domestic airborne software projects, SECT-AIR and AUTOSAC, and was a member of the EU H2020 project SAFURE, which investigated safety and security assurance in the design of mixed-critical cyber-physical systems. Dr. Sun has a strong track record of publications in top-tier academic conferences and journals. He serves as an Associate Editor of ACM Transactions on Software Engineering and Methodology (TOSEM) and as a Guest Editor for the journal's special issue on Software Engineering and AI.