Empowering Large Language Models with Reliable Reasoning

Tuesday, February 27, 2024

Despite the remarkable advances made by large language models (LLMs) in a variety of applications, they still struggle to perform consistent and reliable reasoning when faced with highly complex tasks, such as solving logical problems and answering deep questions. This limitation becomes particularly significant when deploying LLMs in scenarios requiring high precision, such as financial analysis, medical diagnostics, and legal judgment. In this talk, I will discuss my research on building reliable generative AI agents by integrating symbolic representations and modules with large language models. This neuro-symbolic strategy combines the flexibility of language model reasoning with the precise knowledge representation and verifiable reasoning offered by symbolic systems. I will introduce three lines of our work: 1) Logic-LM, which integrates symbolic solvers for reliable logical reasoning; 2) ProgramFC, which integrates symbolic programs for explicit planning; and 3) learning from automated feedback. I will conclude by reflecting on the challenges we have faced and mapping out prospective directions towards building reliable generative AI agents.
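The Logic-LM idea above, in which the LLM formalizes a natural-language problem and a deterministic symbolic solver produces the verifiable answer, can be sketched as follows. This is a minimal illustration, not the actual Logic-LM implementation: `llm_translate` is a hypothetical stand-in for a prompted LLM call, and the solver here is a simple forward-chaining procedure over Horn clauses.

```python
# Minimal sketch of a Logic-LM-style pipeline (hypothetical names).
# Step 1: an LLM translates the problem into symbolic facts and rules.
# Step 2: a deterministic solver derives the answer, so the reasoning
# step itself is verifiable rather than generated token-by-token.

def llm_translate(problem: str) -> tuple[set[str], list[tuple[list[str], str]]]:
    """Stand-in for the LLM formalization step: returns (facts, rules),
    where each rule is a Horn clause (premises, conclusion)."""
    # In the real pipeline this is a prompted LLM call; here we hard-code
    # the formalization of a toy syllogism for illustration.
    facts = {"human(socrates)"}
    rules = [(["human(socrates)"], "mortal(socrates)")]
    return facts, rules

def forward_chain(facts: set[str],
                  rules: list[tuple[list[str], str]]) -> set[str]:
    """Symbolic solver: apply rules repeatedly until a fixpoint is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

question = "Socrates is human. All humans are mortal. Is Socrates mortal?"
facts, rules = llm_translate(question)
print("mortal(socrates)" in forward_chain(facts, rules))  # True
```

Because the derivation happens in the solver, any answer it returns comes with an explicit proof trace (the chain of applied rules), which is the source of the reliability the talk emphasizes.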


Post Talk Link: Click Here

Passcode: nwG4nv=w


Speaker

Liangming Pan is a Postdoctoral Scholar at the Natural Language Processing Group, University of California, Santa Barbara (UCSB), working with Prof. William Wang. He obtained his Ph.D. from the National University of Singapore in 2022, advised by Prof. Min-Yen Kan. His research interests lie in natural language processing and machine learning, with a main focus on large language model reasoning. He aims to build reliable generative AI agents able to handle complex reasoning scenarios such as logical problem-solving and deep question answering. He has published more than 30 papers at leading NLP/AI/ML conferences and journals. He received the Area Chair Award at IJCNLP-AACL 2023 and the NUS Research Achievement Award in 2021.
