Parameter-Efficient Fine-Tuning for NLP Models

Wednesday, April 26, 2023

State-of-the-art pre-trained language models in NLP achieve their best performance when fine-tuned, even on small downstream datasets, but due to their ever-increasing size, fine-tuning and downstream usage have become extremely compute-intensive. Being able to fine-tune the largest pre-trained models efficiently and effectively is thus key to reaping the benefits of the latest advances in NLP. In this tutorial, we provide an overview of parameter-efficient fine-tuning methods and highlight their similarities and differences by presenting them in a unified view.
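To give a flavor of what "parameter-efficient" means in practice, below is a minimal sketch of one widely used family of methods, low-rank adaptation (LoRA-style), in which the pre-trained weights stay frozen and only a small low-rank update is trained. This is an illustrative example only, not the specific formulation covered in the talk; the class name, `rank`, and `alpha` values are assumptions for the sketch.

```python
# Minimal LoRA-style sketch: freeze a pre-trained linear layer and train
# only a low-rank additive update. Illustrative, not the talk's own code.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only the low-rank factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen full-rank projection plus a trainable low-rank update.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # a small fraction of the total
```

With a rank of 8, only the two low-rank factors (roughly 12K parameters here) receive gradients, while the frozen base layer's ~590K parameters are reused as-is; this is the kind of trade-off the tutorial's unified view compares across methods.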


Post Talk Link:  Click Here 

Passcode: 8V^=A1Hk

Speaker

Indraneil is a PhD candidate in the UKP Lab at TU Darmstadt. He is currently researching parameter-efficient fine-tuning, sparsity, and conditional computation methods in large language models to improve performance in multilingual, multi-task settings. Previously, he was an applied scientist at Amazon Advertising, where he worked on few-shot multi-modal models for ad moderation and content generation in advertiser assistance.
