
Parameter-Efficient Fine-Tuning for NLP Models

Wednesday, April 26, 2023

State-of-the-art language models in NLP perform best when fine-tuned, even on small datasets, but due to their increasing size, fine-tuning and downstream usage have become extremely compute-intensive. The ability to fine-tune the largest pre-trained models efficiently and effectively is therefore key to reaping the benefits of the latest advances in NLP. In this tutorial, we provide an overview of parameter-efficient fine-tuning methods, highlighting their similarities and differences by presenting them in a unified view.
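To make the idea concrete, here is a minimal sketch of one well-known family of parameter-efficient fine-tuning methods, low-rank adaptation (LoRA-style). The dimensions and initialization scale below are illustrative assumptions, not details from the talk: the pre-trained weight is frozen, and only two small low-rank factors are trained.

```python
import numpy as np

# Illustrative dimensions (assumed, not from the talk): a 768x768
# linear layer adapted with a rank-8 update.
d_in, d_out, rank = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_in, d_out))   # frozen pre-trained weight

# Only A and B are trained during fine-tuning.
A = rng.standard_normal((d_in, rank)) * 0.01
B = np.zeros((rank, d_out))              # zero init: the adapter starts as a no-op

def adapted_forward(x):
    """Forward pass: frozen weight plus low-rank update A @ B."""
    return x @ W + (x @ A) @ B

full_params = W.size
adapter_params = A.size + B.size
print(f"trainable fraction: {adapter_params / full_params:.4f}")  # prints 0.0208
```

With these (assumed) shapes, the adapter trains roughly 2% of the layer's parameters while leaving the pre-trained weight untouched, which is the core trade-off parameter-efficient methods exploit.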


Post-talk link: Click Here

Passcode: 8V^=A1Hk

Speaker

Indraneil is a PhD candidate in the UKP Lab at TU Darmstadt. He is currently researching parameter-efficient fine-tuning, sparsity, and conditional computation methods in large language models to improve performance in multilingual, multi-task settings. Previously, he was an applied scientist at Amazon Advertising, where he worked on few-shot multi-modal models for advert moderation and content generation in advertiser assistance.
