Professor Stoyanov’s research interests relate to large language models, including pretraining, fine-tuning for instruction following, and applications designed to solve real-world problems. He is also interested in efficient, sparsely activated models such as mixtures of experts (MoE), as well as multilingual LLMs and training models to perform tasks cross-lingually. He is seeking to develop new paradigms for applying LLMs in real-world, interactive scenarios that augment the creative process while allowing people to be more efficient.
Prior to joining MBZUAI, Professor Stoyanov served as the head of AI/ML at Tome, a productivity company based in San Francisco, beginning in 2023, where he led the development of new approaches for AI-powered products. Before joining Tome, he worked for nearly a decade at Facebook and Meta, where he most recently served as an applied research scientist manager and led the development of pretrained language models such as RoBERTa, XLM-R, and OPT. His work at Facebook and Meta broadly related to NLP for search, neural machine translation, self-supervised methods for identifying hate speech, and multilingual language models.
He was also integral to the team that built MultiRay, a service that runs multiple, very large and accurate self-supervised models on the same input. Prior to Facebook, Stoyanov was an assistant research scientist at Johns Hopkins University's Center for Language and Speech Processing, where he received a Computing Innovation Fellowship and focused on machine learning for structured prediction.