Prior to joining MBZUAI, Professor Shelmanov led natural language processing research programs on active learning, uncertainty estimation, and biomedical text processing at the Artificial Intelligence Research Institute (AIRI), combining scientific research with real-world deployment. Previously, at the Skolkovo Institute of Science and Technology (Skoltech), Professor Shelmanov carried out postdoctoral research on uncertainty-aware natural language processing and industry-driven applications in healthcare and information extraction.
- Ph.D. in Computer Science, Russian Academy of Sciences, Russia
- M.Sc. in Computer Science, National Research Nuclear University MEPhI, Russia
- B.Sc. in Computer Science, National Research Nuclear University MEPhI, Russia
- MBZUAI–ETH / Swiss AI Initiative Research Grant, 2026
- MBZUAI Outstanding Teaching Assistant and Peer Mentor – Researcher Award, 2025–2026
- EACL Best Resource Paper Award, 2024
- MBZUAI–Weizmann Institute of Science Joint Program Grant, 2023
- AIRI Special Recognition for Research Contributions, 2022
- FASIE Grant, 2021
- RSF Grant, 2020, 2017, 2016
- Joint MTS-Skoltech Laboratory Grant, 2020
- RFBR Grant, 2017
- Ekaterina Fadeeva, Maiya Goloburda, Aleksandr Rubashevskii, Roman Vashurin, Artem Shelmanov, Preslav Nakov, Mrinmaya Sachan, Maxim Panov: "Don’t Throw Away Your Beams: Improving Consistency-based Uncertainties in LLMs via Beam Search", ICLR, 2026.
- Roman Vashurin, Maiya Goloburda, Albina Ilina, Aleksandr Rubashevskii, Preslav Nakov, Artem Shelmanov, Maxim Panov: "Uncertainty Quantification for LLMs through Minimum Bayes Risk: Bridging Confidence and Consistency", NeurIPS, 2025.
- Artem Shelmanov, Ekaterina Fadeeva, Akim Tsvigun, Ivan Tsvigun, Zhuohan Xie, Igor Kiselev, Nico Daheim, Caiqi Zhang, Artem Vazhentsev, Mrinmaya Sachan, Preslav Nakov, Timothy Baldwin: "A Head to Predict and a Head to Question: Pre-trained Uncertainty Quantification Heads for Hallucination Detection in LLM Outputs", EMNLP, 2025.
- Roman Vashurin, Ekaterina Fadeeva, Artem Vazhentsev, Lyudmila Rvanova, Akim Tsvigun, Daniil Vasilev, Rui Xing, Abdelrahman Boda Sadallah, Kirill Grishchenkov, Sergey Petrakov, Alexander Panchenko, Timothy Baldwin, Preslav Nakov, Maxim Panov, Artem Shelmanov: "Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph", TACL, 2025.
- Artem Vazhentsev, Lyudmila Rvanova, Ivan Lazichny, Alexander Panchenko, Maxim Panov, Timothy Baldwin, Artem Shelmanov: "Token-Level Density-Based Uncertainty Quantification Methods for Eliciting Truthfulness of Large Language Models", NAACL, 2025.
- Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem Shelmanov, Akim Tsvigun, Chenxi Whitehouse, Osama Mohammed Afzal, Tarek Mahmoud, Toru Sasaki, Thomas Arnold, Alham Fikri Aji, Nizar Habash, Iryna Gurevych, Preslav Nakov: "M4: Multi-generator, Multi-domain, and Multi-lingual Black-box Machine-generated Text Detection", EACL, 2024, Best Resource Paper Award.
- Ekaterina Fadeeva, Aleksandr Rubashevskii, Artem Shelmanov, Sergey Petrakov, Haonan Li, Hamdy Mubarak, Evgenii Tsymbalov, Gleb Kuzmin, Alexander Panchenko, Timothy Baldwin, Preslav Nakov, Maxim Panov: "Fact-checking the Output of Large Language Models via Token-level Uncertainty Quantification", ACL Findings, 2024.
- Ekaterina Fadeeva, Roman Vashurin, Akim Tsvigun, Artem Vazhentsev, Sergey Petrakov, Kirill Fedyanin, Daniil Vasilev, Elizaveta Goncharova, Alexander Panchenko, Maxim Panov, Timothy Baldwin, Artem Shelmanov: "LM-Polygraph: Uncertainty Estimation for Language Models", EMNLP, 2023.
- Artem Vazhentsev, Gleb Kuzmin, Akim Tsvigun, Alexander Panchenko, Maxim Panov, Mikhail Burtsev, Artem Shelmanov: "Hybrid Uncertainty Quantification for Selective Text Classification in Ambiguous Tasks", ACL, 2023.
- Ozge Sevgili, Artem Shelmanov, Mikhail Arkhipov, Alexander Panchenko, Chris Biemann: "Neural Entity Linking: A Survey of Models based on Deep Learning", Semantic Web Journal, 2022.