Prior to joining MBZUAI as faculty, Professor Kuribayashi held postdoctoral research positions at Tohoku University and at MBZUAI, focusing on bridging natural language processing (NLP) and language science. He also co-founded a start-up in Japan dedicated to developing writing assistance systems.
Professor Kuribayashi earned his Ph.D. in Information Science from Tohoku University, supported by the prestigious JSPS DC1 fellowship. His doctoral research investigated fundamental differences between human language processing and language models. He has published over 30 research articles, including numerous papers at top-tier NLP conferences, covering a broad range of interdisciplinary topics.
Committed to advancing the field, Professor Kuribayashi has served as a lead organizer of international workshops on cognitive modeling and computational linguistics. He is also an action editor for ACL Rolling Review and has served on program committees for major conferences such as ACL, EMNLP, NAACL, NeurIPS, COLING, CoNLL, and LREC.
- 2022 Ph.D. in Information Science, Graduate School of Information Sciences, Tohoku University, Japan.
- 2020 Master of Information Science, Graduate School of Information Sciences, Tohoku University, Japan.
- 2018 Bachelor of Engineering, Department of Information and Intelligent Systems, Tohoku University, Japan.
- 2024 Outstanding Paper Award, Annual Meeting of the Association for Natural Language Processing, Japan.
- 2024, 2023, 2021 Special Committee Award, Annual Meeting of the Association for Natural Language Processing, Japan.
- 2023 Best Paper Award, ACL Student Research Workshop.
- 2022 Best Paper Award, AACL Student Research Workshop.
- 2022 President's Award, Graduate School of Information Sciences, Tohoku University.
- 2022, 2020 Excellent Student Award in Electrical and Information Science, Tohoku University.
- 2021 Best Paper Award, Annual Meeting of the Association for Natural Language Processing, Japan.
- 2020 Best Paper Award, Annual Meeting of the Association for Natural Language Processing, Japan.
- 2018 Excellent Academic Award, School of Engineering, Tohoku University.
He is particularly interested in fundamental questions, such as:
- What can modern NLP tell us about human language?
- Under what conditions can human-like language ability be replicated?
His research also bears on the efficiency of NLP systems, given that humans can learn and process language efficiently.
- Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin. "Emergent Word Order Universals from Cognitively-Motivated Language Models." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024, main long), 2024/08.
- Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin. "Psychometric Predictive Power of Large Language Models." In Findings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024, Findings long), 2024/06.
- Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui. "Lower Perplexity is Not Always Human-Like." In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021, main long), pp. 5203-5217, 2021/08.
- Rena Wei Gao, Xuetong Wu, Tatsuki Kuribayashi, Mingrui Ye, Siya Qi, Carsten Roever, Yuanxing Liu, Zheng Yuan, Jey Han Lau. "Can LLMs Simulate L2-English Dialogue? An Information-Theoretic Analysis of L1-Dependent Biases." In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025, main long), 2025/08.
- Tatsuki Kuribayashi, Timothy Baldwin. "Does Vision Accelerate Hierarchical Generalization of Neural Language Learners?" In Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025, long), 2025/01.
- Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. "Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps." In Proceedings of the 12th International Conference on Learning Representations (ICLR 2024, spotlight, top 5%), 2024/05.