Tatsuki Kuribayashi

Assistant Professor of Natural Language Processing

Research Interests

He is particularly interested in fundamental questions such as: What can modern NLP tell us about human language? Under what conditions can human-like language ability be replicated? His research is also relevant to the efficiency of NLP systems, given that humans learn and process language efficiently.

Selected Publications

- Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin. "Emergent Word Order Universals from Cognitively-Motivated Language Models." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024, main long), 2024/08.
- Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin. "Psychometric Predictive Power of Large Language Models." In Findings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024, Findings long), 2024/06.
- Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui. "Lower Perplexity is Not Always Human-Like." In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021, main long), pp. 5203–5217, 2021/08.
- Rena Wei Gao, Xuetong Wu, Tatsuki Kuribayashi, Mingrui Ye, Siya Qi, Carsten Roever, Yuanxing Liu, Zheng Yuan, Jey Han Lau. "Can LLMs Simulate L2-English Dialogue? An Information-Theoretic Analysis of L1-Dependent Biases." In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025, main long), 2025/08.
- Tatsuki Kuribayashi, Timothy Baldwin. "Does Vision Accelerate Hierarchical Generalization of Neural Language Learners?" In Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025, long), 2025/01.
- Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. "Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps." In Proceedings of the 12th International Conference on Learning Representations (ICLR 2024, spotlight, top 5%), 2024/05.

Professor Kuribayashi’s research focuses on interdisciplinary topics bridging NLP and the science of human language. This involves exploring the cognitive plausibility of NLP models, as well as the role of modern NLP in understanding language acquisition, processing, and communication. His work spans multiple areas beyond NLP, such as computational psycholinguistics, linguistic typology, and information theory.
Education

- 2022 Ph.D. in Information Science, Graduate School of Information Sciences, Tohoku University, Japan.
- 2020 Master of Information Science, Graduate School of Information Sciences, Tohoku University, Japan.
- 2018 Bachelor of Engineering, Department of Information and Intelligent Systems, Tohoku University, Japan.
Awards

- 2024 Outstanding Paper Award, Annual Meeting of the Association for Natural Language Processing, Japan.
- 2024, 2023, 2021 Special Committee Award, Annual Meeting of the Association for Natural Language Processing, Japan.
- 2023 Best Paper Award, ACL Student Research Workshop.
- 2022 Best Paper Award, AACL Student Research Workshop.
- 2022 President’s Award, Graduate School of Information Sciences, Tohoku University.
- 2022, 2020 Excellent Student Award in Electrical and Information Science, Tohoku University.
- 2021 Best Paper Award, Annual Meeting of the Association for Natural Language Processing, Japan.
- 2020 Best Paper Award, Annual Meeting of the Association for Natural Language Processing, Japan.
- 2018 Excellent Academic Award, School of Engineering, Tohoku University.

Prior to his current role, Professor Kuribayashi was a postdoctoral researcher at MBZUAI, where he focused on interdisciplinary research bridging NLP and language science. Before that, he was a postdoctoral researcher at Tohoku University, and he also launched a start-up company in Japan developing writing assistance systems. He holds a Ph.D. in information science from Tohoku University, completed under the JSPS DC1 fellowship; his doctoral work focused on the fundamental discrepancies between language processing in humans and in language models. He has published over 30 research articles on broad interdisciplinary topics, including over a dozen papers at top-tier NLP conferences. He has served the community as a lead organizer of international workshops on cognitive modeling and computational linguistics, as an action editor for ACL Rolling Review, and on the program committees of many conferences, including ACL, EMNLP, NAACL, NeurIPS, COLM, COLING, CoNLL, and LREC.
