He is particularly interested in fundamental questions, such as: what can modern NLP tell us about human language? Under what conditions can human-like language ability be replicated? His research is also relevant to the efficiency of NLP systems, given that humans learn and process language efficiently.

- Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin. "Emergent Word Order Universals from Cognitively-Motivated Language Models." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024, main long), 2024/08.
- Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin. "Psychometric Predictive Power of Large Language Models." In Findings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024, Findings long), 2024/06.
- Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui. "Lower Perplexity is Not Always Human-Like." In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021, main long), pp. 5203-5217, 2021/08.
- Rena Wei Gao, Xuetong Wu, Tatsuki Kuribayashi, Mingrui Ye, Siya Qi, Carsten Roever, Yuanxing Liu, Zheng Yuan, Jey Han Lau. "Can LLMs Simulate L2-English Dialogue? An Information-Theoretic Analysis of L1-Dependent Biases." In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025, main long), 2025/08.
- Tatsuki Kuribayashi, Timothy Baldwin. "Does Vision Accelerate Hierarchical Generalization of Neural Language Learners?" In Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025, long), 2025/01.
- Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui. "Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps." In Proceedings of the 12th International Conference on Learning Representations (ICLR 2024, spotlight, top 5%), 2024/05.
Before joining MBZUAI, Professor Kuribayashi was a postdoctoral researcher at Tohoku University. He also launched a start-up company in Japan to develop writing assistance systems. Prior to his current role, he was a postdoctoral researcher at MBZUAI, where he focused on the interdisciplinary work of bridging NLP and language science. Professor Kuribayashi holds a PhD in information science from Tohoku University, completed under the JSPS DC1 fellowship. His PhD work focused on fundamental discrepancies between language processing in humans and in language models. He has published over 30 research articles on broad interdisciplinary topics, including more than a dozen papers at top-tier NLP conferences. He has contributed to the field as a lead organizer of international workshops on cognitive modeling and computational linguistics. He has also served as an action editor for the ACL Rolling Review and on the program committees of many conferences, including ACL, EMNLP, NAACL, NeurIPS, COLM, COLING, CoNLL, and LREC.