The chameleon effect in education with social AI: can children learn by subconsciously mimicking a social robot?

Thursday, October 31, 2024

One of the most common applications of social robotics is education with children. My talk is rooted in two psychological theories: 1) the chameleon effect, which explains how people subconsciously mimic one another, and 2) imitative learning, which suggests that people learn from one another by copying each other's behaviors. Building on these, I explore how the subconscious mimicry that children exhibit when interacting with a social robot affects their learning. In a series of three studies, children engaged in one-to-one storytelling interactions with a social robot, and we generated robot behaviors designed to induce subconscious mimicry on the child's side. We assessed creativity as the learning metric, since creativity is a crucial, in-demand 21st-century skill that has been shown to decline as children move from kindergarten to elementary school. I also shed light on how current technological advances such as Large Language Models (LLMs) can be used for education, and I discuss some of the challenges facing the field and potential future directions.


Post Talk Link:  Click Here

Passcode: 3Vu$WV=$

Speaker

Maha Elgarf is currently a Postdoctoral Associate at the Social Machines and Robotics (SMART) Lab at NYU Abu Dhabi. Her work focuses on leveraging AI and social robotics for mental health support, particularly for adults with depression and ADHD. She received her Ph.D. in Computer Science from the Royal Institute of Technology (KTH) in Sweden in 2022, where she developed behavioral methods for robots to stimulate children's creativity through child-robot interactions. She previously earned both her bachelor's and master's degrees in Digital Media Engineering and Technology. For her master's thesis at the Human-Centered Artificial Intelligence Lab at the University of Augsburg in Germany, she implemented a system that converts visual information into sound for people with visual impairments. Her current research interests include social robotics, human-robot interaction, affective computing, deep learning, conversational AI, and educational human-computer interaction.
