The term ‘AI for good’ has become increasingly commonplace, highlighting a collective intention to use artificial intelligence for the benefit of humanity, society, and the planet. But what does it really mean to build AI systems that are not just intelligent, but truly human-centered?
That was the question posed by James Landay, Professor of Computer Science at Stanford University, during his recent talk at MBZUAI entitled ‘AI for Good isn’t Good Enough’.
Hosted by Elizabeth Churchill, Department Chair and Professor of Human-Computer Interaction at MBZUAI, Landay drew on decades of research in human-computer interaction, design thinking, and behavioral science to make a powerful case for rethinking how AI is designed.
He argued that, given the power of AI as a tool and its potential for both positive and negative applications and outcomes, we must make a fundamental change to who we consider while designing AI, how we consider them, and at what stage of the design process.
“We have to approach and design AI in a different way than we have previous computing technology,” he said.
“It’s critical that we design AI systems at the user, the community and the society level if we want to have a positive impact on the systems that we are building.”
Landay and his colleagues at Stanford launched the Institute for Human-Centered AI (HAI) six years ago to address this issue and explore new ways to design AI. Noticing that tech companies, researchers, and manufacturers focused primarily on the user level during the design process, he proposed an alternative.
“We need to think about who we study and who we involve to find our problems, develop the solutions, and evaluate whether we’re even building the right things.”
The importance of these three levels, he explained, is rooted in the far-reaching effects of AI.
“AI systems aren’t just about the user,” he said. “They have many more side effects than traditional computing systems. They affect other people in your community, and they affect entire societies.”
Crucially, he added, these three levels must be brought to the front of the design process, not included as an afterthought or only in response to a negative consequence.
“This technology is too potent to think about the effects after the fact,” he said. “We need to understand ethics, responsible AI and responsible design before, during and after the development of the system.
“This is how we can build AI that helps people be better at their tasks, augment them to be better learners, better healthcare givers, better designers, to take better care of their bodies.”
Putting this new approach into practice, however, is easier said than done. The key, Landay argued, is collaboration and cooperation.
“Designing for these levels is hard,” said Landay. “Especially for computer scientists and engineers who have no training to think about community or society.”
“That’s why we need true interdisciplinary teams — technologists, AI experts, designers, social scientists, humanists and domain experts.”
This is one of the reasons Stanford was the ideal place to launch HAI, he added.
“We don’t need just computer scientists and AI experts. We have those, but we also have a world-class medical school, a world-class law school, a world-class business school, great humanities, social sciences, and the arts. All of these fields represent different parts of society and need to come together to help shape and comment on this technology.”
In concluding his presentation, Landay acknowledged that there is still a long road ahead before we reach fully human-centered AI, but he encouraged the audience to take the first steps.
“We’re in the really early days of finding the right design processes to practice truly human-centered AI,” he said. “But you can start by going to the community beyond your direct user population, and that will already get you to a point of potentially having a more positive impact on AI and society.”
Having invested in building world-class technical expertise in AI since launching in 2019, MBZUAI is also taking a lead in developing human-centered AI systems. The University’s Human-Computer Interaction (HCI) department was launched in 2024 to focus on the human-centered aspects of AI highlighted by Landay.
“We were honored to have Professor Landay visit us at MBZUAI,” said department chair Churchill, a long-time collaborator of Landay’s in the field of HCI.
“His visit signals the growing interest and investment in the UAE region for thinking not just technically about what is possible with emerging AI technologies for individuals, but also what community and societal impacts there will be going forward.
“Part of MBZUAI’s investment beyond the HCI Department is the fostering of collaborations with universities like Stanford who were ahead of the curve with the launch of HAI.
“The work of HAI is highly important. It focuses on addressing societal and ethical issues in the realm of AI development and dissemination. Recognizing that AI’s influence extends far beyond technical capabilities, HAI actively engages in research, policy discussions and education to address the ethical, social, economic, and political implications of AI.”