For decades, chatbots symbolized digital assistance. In 2025, that era is over. Agentic AI represents a fundamental shift from reactive scripts to intelligent agents that think, plan, and execute with precision.
At MBZUAI, that shift is already visible. But the movement is part of something much larger. MBZUAI Provost Professor Timothy Baldwin predicted that “2025 will be a breakout year for agentic AI” – and global data backs up that view. According to Gartner, by 2028, 33% of enterprise software applications will include agentic AI capabilities, up from less than 1% in 2024, and at least 15% of day-to-day work decisions will be made autonomously by AI agents.
Baldwin describes these systems as capable of understanding a user’s intent, planning a sequence of actions, and completing complex tasks end-to-end. Agentic AI, he says, “combines planning and action and has the potential to provide massive benefits to people”. But as a researcher focused on AI safety, he also warns that the deployment of these systems “needs to be pursued with care”.
At MBZUAI, Baldwin’s forecast has already come to life through Lawa.AI, a homegrown startup that demonstrates how the next generation of AI can think and act on behalf of users.
The system doesn’t just talk back; it takes action. Co-founded by MBZUAI computer vision doctoral students Wafa Alghallabi and Omkar Thawaker, Lawa.AI is being tested on the University’s own website. The results show what happens when local innovation meets global AI research: faster answers, deeper understanding, and trusted automation.
Backed by the Incubation and Entrepreneurship Center (IEC), the project reflects the UAE’s drive to translate academic research into deployable, secure AI innovations that enhance government and education services.
Speaking at GITEX Dubai 2025, Alghallabi shared how the solution began as a research project in multimodal efficiency and large language models (LLMs) before evolving into a platform capable of bridging the gap between people and information in higher education.

This is not Alghallabi and Thawaker’s first entrepreneurial venture. The pair previously launched Nutrigenics Care, an AI-powered nutrition platform that supports hospitals and dietitians with evidence-based management tools.
Lawa.AI grew from their shared vision of making institutional knowledge more accessible. “Traditional chatbots just give generic and scripted answers,” Alghallabi said. “They’re not intelligent. They’re not aware of policy and understanding how users interact. For this, we built Lawa.AI for the educational and governmental ecosystem.”
A small, focused research team of computer vision and natural language processing master’s students helped develop and test the agent for more than a year. Together, they overcame technical and organizational challenges to create a product refined under the mentorship of Professor Baldwin, Professor Fahad Khan, and Associate Professor Salman Khan.
Unlike existing chatbots that rely on APIs and scripted responses, agentic AI systems use LLMs and other foundation models to make sense of user intent, plan actions, and complete complex tasks.
Lawa.AI, short for LLM-assisted web agent, acts as an intelligent intermediary between users and an organization’s digital ecosystem. It interprets questions, retrieves the correct policies or data, and delivers accurate, cited, and personalized answers from official sources.
Traditional chatbots react. Agentic AI acts. It reasons, plans, and learns from feedback, and early adopters are already reporting productivity gains. In practice, this means a student can ask Lawa.AI about scholarship eligibility, credit transfer, or exam schedules, and receive instant, policy-sourced answers with forms or next steps.
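To make the idea concrete, the sketch below shows, in Python, how a policy-aware question-and-answer loop can work in principle: interpret a question, retrieve the most relevant policy passage, and return an answer that cites its official source. The passage data, the keyword-overlap retriever, and every function name here are illustrative assumptions, not Lawa.AI’s actual implementation.

```python
# Hypothetical sketch of a policy-aware answer loop: retrieve matching policy
# passages for a question and return an answer that cites its source.
# All names and data are illustrative, not Lawa.AI's actual code.
from dataclasses import dataclass

@dataclass
class PolicyPassage:
    source: str   # title or URL of the official policy document
    text: str

# Toy in-memory "policy index" with placeholder text; a real deployment
# would use an embedding-based vector store over official documents.
POLICY_INDEX = [
    PolicyPassage("Scholarship Policy, Sec. 2", "Example: scholarships cover tuition for admitted graduate students."),
    PolicyPassage("Exam Regulations, Sec. 4", "Example: final exam schedules are published four weeks before the exam period."),
]

def retrieve(question: str, k: int = 1) -> list[PolicyPassage]:
    """Rank passages by naive keyword overlap (a stand-in for embedding search)."""
    q_words = set(question.lower().split())
    scored = sorted(POLICY_INDEX, key=lambda p: -len(q_words & set(p.text.lower().split())))
    return scored[:k]

def answer(question: str) -> str:
    """Compose a cited answer; in practice an LLM would generate this from the retrieved passages."""
    passages = retrieve(question)
    if not passages:
        return "No matching policy found; escalating to a human reviewer."
    top = passages[0]
    return f"{top.text} [Source: {top.source}]"

print(answer("When are exam schedules published?"))
```

The key design point the sketch tries to capture is that every answer carries a citation back to the retrieved source, which is what separates a policy-aware agent from a chatbot that improvises.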
Its early performance on MBZUAI’s website demonstrates that policy-aware AI works at scale and can be replicated as a model across the UAE to transform how students, faculty, and visitors interact with institutional information. It can be adapted to any sector that needs a domain-specific AI assistant, whether for advanced search, personalized content generation, product recommendations, or multilingual support.
“MBZUAI created something: a model that can be taken to all other higher education institutions,” Alghallabi said. “It can be used anytime, anywhere, and on any platform. It’s dynamic to all platforms, through laptop or mobile or any other device. We’re providing the solution at a very low cost compared to existing models.”
The benefits go far beyond speed. By centralizing accurate, cited information and connecting departments, Lawa.AI frees staff to focus on higher-value work rather than repetitive inquiries. Cross-departmental questions that once lingered in inboxes are now resolved through a single, traceable thread, reducing administrative delays and increasing productivity.
This automation is already delivering measurable improvements.
Additionally, Lawa.AI provides data-driven insights derived from user interactions with its intelligent agents, helping organizations identify bottlenecks, optimize website navigation, and streamline internal workflows for improved efficiency and user satisfaction.
Because it was built in the UAE, Lawa.AI supports both Arabic and English, reflecting the country’s linguistic and cultural context. It can be deployed across government agencies, universities, and public service institutions.
“It’s important to have cultural awareness,” Alghallabi said. “If a question about Ramadan appears, it provides the answer with policy acknowledgement and cultural sensitivity.”
Baldwin predicts that as agentic AI systems like Lawa.AI mature, they will redefine not only how organizations communicate, but how they operate. Agents will connect databases, tools, and workflows, learning to carry out complex missions autonomously.
Unlike legacy chatbots confined to text output, agentic AI connects directly to tools, data, and systems. It can make travel or dinner reservations, file forms, or triage workflows—turning what was once conversation into real-world action.
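A toy sketch of that plan-and-act loop, assuming an LLM-style planner choosing from a small set of tools, might look like the following; the planner, the tools, and their arguments are invented for illustration and are not drawn from Lawa.AI or any other real product.

```python
# Minimal, hypothetical agent loop illustrating "conversation to action":
# a planner picks a tool, the runtime executes it, and the result is fed
# back until the task is complete. Planner and tools are toy stand-ins.
from typing import Callable

def book_table(restaurant: str, time: str) -> str:
    return f"Booked a table at {restaurant} for {time}."

def file_form(form_name: str) -> str:
    return f"Submitted form '{form_name}'."

TOOLS: dict[str, Callable[..., str]] = {"book_table": book_table, "file_form": file_form}

def toy_planner(goal: str, history: list[str]) -> dict | None:
    """Stand-in for an LLM planner: maps a goal to one tool call, then stops."""
    if history:                      # one step already taken, task done
        return None
    if "dinner" in goal:
        return {"tool": "book_table", "args": {"restaurant": "Example Bistro", "time": "19:30"}}
    return {"tool": "file_form", "args": {"form_name": goal}}

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while (step := toy_planner(goal, history)) is not None:
        result = TOOLS[step["tool"]](**step["args"])   # execute the chosen tool
        history.append(result)                         # feed the outcome back to the planner
    return history

print(run_agent("book dinner for two"))
```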
But that power demands security, privacy, and human oversight at every level. Baldwin foresees “spectacular examples of agentic AI going off the rails” if deployed carelessly. “As a researcher who is heavily invested in AI safety, I also believe that it’s difficult to anticipate the outcomes of agentic AI systems, and their deployment needs to be pursued with care,” he said.
Gartner also reported “inadequate risk controls” as one of the three main reasons more than 40% of agentic AI projects will be canceled by the end of 2027, along with escalating costs and unclear business value.
While the efficiency gains are impressive, the Lawa.AI team emphasizes security, transparency, and accountability as foundational principles, values Baldwin also highlighted in his caution about uncontrolled agentic systems.
“Most importantly, we built Lawa with a foundation of ethics and trust,” Alghallabi told the GITEX audience. “All logs are anonymized, no personal information is stored, and every answer cites the source of its information.”
When confidence in an answer is low, the system escalates the case to a human reviewer, ensuring traceability and accountability. Regular bias audits and content updates remove outdated or conflicting data, unlike generic chatbots that rely on stale public indexes.
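A minimal sketch of such a low-confidence escalation step is shown below, assuming an arbitrary confidence threshold and hashed user identifiers for anonymized audit logs; these specifics are assumptions for illustration and are not details the article provides.

```python
# Hypothetical sketch of the low-confidence escalation step described above.
# The threshold, record format, and hashing-based anonymization are assumptions,
# not details of Lawa.AI's implementation.
import hashlib
import json
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off below which a human reviews the answer

def anonymize(user_id: str) -> str:
    """Replace the user identifier with a one-way hash so logs hold no personal data."""
    return hashlib.sha256(user_id.encode()).hexdigest()[:12]

def route_answer(user_id: str, question: str, draft_answer: str, confidence: float) -> dict:
    """Return the draft answer directly, or open a traceable review record when confidence is low."""
    record = {
        "user": anonymize(user_id),
        "question": question,
        "confidence": round(confidence, 2),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if confidence >= CONFIDENCE_THRESHOLD:
        record["status"] = "answered"
        record["answer"] = draft_answer
    else:
        record["status"] = "escalated_to_human_reviewer"
    print(json.dumps(record))  # anonymized audit log entry
    return record

route_answer("student-42", "Can I transfer credits from another university?", "Draft answer...", 0.55)
```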
This attention to governance aligns with Baldwin’s view that connecting AI agents to real-world systems requires rigorous oversight. “AI safety is hard enough when dealing with tools like LLMs that input and output text,” he said. “The stakes get higher once we connect agents to other systems that have real-world impact.”
As Baldwin predicted, this is the year agentic AI takes action. And as Alghallabi and Thawaker have shown, the era of the scripted chatbot is over. The future belongs to intelligent agents that understand us, act for us, and do so safely.
As the line between conversation and action blurs, institutions adopting agentic AI must ensure that every system operates ethically and within policy boundaries. And what began as a student prototype has become a benchmark for responsible AI deployment in the UAE.