Navigating AI Governance in the Generative AI Era

As generative AI reshapes industries, a recent AIM Leaders Council roundtable brought experts together to tackle the complexities of AI governance, balancing innovation with responsible adoption in a rapidly evolving regulatory landscape.

As generative AI continues to reshape industries and accelerate innovation, it also introduces complex challenges around governance, risk, and compliance (GRC). In a recent roundtable discussion of the AIM Leaders Council, industry leaders gathered to unpack the intricacies of AI governance and share strategies for balancing innovation with responsible AI adoption.

The panel featured Arvind Mathur, Managing Director of Data & AI at Amazon Web Services (AWS), who brought his extensive expertise in cloud technologies and AI. Bhaskar Roy, Client Partner for APAC and Bangalore Center Head at Fractal, shared his insights into AI-driven transformation across regions. Dr. Deepak Narayanam, Group Head of Data (Chief Data Officer) at Access Bank Plc, offered a perspective on AI governance within the highly regulated banking sector. Kulbhooshan Patil, Head of Data Science and Analytics at TATA AIG General Insurance Company Limited, discussed the challenges of managing risk and compliance in the insurance industry. Panuwan Lerssrisuriya, Head of Data Analytics & CRM WPB at HSBC India, highlighted the complexities of working within both global and local regulatory frameworks. Lastly, Sridevi Vadapalli, Chief Data Scientist at Daimler Truck Innovation Center India, provided insights on balancing compliance with innovation in AI applications. The session was expertly moderated by Gurbans Chatwal, Vice President of Innovation & Intelligent Automation at Fiserv.

Chatwal opened the discussion with a pointed observation: “Generative AI has democratized artificial intelligence within organizations. Everyone can start thinking about building applications.” While this democratization unlocks potential, it also amplifies risks, especially in high-stakes sectors like financial technology and banking, where systems handle millions of transactions daily.

This duality — innovation versus regulation — became the crux of the conversation, as leaders explored regional disparities, practical governance frameworks, and best practices.

Regional Disparities in AI Regulation

Bhaskar Roy spotlighted the uneven global regulatory landscape. While developed economies like the US and EU have established robust AI guidelines, countries in regions like South Asia and the Middle East face significant regulatory gaps. “In places like India and the Middle East, comprehensive regulatory frameworks are lacking, pushing organizations to self-regulate,” Roy explained. This regulatory vacuum forces businesses in these regions to proactively craft internal frameworks to mitigate AI risks.

Lessons from Industry Leaders

Banking Sector: Embedding Ethics and Lifecycle Governance

Dr. Deepak Narayanam, with his extensive experience in banking, emphasized integrating GRC considerations into every stage of AI deployment. Drawing parallels to privacy frameworks like Privacy Impact Assessment (PIA) and Data Protection Impact Assessment (DPIA), he proposed extending these to cover AI-specific nuances. “We must evaluate all facets of data origination, quality, and consent management,” Narayanam explained, underscoring the need for ethical safeguards from data collection to model deployment.

Automotive Innovation: Balancing Compliance with Creativity

Sridevi Vadapalli shared how her team ensures governance without stifling creativity. Their Center of Excellence (CoE) for AI includes cross-functional representation from legal, HR, and leadership. This setup fosters innovation while keeping compliance at the forefront. She added, “We’ve consciously avoided generative AI solutions that directly interact with customers, ensuring we’re prepared to manage risks before scaling such applications.”

Navigating Regulatory Complexity

Panuwan Lerssrisuriya discussed the unique challenges the banking sector faces when integrating AI. She emphasized the complexity of operating under both global and local regulations, which adds layers of compliance oversight. “We are working within the global framework, but we also have to adapt to local regulatory requirements to ensure compliance,” Lerssrisuriya explained.

She continued, “As we move forward with generative AI, we’re being extra cautious before we implement solutions for customers. Many other industries, especially those with less stringent regulations, are already ahead in AI adoption. They’re seeing real data and fine-tuning their models as they go along. For us, we’re still in the pilot phase, testing and learning, ensuring that all risks are accounted for, including cyber resilience and regulatory data risks.”

Prioritizing Model Explainability

From the insurance sector, Kulbhooshan Patil highlighted his team’s efforts to demystify AI models for all stakeholders. “Explainability is critical,” he noted, sharing how monthly testing protocols ensure consistent outputs from foundation models. Running the same test cases every month, he explained, provides a baseline for comparison and makes it easier to detect trends or anomalies in outputs.
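The recurring-test-case idea Patil describes can be sketched in a few lines: store last month’s outputs as a baseline, re-run the same fixed prompts, and flag any case whose output has drifted. The test cases, similarity metric, and threshold below are illustrative assumptions, not TATA AIG’s actual protocol:

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Rough text similarity in [0, 1] via difflib -- a simple stand-in
    for a proper semantic-similarity metric."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def detect_drift(baseline: dict, current: dict, threshold: float = 0.8) -> list:
    """Compare this month's model outputs against the stored baseline.

    Returns the names of test cases whose similarity fell below the
    threshold -- candidates for human review.
    """
    return [
        name for name in baseline
        if similarity(baseline[name], current.get(name, "")) < threshold
    ]

# Example: one answer is unchanged, the other has shifted substantially.
baseline_outputs = {
    "claim_summary": "Claim #123 concerns water damage in the kitchen.",
    "policy_faq": "A standard home policy covers fire, theft, and water damage.",
}
current_outputs = {
    "claim_summary": "Claim #123 concerns water damage in the kitchen.",
    "policy_faq": "Coverage depends entirely on your region and add-ons.",
}
print(detect_drift(baseline_outputs, current_outputs))  # ['policy_faq']
```

In practice the baseline would be versioned alongside the model, and flagged cases routed to a reviewer rather than failing silently.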

Emerging Best Practices in AI Governance

As the discussion unfolded, key best practices began to take shape:

  1. Explainability First: Several participants advocated for prioritizing explainable AI over generative models unless the latter’s benefits clearly outweigh the risks.
  2. Continuous Monitoring: Regular model testing and performance tracking emerged as non-negotiable to detect anomalies and maintain trust.
  3. Cross-functional Governance: Effective GRC implementation requires collaboration across departments, including legal, HR, cybersecurity, and marketing teams.
  4. Data Classification: Participants stressed the importance of stringent protocols, particularly around personally identifiable information (PII), to minimize exposure to risks.
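The data-classification point in particular lends itself to simple automation: scanning text for PII categories and masking them before the text reaches a model or a log. The sketch below uses a handful of illustrative regex patterns; a production classifier would cover far more identifier types and locale-specific formats:

```python
import re

# Illustrative PII patterns only -- real scanners cover many more
# identifier types (names, addresses, national IDs, etc.).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[-\s]?){3}\d{4}\b"),
}

def classify(text: str) -> set:
    """Return the set of PII categories detected in the text."""
    return {name for name, pat in PII_PATTERNS.items() if pat.search(text)}

def redact(text: str) -> str:
    """Mask detected PII with a category placeholder."""
    for name, pat in PII_PATTERNS.items():
        text = pat.sub(f"[{name.upper()}]", text)
    return text

record = "Contact jane.doe@example.com or 555-123-4567 about the claim."
print(classify(record))  # {'email', 'phone'}
print(redact(record))    # Contact [EMAIL] or [PHONE] about the claim.
```

Classification like this is typically the first gate in a pipeline: data tagged as containing PII is either redacted, routed to more tightly controlled systems, or blocked from model training entirely.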

The Unique Challenges of Generative AI

Arvind Mathur, another key participant, expanded on the evolving landscape in generative AI, acknowledging the new challenges that arise as organizations adopt these technologies. “Generative AI opens up a whole new range of considerations that weren’t as sharp in the past,” Mathur explained. He pointed out that while traditional AI concerns like privacy and data protection have been well established and regulated, generative AI introduces fresh complexities, particularly regarding the internal versus external use of AI solutions.

“When you start using generative AI in external environments with customers,” Mathur continued, “especially with the potential for hallucination, there are significant additional concerns. The risks associated with these technologies in customer-facing applications are substantial, but there isn’t yet clear regulation in this space.” This absence of regulation around hallucinations and other unpredictable outputs in customer-facing applications marks a new frontier for AI governance.

Future Challenges

While existing frameworks can be extended to address many generative AI challenges, emerging issues demand novel solutions. Chatwal raised a critical question: “What happens when something goes wrong?” Incident management remains a gray area for many organizations, signaling the need for standardized protocols to address potential crises.

Roy provided additional insights into Fractal’s three-layer responsible AI framework:

  • Principles: Accountability, transparency, robustness, and fairness.
  • Behaviors: Establishing a culture of ethical AI use.
  • Enablers: Certifications, training, and widespread adoption of best practices.

This multi-faceted approach underscores the need for organizations to embed governance into their cultural and operational fabric.

Innovation Meets Control

As the roundtable concluded, a recurring theme emerged: the delicate balance between fostering innovation and enforcing control. Generative AI is a transformative force, but without robust governance, its risks can outweigh its rewards.

“Organizations must remain vigilant and adaptive,” Chatwal advised. “The technology is evolving rapidly, and so must our approach to governance.”

By integrating explainability, monitoring, and cross-functional collaboration into their AI strategies, businesses can unlock the potential of generative AI while managing its risks effectively. In this era of rapid technological change, responsible governance is not just a safeguard — it’s a competitive advantage.

Anshika Mathews
Anshika is the Senior Content Strategist for AIM Research. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co