Global AI regulations are shifting fast, but are Boards keeping up?
Published July 17, 2025
- Data & AI

It’s increasingly clear that AI isn’t just another tech trend; it’s a full-blown strategic priority. As AI becomes more embedded in products, services, and decision-making across the company, Boards can no longer afford to watch from the sidelines.
The rules and regulations are changing – fast
Global regulators are racing to catch up with the pace of AI innovation, creating a complex patchwork of evolving frameworks. The EU’s AI Act (in force since August 2024, with its first obligations applying from February 2025) set the tone, but it’s far from alone. From the U.S. to China, Singapore to Saudi Arabia, governments are actively shaping how AI can and can’t be used – each with different priorities, scopes, and timelines.
For Boards, this means existing risks are amplified (e.g. operational, legal, reputational, and strategic risk) alongside entirely new risks that have never been managed before (i.e. intentional and unintentional risks arising from an AI system itself). Without a shared understanding of AI-related risk across the enterprise, a Board cannot ensure that appropriate decisions are made. Not understanding the company’s compliance status or obligations – or worse, assuming it’s someone else’s problem – could expose it to fines, lawsuits, or damage to brand trust and consumer confidence. A failure to identify, understand, and control AI-related risks within the environment could also undermine the company’s overall IT strategy and its ability to realize value from its AI investments. Each company must decide its own balance between innovation and control.
What makes this landscape so challenging?
There’s no single global rulebook; some AI laws are already live and enforced while others are still being debated. Some apply broadly across industries (like the EU AI Act), while others are sector-specific (like those emerging in the U.S. healthcare and finance sectors). Many have extra-territorial reach, meaning they can apply to companies even if they’re not based in the country where the law originated.
For Boards overseeing global operations, this creates a need for a “highest common denominator” approach – adopting the most stringent applicable standard and applying it across jurisdictions.
The added challenge is that AI systems are likely to intersect with pre-existing legal frameworks, such as the General Data Protection Regulation (GDPR) and the UK’s Consumer Duty. Companies will therefore need to ensure that pre-existing frameworks can co-exist with emerging ones in order to manage risk and compliance holistically and efficiently.
Not everyone agrees on what constitutes AI. This matters because compliance often starts with identifying which systems fall under regulatory scope – and when definitions vary, determining what’s in scope gets difficult. Boards must ensure their organizations apply consistent and transparent definitions of AI across the business.
Fulfilling regulatory obligations is the baseline, but Boards must also consider brand perception. Regulators and customers alike expect responsible AI use. Bias, lack of explainability, or privacy breaches could all spark backlash, even where a system is technically compliant. In today’s environment, compliance alone isn’t enough. It’s also critical to track voluntary AI frameworks such as the OECD AI Principles and the outcomes of the AI Action Summit.
Managing AI responsibly touches multiple teams – risk, compliance, legal, IT, data, information security, technology & operations, and more. Boards should ensure these groups are working together, not in silos, to track regulations, assess risks, and govern AI usage effectively.
AI presents unique challenges: opaque algorithms, data quality issues, potential bias, and evolving safety threats. Boards need to be confident their organizations have the right governance strategy, organizational structure, training, and tools in place to tackle them. As companies evolve their ML and AI capabilities, those that have not laid down the foundational governance work can expect to struggle not only to keep pace with competitors but also to meet consumer expectations on technology, functionality, and trust.
Keeping up with and responding to global AI regulations
Over the next few years, we expect companies to be developing their AI strategies; to keep up, Boards need to understand the impact those strategies will have. A proactive, well-informed approach to AI governance and regulation isn’t just good hygiene; it’s increasingly a competitive advantage.
Keeping up with this rapidly evolving regulatory landscape requires continuous tracking and response to manage compliance and reputational risks.
Wavestone global AI tracker

The Wavestone Risk Advisory team has developed a dynamic ‘Global AI Regulations Tracker’ that helps organizations follow the trends and requirements imposed by AI regulations and respond accordingly through robust AI frameworks. We’re supporting cross-industry clients in translating the emerging regulatory landscape into a practical understanding of how to shape an effective AI governance capability relevant to their enterprise context, requirements, and obligations. This is achieved through:
- Jurisdictional focus – Filter and zoom into relevant jurisdictions based on global operations.
- Framework alignment – Map regulatory insights to existing risk, governance frameworks, policies, and standards to define deduplicated and consolidated requirements.
- Impact gap analysis – Identify and prioritize gaps between current practices and regulatory expectations.
- Cross-functional action – Collaborate with stakeholders across Legal, Privacy, InfoSec, Procurement, and Data Science to define improvement opportunities and compliant, fit-for-purpose AI frameworks.
- Board-level reporting – Generate insights and summaries to support and report on responsible AI governance at the executive level.
Author
Madeleine Thirsk
Manager, United Kingdom
Wavestone