In Part 2 of this series, we examined how legal teams can proactively scan the horizon for emerging AI risks and opportunities. Now, in Part 3, our focus turns to the evolving AI regulatory landscape and how legal professionals can navigate a patchwork of laws, guidelines and standards.
Understanding the Global Regulatory Patchwork
In today’s fast-paced environment, navigating AI regulation can feel like aiming at a moving target. As of May 2025, some jurisdictions have formal laws while others rely on high-level principles or existing statutes. Legal teams should therefore map these developments, chart a compliance path that works across borders and actively monitor any applicable changes.[1][2]
EU AI Act: A New Comprehensive Regime
The European Commission proposed the first EU-wide AI law in April 2021 to ensure that AI systems used in the EU are safe, transparent and subject to appropriate human oversight.[1] It applies a risk-based classification (minimal, limited, high and unacceptable) and places strict controls on high-risk uses, such as medical devices, hiring tools and credit scoring.[1] It prohibits unacceptable practices (for example, social scoring or real-time biometric identification by law enforcement) and requires transparency for limited-risk systems (users must be informed they are interacting with AI).[1]
Providers of high-risk AI must carry out risk assessments and mitigation measures, ensure high-quality, representative datasets, maintain detailed technical documentation and logging, provide information to users, enable human oversight and ensure robustness and cybersecurity.[1] They must also undergo conformity assessments before placing their systems on the market. Non-compliance can incur fines of up to €35 million or 7 per cent of global annual turnover, whichever is higher, for the most serious breaches, and up to €15 million or 3 per cent for other violations.[3]
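To make the scale of that exposure concrete, the short sketch below computes the applicable cap on the "whichever is higher" basis described above; the turnover figure and breach classification are hypothetical inputs chosen purely for illustration, not a statement of how any regulator would calculate an actual fine.

```python
def max_eu_ai_act_fine(global_turnover_eur: float, most_serious: bool) -> float:
    """Illustrative ceiling on an EU AI Act fine.

    Assumes the 'whichever is higher' basis: the cap is the greater of a
    fixed amount and a percentage of worldwide annual turnover. The inputs
    (turnover, breach classification) are hypothetical.
    """
    fixed_cap, pct = (35_000_000, 0.07) if most_serious else (15_000_000, 0.03)
    return max(fixed_cap, pct * global_turnover_eur)


# Hypothetical undertaking with EUR 2 billion in global annual turnover
print(max_eu_ai_act_fine(2_000_000_000, most_serious=True))   # 140000000.0 (7% applies)
print(max_eu_ai_act_fine(2_000_000_000, most_serious=False))  # 60000000.0 (3% applies)
```

For a smaller undertaking whose percentage figure falls below the fixed amount, the fixed cap would govern instead, which is why turnover is the decisive variable for large groups.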
The Act entered into force on 1 August 2024, and while most obligations apply from August 2026, key prohibitions took effect on 2 February 2025 and transparency rules for general-purpose models apply from 2 August 2025.[1] The Act's extra-territorial reach means any AI affecting EU citizens may fall under its scope.[5] In late 2024, the EU also adopted a landmark revision of the Product Liability Directive to include software and AI in the definition of "products", exposing providers to liability for harm caused by AI systems.[3] By contrast, the proposed AI Liability Directive was withdrawn by the European Commission in February 2025 owing to a lack of consensus among member states, leaving civil liability questions to be addressed through existing national laws and the revised Product Liability Directive. This retreat may reflect the complexity of, and differing views on, regulating a rapidly evolving field.
UK's Principles-Based Approach
The UK has not enacted any AI-specific legislation but has adopted a principles-based framework applied through existing regulators. The 2023 AI White Paper sets out five principles: safety (including security and robustness); appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress, which agencies such as the ICO and FCA apply in their sectors.[2]
UK GDPR's Article 22 already limits fully automated decisions with legal or similarly significant effects, mandating human review in many cases, and common law duties (negligence, fiduciary duty) apply if AI causes harm.[4] At the AI Summit, speakers emphasised that organisations must define a "reasonable standard of care" for AI now, as this will become the benchmark in litigation. Legal teams should treat non-binding guidelines as effectively mandatory and monitor Whitehall for signs of formal legislation.[6][7]
United States: Federal Guidance and State Innovation
The United States has no integrated federal AI law, relying instead on executive orders, agency guidance and state statutes. Executive Order 14110 (30 October 2023) mandated safety testing for frontier models and required developers to share test results with the federal government.[7] However, this order was revoked on 23 January 2025 by the Executive Order "Removing Barriers to American Leadership in Artificial Intelligence", signalling a significant shift in federal AI policy toward deregulation and private-sector leadership. The reversal places federal AI initiatives established under the previous administration under review.
The FTC continues to police AI under its existing consumer-protection powers, warning that unfair or deceptive AI practices breach the FTC Act.[8] The NIST AI Risk Management Framework (26 January 2023) remains voluntary but is widely adopted as a de facto standard for identifying and mitigating AI risks.[9]
States remain active despite federal deregulation:
- Colorado's SB 24-205 (May 2024) imposes impact-assessment, documentation and anti-bias duties on high-risk AI developers and deployers, taking effect on 1 February 2026.[10]
- New York City's Local Law 144 (enacted December 2021, enforcement began July 2023) requires annual bias audits of automated hiring tools.[11]
- California's AI Transparency Act (SB 942), effective January 2026, mandates disclosures specifically for generative AI content through watermarking and detection tools.[12]
- Utah's SB 149 (effective May 2024) represents the first major private-sector AI consumer-protection law in the US.
Legal teams should adhere to the strictest applicable standards and monitor emerging federal norms, as a patchwork of sectoral rules, state laws and agency guidance will continue to evolve despite federal retreat from comprehensive regulation.
Key Regulatory Principles and Obligations
Despite varied approaches, most AI regimes share core requirements:
- Risk-based categorisation and oversight
High-risk or safety-critical AI (e.g. healthcare, finance, employment) faces stringent requirements, often mandating human-in-the-loop controls. Legal teams must classify AI applications by risk and scale compliance measures accordingly (see the sketch after this list).[1][10]
- Transparency and documentation
Regulators expect disclosure when users interact with AI, along with extensive records of model development, training data, architecture and performance testing.[1][7] Legal teams should ensure a clear paper trail and the ability to explain decisions to impacted parties.
- Data governance and bias mitigation
Rules emphasise data quality, representativeness and privacy. High-risk AI providers must implement data-governance procedures and conduct bias audits or impact assessments.[1][4] Legal teams should verify data selection, testing protocols and remediation processes.
- Accountability and security
Frameworks assign clear responsibility for AI outcomes and require robust cybersecurity measures. Legal teams should confirm that each AI system has a designated owner and that risk-management protocols cover security threats.
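As a purely illustrative aid to the risk-based categorisation point above, the sketch below shows one way a team might map indicative risk tiers to the compliance measures each tier triggers; the tier names mirror the EU AI Act's categories, while the system name and control lists are hypothetical and not drawn from any regulator's checklist.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices - do not deploy
    HIGH = "high"                  # e.g. hiring tools, credit scoring
    LIMITED = "limited"            # transparency duties apply
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping of tiers to the compliance measures discussed above
CONTROLS = {
    RiskTier.UNACCEPTABLE: ["halt deployment"],
    RiskTier.HIGH: ["risk assessment", "bias audit", "technical documentation",
                    "human oversight", "conformity assessment"],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def required_controls(system_name: str, tier: RiskTier) -> list[str]:
    """Return the compliance actions to schedule for a given AI system."""
    return [f"{system_name}: {control}" for control in CONTROLS[tier]]

# Example: a CV-screening tool would typically sit in the high-risk tier
print(required_controls("CV-screening model", RiskTier.HIGH))
```

The point of the structure is simply that classification drives obligations: once a system is assigned a tier, the list of required measures, owners and deadlines follows mechanically and can be evidenced later.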
Compliance Challenges Across Jurisdictions
Global organisations face key challenges:
- Divergent definitions of AI
Definitions vary by jurisdiction, which creates uncertainty over which systems fall under AI rules.[1]
- Inconsistent risk classification
Differing criteria for high-risk AI force firms to adopt multiple risk-assessment methodologies or adhere to the strictest standard.[1][10]
- Varying obligations and processes
The EU Act prescribes detailed conformity assessments, whereas the UK's approach is principles-based. These disparities may lead to over-compliance or to gaps under stricter regimes.
- Non-aligned timelines and extraterritorial scope
Enforcement dates range from 2023 to 2026, and the EU Act's extra-territorial reach can impose overlapping obligations, further complicating planning.[1][3]
These challenges highlight the value of a unified AI policy: a single global framework with jurisdiction-specific appendices can simplify compliance and reduce liability gaps.
Standards as a Compliance Foundation
International standards provide a vital compliance base:
- ISO/IEC 42001:2023
Establishes requirements for an AI Management System (AIMS) with a plan-do-check-act cycle, covering governance, risk management, transparency and continual improvement.[13]
- NIST AI Risk Management Framework
Offers voluntary guidance on identifying, measuring and managing AI risks. Adoption positions organisations favourably for future regulations.[9]
- Other standards
IEEE 7001-2021 on AI transparency,[14] ISO/IEC 23894:2023 on AI risk management[15] and ISO/IEC 38507:2022 on AI governance for boards.[16]
Following recognised standards demonstrates reasonable measures in litigation or regulatory review and can reduce insurance or contractual risks. Leading firms publish internal toolkits, such as Rolls-Royce's Aletheia Framework (2020)[17] and the BBC's Machine Learning Engine Principles (2023),[18] to guide AI ethics and trustworthiness. Legal teams should encourage proactive adoption of such frameworks to build a governance bedrock that meets current obligations and anticipates future requirements.
How to Future-Proof Against Regulatory Change
To stay ahead of evolving AI policies, legal teams should:
- Build adaptable governance structures
Establish a permanent AI governance committee with cross-functional representation and documented processes to update policies swiftly in response to new laws.
- Monitor systematically
Implement horizon-scanning for draft bills, enforcement actions and case law. Assign team members to track key jurisdictions and participate in public consultations.
- Maintain thorough documentation
Keep an up-to-date inventory of AI systems, risk classifications and compliance actions, and store audit trails centrally to demonstrate readiness (a simple record format is sketched after this list).
- Adopt a compliance-plus-ethics mindset
Form an AI ethics board to vet projects beyond legal minimums. Evaluate AI uses not only for legality but also for responsibility and alignment with corporate values.
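As one hedged illustration of what an inventory entry might capture, the sketch below defines a minimal record structure; the field names, example system and dates are hypothetical and would need to be adapted to an organisation's own register and document-management tooling.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """Minimal illustrative entry for an AI system inventory."""
    name: str
    business_owner: str
    risk_tier: str                       # e.g. "high", "limited", "minimal"
    jurisdictions: list[str]             # where the system is deployed
    compliance_actions: list[str] = field(default_factory=list)
    last_reviewed: date | None = None

    def log_action(self, action: str, on: date) -> None:
        """Append a dated compliance action so the audit trail stays central."""
        self.compliance_actions.append(f"{on.isoformat()}: {action}")
        self.last_reviewed = on

# Hypothetical example entry
record = AISystemRecord("CV-screening model", "HR Director", "high", ["EU", "UK"])
record.log_action("annual bias audit completed", date(2025, 5, 1))
print(record)
```

Whatever form the register takes, the essentials are the same: a named owner, a risk classification, the jurisdictions in play and a dated trail of actions that can be produced on request.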
Proactive compliance builds trust with customers, partners and regulators, which in turn enables innovation within reliable guardrails. In a rapidly changing landscape, the companies with agile, forward-looking legal guidance will harness AI's benefits safely and successfully.
Five Practical Takeaways: A Checklist for Regulatory Readiness
- Establish an AI Governance Committee
Form a cross-functional taskforce to define AI policies, approve new use cases, monitor compliance and report to leadership.
- Implement best-practice standards
Adopt ISO 42001 or the NIST AI RMF as de facto requirements and develop internal procedures for risk assessment, bias mitigation, transparency and security.
- Plan for cross-jurisdiction compliance
Map all deployment regions, identify the strictest rules and harmonise your programme accordingly, adding local requirements as needed.
- Foster SME readiness
Use lightweight AI policies and risk-assessment checklists. Leverage governmental and professional-body toolkits to build basic governance quickly.
- Maintain a regulatory watch
Assign responsibility for continuous monitoring and hold quarterly reviews to adjust policies promptly in line with new developments.
Disclaimer: This article is provided for information purposes only and its contents should not be considered legal advice. The article is based on publicly available information and, while care has been taken in compiling it, no warranty, express or implied, is given, nor does Deminor assume any liability for the use thereof.
References
- European Parliament and Council (2024) Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act). Brussels: Official Journal of the European Union.
- UK Government (2023) AI Regulation White Paper: A Pro-Innovation Approach. London: Department for Science, Innovation and Technology.
- European Parliament (2024) Directive (EU) 2024/2853 on Liability for Defective Products. Strasbourg: European Parliament.
- Information Commissioner's Office (2023) Guidance on AI and Data Protection. Wilmslow: ICO.
- European Commission (2024) Annex to COM(2024) 1689 -- Unacceptable and High-Risk AI Use Cases. Brussels: European Commission.
- Equality and Human Rights Commission (2022) AI and Public Sector Equality Duty Guidance. Manchester: EHRC.
- The White House (2023) Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (EO 14110). Washington DC: The White House. [Note: Revoked 23 January 2025]
- Federal Trade Commission (2021) Office of Policy Planning et al., FTC Voices: Addressing AI and Algorithmic Bias. Washington DC: FTC.
- National Institute of Standards and Technology (2023) AI Risk Management Framework (Version 1.0). Gaithersburg: NIST.
- Colorado General Assembly (2024) Senate Bill 24-205: Regulation of High-Risk AI Systems. Denver: Colorado General Assembly.
- New York City Council (2021) Local Law 144: Automated Employment Decision Tools. New York: NYC Council.
- California Legislature (2024) California AI Transparency Act (SB 942). Sacramento: California Legislature.
- International Organization for Standardization (2023) ISO/IEC 42001:2023 -- Artificial Intelligence Management System. Geneva: ISO.
- IEEE Standards Association (2021) IEEE 7001-2021: Standard for Transparency of Autonomous Systems. New York: IEEE.
- ISO/IEC JTC 1/SC 42 (2023) ISO/IEC 23894:2023 -- Artificial Intelligence Risk Management Guidance. Geneva: ISO.
- ISO/IEC JTC 1/SC 42 (2022) ISO/IEC 38507:2022 -- Governance Implications of Artificial Intelligence. Geneva: ISO.
- Rolls-Royce (2020) Aletheia Framework: AI Ethics and Trustworthiness Toolkit. London: Rolls-Royce.
- BBC (2023) Machine Learning Engine Principles and Self-Audit Checklist. London: British Broadcasting Corporation.
- Society for Computers and Law (2023) Model AI Contract Clauses. London: SCL.
- Consilien (2023) Global AI Governance Survey. London: Consilien Research.