AI Litigation, Enforcement, and Compliance Risk: Q4 2025 Regulatory Update
At the federal level, the US has adopted a pro-innovation posture that largely defers to agencies for specific enforcement guidance. Meanwhile, certain states have advanced AI regulations focused on consumer protection and workplace practices, creating a complex and, at times, contradictory enforcement environment. Abroad, the EU AI Act has entered its staged implementation, and other jurisdictions are developing and tightening oversight. This update builds upon our June 2025 Structured Response Framework[1] by summarizing key state, federal, and international regulatory developments and practical takeaways for SEC registrants, boards of directors, multinationals, and private-equity investors.
The Current Regulatory Landscape[2]
The regulatory environment has entered an implementation phase. Federal agencies are advancing the Trump administration’s pro-growth priorities but continue to use existing authorities to address AI-related disclosure, governance, and consumer-protection issues. The SEC’s new task force and DOJ guidance to prosecutors both signal continued focus on how companies describe and govern their AI systems. At the state level, legislatures are advancing frameworks that address requirements related to automated decision-making and generative AI disclosures. In the past six months, the DOJ, SEC, and FTC have brought multiple cases related to AI washing and AI fraud. Abroad, the EU AI Act’s initial obligations took effect in August, and regulators in Brazil, China, Japan, Singapore, and the UAE are introducing complementary measures that collectively establish a more defined global baseline for AI oversight.
Federal:
- July 2025 – The White House releases America’s AI Action Plan, establishing an innovation-driven and pro-growth AI strategy.
Regulatory Agencies:
- Sept 2025 – FTC opens inquiry into seven companies’ consumer-facing AI chatbots, seeking data on how they test, monitor, and govern potential harms.
- Aug 2025 – SEC launches an AI Task Force and creates a Chief AI Officer role to advance internal AI use and strengthen oversight of AI-related disclosures. The SEC also publishes its 2025 AI Compliance Plan.
- May 2025 – BIS rescinds the AI Diffusion Rule and issues updated export-control guidance on compute and model sharing.
- Sept 2024 – DOJ updates its Evaluation of Corporate Compliance Programs to include expectations on companies’ AI governance and data-analytics controls.
State:
- Oct 2025 – California’s new AI regulations take effect, requiring employers to retain automated decision systems data for four years and conduct bias testing when using AI in hiring and promotions.
- Aug 2025 – Colorado delays implementation of its AI Act to June 2026, extending the timeline for the nation’s first comprehensive state law regulating “high-risk” AI systems in employment, housing, credit, education, and healthcare.
- Aug 2024 – Illinois amends the Illinois Human Rights Act to prohibit employers from using AI in employment decisions. The amendment takes effect January 1, 2026.
International:
- Nov 2025 – China is set to implement three new national standards to enhance the security, governance, and ethical use of generative AI technologies.
- Aug 2025 – EU AI Act obligations for general-purpose AI models take effect, accompanied by a voluntary Code of Practice encouraging early compliance before full enforcement in 2026.
- Jul 2025 – New Zealand launches National AI Strategy that emphasizes ethical deployment and industry investment.
- May 2025 – Singapore updates its AI Verify testing framework, enhancing generative-AI guidance in alignment with international standards.
Practical Considerations
SEC Registrants
SEC registrants should expect continued government scrutiny of AI-related disclosures, governance, and controls. Firms must provide transparent and accurate disclosures on their use of AI, including its impact on financial performance and potential risks. In addition to incorporating AI-related risks into formal risk assessments, registrants should document AI-related governance and controls and conduct periodic monitoring and testing over key processes.
Boards of Directors
Boards should direct companies to conduct periodic reviews of AI-related risks, particularly those related to data governance, model transparency, and vendor oversight. As part of these efforts, boards should consider implementing a defined AI governance framework aligned with applicable domestic and international regulatory standards.
Multinational Companies
Multinationals should identify and assess the AI-related risks and regulations that apply to their global operations and supply chain. In particular, risks may exist related to third parties’ (e.g., vendors) compliance with applicable data, export control, and consumer protection regulations. Multinationals should carefully examine the scope of potentially relevant laws and regulations. For example, the EU AI Act’s extraterritorial provisions apply to non-EU companies that develop, deploy, or distribute AI systems within the EU.
Private Equity Funds and Investors
PE funds and investors in AI technology should conduct thorough due diligence to identify potential governance and control gaps that could lead to AI washing, data-lineage weaknesses, and other potential issues. Investors should assess the extent to which AI capabilities are supported by documentation and subject to periodic validation.
How A&M Can Help
Proactive
AI Claims and Disclosure Risk Review: Evaluate AI-related statements in filings, marketing materials, and investor communications to identify and mitigate AI washing or misrepresentation risks before they draw regulatory scrutiny, and help organizations integrate AI into existing or developing risk assessments consistent with DOJ expectations.
Governance and Control Evaluation: Assess AI governance structures, model-validation processes, and control frameworks against regulatory expectations to promote well-documented and defensible programs.
Third-Party and Vendor Compliance Testing: Review AI vendors and data partners for compliance with data-privacy and jurisdiction-specific requirements, including the EU AI Act’s extraterritorial obligations.
Reactive
Regulatory Inquiry and Enforcement Response: Provide forensic technology and litigation-support services to in-house and outside counsel responding to regulatory inquiries related to companies’ AI use, governance, or disclosure practices.
Litigation and Class-Action Support: Deliver forensic accounting, data analytics, and expert testimony in matters alleging AI-related misrepresentation, IP misuse, discrimination, or bias.
[1] Brooke Hopkins, et al., “AI Litigation, Enforcement and Compliance Risk: A Structured Response Framework,” Alvarez & Marsal, June 4, 2025.
[2] All regulatory developments are drawn from published reports available in the public domain.