Navigating a Fragmented Future: AI Regulation and the Financial Services Sector
On December 11, 2025, President Trump signed a much-anticipated Executive Order that seeks to forestall state regulation of artificial intelligence (AI) by threatening federal lawsuits and the withholding of certain federal funds, and that calls for a national policy framework on AI. The Executive Order, Ensuring a National Policy Framework for Artificial Intelligence (EO), declares it the policy of the administration “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”
The EO, which aims to prevent restrictive state-level regulatory frameworks, signals a deregulatory approach focused on national competitiveness and innovation. While this may ease domestic compliance burdens, financial services firms, especially those operating across jurisdictions, face growing uncertainty from increasingly fragmented global frameworks.
Curtailing State Lawmaking
This Executive Order is an attempt by the President to do what Congress has not managed: override or pause state-level AI laws, which have rapidly multiplied over the past three years. State governments across the U.S. have introduced hundreds of AI-related bills, many of which have passed. These laws are largely focused on protecting consumers and children, restricting AI use in specific areas, and requiring certain developers to be transparent about how their AI systems work.
In July 2025, the White House published America’s AI Action Plan, a broad policy blueprint aimed at securing U.S. leadership in AI by scaling back regulatory oversight. This plan, along with a series of related presidential directives, led the White House to issue a call for feedback on federal AI rules that might be seen as holding back innovation.
Global Regulatory Fragmentation
The EO positions federal authority as the primary arbiter of AI policy in the U.S., explicitly challenging “onerous” state laws that impose algorithmic transparency or anti-discrimination mandates. Notably, Colorado’s law banning algorithmic discrimination is cited as a problematic example because it could require model outputs to be altered to meet fairness objectives.
For multinational firms, however, this sets the U.S. federal direction at odds with approaches in the EU, UK, and parts of Asia:
- The EU AI Act classifies AI systems by risk category, mandating extensive documentation and human oversight and banning certain use cases.
- The UK AI Principles take a “pro-innovation” approach but remain firmly rooted in fairness, accountability, and explainability for financial algorithms.
- In Asia, AI governance is largely tied to existing data privacy and financial risk controls.
This regulatory divergence complicates compliance for global financial services institutions. Systems built or hosted in the U.S. under a federal deregulatory regime may still need to comply with stricter rules when deployed or used in the EU or other regulated jurisdictions.
Implications for “Input vs. Output” Governance
The EO’s language targets state mandates that focus on outputs, such as rules it characterises as forcing models to produce non-“truthful” results to meet anti-bias criteria. But global regulatory bodies increasingly assess both inputs (training data, modelling assumptions) and outputs (decisions or recommendations) as part of comprehensive governance: well-governed input data is what ultimately produces reliable AI insights.
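To make the input/output distinction concrete, the sketch below is a hypothetical Python example (the field names and hashing scheme are illustrative assumptions, not drawn from the EO or any regulator's specification) that records both a model's input features and its resulting decision in a single audit entry, so that either dimension can be reviewed later.

```python
import hashlib
import json
from datetime import datetime, timezone


def audit_record(model_id: str, features: dict, decision: str) -> dict:
    """Build an audit-log entry capturing both the model's inputs and its output.

    Hypothetical helper for illustration only; real firms would align the
    schema with their own model-risk and record-keeping policies.
    """
    payload = json.dumps(features, sort_keys=True).encode("utf-8")
    return {
        "model_id": model_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hashing the inputs lets reviewers verify what the model saw without
        # re-storing sensitive raw data elsewhere.
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "inputs": features,    # input-side governance: data and assumptions
        "output": decision,    # output-side governance: the decision itself
    }


if __name__ == "__main__":
    record = audit_record(
        model_id="credit-scoring-v3",
        features={"income": 52000, "loan_amount": 15000, "region": "EU"},
        decision="approve",
    )
    print(json.dumps(record, indent=2))
```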
While the EO intends to de-prioritise state-level regulatory enforcement, it creates a litigation-focused response, via the AI Litigation Task Force, to challenge conflicting state laws. This reactive approach may generate new legal uncertainties for firms operating across state lines, even within the U.S.
Despite the EO’s framing, financial firms should still align their AI policies with international principles—such as those from IOSCO, OECD, and the BIS—focusing on explainability, fairness, auditability, and proportionality.
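As one illustration of how an output-side fairness check might be operationalised, the minimal sketch below computes the gap in approval rates between groups. The metric choice (demographic parity) and the 0/1 decision encoding are assumptions made for illustration; they are not a test mandated by IOSCO, the OECD, or the BIS.

```python
from typing import Sequence


def approval_rate_gap(decisions: Sequence[int], groups: Sequence[str]) -> float:
    """Largest difference in approval rates across groups (demographic parity gap)."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    # Hypothetical decisions (1 = approve, 0 = decline) and group labels.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(f"Approval-rate gap: {approval_rate_gap(decisions, groups):.2f}")  # 0.50
```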
Looking Ahead
The new Executive Order may simplify domestic AI innovation in the short term, but for financial services firms operating globally, the practical outcome is greater compliance complexity. Navigating “input vs. output” tensions across jurisdictions requires a layered compliance strategy that balances technical integrity with legal defensibility.
The months ahead will mark a decisive phase in the federal government’s efforts to assert greater control over AI regulation in the United States. By 10 January 2026, the Attorney General is expected to establish the AI Litigation Task Force. This new body will be responsible for identifying and legally challenging state-level AI laws that are deemed to interfere with the federal policy direction, particularly those seen as overly restrictive or ideologically driven.
Following this, a wave of regulatory activity is set to culminate by 10 March 2026. On this date, the Department of Commerce is due to publish a comprehensive evaluation of existing state AI laws. This assessment will also be accompanied by the release of the BEAD Policy Notice, which will determine whether states with restrictive AI laws remain eligible for certain federal broadband infrastructure funds. In parallel, the Federal Communications Commission will initiate proceedings to assess whether a federal AI reporting and disclosure framework should be introduced — one that could potentially override conflicting state requirements.
Also by March, the Federal Trade Commission is expected to issue formal guidance clarifying when AI systems, particularly those modified to produce biased or misleading outputs, may violate existing consumer protection laws under the Federal Trade Commission Act.
Taken together, these developments indicate a sharp shift toward centralised AI oversight. For financial services firms and other technology-reliant sectors, these deadlines signal a need to closely monitor federal actions — and to prepare for a future where compliance with a harmonised national framework may come into direct tension with obligations in other jurisdictions.