AI in Financial Services: What FINRA, MiFID II, DORA, and the EU AI Act Require

TL;DR

A bank operating in the EU and US faces at minimum seven concurrent AI regulatory frameworks: EU AI Act, GDPR, MiFID II, DORA, FINRA, SEC Reg FD, and SOX. Credit scoring AI is explicitly High-Risk under EU AI Act Annex III. DORA entered application in January 2025 — AI vendors are ICT third parties requiring formal risk assessment. FINRA rules apply to AI-generated customer communications with the same force as human-authored ones. The EU AI Act maximum fine is €35M or 7% of global turnover — for a €10B revenue bank that is €700M. All seven frameworks can be enforced simultaneously at the AI message layer without a separate compliance programme per regulation.

€700M
Potential EU AI Act fine for a €10B revenue bank — 7% of global turnover for Article 5 prohibited practice violations
$89M
FINRA total enforcement fines in 2023, including AI-assisted communications and supervisory control failures
Jan 2025
DORA application date — EU financial institutions are already required to treat AI vendors as ICT third-party service providers
7+
Concurrent regulatory frameworks applying to EU + US financial services AI — applied simultaneously to the same platform

A bank operating in the EU and US with a trading operation and a consumer-facing digital banking product faces an unusually dense compliance stack for AI: EU AI Act, GDPR, MiFID II, DORA, FINRA, SEC Reg FD, and SOX — seven distinct regulatory frameworks, each with AI-specific implications, applied to the same employees using the same AI platform.

The default approach — separate compliance programmes for each regulation, separate policy documents, separate training — produces unworkable overhead. Most financial institutions have responded to AI regulation by adding AI sections to existing GRC programmes rather than building governance infrastructure that acts at the enforcement layer. The result is comprehensive documentation and no real-time control.

For the foundational distinction between documentation compliance and real-time enforcement, see our analysis of AI governance vs AI compliance. The gap is more consequential in financial services than in any other sector, because the regulatory density means more rules to document and more rules that are not being enforced.

The Financial Services AI Regulatory Stack at a Glance

| Framework | Jurisdiction | Key AI Obligation | Enforcement / In Force |
| --- | --- | --- | --- |
| EU AI Act | EU + EEA | Risk classification, High-Risk system requirements, GPAI documentation | August 2026 (phased) |
| GDPR | EU + EEA | Personal data in AI queries, automated decision-making, cross-border transfers | May 2018 (fully in force) |
| MiFID II | EU + EEA | AI investment advice suitability, algorithmic trading controls | January 2018 (fully in force) |
| DORA | EU + EEA | AI vendors as ICT third parties, resilience testing, incident reporting | January 2025 (fully in force) |
| FINRA Rules | USA | AI-generated customer communications standards, supervisory controls | Ongoing — applicable now |
| SEC Reg FD | USA | MNPI in AI queries, selective disclosure via AI-generated content | Fully in force |
| SOX | USA (public cos.) | AI in financial reporting controls, audit trail requirements | Fully in force |
| DFSA / CBUAE | UAE | AI governance framework for DIFC/ADGM-licensed firms, data sovereignty | 2024 guidance (enforceable) |

Regulatory Position

"The existing securities laws and regulations apply to the use of AI by registrants. Whether a communication to a retail investor is authored by a human or generated by a machine, the standards for fairness, accuracy, and completeness are identical. Firms cannot use AI as a shield against supervisory responsibility."

— Gary Gensler, Chair, U.S. Securities and Exchange Commission, Remarks Before the Brookings Institution, December 5, 2022. The SEC's foundational position on AI-generated communications: existing rules apply in full.

EU AI Act: The Foundation Layer for Every EU Financial Institution

For any financial institution operating in the EU, the EU AI Act is the overarching framework within which all other AI governance operates. Its requirements apply regardless of which other regulations are in scope.

Financial services has the highest concentration of explicitly High-Risk AI categories in the entire EU AI Act. Annex III lists the following as High-Risk in financial services contexts:

  • Credit scoring and creditworthiness assessment (Annex III, point 5(b)) — any AI that determines access to credit, evaluates creditworthiness, or sets loan terms
  • AI used in employment and HR decisions (Category 4) — hiring, promotion, performance assessment, task allocation
  • AI in access to education or vocational training — relevant for financial education products
  • AI in biometric identification — relevant for KYC/AML processes using facial recognition or voice verification

High-Risk AI systems require: technical documentation, accuracy and robustness testing, data governance records, human oversight procedures, registration in the EU AI Act database, and post-market monitoring. This is not a one-time audit — it is a continuous compliance obligation throughout the system lifecycle.

The penalty structure is calibrated at a level that is material for any financial institution. For Article 5 prohibited practice violations: €35M or 7% of global annual turnover, whichever is higher. For a bank with €10B in revenue, 7% is €700M — a figure large enough to affect capital adequacy ratios. For a complete guide to classification and compliance under the EU AI Act, see our EU AI Act risk classification guide.

MiFID II: AI Investment Advice Requires Suitability Verification

MiFID II's suitability requirements apply to investment advice regardless of whether that advice comes from a human, an algorithm, or a generative AI system. The regulation requires that any investment recommendation to a retail client account for that client's individual financial situation, investment objectives, risk tolerance, and knowledge and experience.

An AI system that produces specific investment recommendations — "buy X", "allocate Y% to this asset class", "this fund matches your risk profile" — without verified suitability data is producing non-compliant advice. The AI does not eliminate the suitability obligation; it inherits it.

The practical mitigation is a content policy that blocks investment advice query patterns for retail-facing AI interactions. The MiFID II policy template in Govarix intercepts prompts matching patterns including: requests for specific security recommendations, portfolio allocation guidance, comparative product assessments framed as recommendations, and similar formulations that would constitute advice without suitability context.

MiFID II — What Your AI Policy Must Block
  • Specific buy/sell/hold recommendations on named securities to retail clients
  • Asset allocation guidance presented as personalised recommendations
  • Risk suitability assessments for specific investment products
  • Comparative fund or product rankings framed as client-specific advice
  • AI-generated financial plans not tied to a documented suitability profile

MiFID II also governs algorithmic trading. Financial institutions using AI for trade execution or order generation must maintain audit trails of algorithmic decisions equivalent to those required for human trading desks. AI-generated trade rationales must be logged with the same completeness as human-authored ones.

DORA: AI Vendors Are ICT Third Parties — Starting January 2025

The Digital Operational Resilience Act entered application in January 2025. For AI, DORA's most immediate obligation concerns third-party ICT risk: AI vendors — OpenAI, Anthropic, Microsoft Azure OpenAI, Google Gemini — are ICT third-party service providers under DORA's definition.

Financial entities must:

  • Include AI vendors in their ICT third-party register — with full documentation of service scope, data handling, and service level commitments
  • Conduct due diligence on AI vendor operational resilience — security controls, incident response, business continuity, and data centre geography
  • Assess concentration risk — if multiple critical operations depend on a single AI vendor, that concentration must be documented and mitigated
  • Report significant AI incidents — operational disruptions, security incidents (including prompt injection attacks), and AI accuracy failures that affect financial operations may be DORA-reportable incidents

Financial institutions that had not assessed their AI vendors as ICT third parties before January 2025 were already in breach of DORA on day one of enforcement. This is not a future deadline — it is a current compliance gap for many institutions that adopted AI tools through shadow adoption channels without formal vendor assessment.

FINRA and SEC: US Securities Compliance for AI Communications

FINRA's communications standards under Rules 2010, 2210, and 4511 apply to all customer communications — including AI-generated and AI-assisted communications. The SEC's position, as stated by former Chair Gary Gensler, is that existing securities regulations apply in full to AI-generated content: "the standards for fairness, accuracy, and completeness are identical" whether content is human-authored or machine-generated.

For broker-dealers, this means:

  • AI-generated retail communications must meet fair and balanced standards — balanced presentation of risks and opportunities, no selective omission of material information
  • Supervisory controls must specifically cover AI-generated content — a supervisory system that covers human-authored communications but not AI-generated ones is non-compliant
  • Records of AI interactions with customers are communications records under SEC Rule 17a-4 and FINRA Rule 4511 — they must be retained for the required retention period (typically three to six years, depending on record type) in the required format
  • AI-generated research must meet analyst certification and independence requirements the same as human-authored research

SEC Reg FD prohibits selective disclosure of material non-public information (MNPI) to persons who might trade on it. An employee drafting investor communications with AI assistance who inadvertently includes MNPI in their query — and that query is logged, processed by an external model, or potentially accessible — creates a Reg FD exposure. Content policy enforcement that detects MNPI-adjacent patterns (earnings pre-announcements, unreleased deal information, board decisions) is the standard mitigation.
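A minimal sketch of MNPI-adjacent pattern detection. The phrases below are hypothetical placeholders; real detection combines pattern matching with classification models and entity lists maintained by compliance:

```python
import re

# Hypothetical MNPI-adjacent signal phrases; illustrative only.
MNPI_PATTERNS = [
    r"\bunannounced\s+(earnings|results|guidance)\b",
    r"\bpre-?announce\w*\s+earnings\b",
    r"\b(upcoming|draft)\s+(merger|acquisition|deal)\b",
    r"\bnot\s+yet\s+public\b",
]

def flags_mnpi(prompt: str) -> bool:
    """True if the prompt contains phrasing a Reg FD content policy
    would block or escalate for review."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in MNPI_PATTERNS)
```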

GDPR: The Data Layer Beneath Every Financial AI Interaction

GDPR applies to every AI interaction that processes personal data of EU residents — which, for a financial institution, means essentially every customer-related query. The obligations relevant to AI governance:

  • Lawful basis for processing: Sending customer personal data to an external AI API requires a documented legal basis under Article 6. Most use cases require either legitimate interest assessment or explicit data processing terms with the AI vendor.
  • Special category data: Health data and demographic data touching protected characteristics are special category data under Article 9, requiring explicit consent or another enumerated legal basis. Financial hardship information, while not an Article 9 category in itself, frequently reveals such data (illness-related hardship is the obvious example). AI content policies must distinguish between standard and special category personal data.
  • Automated decision-making: Article 22 gives individuals the right not to be subject to solely automated decisions with significant effects. Credit decisions, risk scoring, and fraud flagging driven entirely by AI without human review may trigger Article 22 rights.
  • Cross-border transfers: When AI queries containing EU personal data are processed by servers outside the EU or adequate countries, Standard Contractual Clauses or equivalent transfer mechanisms must be in place.
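The standard-versus-special-category distinction a content policy has to draw can be sketched as a tiering function. The term lists are illustrative stand-ins for real classifiers, not a legally complete Article 9 taxonomy:

```python
# Illustrative signal terms only; not a complete Article 9 taxonomy.
SPECIAL_CATEGORY_TERMS = {
    "health":           ["diagnosis", "medical leave", "disability"],
    "protected_traits": ["ethnicity", "religion", "trade union"],
}

def gdpr_tier(prompt: str) -> str:
    """Return 'special:<category>' if the prompt appears to contain
    GDPR Article 9 special category data, else 'standard'."""
    text = prompt.lower()
    for category, terms in SPECIAL_CATEGORY_TERMS.items():
        if any(term in text for term in terms):
            return f"special:{category}"
    return "standard"
```

The tier then drives the policy decision: standard personal data may be permitted under the vendor's data processing terms, while special category data is blocked or routed to an approved processing path.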

SOX: AI in Financial Reporting Controls

The Sarbanes-Oxley Act's internal controls requirements (Section 302 and 404) extend to AI systems used in or adjacent to financial reporting processes. If AI assists in financial statement preparation, revenue recognition analysis, or management commentary drafting, the internal controls over that AI system fall within the SOX scope.

Auditors are increasingly requesting documentation of AI involvement in financial reporting workflows as part of external audit procedures. Organisations that have integrated AI into reporting processes without corresponding controls documentation face adverse findings in management's assessment of internal controls over financial reporting (ICFR).

The practical requirements: AI systems used in financial reporting must be documented, their outputs validated before use in filings, and the human review process over AI-generated content must be a documented internal control — not an informal practice.

UAE: DFSA and CBUAE AI Governance

For financial institutions operating in the Dubai International Financial Centre (DIFC) or Abu Dhabi Global Market (ADGM), the Dubai Financial Services Authority (DFSA) and Financial Services Regulatory Authority (FSRA) have issued AI governance guidance aligned with international standards. The Central Bank of UAE (CBUAE) published its AI governance principles for licensed financial institutions in 2024.

Key requirements for UAE financial services AI:

  • AI systems must be assessed for fairness and non-discrimination in credit and underwriting contexts
  • Data used to train or fine-tune AI systems must comply with UAE Personal Data Protection Law (PDPL) data sovereignty requirements
  • Material AI incidents affecting financial operations must be reported to the relevant regulator
  • AI used in AML/CFT screening must maintain human oversight and documented false-positive review processes

Enforcing All Frameworks Simultaneously at the Message Layer

The operational challenge is not understanding each regulation individually — it is applying all of them simultaneously to every AI interaction without requiring employees to navigate a compliance matrix before asking a question. A trading desk analyst should not need to know which of seven regulatory frameworks applies to their current query before typing it.

Govarix applies the relevant policy templates to the relevant team groups. A trading desk team gets MiFID II, SEC Reg FD, FINRA, and GDPR policies applied simultaneously. The compliance matrix is encoded in the policy engine, not in the employee's head. Every message is checked against all applicable policies before reaching any model. When a policy matches, the employee sees a specific explanation of which regulation applies and what alternative approach is available — not a generic block.
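The team-to-policy routing can be sketched in a few lines. The policy names come from the stack above, but the team keys and check functions are toy placeholders, not Govarix's engine:

```python
from typing import Callable, Optional

# A check returns a violation reason, or None if the message passes.
PolicyCheck = Callable[[str], Optional[str]]

# Hypothetical team-to-policy mapping with placeholder checks.
TEAM_POLICIES: dict[str, dict[str, PolicyCheck]] = {
    "trading_desk": {
        "MiFID II": lambda m: ("investment advice pattern"
                               if "should i buy" in m.lower() else None),
        "SEC Reg FD": lambda m: ("MNPI-adjacent content"
                                 if "unannounced earnings" in m.lower() else None),
    },
}

def enforce(team: str, message: str) -> list[tuple[str, str]]:
    """Check a message against every policy applied to the team.
    Returns (policy, reason) per violation; an empty list means permitted."""
    violations = []
    for policy, check in TEAM_POLICIES.get(team, {}).items():
        reason = check(message)
        if reason is not None:
            violations.append((policy, reason))
    return violations
```

The key property is that the employee never selects a framework: every applicable policy runs on every message, and the violation reason tells them which regulation fired.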

The audit log records every policy match, every block, and every permitted interaction — by policy, by user, by team, and by timestamp. When FINRA, the FCA, or the ECB asks for evidence of AI supervisory controls, the audit log is the evidence. For a complete guide to what that audit log must contain, see our article on AI audit logs as compliance evidence.
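The record shape described above (policy, user, team, timestamp, outcome) can be sketched as a structured log entry. The field names are illustrative, not Govarix's actual schema:

```python
import json
from datetime import datetime, timezone
from typing import Optional

def audit_record(user: str, team: str, policy: Optional[str],
                 outcome: str) -> str:
    """Serialise one message-layer decision as a JSON log line.
    policy is None for interactions that matched no policy."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "team": team,
        "policy": policy,
        "outcome": outcome,   # "blocked" or "permitted"
    })
```

Structured, append-only records like this are what make the log queryable by policy, user, team, and time range when a regulator asks for evidence.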

The Multi-Jurisdiction Reality

Financial institutions do not choose their regulatory stack — they operate where their clients are. EU AI Act, MiFID II, DORA, FINRA, SEC Reg FD, GDPR, and SOX are not alternatives. They are simultaneous obligations applied to the same platform, the same employees, and the same AI interactions. A compliance programme that addresses each framework sequentially will always be behind the enforcement curve. Platform-level governance that enforces all of them at the message layer — without requiring employees to navigate the matrix — is the only approach that scales.

Frequently Asked Questions

What AI regulations apply to financial services firms?
EU and US financial services firms face at minimum seven concurrent frameworks: EU AI Act, GDPR, MiFID II, DORA, FINRA, SEC Reg FD, and SOX. UAE operations add DFSA and CBUAE requirements. These are simultaneous obligations — not alternatives.

Does MiFID II apply to AI-generated investment advice?
Yes. MiFID II suitability requirements apply to investment advice regardless of whether it is generated by a human or an AI. An AI producing specific investment recommendations without suitability verification is a MiFID II violation. Content policies blocking investment advice query patterns are the standard mitigation.

How does DORA apply to AI vendors?
AI vendors — OpenAI, Anthropic, Azure, Google — are ICT third-party service providers under DORA. Financial entities must conduct due diligence on AI vendor operational resilience, include them in the ICT third-party risk register, assess concentration risk, and report significant AI incidents. DORA entered application in January 2025.

Do FINRA rules apply to AI-generated customer communications?
Yes. The SEC's position is explicit: existing securities rules apply to AI-generated content with the same force as human-authored content. AI-generated retail communications must meet fair and balanced standards. Records of AI customer interactions may be required communications records under Rule 17a-4.

Is credit scoring AI subject to the EU AI Act?
Yes. Credit scoring and creditworthiness assessment AI is explicitly High-Risk under EU AI Act Annex III, point 5(b). High-Risk systems require technical documentation, accuracy testing, human oversight procedures, and registration in the EU AI Act database.

What is the maximum EU AI Act fine for a financial institution?
For Article 5 prohibited practice violations: €35M or 7% of global annual turnover — whichever is higher. For a bank with €10B in annual revenue, 7% is €700M. General High-Risk system violations carry €15M or 3% of turnover.
