Klarna and Monzo now publish detailed model cards before each license renewal. The UK’s Financial Conduct Authority launched the Mills Review on January 27, 2026, making clear that explainability is no longer optional — it is the price of operating in retail financial services.
The Most Common Misconception
Most fintech leaders treat explainable AI as an engineering task: hire a data scientist, integrate SHAP values or LIME into the credit-scoring model, document the outputs, and file the paperwork. That framing is wrong, and it is costing firms real money.
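Part of why that framing is seductive is how little code the model-level step actually takes. The sketch below is illustrative only: it assumes a toy dataset, scikit-learn's GradientBoostingClassifier, and the shap library, not any firm's production stack.

```python
# A toy illustration of the model-level step: per-decision feature attributions
# for a gradient-boosting credit model via SHAP. The dataset, feature names,
# and model are placeholders, not any firm's production system.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "missed_payments", "account_age_months"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; for this
# classifier the attributions are in log-odds space.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain a single applicant

for name, contribution in zip(features, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```

That this fits in twenty lines is precisely the trap: the attributions are real, but nothing here is auditable, durable, or readable by the consumer on the other end of the decision.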
Explainability is a capital problem, not a technical one. The FCA’s Consumer Duty (fully enforced since July 2023), the Mills Review, and CP26/9 collectively demand three distinct compliance layers: model output, audit trail, and plain-language consumer explanation. Firms that treat this as an engineering project will face structural failure when regulators audit their redress systems.
What the Research Actually Shows
The FCA’s Consumer Duty, fully enforced since July 2023, already requires firms to demonstrate that AI-driven decisions produce good outcomes for consumers. Under CP26/9 — the FCA’s March 2026 consultation on modernising the redress system — that bar rises further. According to Fintech Global, both the FCA and the Financial Ombudsman Service must independently verify that automated calculations reflect genuine regulatory obligations, and the resulting explanation must be understandable not just to auditors but to the affected consumer.
That standard demands three distinct layers: the model output, the audit trail, and the plain-language consumer explanation. Building and maintaining all three requires legal counsel, compliance engineers, and ongoing model monitoring. According to relevant.software’s 2026 fintech compliance report, digital lenders such as Klarna and Monzo now prepare model cards outlining data inputs, test results, and fairness controls before each license renewal cycle — a continuous operational cost, not a one-time fix.
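To make the three layers concrete, consider what a single decision record has to carry. The sketch below is a minimal data contract under assumed field names; the point it illustrates is that the model score is only one-third of what must survive an audit.

```python
# A sketch of the three layers as one data contract. Every field name here is
# hypothetical; the structure, not the naming, is the point.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ExplainedDecision:
    # Layer 1: model output
    decision_id: str
    model_version: str
    score: float
    outcome: str                                # e.g. "declined"
    # Layer 2: audit trail
    feature_attributions: dict[str, float]      # e.g. SHAP values at decision time
    # Layer 3: plain-language consumer explanation
    consumer_explanation: str
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

decision = ExplainedDecision(
    decision_id="D-001",
    model_version="credit-gbm-2026-03",
    score=0.31,
    outcome="declined",
    feature_attributions={"debt_ratio": -0.42, "missed_payments": -0.17},
    consumer_explanation=(
        "We declined your application mainly because your existing debt is high "
        "relative to your income and two recent repayments were missed."
    ),
)
```

Layers 1 and 2 are populated by engineers; layer 3 is where legal review and consumer-comprehension testing enter, which is why the cost is recurring rather than one-off.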
The EU AI Act, in force since August 2024, classifies credit-scoring models as high-risk systems, according to relevant.software. Firms operating across both UK and EU markets therefore face parallel interpretability obligations that compound the compliance burden.
The real cost of FCA-compliant explainable AI is not the technology. It is the legal, operational, and monitoring infrastructure required to make that technology defensible before a regulator and a consumer at the same time.
Where the Narrative Breaks Down
The pure-play AI lender. A fintech startup building a credit product on a gradient-boosting model can implement SHAP explanations at relatively low cost. But when the FCA’s supervision team requests a full redress calculation audit — tracing every automated decision through remediation logic to a specific consumer outcome — that startup needs a compliance architecture it almost certainly has not built. According to Baker McKenzie’s analysis of the Mills Review, the FCA’s board recommendations, expected in summer 2026, will push firms toward systemic governance frameworks, not model-level transparency alone. Startups built around a single model with no surrounding governance layer face a structural problem.
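What such a redress audit walks through is a lineage, not a model. A minimal sketch of that lineage follows, with assumed event names and a plain append-only file standing in for whatever tamper-evident store a real firm would use.

```python
# A sketch of the lineage a redress audit walks: decision, remediation logic,
# and consumer outcome, all linked by one identifier. The event names and the
# plain append-only file are assumptions; a real system needs tamper-evident
# storage and access controls.
import json
from datetime import datetime, timezone

AUDIT_LOG = "redress_audit.jsonl"  # one timestamped event per line, append-only

def log_event(decision_id: str, stage: str, detail: dict) -> None:
    """Append one lineage event for a single consumer decision."""
    event = {
        "decision_id": decision_id,
        "stage": stage,  # "model_decision" | "remediation" | "consumer_outcome"
        "detail": detail,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

# The full chain a supervision team would expect to reconstruct end to end:
log_event("D-001", "model_decision", {"model": "credit-gbm-2026-03", "outcome": "declined"})
log_event("D-001", "remediation", {"rule": "manual_review_threshold", "result": "upheld"})
log_event("D-001", "consumer_outcome", {"redress_due": False, "explanation_sent": True})
```

A startup that has only built the first event type has SHAP values and nothing to hand a supervisor.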
The incumbent with a legacy stack. Large banks carry the opposite burden. HSBC and Barclays have compliance teams, legal budgets, and established FCA relationships. But their AI models sit inside core banking systems built up over decades. Retrofitting interpretability into a production lending model that touches millions of accounts is neither fast nor cheap. According to BIS Financial Stability Institute research, deep neural networks with thousands of interacting parameters present genuine technical limits to post-hoc explainability. Incumbents can afford the governance layer; what they cannot easily do is rebuild the model beneath it.
Neither scenario is safe. The myth that explainability is simply a technical checkbox fails in both directions.
What Early Compliance Has Already Produced
Firms that invested early in governance infrastructure — documented decision lineages, timestamped audit trails, and consumer-readable explanations — are entering the Mills Review period with defensible systems. Those that did not are now facing retrofit costs that can dwarf the original model development budget.
The FCA’s CP26/9 framework makes clear that regulators assess explanation quality at the point of consumer contact independently of model accuracy. A technically sound model with a poor explanation chain fails the redress standard. For cross-border operators, compliance programs must satisfy both the FCA’s Consumer Duty framework and the EU AI Act’s high-risk system requirements — two separate governance regimes with overlapping but distinct documentation standards.
Three Actions for CFOs and Compliance Leads
Map your AI decisions to FCA Consumer Duty outcomes before the Mills Review recommendations land in summer 2026. Every automated decision touching a retail consumer — credit, pricing, redress — needs a documented explanation chain. Start with the highest-volume decisions first.
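In practice, that mapping can start as nothing more than a register. The sketch below is a hypothetical starting point: the decision types, volumes, and chain statuses are illustrative, and the outcome labels follow the Consumer Duty's four outcomes (products and services, price and value, consumer understanding, consumer support).

```python
# A hypothetical decision register: each automated decision type tied to the
# Consumer Duty outcome it most affects and the state of its explanation chain.
# Decision types, volumes, and statuses are illustrative placeholders.
DECISION_REGISTER = [
    {"decision": "credit_limit_assignment", "outcome": "price_and_value",
     "monthly_volume": 120_000, "explanation_chain": "documented"},
    {"decision": "application_decline", "outcome": "consumer_understanding",
     "monthly_volume": 45_000, "explanation_chain": "missing"},
    {"decision": "redress_calculation", "outcome": "consumer_support",
     "monthly_volume": 3_000, "explanation_chain": "partial"},
]

# Surface the highest-volume gaps first, as the guidance above suggests.
gaps = sorted(
    (d for d in DECISION_REGISTER if d["explanation_chain"] != "documented"),
    key=lambda d: d["monthly_volume"],
    reverse=True,
)
for d in gaps:
    print(f'{d["decision"]}: {d["monthly_volume"]:,}/month, chain: {d["explanation_chain"]}')
```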
Audit your redress architecture separately from your model. Under CP26/9, explanation quality at the point of consumer contact is assessed on its own terms, independent of model accuracy, so a technically sound model can still fail the redress standard.
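That separate audit can be mechanised to a first approximation. The checks below are assumptions about what a plain-language review might flag; they are not FCA criteria, and deliberately never touch the model.

```python
# A first-approximation audit of the explanation layer on its own terms. These
# heuristics are illustrative assumptions, not regulatory criteria, and they
# run against the consumer-facing text only, never against the model.
def audit_consumer_explanation(text: str) -> list[str]:
    """Return reasons a consumer-facing explanation would fail review."""
    failures = []
    words = text.split()
    if not words:
        failures.append("no explanation recorded at point of consumer contact")
    elif sum(len(w) for w in words) / len(words) > 7:
        failures.append("average word length suggests jargon, not plain English")
    if any(term in text for term in ("SHAP", "log-odds", "coefficient")):
        failures.append("model internals leaked into consumer-facing text")
    return failures

# A technically sound model still fails if this list is non-empty:
print(audit_consumer_explanation("Your SHAP attribution for debt_ratio was -0.42."))
```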
Price the governance layer explicitly in your 2026 budget. Legal review, consumer-facing explanation tooling, and continuous monitoring are recurring line items, not project costs. Firms treating them as projects will face budget overruns when the FCA’s enforcement cycle accelerates.
For a fuller picture of how AI infrastructure spending is reshaping fintech competitive positioning, read the full research breakdown on AI as core fintech infrastructure. For analysis of where regulatory gray zones are emerging in agentic AI deployment, see how the agentic AI regulatory gap is forcing fintech into uncharted compliance territory.
The Verdict
Explainable AI mandates are real, enforceable, and accelerating under the FCA’s 2026 agenda. The firms telling you this is a technical problem to be solved once are pointing you in the wrong direction. The competitive advantage in this environment goes to firms that build compliance as permanent infrastructure, not to those with the most sophisticated models. The Mills Review board recommendations, due in summer 2026, will set the interpretability standard for UK retail financial services through 2030 — and they will arrive faster than most fintech compliance calendars currently assume.
Key Takeaway: The real cost of FCA-compliant explainable AI is not the technology. It is the legal, operational, and monitoring infrastructure required to make that technology defensible before a regulator and a consumer at the same time.
Sources
- Fintech Global. “Explainable Redress Decisions: What the FCA Demands.” March 20, 2026. https://fintech.global/2026/03/20/explainable-redress-decisions-what-the-fca-demands/
- Financial Conduct Authority. “FCA Long-Term Review of AI in Retail Financial Services: Designing for the Unknown.” https://www.fca.org.uk/news/speeches/fca-long-term-review-ai-retail-financial-services-designing-unknown
- Financial Conduct Authority. “Review of the Long-Term Impact of AI on Retail Financial Services (Mills Review).” https://www.fca.org.uk/publications/calls-input/review-long-term-impact-ai-retail-financial-services-mills-review
- relevant.software. “2026 Fintech Compliance Report.” 2026.
- CEPS. “Quality Management Systems for High-Risk AI Products: Cost Analysis.” 2026.
- Baker McKenzie. “Analysis of the Mills Review: Board Recommendations and Regulatory Direction.” 2026.
- BIS Financial Stability Institute. “Technical Limits to Post-Hoc Explainability in Deep Neural Networks.” 2025.
