Risk & Compliance

The AI Fraud Arms Race: What the Research Shows About Detection ROI, and Where It Breaks

JPMorgan Chase reported blocking over $15 billion in fraud attempts in 2023 using AI-powered detection, then watched its fraud team scramble the following year as criminals deployed the same technology to build attacks those models had never seen. This is not a metaphor. It is the operating reality every CFO and Chief Risk Officer inherits when they sign an AI fraud contract. Fraud-related losses across U.S. financial services reached $12.5 billion in 2024, up 25% from 2023, according to Wolters Kluwer. AI-enabled fraud surged 1,210% between 2023 and 2025, according to BIIA. Those two numbers sit in the same balance sheet. Understanding what research actually proves about AI fraud detection, and what it does not, is the difference between sound capital allocation and an expensive false sense of security. ...

March 25, 2026 · 12 min read · Particle Post Editorial Team
Risk & Compliance

Explainable AI Is a Capital Problem, Not a Technical One, and the FCA Is About to Prove It

Klarna and Monzo now publish detailed model cards before each license renewal. The UK’s Financial Conduct Authority launched the Mills Review on January 27, 2026, making clear that explainability is no longer optional; it is the price of operating in retail financial services. The most common misconception is that explainable AI is an engineering task: hire a data scientist, integrate SHAP values or LIME into the credit-scoring model, document the outputs, and file the paperwork. That framing is wrong, and it is costing firms real money. ...

March 24, 2026 · 6 min read · Particle
Risk & Compliance

Agentic AI Forces Fintech Into Regulatory Gray Zone

JPMorgan Chase now runs AI agents that autonomously execute segments of intraday trading strategies, and no regulator has yet defined who is liable when one of those agents misfires. That accountability gap sits at the center of a slow-motion compliance crisis spreading across global financial services. Fintech firms and large banks are deploying agentic AI (autonomous systems that plan, initiate, and complete multi-step financial tasks without human sign-off) at a pace that outstrips every major regulatory framework. Unlike earlier automation, these systems do not wait for instructions. They observe conditions, set sub-goals, and act. Venture capital funding of agentic applications has accelerated sharply across the U.S. economy, particularly over the last 18 months, according to FinRegLab’s September 2025 report. Meanwhile, regulators are still holding information-gathering exercises. The UK’s Digital Regulation Cooperation Forum closed a call for views on agentic AI risks only in November 2025, a timeline that illustrates how far enforcement lags deployment. ...

March 23, 2026 · 5 min read · Editorial Team