JPMorgan Chase now runs AI agents that autonomously execute segments of intraday trading strategies — and no regulator has yet defined who is liable when one of those agents misfires. That accountability gap sits at the center of a slow-motion compliance crisis spreading across global financial services.

Fintech firms and large banks are deploying agentic AI — autonomous systems that plan, initiate, and complete multi-step financial tasks without human sign-off — at a pace that has outrun every major regulatory framework. Unlike earlier automation, these systems do not wait for instructions: they observe conditions, set sub-goals, and act. Venture capital funding of agentic applications has accelerated sharply across the U.S. economy, particularly over the last 18 months, according to FinRegLab’s September 2025 report. Regulators, meanwhile, are still running information-gathering exercises. The UK’s Digital Regulation Cooperation Forum closed a call for views on agentic AI risks only in November 2025, a timeline that illustrates how far enforcement lags deployment.

The Accountability Vacuum

The core problem is not that agentic AI makes bad decisions. The problem is that no legal framework clearly assigns responsibility when it does. The SEC and FINRA have both stated that existing obligations — broker-dealer registration, suitability rules, and best-execution requirements — apply to AI-assisted investment recommendations, according to a Debevoise & Plimpton analysis published in October 2025. But “existing obligations apply” is not the same as a coherent enforcement regime. When an autonomous agent executes a sequence of trades that constitutes market manipulation, the developer, the deploying firm, and the compliance officer who signed off on the model card all face ambiguous exposure — and no current statute resolves that ambiguity.

Hogan Lovells flagged in 2025 that liability exposure sharpens further when third-party AI agents act on behalf of customers — a structure now common in wealth management platforms and embedded-finance products. The legal chain of accountability can pass through three or four entities before reaching anyone with a registered obligation.

Where the Arbitrage Window Opens

The gap between U.S. and EU approaches creates a secondary risk: regulatory arbitrage. The EU AI Act classifies certain financial AI applications as high-risk, imposing transparency and human-oversight mandates that take effect through 2025 and 2026, according to Braithwaite’s cross-border compliance analysis. The U.S. has no equivalent statute. A fintech firm can today deploy an autonomous loan-approval system in the United States that would require extensive documentation, bias testing, and explainability disclosures under European law.

Some firms already exploit this. Lenders operating primarily in U.S. markets have integrated agentic underwriting pipelines that approve or decline personal loans in milliseconds, with no human review path built into the standard workflow. The CFPB’s fair lending rules technically cover algorithmic underwriting, but the agency has not published guidance specific to agentic architectures — systems that adapt their own decision logic between audit cycles.
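
To make that missing review path concrete, here is a minimal sketch of the kind of escalation gate such a pipeline omits. Every name in it (AgentDecision, route_decision, the 0.90 threshold) is hypothetical, invented for illustration rather than drawn from any firm’s actual system.

```python
# Hypothetical sketch of a human-review gate in an otherwise autonomous
# underwriting flow. All names and thresholds are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentDecision:
    applicant_id: str
    approved: bool
    confidence: float   # model-reported confidence, 0.0 to 1.0
    model_version: str

REVIEW_THRESHOLD = 0.90  # below this, a human must sign off

def route_decision(decision: AgentDecision, review_queue: list) -> str:
    """Return 'auto' if the agent's decision stands, 'review' if escalated."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return "auto"  # the only path many agentic pipelines implement today
    # Escalation path: the step the standard workflow often omits entirely.
    review_queue.append({
        "applicant_id": decision.applicant_id,
        "decision": decision,
        "escalated_at": datetime.now(timezone.utc).isoformat(),
    })
    return "review"
```

The point of the sketch is how little code the missing step requires; its absence is a design choice, not a technical constraint.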

Agentic AI is expected to run 5% of intraday liquidity buffers at global systemically important banks by year-end 2025, according to Forbes contributor Zenon Kapron.

The first supervisory stress tests that explicitly model agent failure are not scheduled until 2026. That is a 12-month window in which systemic exposure accumulates without a matching supervisory lens.

Key Takeaway: Firms deploying agentic AI in credit, trading, or payments today face a double liability: retroactive enforcement once regulators catch up, and civil exposure from counterparties and customers who can argue no human ever reviewed the decision that harmed them.

Operational Risk Nobody Has Priced In

The liability blind spot extends beyond regulatory enforcement into operational risk that most firms have not formally modeled. Agentic systems interact with each other. A loan-approval agent at one firm can feed data into a portfolio-weighting agent at another through open banking APIs. The compounding of autonomous decisions across institutions creates failure cascades that no single firm controls — and that no single regulator currently monitors.
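
A stylized sketch of that coupling follows; both agents and the feedback rule are invented for illustration, not models of any real system. Each agent responds sensibly to what it observes, yet the joint loop amplifies stress.

```python
# Stylized illustration of cross-firm agent coupling. Neither agent nor
# the "open banking" hand-off corresponds to a real system; the point is
# the absence of any circuit breaker between two autonomous decision loops.

def firm_a_credit_agent(market_stress: float) -> float:
    """Autonomously tightens approvals as stress rises (hypothetical rule)."""
    return max(0.0, 0.8 - market_stress)

def firm_b_portfolio_agent(observed_approval_rate: float) -> float:
    """Reads Firm A's output via an API and de-risks as credit contracts."""
    return 1.0 - observed_approval_rate  # selling pressure

stress = 0.1
for step in range(4):
    approvals = firm_a_credit_agent(stress)
    selling = firm_b_portfolio_agent(approvals)
    stress += 0.2 * selling  # Firm B's de-risking feeds back into stress
    print(f"step {step}: approvals={approvals:.2f} "
          f"selling={selling:.2f} stress={stress:.2f}")
```

Run for a few steps, the stress variable grows geometrically even though neither rule is individually reckless. That compounding dynamic is the cascade no single firm controls and no single regulator currently monitors.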

Klarna, which has publicly positioned AI as central to its cost-reduction strategy, uses AI agents to handle customer interactions and credit decisions at scale. The company’s 2024 disclosures cited AI-driven efficiency gains that reduced headcount, but gave far less prominence to the oversight structure governing the autonomous credit decisions that affect millions of consumers. Klarna operates across more than 45 countries, each with a different regulatory posture on algorithmic credit decisions, according to company filings. That jurisdictional patchwork is not a compliance strategy; it is an accumulation of risk.

Taylor Wessing’s Fintech Outlook 2026 identifies agentic AI accountability as one of the defining legal challenges for financial institutions this year, noting that liability frameworks have not kept pace with the speed of commercial deployment. Law firms now advise clients to document human-review touchpoints even where none are operationally required — essentially building a paper trail in anticipation of enforcement that does not yet exist but almost certainly will.
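
What such a paper trail might look like in code: a minimal sketch, assuming a hypothetical decision function and logging schema rather than any regulator-endorsed format.

```python
# Minimal sketch of the documentation pattern counsel is describing:
# wrap each autonomous decision so model version, a hash of the inputs,
# the output, and the (optional) human reviewer are logged before anything
# executes. Field names and the logging backend are assumptions.
import functools
import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("agent_audit")

def audited(model_version: str):
    def wrap(decide):
        @functools.wraps(decide)
        def inner(payload: dict, human_reviewer: str | None = None):
            result = decide(payload)
            audit_log.info(json.dumps({
                "ts": datetime.now(timezone.utc).isoformat(),
                "model_version": model_version,
                "input_sha256": hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()).hexdigest(),
                "decision": result,
                "human_reviewer": human_reviewer,  # None = fully autonomous
            }))
            return result
        return inner
    return wrap

@audited(model_version="underwriter-2025.11")
def decide_loan(payload: dict) -> str:
    """Hypothetical stand-in for an agentic underwriting call."""
    return "approve" if payload.get("score", 0) >= 650 else "decline"
```

A record like this creates no legal safe harbor, but it gives a deploying firm contemporaneous evidence of oversight intent if enforcement arrives retroactively.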

What Executives Should Watch

Three developments will determine how quickly this gray zone closes. First, the CFPB’s posture on algorithmic underwriting guidance — any formal rulemaking will set the baseline liability standard for U.S. lenders. Second, the EU AI Act’s enforcement calendar: the first supervisory audits of high-risk AI systems begin in earnest in 2026, and any significant enforcement action against a U.S.-headquartered firm operating in Europe will reset risk calculations globally. Third, the outcome of the first major litigation in which a plaintiff argues that an autonomous agent — not a human — made the decision that caused financial harm. That case, whenever it arrives, will price the liability that markets currently ignore.

Firms that treat agentic AI governance as a legal formality will face the most exposure. The window to build defensible oversight architecture is narrow and closing.
