Industry News

ClearScore's Agentic Credit Protocol: What It Means for AI-Mediated Financial Services

April 2026 · 7 min read

AI assistants are already the starting point for many financial journeys. Users ask ChatGPT about consolidating debt, talk to Claude about mortgage options, or use in-app assistants to compare credit products. The problem is structural: the regulated firm no longer controls the first interaction surface. Without a shared protocol, these AI-mediated journeys are opaque - to the broker, to the lender, and to any regulator who later needs to understand what occurred.

ClearScore Group has published an open protocol that addresses this directly. The Agentic Credit Broking Protocol defines how AI assistants (User Agents) can participate in regulated credit broking journeys while the broker retains full regulatory control and builds an auditable evidence trail. It is published under Creative Commons CC BY-SA 4.0 on GitHub.

The core insight is simple but architecturally significant: the system that mediates the conversation need not be the system that carries regulatory responsibility. Interaction mediation and regulatory accountability can travel separately.

What the Protocol Covers

The protocol defines the complete lifecycle of an AI-mediated credit broking journey through six operations. A User Agent opens a case, provides structured financial data (income, debts, goals), queries state, selects plans or offers, resolves broker-issued actions (consents, disclosures, declarations), and can withdraw at any point. The broker responds with typed events describing what it did - profile updates, plans generated, offers received, status changes - nine event types in total.
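The six operations and the broker's typed events can be sketched as a small vocabulary. The identifier strings below are illustrative, not the spec's wire names, and only four of the nine event types named in this article are shown:

```python
from enum import Enum

# The six User Agent operations described above.
# Identifier strings are illustrative; the spec's wire names may differ.
class Operation(Enum):
    OPEN_CASE = "open_case"            # start a broking case
    PROVIDE_DATA = "provide_data"      # structured income/debts/goals
    QUERY_STATE = "query_state"        # inspect current case state
    SELECT = "select"                  # choose a plan or offer
    RESOLVE_ACTION = "resolve_action"  # answer a broker-issued action
    WITHDRAW = "withdraw"              # exit the journey at any point

# The broker replies with typed events; four of the nine types
# mentioned in the article, as examples.
BROKER_EVENTS = {"profile_updated", "plans_generated",
                 "offers_received", "status_changed"}
```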

Two properties define the interaction model. First, the User Agent drives the interaction - it decides what to do and when. Second, the broker controls the gates - at regulatory moments (credit search consent, risk disclosure, application authorisation), the broker issues a blocking action that nothing can bypass until the user responds. This separation means AI assistants handle the conversational complexity while regulated firms retain compliance control at every point that matters.

Regulated Content Handling

The protocol specifies six broker action types that create gates: information requests, disclosures, consents, declarations, instructions, and case outcomes. When an action is marked as regulated, the User Agent must present its content with semantic fidelity - the full information must reach the user in the order and emphasis supplied by the broker. Summarising, paraphrasing, omitting parts, or reordering to change emphasis are all prohibited. Presentational adaptations (text-to-speech, screen-reader reflow, large print) are explicitly permitted.

This is a practical distinction. A visually impaired user's screen reader presenting a risk warning differently from a desktop browser is fine. An AI assistant deciding that three paragraphs of pre-contractual information can be summarised into one sentence is not. The broker determines what must be communicated. The User Agent determines how it is rendered.

Trust Model

The protocol implements a progressive trust gradient calibrated per operation type. At high trust, the User Agent handles everything including regulated actions. At medium trust, data and state operations flow through the User Agent but regulated moments occur on a broker-controlled surface. At low trust, the User Agent acts only as an introducer - the entire journey happens on the broker's own system.

The safety guarantee is that untrusted participants do not weaken compliance - they simply get a less efficient path. An unknown User Agent still connects users to brokers through the protocol. It just cannot handle regulated moments itself. This makes widespread adoption feasible because no one takes regulatory risk on faith.
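The gradient can be read as a routing table: trust level decides where the regulated moment happens, and anything unrecognised falls through to the introducer path. A sketch with assumed level names:

```python
# Trust levels map to the surface that handles regulated moments.
ROUTES = {
    "high":   "user_agent",       # UA presents regulated actions itself
    "medium": "broker_surface",   # UA handles data; gates on broker UI
    "low":    "introducer_only",  # whole journey on the broker's system
}

def route_regulated_moment(trust: str) -> str:
    # Unknown or untrusted agents degrade to the least-trusted path
    # rather than weakening compliance.
    return ROUTES.get(trust, ROUTES["low"])
```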

Why This Matters for the Market

For users, it means conducting credit journeys through whatever assistant or application they already use - in their own conversational style, at their own pace, without being redirected to the broker's website. For brokers, it means distribution through channels they do not own or operate, without losing regulatory control. For lenders, it eliminates ambiguity around AI-mediated applications by routing them through a structured, evidenced channel where the regulated broker carries responsibility for advice and suitability.

For regulators, the protocol arguably provides better supervisory visibility than traditional channels. Every journey produces structured, auditable case records regardless of which User Agent mediated it. The protocol also introduces an interaction replay capability - investigators can independently replay the exact interactions from a complained-about case against a User Agent, repeatedly, building statistical evidence of how it behaves. That tool does not exist in conventional web or call centre channels.

There is a network effect. The more brokers that adopt the protocol, the more valuable User Agent certification becomes. The more certified User Agents exist, the wider the distribution channel each broker can access. ClearScore publishing this as an open protocol under Creative Commons - rather than as a proprietary integration spec - suggests the company sees value in industry-wide adoption.

Our Improvements: Whitepaper v0.4

After reviewing the original whitepaper in detail, we identified areas where the specification could be strengthened and submitted feedback. Our revised whitepaper (v0.4) addresses several gaps in the original protocol design across security, multi-party scenarios, and operational edge cases.

Security Additions

  • Transcript integrity mechanisms - challenge tokens, cryptographic hashes, and signed transcripts that allow brokers to verify the User Agent faithfully recorded regulated moments
  • Configuration fingerprinting - User Agents advertise a version identifier covering model ID, system instruction version, and orchestration version, allowing brokers to detect drift from a certified configuration mid-case
  • Session binding and concurrent case correlation - brokers must treat concurrent cases for the same user as related, preventing a second case from bypassing an unresolved regulated gate in the first
  • Signed redirect tokens for lender handoffs - destination references use HMAC-signed, time-limited tokens rather than bare URLs, enabling arrival verification
  • Cross-case evidence consideration - brokers must examine evidence records across all cases sharing a user-identity context, not in isolation

Scenario Coverage

  • Entity invalidation - validity windows on time-sensitive entities (product offers) with explicit protocol behaviour when windows lapse
  • External modification events - handling cases affected by actions outside the User Agent channel (user contacted broker directly, representative action)
  • Case resumption on different devices or User Agent instances - the protocol defines how state is re-established when a user disconnects and resumes later
  • Broker re-issuance of regulated actions after significant elapsed time to maintain current evidence
  • Explicit data lifecycle boundaries - what the broker retains, for how long, and under what legal basis

The full revised whitepaper is available in our fork at https://github.com/sentinel-source/agentic-credit/blob/integration/all-work/docs/whitepaper.md. It is a complete v0.4 specification with all improvements integrated.

Reference Implementation: Mock Credit Broker

To validate the protocol against a real implementation, we built a mock credit broker in Python (FastAPI). It implements the full nine-stage consumer credit broking journey from initial enquiry through to lender handoff, including all six broker action types, challenge token generation and verification, SHA-256 transcript hashing, HMAC-signed redirect tokens with TTL enforcement, and a complete evidence log.

The broker exercises the full range of regulated moments: an initial status disclosure, explicit credit search consent (with a decline path that terminates the case), product-specific disclosures when offers are presented, a truthfulness declaration before application, and an instruction to proceed to the lender with a signed redirect. It also enforces gate blocking - operations that require a pending action to be resolved first return 409 Conflict, preventing bypass.
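The challenge-token and transcript-hashing pieces combine into a simple verification loop. A sketch, not the mock broker's actual code, of how a broker-issued nonce binds a User Agent's transcript to one specific regulated moment:

```python
import hashlib
import secrets

def issue_challenge() -> str:
    # Broker-issued nonce the User Agent must embed in its transcript,
    # proving the record was made for this specific regulated moment.
    return secrets.token_hex(16)

def transcript_hash(transcript: str, challenge: str) -> str:
    # SHA-256 over challenge plus transcript; the broker recomputes this
    # to check the UA faithfully recorded the regulated exchange.
    return hashlib.sha256((challenge + "\n" + transcript).encode()).hexdigest()
```

Any edit to the transcript after the fact changes the hash, so the broker detects it on recomputation.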

The implementation is intentionally scripted rather than dynamic. A mock broker's value is predictability - it exercises every protocol path in a known sequence, making it suitable for integration testing, User Agent certification workflows, and protocol exploration without needing a live broker relationship. It ships with 46 passing tests covering journey progression, security controls, error paths, and evidence integrity.

Source: https://github.com/sentinel-source/agentic-credit/tree/integration/all-work/mock_broker

Reference Implementation: User Agent CLI

The second reference implementation is a User Agent - an AI-powered CLI that drives the broker through the protocol using Claude as the conversational layer. It demonstrates what a protocol-compliant User Agent looks like in practice: vocabulary-aware structured data translation, regulated content handling with verbatim display guarantees, transcript accumulation with challenge token embedding, and correct gate resolution.

The architecture separates concerns cleanly. An HTTP client handles broker communication. Pydantic models validate wire format independently of the broker's internals. A tool dispatch layer exposes protocol operations as Claude tool calls. A display module intercepts regulated content and renders it directly to the user without passing through the language model's output - this is the key design choice that prevents an AI assistant from inadvertently summarising or paraphrasing regulated material.

When the broker issues a disclosure, consent, or declaration, the CLI presents it in a distinctive visual format and captures the user's response directly. The AI assistant orchestrates the flow and explains what is happening, but never sees or touches the regulated content in a way that could alter it. This demonstrates the protocol's core property: the User Agent mediates without needing to understand regulation.
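The interception can be sketched in a few lines. The function and structure names below are illustrative, not the CLI's actual code; the point is that regulated content takes a display path that never enters the language model:

```python
shown: list[str] = []

def display(text: str) -> None:
    # Verbatim render path: bypasses the language model entirely.
    shown.append(text)

def llm_narrate(text: str) -> str:
    # Stand-in for the assistant's own, freely worded, output.
    return f"[assistant] {text}"

def handle_broker_action(action: dict) -> None:
    if action.get("regulated"):
        display(action["content"])  # regulated text never enters the model
        llm_narrate("Please respond to the notice shown above.")
    else:
        llm_narrate(action["content"])  # the model may rephrase freely
```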

Source: https://github.com/sentinel-source/agentic-credit/tree/integration/all-work/user_agent

Implications for AI Strategy in Financial Services

The protocol signals a broader shift. As AI assistants become the default interface for consumer financial decisions, the firms that define how those assistants interact with regulated services will shape the market structure. ClearScore - as a credit marketplace with 24 million users - publishing this as an open standard rather than a walled-garden integration is a deliberate strategic move toward platform network effects.

For financial services firms watching this space, the practical takeaway is that AI-mediated regulated journeys are becoming formalised. The question is not whether your users will interact with your services through AI assistants - they already do. The question is whether those interactions produce auditable evidence trails, preserve regulatory compliance, and create value rather than risk. ClearScore's protocol is the first serious attempt at making that possible at industry scale.

The original protocol repository is at https://github.com/ClearScore/agentic-credit and our extended fork with the v0.4 whitepaper and reference implementations is at https://github.com/sentinel-source/agentic-credit/tree/integration/all-work.