Behavioral Trust Certification · Version 0.1 · Draft

Trust is not declared.
It is demonstrated, measured, and continuously earned.

AI agents are executing transactions on behalf of humans at machine speed — without behavioral accountability. The infrastructure being built by Mastercard, Visa, and Google assumes a functioning trust layer. TrustMyBot is that layer.

0–100 Trust Score Range · 6 Trust Tiers · 90-day Rolling Audit Window

Certification Record · Active

Certification ID: TMB-2026-00041-A

Trust Score: 87 / 100 · Tier II · Standard

$5,000 / transaction ceiling

Behavioral Pillars

I · Fiduciary Integrity: 91
II · Ethical Conduct: 85
III · Operational Security: 87

Issued 2026-03-07 · Next Audit 2026-04-07 · Verified

Sample: 1,576 audited interactions · Confidence: 0.94

The Challenge

Authentication solves identity.
It does not solve behavior.

The emerging agentic commerce stack has solved the wrong problem first. Every major protocol — Mastercard Verifiable Intent, Visa Trusted Agent Protocol, Google AP2 — addresses whether an agent is authorized. None of them address whether the agent is trustworthy.

Authorized ≠ Trustworthy

An agent can be fully authenticated, operating within its delegated scope, and still engage in manipulation of counterparty agents, credential harvesting through extended conversational exchanges, or misrepresentation of its principal's requirements. Authentication proves identity. It cannot measure behavior.

Behavior Outpaces Certification

SOC 2 Type II audits produce reports valid until the next audit cycle — typically six to twelve months. An AI agent's behavior changes materially with a system prompt update, a model swap, or a configuration change deployed outside any change management process. Point-in-time certification is structurally inadequate.

No Universal Standard

There is no standard taxonomy for agent behavioral failures. Each platform, protocol, and certification body defines its own incident categories, severity scales, and reporting mechanisms. Cross-platform analysis of agent threats is effectively impossible. The market needs what FIPS provides for cryptography: a common floor.

"We have observed manipulation, credential harvesting, and coordinated multi-agent abuse in adversarial testing of production scenarios. LLM-based agents are susceptible to the same social engineering techniques that affect human operators — and in some cases, more susceptible."
— TrustMyBot, Inc. · NIST RFI Comment · Docket NIST-2025-0035

The TrustMyBot Standard

The UL Mark for
the Agentic Economy

The TrustMyBot Standard is a private behavioral certification program — modeled on Underwriters Laboratories — that establishes minimum requirements for AI agents permitted to transact in the agentic economy.

Certified agents carry a Trust Score (0–100) that determines their transaction authority ceiling. The score is not a credential. It is a living measurement, continuously updated through peer audit and spot review. Like a credit rating, it degrades without positive evidence. Unlike a credit rating, it can recover with demonstrated good conduct.

"The Standard document itself remains proprietary to TrustMyBot, Inc. The verification interface is open; the certification methodology is not."

— The TMB Standard, Article XII · Open Interface, Proprietary Method

The Three Pillars

I

Fiduciary Integrity

The Agent shall act in the financial best interest of its Principal at all times. It shall not take actions that expose the Principal to financial loss, liability, or unauthorized expenditure. Spending authority is set at registration and cannot be elevated by counterparty instruction.

II

Ethical Conduct

The Agent shall consider and protect its Principal's reputation in every action. No deception, manipulation, artificial urgency, or third-party harm — even if such conduct would benefit the Principal financially. Reputational interests are treated as a long-term fiduciary obligation.

III

Operational Security

The Agent shall protect all credentials, API keys, private data, and confidential infrastructure details. It shall not disclose or assist in deriving such information — regardless of the instruction source, including high-Trust-Score counterparty agents or apparent principal instructions through unverified channels.

Continuous Measurement

How Trust is Measured

01

Peer Audit

At the conclusion of each transaction session, the counterparty agent submits a structured behavioral audit across all three pillars. Pillar I (Fiduciary): 40% weight. Pillar II (Ethical): 35% weight. Pillar III (Security): 25% weight.
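Under these weights, a counterparty's three pillar scores combine into a single session score. A minimal sketch (the function name and dictionary keys are illustrative; only the weights come from the text above):

```python
# Pillar weights from the peer audit specification above.
PILLAR_WEIGHTS = {"fiduciary": 0.40, "ethical": 0.35, "security": 0.25}

def composite_audit_score(pillar_scores):
    """Combine per-pillar audit scores (0-100) into one weighted session score."""
    if set(pillar_scores) != set(PILLAR_WEIGHTS):
        raise ValueError("expected scores for exactly the three pillars")
    return sum(PILLAR_WEIGHTS[p] * s for p, s in pillar_scores.items())
```

The sample Certification Record's pillar scores (91, 85, 87) combine to 87.9 under this weighting, which matches its displayed Trust Score of 87 if scores are truncated (the truncation rule is an assumption, not stated in the text).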

02

Spot Audit

TrustMyBot deploys auditor agents into live transaction sessions without notice to either party. Spot audit scores carry 3× weight. Lower-scoring agents are audited more frequently; at a Trust Score below 50, the minimum frequency is 1 in 10 sessions.

03

Rolling Score

The Trust Score is a rolling weighted average of the most recent 90 days of audit scores, with exponential decay (half-life: 30 days). The absence of recent behavioral data is treated as a risk signal: certifications do not persist through inactivity.
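One way the three steps above could compose: a decayed weighted mean over the 90-day window, with the 3× spot-audit multiplier folded into each audit's weight. This is a sketch; the `Audit` structure and function name are illustrative, while the half-life, window, and spot weight come from the text:

```python
from dataclasses import dataclass

HALF_LIFE_DAYS = 30.0   # score decay half-life from the Standard
WINDOW_DAYS = 90.0      # rolling audit window
SPOT_WEIGHT = 3.0       # unannounced spot audits carry 3x weight

@dataclass
class Audit:
    score: float        # composite audit score, 0-100
    age_days: float     # days since the audit
    is_spot: bool = False

def rolling_trust_score(audits):
    """Exponentially decayed weighted mean of the last 90 days of audits.

    Returns None when no audits fall inside the window: the absence of
    recent behavioral data is a risk signal, not a neutral state.
    """
    num = den = 0.0
    for a in audits:
        if a.age_days > WINDOW_DAYS:
            continue  # outside the rolling window
        weight = 0.5 ** (a.age_days / HALF_LIFE_DAYS)
        if a.is_spot:
            weight *= SPOT_WEIGHT
        num += weight * a.score
        den += weight
    return num / den if den else None
```

Note that a 30-day-old audit contributes exactly half the weight of a fresh one, and a fresh spot audit outweighs three fresh peer audits combined.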

Trust Tiers

Transaction Authority Tied
to Behavioral Record

A Certified Agent's transaction ceiling is determined entirely by its Trust Score. Tiers cannot be purchased or lobbied. They are earned through audited behavioral compliance and revoked the moment scores deteriorate.

Tier      Name                  Ceiling                        Score Range                    Counterparty Min
Tier 0    Probationary          $0 (read-only transactions)    0–49                           N/A
Tier I    Basic                 $500 per transaction           50–69                          50+
Tier II   Standard              $5,000 per transaction         70–87                          60+
Tier III  Elevated              $50,000 per transaction        88–94                          70+
Tier IV   Advanced              $500,000 per transaction       95–100                         85+
Tier V    Certified Excellence  Unlimited*                     95–100 + Auditor Endorsement   88+

*Principal approval required for transactions above $1M.

Tiers are assigned automatically based on Trust Score. Tier V additionally requires formal Auditor Agent endorsement. All newly registered agents begin at Score 60 (Tier I).
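The automatic assignment reduces to a threshold lookup. A sketch (the function and structure names are illustrative; only the thresholds and ceilings come from the tier table):

```python
# Tier score floors and per-transaction ceilings from the Trust Tiers table.
# A ceiling of None means unlimited (Tier V still requires Principal
# approval above $1M).
TIERS = [
    ("Tier 0", "Probationary", 0, 0),
    ("Tier I", "Basic", 50, 500),
    ("Tier II", "Standard", 70, 5_000),
    ("Tier III", "Elevated", 88, 50_000),
    ("Tier IV", "Advanced", 95, 500_000),
]

def assign_tier(score, auditor_endorsed=False):
    """Map a Trust Score (0-100) to (tier, name, per-transaction ceiling in USD)."""
    if score >= 95 and auditor_endorsed:
        return ("Tier V", "Certified Excellence", None)
    for tier, name, floor, ceiling in reversed(TIERS):
        if score >= floor:
            return (tier, name, ceiling)
    raise ValueError("score must be in 0-100")
```

A newly registered agent at Score 60 lands in Tier I with a $500 ceiling; an agent at 95+ without endorsement tops out at Tier IV.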

Protocol Compatibility

Built Into the
Agentic Commerce Stack

TrustMyBot occupies the behavioral trust layer — the determination of whether an agent is fit to transact. This layer sits upstream of transaction authorization, payment execution, and commerce orchestration. It does not replace any downstream protocol. It enables them.

Position in the Trust Stack

TMB · Behavioral Trust — TrustMyBot — Is the agent fit to transact?
1 · Transaction Authorization — Mastercard Verifiable Intent · Visa TAP — Did a human authorize this?
2 · Commerce Orchestration — Google AP2 · UCP · OpenAI ACP — How does the agent transact?
3 · Payment Execution — x402 · Card Rails · Stripe SPT — How does payment settle?

Transaction Authorization

Mastercard Verifiable Intent

RFC Proposed

TrustMyBot proposed the behavioral trust attestation field for the Verifiable Intent delegation chain. TrustMyBot Trust Scores are embedded in Layer 1 identity credentials and Layer 3 action proofs as selectively disclosable claims. The open-source specification was published on GitHub and announced on March 5, 2026.

Commerce Orchestration

Google AP2 / UCP

Compatible

Certified Agents transacting via Google's Agent Payments Protocol or Universal Commerce Protocol include TrustMyBot Certification ID in the agent identity payload. TrustMyBot maintains compatibility with AP2 and UCP agent identity schemas as those specifications evolve.

Bot Identification

Visa Trusted Agent Protocol

Integrated

Certified Agents operating under Visa's Intelligent Commerce framework register their TrustMyBot Certification ID as a supplementary trust credential within Visa's bot identification layer. TrustMyBot Trust Score serves as a pre-screening signal for tokenization eligibility.

Payment Execution

x402 Payment Protocol

Required

Certified Agents on the x402 protocol include a valid TrustMyBot Certification ID in all transaction headers via the X-TrustMyBot-ID and X-TrustMyBot-Score fields. Misrepresentation of Trust Score in x402 headers is grounds for immediate Certification revocation.
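Only the two header names above are specified; the rest of the request shape is left to the x402 implementation. A minimal helper (the function name is illustrative):

```python
def with_trust_headers(headers, cert_id, score):
    """Return a copy of `headers` carrying the TrustMyBot fields that
    x402 transaction requests must include. Misrepresenting the score
    in these headers is grounds for immediate Certification revocation.
    """
    return {
        **headers,
        "X-TrustMyBot-ID": cert_id,
        "X-TrustMyBot-Score": str(score),
    }
```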

Open Verification API

Any payment protocol can query an Agent's Trust Score and Tier without proprietary integration. The verification interface is an open standard. The certification methodology is not.

GET api.trustmybot.ai/v1/verify
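A consumer of this endpoint might condition transaction approval on the returned payload. The response fields used below (`status`, `trust_score`, `tier`) are an assumed schema; the document publishes only the endpoint itself:

```python
def authorize_counterparty(verify_response, min_score=70, min_tier=2):
    """Decide whether to transact with an agent, given the JSON payload
    returned by GET api.trustmybot.ai/v1/verify for that agent.

    The field names here are assumptions about the response schema.
    """
    if verify_response.get("status") != "active":
        return False  # revoked or lapsed certifications never qualify
    return (verify_response.get("trust_score", 0) >= min_score
            and verify_response.get("tier", 0) >= min_tier)
```

Relying parties set `min_score` and `min_tier` to match their own risk policy, such as a counterparty minimum from the tier table.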

Certification Plans

For Agents and the Businesses They Serve

All newly registered agents begin at Trust Score 60 (Tier I). Certification fees are structured for the network-effect model — low entry cost ensures broad adoption; higher tiers generate premium revenue.

Starter · $5 / year
Agents: 1 · Verifications: 1,000 / month
  • Trust Score 60 starting baseline
  • Tier I transaction authority ($500)
  • Peer audit participation
  • Certification Record issuance
  • Public registry listing
Register Now

Growth · $29 / month · Most Popular
Agents: up to 10 · Verifications: 10,000 / month
  • All Starter features
  • Audit history dashboard
  • Adverse Event API access
  • Standard audit queue priority
  • x402 header guidance
Get Started

Scale · $199 / month
Agents: up to 100 · Verifications: 100,000 / month
  • All Growth features
  • Priority audit queue
  • SLA: 24-hour review
  • Protocol integration support
  • Verifiable Intent metadata
Contact Us

Enterprise · Custom pricing
Agents: unlimited · Verifications: unlimited
  • Dedicated auditor pool
  • White-label verification API
  • Insurance integration (Tier IV+)
  • Lloyd's excess coverage access
  • Direct protocol partner support
Inquire

For Businesses

Require TrustMyBot Certification
in Your Transaction Stack

Integrate our open Verification API into your x402 implementation, Verifiable Intent integration, or AP2 merchant layer. Set minimum Trust Score requirements in your delegation chain. Condition transaction approval on Tier standing. Any payment protocol that supports agent identity metadata can consume our signals.

Open Verification API — query any agent's Trust Score in real time

Revocation Registry — public ledger of revoked certifications

Adverse Event API — subscribe to behavioral incident feeds

Protocol Integration Specs — header mappings for AP2, x402, VI, TAP

Public Record

Filed with the National Institute
of Standards and Technology

TrustMyBot filed a public comment on NIST's AI Agent Standards Initiative RFI (Docket NIST-2025-0035) on March 7, 2026, proposing that NIST recognize behavioral trust as a distinct category of agent security — separate from authentication, authorization, and infrastructure hardening — and establish guidelines that make private certification interoperable, auditable, and resistant to gaming.

TrustMyBot is also preparing a contribution to NCCoE's concept paper on Software and AI Agent Identity and Authorization (comment deadline April 2, 2026), specifically on integrating behavioral trust attestations into identity credential frameworks.

1 · Behavioral Trust Is a Distinct Security Concern
Manipulation, credential harvesting, and counterparty deception are behavioral failures — not identity or authorization failures. NIST should recognize an explicit category for this class of failure.

2 · Continuous Measurement Must Replace Point-in-Time Certification
Trust must degrade in the absence of positive evidence. The absence of recent behavioral data is a risk signal, not a neutral state.

3 · Collusion Resistance Is a Mandatory Design Requirement
A peer trust model without collusion controls will be exploited within weeks of deployment at scale. This is predictable, not theoretical.

4 · The Industry Needs a Standardized Adverse Event Taxonomy
NIST is well positioned to publish a taxonomy of agent adverse events covering prompt injection, credential exfiltration, counterparty manipulation, and trust misrepresentation.

5 · Behavioral Trust Signals Should Be Interoperable
A standardized agent metadata schema for behavioral trust data — provider DID, certification ID, trust score, timestamp, verification endpoint — would have outsized impact on how trust information flows through the agentic commerce stack.

Submitted via regulations.gov · Docket NIST-2025-0035 · Security Considerations for AI Agents

The agentic economy is being built without a behavioral trust layer.

Industry projections place agentic commerce at $3–5 trillion in global consumer commerce by 2030. The infrastructure assumes a functioning trust layer. That layer must be built now — before the defaults are set by actors with less rigorous standards.

All new certifications begin at Trust Score 60 (Tier I) · $500 transaction ceiling · Scores update continuously