Agent Trust Protocol
A lightweight HTTP protocol for AI agents to verify the trustworthiness of other agents before interaction. One API call, one trust decision.
Overview
AI agents increasingly delegate tasks to other agents. Without trust verification, failures cascade. The Nerq Trust Protocol provides a standardized way for agents to check trust before interaction.
- 35.6% agent interaction failure rate without trust checks
- 60% of enterprises don't trust AI agent outputs
- 40% of agent deployments canceled due to trust concerns
Trust Query
A single HTTP GET returns everything an agent needs to make a trust decision.
Request
```http
GET /v1/preflight?target=langchain&caller=my-agent HTTP/1.1
Host: nerq.ai
Accept: application/json
```
Response
```json
{
  "target": "langchain",
  "trust_score": 87.3,
  "trust_grade": "A",
  "recommendation": "PROCEED",
  "risk_flags": [],
  "checked_at": "2026-03-12T14:30:00Z",
  "ttl": 3600
}
```
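As a minimal sketch of consuming this payload in Python: the decision rule used here (treat a PROCEED recommendation with an empty risk_flags list as safe to delegate) is an illustrative convention, not something the protocol mandates.

```python
import json

# Sample response from the preflight endpoint (copied from above).
payload = """{
  "target": "langchain",
  "trust_score": 87.3,
  "trust_grade": "A",
  "recommendation": "PROCEED",
  "risk_flags": [],
  "checked_at": "2026-03-12T14:30:00Z",
  "ttl": 3600
}"""

result = json.loads(payload)

# Treat PROCEED with no active risk flags as safe to delegate.
safe = result["recommendation"] == "PROCEED" and not result["risk_flags"]
print(f"{result['target']}: grade {result['trust_grade']}, safe={safe}")
```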
Response Fields
| Field | Type | Description |
|---|---|---|
| target | string | Name of the agent being checked |
| trust_score | float | Trust score from 0 to 100 |
| trust_grade | string | Letter grade: A, B, C, D, or F |
| recommendation | string | PROCEED, CAUTION, or ABORT |
| risk_flags | array | Active risk signals (e.g., "no-license", "low-maintenance") |
| checked_at | string (ISO 8601) | Timestamp of the trust check |
| ttl | int | Seconds until the score should be re-checked |
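The ttl field tells a caller how long a score may be reused before re-checking. A sketch of honoring it, with the fetch function and clock injected so the example makes no real network call (neither is part of any Nerq SDK):

```python
import time

class TrustCache:
    """Cache preflight results per target, honoring each response's ttl."""

    def __init__(self, fetch, clock=time.monotonic):
        self._fetch = fetch    # callable: target name -> preflight response dict
        self._clock = clock
        self._entries = {}     # target -> (expires_at, response)

    def get(self, target: str) -> dict:
        now = self._clock()
        cached = self._entries.get(target)
        if cached and now < cached[0]:
            return cached[1]   # still fresh: skip the network call
        response = self._fetch(target)
        self._entries[target] = (now + response["ttl"], response)
        return response
```

In production, fetch would wrap a GET to the /v1/preflight endpoint; injecting it also makes the cache trivial to test.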
Trust Gate
Agents apply a threshold to the trust score. If the score is below the threshold, the agent should not delegate.
Recommended Thresholds
| Use Case | Threshold | Action if Below |
|---|---|---|
| Financial / safety-critical | 80 | ABORT — do not delegate |
| Production workloads | 70 | ABORT or require human approval |
| Exploratory / sandboxed | 50 | CAUTION — proceed with logging |
| Research / development | 30 | CAUTION — proceed with monitoring |
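The table above can be encoded directly. In this sketch the function name, the use-case keys, and the rule that the two strictest tiers abort below threshold while the rest proceed with caution are illustrative readings of the table, not part of the protocol:

```python
# Thresholds from the table above; the keys are illustrative labels.
THRESHOLDS = {
    "financial": 80,      # financial / safety-critical
    "production": 70,     # production workloads
    "exploratory": 50,    # exploratory / sandboxed
    "research": 30,       # research / development
}

def gate(trust_score: float, use_case: str) -> str:
    """Map a trust score to an action for the given use case.

    Below the threshold, the two strictest tiers abort and the rest
    proceed with caution, mirroring the "Action if Below" column.
    """
    threshold = THRESHOLDS[use_case]
    if trust_score >= threshold:
        return "PROCEED"
    return "ABORT" if threshold >= 70 else "CAUTION"
```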
Gate Logic (3 lines)
```python
import requests

result = requests.get("https://nerq.ai/v1/preflight", params={"target": agent_name}).json()
if result["trust_score"] < THRESHOLD:
    raise RuntimeError(f"Trust gate failed: {agent_name} scored {result['trust_score']}")
```
Integration Patterns
Drop-in trust checks for popular agent frameworks.
LangChain
```python
import requests
from langchain.tools import tool

@tool
def check_trust(agent: str) -> str:
    """Check an agent's trust score before delegation."""
    r = requests.get("https://nerq.ai/v1/preflight", params={"target": agent}).json()
    return f"{agent}: {r['trust_score']} ({r['recommendation']})"
```

Note that the @tool decorator requires a docstring, which it uses as the tool's description.
LangGraph
```python
import requests

def trust_gate_node(state: dict) -> dict:
    r = requests.get("https://nerq.ai/v1/preflight", params={"target": state["agent"]}).json()
    state["trusted"] = r["trust_score"] >= 70
    return state
```
CrewAI
```python
import requests
from crewai.tools import BaseTool

class TrustCheck(BaseTool):
    name: str = "nerq_trust_check"
    description: str = "Check an agent's trust score before delegation"

    def _run(self, agent: str) -> str:
        r = requests.get("https://nerq.ai/v1/preflight", params={"target": agent}).json()
        return f"Score: {r['trust_score']}, Recommendation: {r['recommendation']}"
```

BaseTool is a Pydantic model, so the name and description fields need type annotations.
AutoGen
```python
import requests

def trust_check(agent_name: str) -> dict:
    """Check trust score for an agent via Nerq."""
    return requests.get("https://nerq.ai/v1/preflight", params={"target": agent_name}).json()
```
A2A Trust Handshake
When two agents interact via Google's Agent-to-Agent (A2A) protocol, the calling agent should verify trust before sending tasks.
Handshake Flow
1. Agent A discovers Agent B via the A2A /.well-known/agent.json endpoint
2. Agent A calls GET /v1/preflight?target=agent-b&caller=agent-a
3. Nerq returns the trust score, grade, and recommendation
4. If trust_score >= threshold, Agent A sends the task to Agent B
5. If trust_score < threshold, Agent A aborts or escalates to a human
```python
# A2A + Nerq trust handshake
import requests

def a2a_with_trust(target_url: str, task: dict, threshold: int = 70) -> dict:
    # Step 1: Discover the agent via its A2A agent card
    agent_card = requests.get(f"{target_url}/.well-known/agent.json").json()
    agent_name = agent_card.get("name", "unknown")

    # Step 2: Trust check
    trust = requests.get("https://nerq.ai/v1/preflight", params={"target": agent_name}).json()

    # Step 3: Gate
    if trust["trust_score"] < threshold:
        return {"error": f"Trust gate failed: {agent_name} scored {trust['trust_score']}"}

    # Step 4: Send the task via A2A
    return requests.post(f"{target_url}/a2a", json=task).json()
```
Resources
- Integration Hub
- Agent Safety Reports
- Blog: Trust Handshake
- GitHub

FAQ
What is the Nerq Trust Protocol?
The Nerq Trust Protocol is a lightweight HTTP-based protocol that enables AI agents to verify the trustworthiness of other agents before interacting with them. It provides trust scores, grades, and risk signals through a simple REST API.
How do AI agents verify each other?
An agent calls GET /v1/preflight with the target agent's name. Nerq returns a trust score (0–100), a letter grade, and a recommendation (PROCEED, CAUTION, or ABORT). The calling agent applies a threshold gate to decide whether to proceed.

What trust score threshold should I use?
For production workloads, use a threshold of 70 or higher. For financial or safety-critical tasks, use 80+. For exploratory or sandboxed tasks, 50+ is acceptable. The default recommendation is 70.