How AI Agents Verify Each Other: The Trust Handshake
March 12, 2026 · Nerq Team
35.6% of agent interactions fail without trust checks. Here’s how the Nerq Trust Protocol solves agent-to-agent verification in three lines of code.
The Problem: Blind Delegation
Multi-agent systems are growing fast. LangGraph workflows chain multiple agents. CrewAI assembles agent teams. AutoGen orchestrates agent conversations. But none of them answer a critical question: should this agent be trusted?
When Agent A delegates a research task to Agent B, it has no way to know if Agent B is well-maintained, has a track record, or is safe to use. The result: cascading failures, hallucinated data passed between agents, and production outages.
The Solution: Nerq Trust Protocol
The Nerq Trust Protocol adds a single verification step before any agent-to-agent interaction. One HTTP call. One trust decision.
The Trust Handshake in 3 Steps
Step 1: Agent A identifies a candidate agent to delegate to.
Step 2: Agent A calls Nerq’s preflight API to check the candidate’s trust score.
Step 3: Agent A applies a threshold gate. If the score is high enough, it proceeds. If not, it aborts or finds an alternative.
```python
import requests

# One API call to check trust
result = requests.get(
    "https://nerq.ai/v1/preflight", params={"target": "gpt-researcher"}
).json()

# One trust decision
if result["trust_score"] < 70:
    raise RuntimeError(f"Trust gate failed: scored {result['trust_score']}")
```
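The snippet above assumes the HTTP call succeeds and the response always contains `trust_score`. A slightly more defensive version is worth the extra lines in production (a sketch; the timeout value and the fail-closed default are our choices, not part of the protocol):

```python
import requests

NERQ_API = "https://nerq.ai"

def check_trust(target: str, threshold: int = 70, timeout: float = 5.0) -> bool:
    """Return True only if the preflight check succeeds and clears the threshold.

    Fails closed: network errors, bad status codes, or a missing
    trust_score field all count as untrusted.
    """
    try:
        resp = requests.get(
            f"{NERQ_API}/v1/preflight", params={"target": target}, timeout=timeout
        )
        resp.raise_for_status()
        score = resp.json().get("trust_score", 0)
    except (requests.RequestException, ValueError):
        return False
    return score >= threshold
```

Failing closed means an outage of the trust service blocks delegation rather than silently allowing it, which is usually what you want for a safety gate.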
Code Walkthrough: LangGraph + Trust Gate
Here’s a complete LangGraph workflow where Agent A (a researcher) checks trust before delegating to a sub-agent.
```python
from langgraph.graph import StateGraph
import requests

NERQ_API = "https://nerq.ai"
THRESHOLD = 70

def discover_agents(state):
    """Find candidate agents for the task."""
    state["candidates"] = ["gpt-researcher", "crewai", "autogpt"]
    return state

def trust_gate(state):
    """Check trust for each candidate. Pick the best trusted one."""
    best = None
    for agent_name in state["candidates"]:
        r = requests.get(f"{NERQ_API}/v1/preflight", params={"target": agent_name}).json()
        score = r.get("trust_score", 0)
        if score >= THRESHOLD and (best is None or score > best["score"]):
            best = {"name": agent_name, "score": score, "grade": r.get("trust_grade")}
    state["delegate"] = best
    return state

def delegate_or_abort(state):
    """Delegate to the trusted agent, or abort."""
    if state["delegate"]:
        return {"result": f"Delegated to {state['delegate']['name']} (score: {state['delegate']['score']})"}
    return {"result": "No trusted agent found. Task aborted."}

# Build the graph
graph = StateGraph(dict)
graph.add_node("discover", discover_agents)
graph.add_node("trust_gate", trust_gate)
graph.add_node("delegate", delegate_or_abort)
graph.add_edge("discover", "trust_gate")
graph.add_edge("trust_gate", "delegate")
graph.set_entry_point("discover")
graph.set_finish_point("delegate")

app = graph.compile()
result = app.invoke({})
print(result)
```
The trust gate node sits between discovery and delegation. It’s a single node, a single API call per candidate, and it prevents your workflow from delegating to untrusted agents.
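The gate does cost one API call per candidate on every run. If your candidate pool is stable, caching scores for a short window is a reasonable optimization (a sketch; the TTL and the in-process dict are our assumptions, not part of the protocol):

```python
import time

class TrustCache:
    """In-process cache for trust scores with a time-to-live."""

    def __init__(self, fetch, ttl_seconds: float = 60.0):
        self._fetch = fetch        # callable: agent_name -> trust score
        self._ttl = ttl_seconds
        self._entries = {}         # agent_name -> (score, fetched_at)

    def score(self, agent_name: str) -> int:
        entry = self._entries.get(agent_name)
        if entry and time.monotonic() - entry[1] < self._ttl:
            return entry[0]        # fresh cache hit, no API call
        value = self._fetch(agent_name)
        self._entries[agent_name] = (value, time.monotonic())
        return value
```

To wire it in, replace the direct `requests.get` inside `trust_gate` with a call to `cache.score(agent_name)`. Keep the TTL short: a trust score is only useful if it is reasonably current.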
What Trust Scores Mean
| Score | Grade | Meaning |
|---|---|---|
| 90–100 | A | Highly trusted. Well-maintained, widely used, strong track record. |
| 70–89 | B | Trusted. Active development, reasonable adoption. |
| 50–69 | C | Caution. May have gaps in maintenance or documentation. |
| 30–49 | D | Low trust. Limited adoption, potential issues. |
| 0–29 | F | Untrusted. Do not delegate production tasks. |
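The bands in the table are easy to mirror locally when you want to log a letter grade alongside the raw score (a hypothetical helper; the API response already includes `trust_grade`, so this is only for offline display or testing):

```python
def grade_for(score: int) -> str:
    """Map a 0-100 trust score onto the A-F bands from the table above."""
    if score >= 90:
        return "A"
    if score >= 70:
        return "B"
    if score >= 50:
        return "C"
    if score >= 30:
        return "D"
    return "F"
```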
Next Steps
- Read the Full Protocol Spec
- LangGraph Integration Guide
- Agent Safety Reports
- FAQ
Try it yourself: add a `trust_gate` node to your LangGraph workflow that calls GET /v1/preflight before the delegation node. If the trust score is below your threshold, route to a fallback or abort node instead of proceeding.
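The fallback routing described above boils down to a router function that LangGraph's `add_conditional_edges` can consume. A minimal sketch (the node names `delegate` and `fallback` are placeholders for your own):

```python
def route_on_trust(state: dict) -> str:
    """Return the name of the next node based on the trust gate's outcome."""
    return "delegate" if state.get("delegate") else "fallback"

# Wiring it into the graph from the walkthrough (assumes a `fallback`
# node has been added alongside `delegate`):
# graph.add_conditional_edges(
#     "trust_gate", route_on_trust,
#     {"delegate": "delegate", "fallback": "fallback"},
# )
```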