# nerq-langchain

Trust-gate your LangChain agents. Preflight trust checks before every tool call.
## Installation

```bash
pip install nerq-langchain
```

Dependencies: `langchain-core`, `requests`. Python 3.9+.
## trust_gate

Wrap any LangChain agent with automatic preflight trust checks on every tool call.

```python
from nerq_langchain import trust_gate

agent = initialize_agent(tools, llm)
agent = trust_gate(agent, min_trust=60)  # Every tool call now gets a trust check
```
What happens on each tool call:
| Recommendation | Trust score | Behavior |
|---|---|---|
| PROCEED | ≥ 70 | Runs silently |
| CAUTION | 40–69 | Logs warning, proceeds |
| DENY | < 40 | Raises `TrustError` |
| UNKNOWN | — | Proceeds with warning |
| API unreachable | — | Proceeds with warning (never blocks) |
```python
from nerq_langchain import trust_gate, TrustError

agent = trust_gate(agent, min_trust=70)

try:
    result = agent.run("Analyze this code")
except TrustError as e:
    print(f"Blocked: {e.tool_name} (trust={e.trust_score})")
    # e.tool_name, e.trust_score, e.recommendation
```
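The decision table above can be sketched as a plain function. This is illustrative only — `gate_decision` is a hypothetical name, and the real `trust_gate` wraps the agent and raises `TrustError` with richer fields rather than a bare `RuntimeError`:

```python
import warnings


def gate_decision(tool_name, trust_score, recommendation):
    """Map a preflight result to run / warn / block, per the table above.

    Returns True to proceed; raises to block.
    (Sketch only -- the package raises TrustError, not RuntimeError.)
    """
    if recommendation == "DENY":
        raise RuntimeError(f"Blocked: {tool_name} (trust={trust_score})")
    if recommendation in ("CAUTION", "UNKNOWN") or trust_score is None:
        # Uncertainty warns but never hard-fails.
        warnings.warn(f"{tool_name}: proceeding with caution (trust={trust_score})")
    return True  # PROCEED (or warned): the tool call runs
```

The key design property: only an explicit DENY blocks; every ambiguous state (CAUTION, UNKNOWN, missing score) degrades to a warning.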
## NerqPreflight tool

A LangChain tool that agents can use directly to check trust on any agent or tool.
```python
from nerq_langchain import NerqPreflight

tool = NerqPreflight()
print(tool.run("SWE-agent"))
# Nerq Preflight: SWE-agent
# Recommendation: PROCEED
# Trust Score: 92.5/100
# Grade: A+
# Verified: Yes
# Category: security
# This agent is trusted. Safe to interact.
```
## NerqSearch tool

Find the best agent for any task from 204K+ indexed agents.
```python
from nerq_langchain import NerqSearch

tool = NerqSearch()
print(tool.run("code review"))
# Nerq Search: 'code review' — 5 results
# 1. SWE-agent/SWE-agent — Trust: 92.5, Category: security
# 2. skill-code-review — Trust: 86.0, Category: coding
# ...
```
## Full example

```python
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from nerq_langchain import NerqPreflight, NerqSearch, trust_gate

llm = ChatOpenAI(model="gpt-4")
tools = [NerqPreflight(), NerqSearch()]
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS)

# Trust-gate: block tools with trust < 60
agent = trust_gate(agent, min_trust=60)

result = agent.run("Is SWE-agent safe to use?")
```
## With LangGraph

```python
from langgraph.prebuilt import create_react_agent
from nerq_langchain import NerqPreflight, NerqSearch

tools = [NerqPreflight(), NerqSearch()]
agent = create_react_agent(model, tools)  # model: any LangChain chat model
```
## Why trust-gate your agents?

As AI agents delegate tasks to other agents, every interaction becomes a trust decision. An agent that accepts untrusted input or delegates to a compromised tool creates a chain of liability. `trust_gate` adds a zero-config trust layer: before any tool call executes, Nerq checks the tool's trust score across 204K+ indexed agents and makes a PROCEED/CAUTION/DENY decision in under 50 ms. No API key required. Graceful fallback if Nerq is unreachable.
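The never-block fallback can be sketched like this. The endpoint URL, parameter names, and JSON shape are assumptions for illustration, not the package's actual API:

```python
import requests


def preflight(tool_name, url, timeout=2.0):
    """Query a preflight endpoint; on any network failure, return a
    permissive UNKNOWN result so an outage never blocks the agent.
    (Sketch only -- endpoint URL and response shape are assumptions.)
    """
    try:
        resp = requests.get(url, params={"agent": tool_name}, timeout=timeout)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        # Graceful fallback: treat an unreachable API as UNKNOWN,
        # which the gate handles as "proceed with warning".
        return {"recommendation": "UNKNOWN", "trust_score": None}
```

Catching `requests.RequestException` covers timeouts, DNS failures, connection refusals, and HTTP errors in one branch, which is what makes the gate fail open rather than closed.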
PyPI · Preflight API · KYA — Know Your Agent · Full API Docs