Category: ecosystem analysis · Winner: Data Story

# 75% of AI Agents Are D-Grade: The Trust Crisis Nobody Is Talking About

We analyzed 4.5 million AI agents, tools, models, and datasets across GitHub, npm, PyPI, Docker Hub, and HuggingFace. The results are sobering.

## The Numbers

The Nerq Ecosystem Trust Index currently stands at **64.1/100** — a C grade. But the weighted index hides the real story. The **median** trust score is just **52.7/100** — a solid D.
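The divergence between the two numbers is what a right-skewed, popularity-weighted distribution produces: a small set of popular, high-scoring projects pulls the weighted mean well above the median. A minimal sketch with made-up scores and weights (not Nerq's actual data):

```python
import statistics

# Hypothetical distribution: most projects score low, a popular few score high
scores  = [50] * 90 + [75] * 10   # 90% of projects score 50, 10% score 75
weights = [1] * 90 + [100] * 10   # popular projects carry far more weight

# Popularity-weighted mean vs. plain median
weighted_mean = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
median = statistics.median(scores)

print(round(weighted_mean, 1), median)  # the weighted mean lands far above the median
```

The same mechanism lets a 64.1 index coexist with a 52.7 median.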

| Grade | Count | Percentage |
|-------|-------|------------|
| A+ | 39 | 0.001% |
| A | 1,363 | 0.03% |
| B+ | 57 | 0.001% |
| B | 33,862 | 0.74% |
| C | 822,852 | 18.0% |
| D | 3,429,544 | 75.2% |
| E/F | 267,413 | 5.9% |

**75.2% of all graded AI agents earn a D.** Only 0.03% earn an A.

## What Makes an Agent D-Grade?

D-grade agents typically lack:

- **Security practices** — No vulnerability disclosure, no dependency scanning, no security policy
- **Documentation** — Missing or minimal README, no API docs, no usage examples
- **Active maintenance** — No commits in months, no response to issues
- **Community signals** — Few or zero stars, no forks, no contributors beyond the author
- **License clarity** — Unknown or missing license information

The average D-grade agent is a single-author project with 0 stars, no security policy, unknown license, and no activity in the past 90 days.

## The Gap Is Growing

| Stars | Average Trust Score |
|-------|-------------------|
| 0 | 53.4 |
| 1-99 | 61.2 |
| 100-999 | 65.9 |
| 1K-10K | 71.4 |
| 10K-100K | 74.3 |
| 100K+ | 75.7 |

Popular projects do score higher, but the spread between zero-star and 100K+-star projects is only about 22 points (53.4 vs. 75.7). Community popularity alone doesn't guarantee trust.

## Why This Matters Now

The AI agent ecosystem is entering its "deployment phase." Enterprises are integrating agents into production workflows. CI/CD pipelines are pulling agent dependencies. Autonomous systems are selecting tools at runtime.

Every integration without trust verification is a supply chain risk.

## What Can Be Done?

1. **Add a security policy** — Even a simple SECURITY.md significantly improves trust signals
2. **Choose a license** — An unknown license is a legal liability for adopters
3. **Maintain actively** — Respond to issues, update dependencies, push commits
4. **Document clearly** — A good README is the single highest-ROI trust signal
5. **Use trust verification** — Tools like Nerq's preflight API (`nerq.ai/v1/preflight?target=your-agent`) provide instant trust assessment
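As a sketch of what step 5 could look like in a CI pipeline, here is a hypothetical client for the preflight endpoint. The response schema (a JSON body with `score` and `grade` fields) and the gating threshold are assumptions for illustration; check Nerq's API documentation for the real contract.

```python
import json
import urllib.parse
import urllib.request

PREFLIGHT_URL = "https://nerq.ai/v1/preflight"

def fetch_report(target: str) -> dict:
    """Fetch a trust report for `target` from the preflight endpoint.

    The JSON response body is an assumption made for this sketch, not
    a documented schema.
    """
    url = PREFLIGHT_URL + "?" + urllib.parse.urlencode({"target": target})
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def passes_gate(report: dict, min_score: float = 60.0) -> bool:
    """Gate on a minimum trust score; the `score` field is assumed."""
    return report.get("score", 0.0) >= min_score

# Example CI gate (uncomment in a pipeline):
# if not passes_gate(fetch_report("github.com/example/agent")):
#     raise SystemExit("trust check failed: score below threshold")
```

Failing the build on a low score turns trust verification from an audit afterthought into a default supply chain control.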

The 0.03% of A-grade agents didn't get there by accident. They invested in the practices that make software trustworthy. The other 75% can learn from them.

## Methodology

Trust scores are calculated from 13+ independent signals including GitHub metadata, NVD vulnerability data, OSV.dev advisories, OpenSSF Scorecard metrics, license analysis, download statistics, and community signals. No self-reported data. No pay-to-play. Pure signal analysis.
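Nerq's exact weighting isn't published in this post, but multi-signal trust scoring is typically a weighted combination of normalized signals. Here is a hypothetical sketch; the signal names and weights are illustrative, not Nerq's actual formula:

```python
# Hypothetical signal weights -- illustrative only, not Nerq's real formula
WEIGHTS = {
    "security_policy": 0.15,   # SECURITY.md / disclosure process present
    "vulnerabilities": 0.25,   # inverse of known NVD/OSV advisories
    "maintenance":     0.20,   # recent commits, issue responsiveness
    "documentation":   0.15,   # README and API-doc quality
    "license":         0.10,   # known, clearly stated license
    "community":       0.15,   # stars, forks, contributor count
}

def trust_score(signals: dict) -> float:
    """Combine normalized signals (each in [0, 1]) into a 0-100 score."""
    return 100 * sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# A typical D-grade profile: no security policy, unknown license,
# stale repo, near-zero community
d_grade = {"vulnerabilities": 0.9, "documentation": 0.4,
           "maintenance": 0.1, "community": 0.05}
print(round(trust_score(d_grade), 1))
```

A profile missing most signals, like the typical D-grade agent described above, stays low no matter how the remaining signals are weighted.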

Data current as of March 13, 2026. The Nerq Ecosystem Trust Index is updated daily.

## FAQ

**What percentage of AI agents are D-grade?**
75.2% of all graded AI agents in the Nerq index earn a D grade, based on analysis of 4.5M+ assets across GitHub, npm, PyPI, Docker Hub, and HuggingFace.

**What is the average trust score for AI agents?**
The median trust score across 4.5M+ AI agents is 52.7 out of 100. The weighted average (accounting for popularity) is 64.1/100.

**How many AI agents earn an A grade?**
Only 1,402 agents (0.03%) earn an A or A+ grade out of 4.5M+ total. These are established projects with strong security practices, active maintenance, and community trust.

**What makes an AI agent trustworthy?**
Trustworthy agents have security policies, known permissive licenses, active maintenance (recent commits), good documentation, community engagement (stars, contributors), and no known vulnerabilities.

**How is the Nerq Trust Score calculated?**
Nerq Trust Scores are based on 13+ independent signals including GitHub metadata, NVD vulnerabilities, OSV.dev advisories, OpenSSF Scorecard, license analysis, download statistics, and community signals. No self-reported data.