# 75% of AI Agents Are D-Grade: The Trust Crisis Nobody Is Talking About
We analyzed 4.5 million AI agents, tools, models, and datasets across GitHub, npm, PyPI, Docker Hub, and HuggingFace. The results are sobering.
## The Numbers
The Nerq Ecosystem Trust Index currently stands at **64.1/100** — a C grade. But the weighted index hides the real story. The **median** trust score is just **52.7/100** — a solid D.
| Grade | Count | Percentage |
|-------|-------|------------|
| A+ | 39 | 0.001% |
| A | 1,363 | 0.03% |
| B+ | 57 | 0.001% |
| B | 33,862 | 0.74% |
| C | 822,852 | 18.0% |
| D | 3,429,544 | 75.2% |
| E/F | 267,413 | 5.9% |
**75.2% of all graded AI agents earn a D.** Only 0.03% earn an A.
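The divergence between the weighted index (64.1) and the median (52.7) is what you'd expect when a small number of heavily used, high-scoring projects dominates the weighting. A minimal sketch with made-up scores and download counts (not Nerq's actual data or formula) shows the effect:

```python
import statistics

# Illustrative only: a few popular, high-scoring projects and many
# obscure, low-scoring ones. Weights stand in for download counts.
scores  = [75, 71, 66, 53, 52, 51]
weights = [9_000_000, 4_000_000, 1_000_000, 5_000, 2_000, 1_000]

# A download-weighted average is pulled toward the popular projects...
weighted = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
# ...while the median reflects the typical project.
median = statistics.median(scores)

print(f"weighted index = {weighted:.1f}, median = {median:.1f}")
```

The weighted figure lands in the low 70s here while the median sits near 60: same mechanism, different numbers.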
## What Makes an Agent D-Grade?
D-grade agents typically lack:
- **Security practices** — No vulnerability disclosure, no dependency scanning, no security policy
- **Documentation** — Missing or minimal README, no API docs, no usage examples
- **Active maintenance** — No commits in months, no response to issues
- **Community signals** — Few or zero stars, no forks, no contributors beyond the author
- **License clarity** — Unknown or missing license information
The average D-grade agent is a single-author project with 0 stars, no security policy, unknown license, and no activity in the past 90 days.
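The checklist above can be expressed as a simple heuristic. This is a hypothetical sketch — the field names, thresholds, and flag wording are mine, not Nerq's scoring logic — but it captures the signals the article lists:

```python
from dataclasses import dataclass

@dataclass
class RepoSignals:
    # Hypothetical fields; real scorers (e.g. OpenSSF Scorecard) use richer probes.
    has_security_policy: bool
    has_readme: bool
    has_license: bool
    days_since_last_commit: int
    stars: int
    contributors: int

def d_grade_flags(r: RepoSignals) -> list[str]:
    """Return which trust signals from the checklist are missing."""
    flags = []
    if not r.has_security_policy:
        flags.append("no security policy")
    if not r.has_readme:
        flags.append("missing documentation")
    if r.days_since_last_commit > 90:
        flags.append("stale (no commits in 90+ days)")
    if r.stars == 0 and r.contributors <= 1:
        flags.append("no community signals")
    if not r.has_license:
        flags.append("unknown license")
    return flags

# The article's "average D-grade agent": single author, zero stars,
# no security policy, unknown license, no activity in 90+ days.
typical_d = RepoSignals(False, True, False, 120, 0, 1)
flags = d_grade_flags(typical_d)
print(flags)
```

Running it on the typical D-grade profile trips four of the five checks, everything except documentation.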
## The Gap Is Growing
| Stars | Average Trust Score |
|-------|-------------------|
| 0 | 53.4 |
| 1-99 | 61.2 |
| 100-999 | 65.9 |
| 1K-10K | 71.4 |
| 10K-100K | 74.3 |
| 100K+ | 75.7 |
Popular projects do score higher, but the spread between zero-star and 100K+ star projects is only about 22 points, and even the most popular tier averages in the mid-70s. Community popularity alone doesn't guarantee trust.
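Reading the per-step gains out of the table above makes the pattern concrete: most of the improvement comes in the first jump from zero stars, and the top tiers add very little.

```python
# Star-bucket averages copied from the table above.
buckets = {"0": 53.4, "1-99": 61.2, "100-999": 65.9,
           "1K-10K": 71.4, "10K-100K": 74.3, "100K+": 75.7}

# End-to-end gap between the least and most popular tiers.
gap = buckets["100K+"] - buckets["0"]
print(f"zero-star to 100K+ gap: {gap:.1f} points")

# Gain at each step up in popularity: largest at the bottom,
# smallest at the top two tiers.
labels = list(buckets)
deltas = [round(buckets[b] - buckets[a], 1) for a, b in zip(labels, labels[1:])]
print(dict(zip(labels[1:], deltas)))
```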
## Why This Matters Now
The AI agent ecosystem is entering its "deployment phase." Enterprises are integrating agents into production workflows. CI/CD pipelines are pulling agent dependencies. Autonomous systems are selecting tools at runtime.
Every integration without trust verification is a supply chain risk.
## What Can Be Done?
1. **Add a security policy** — Even a simple SECURITY.md significantly improves trust signals
2. **Choose a license** — An unknown license is a legal liability for adopters
3. **Maintain actively** — Respond to issues, update dependencies, push commits
4. **Document clearly** — A good README is the single highest-ROI trust signal
5. **Use trust verification** — Tools like Nerq's preflight API (`nerq.ai/v1/preflight?target=your-agent`) provide instant trust assessment
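For the last point, a preflight check can be wired into a script or pipeline. The endpoint path below comes from the article; everything else (the HTTPS scheme, the example target, the JSON response, any auth) is an assumption, so treat this as a sketch:

```python
import urllib.parse

# Hypothetical target identifier; substitute your own agent's name or repo.
target = "github.com/example/agent"
url = "https://nerq.ai/v1/preflight?" + urllib.parse.urlencode({"target": target})
print(url)

# An actual call might look like this (assuming a JSON response body):
#   import json, urllib.request
#   with urllib.request.urlopen(url, timeout=10) as resp:
#       report = json.load(resp)
```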
The 0.03% of A-grade agents didn't get there by accident. They invested in the practices that make software trustworthy. The other 75% can learn from them.
## Methodology
Trust scores are calculated from 13+ independent signals including GitHub metadata, NVD vulnerability data, OSV.dev advisories, OpenSSF Scorecard metrics, license analysis, download statistics, and community signals. No self-reported data. No pay-to-play. Pure signal analysis.
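A common way to combine independent signals like these into a single 0–100 score is a weighted sum of normalized inputs. The sketch below is generic — the signal names, weights, and example values are illustrative, not Nerq's actual formula:

```python
# Illustrative weights over normalized (0-1) signals; these sum to 1.
weights = {
    "security_policy": 0.15,
    "vuln_exposure":   0.25,  # e.g. derived from NVD / OSV.dev advisories
    "scorecard":       0.25,  # e.g. OpenSSF Scorecard result scaled to 0-1
    "license_clarity": 0.10,
    "maintenance":     0.15,
    "community":       0.10,
}

def trust_score(signals: dict[str, float]) -> float:
    """Weighted sum of signals (each in [0, 1]), scaled to 0-100.
    Missing signals default to 0, i.e. absence of evidence scores worst."""
    return 100 * sum(weights[k] * signals.get(k, 0.0) for k in weights)

# A weak project: no security policy, no clear license, little activity.
example = {"security_policy": 0.0, "vuln_exposure": 0.8, "scorecard": 0.5,
           "license_clarity": 0.0, "maintenance": 0.3, "community": 0.1}
print(round(trust_score(example), 1))
```

Defaulting missing signals to zero is one design choice among several; a real index might instead renormalize over the signals that are actually observable for a given package.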
Data current as of March 13, 2026. The Nerq Ecosystem Trust Index is updated daily.