AI Summary: The OECD AI Principles are a voluntary AI governance framework with international (INTL) scope, focused on governance principles. Maximum penalty: N/A (voluntary principles). Nerq has assessed 41,734 agents against the applicable risk classifications.

AI Agent Compliance: OECD AI Principles

The OECD AI Principles are a voluntary framework focused on governance principles with international (INTL) scope. They establish a principles-based approach to AI governance; because the principles are voluntary, there are no penalties for non-compliance (maximum penalty: N/A).

Overview

Status: voluntary
Effective Date: 2024-05-02
Region: INTL
Country: INTL
Focus Area: governance_principles
Max Penalty: N/A (voluntary principles)
Per Violation: N/A
Source: Official text

Risk Model

principles_based

Risk Classes

High-Risk Criteria

Requirements

Agent Risk Distribution

Minimal: 41,126 agents
High: 401 agents
Limited: 207 agents
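The distribution above can be cross-checked against the 41,734 assessed agents cited in the summary with a quick sum:

```python
# Agent counts per risk class under the OECD AI Principles, as reported above.
distribution = {"minimal": 41_126, "high": 401, "limited": 207}

total = sum(distribution.values())
print(total)  # 41734, matching the total number of assessed agents
```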

Top Agents by Compliance Score

Agent | Compliance | Risk Class | Trust Score
openagents 100.0 minimal 90.0
manaflow-ai/cmux 100.0 minimal 87.0
auth0/auth0-mcp-server 100.0 minimal 85.3
donaldfilimon/abi 100.0 minimal 81.4
mcp-sequentialthinking-tools 100.0 minimal 81.2
atelier 100.0 minimal 80.0
attune-ai 100.0 minimal 79.2
gizmax/Sandcastle 100.0 minimal 77.7
hivemoot/hivemoot 100.0 minimal 77.6
coo-quack/calc-mcp 100.0 minimal 76.8

Frequently Asked Questions

What are the compliance requirements under OECD AI Principles?

The OECD AI Principles set out voluntary standards for AI systems focused on governance principles; as a voluntary framework, these are recommendations rather than binding regulatory requirements. Nerq automatically checks AI agents against these standards.

How does OECD AI Principles classify AI risk?

The OECD AI Principles use a principles-based model rather than formal statutory risk tiers. Nerq maps each agent to an applicable risk class (minimal, limited, or high).
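As an illustration only (Nerq's actual mapping logic is not published on this page), a principles-based classifier might derive a risk class from an agent's compliance score. The thresholds below are hypothetical:

```python
def classify(compliance_score: float) -> str:
    """Map a 0-100 compliance score to a risk class.

    The 90/60 thresholds are hypothetical, not Nerq's documented logic.
    """
    if compliance_score >= 90.0:
        return "minimal"
    if compliance_score >= 60.0:
        return "limited"
    return "high"

print(classify(100.0))  # "minimal", like the top agents in the table above
print(classify(45.0))   # "high"
```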

What are the penalties under OECD AI Principles?

The OECD AI Principles are voluntary, so there are no penalties for non-compliance (maximum penalty: N/A). Nerq can still be used to identify gaps against the principles before binding regulations based on them are enforced.

