AI Summary: The OECD AI Principles are a voluntary AI governance framework with international (INTL) scope, focused on governance principles. Maximum penalty: N/A (voluntary principles). Nerq has assessed 41,734 agents against the applicable risk classifications.

AI Agent Compliance: OECD AI Principles

The OECD AI Principles are a voluntary framework focused on governance principles with international (INTL) scope. They establish a principles-based model for trustworthy AI; as a voluntary instrument, they carry no penalties for non-compliance.

Overview

Status: Voluntary
Effective Date: 2024-05-02
Region: INTL
Country: INTL
Focus Area: Governance principles
Max Penalty: N/A (voluntary principles)
Per Violation: N/A
Source: Official text

Risk Model

Principles-based

Risk Classes

None defined. The OECD AI Principles use a principles-based model rather than tiered risk classes.

High-Risk Criteria

None defined (voluntary, principles-based framework).

Requirements

None listed.

Agent Risk Distribution

Risk Class  Agents
minimal     41,126
high           401
limited        207
Total       41,734
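As a quick sanity check, the three risk classes above account for all 41,734 assessed agents. The following sketch (illustrative only, not Nerq's actual code) verifies the total and computes each class's share:

```python
# Counts taken from the risk distribution above.
distribution = {"minimal": 41126, "high": 401, "limited": 207}

total = sum(distribution.values())
assert total == 41734  # matches the assessed-agent count in the summary

# Percentage share of each risk class, rounded to one decimal place.
shares = {cls: round(100 * n / total, 1) for cls, n in distribution.items()}
print(shares)  # → {'minimal': 98.5, 'high': 1.0, 'limited': 0.5}
```

The minimal class dominates at roughly 98.5% of assessed agents, with high and limited risk classes together under 1.5%.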

Top Agents by Compliance Score

Agent                                    Compliance  Risk Class  Trust Score
williamzujkowski/strudel-mcp-server           100.0  minimal            92.9
laravel/boost                                 100.0  minimal            91.2
openagents                                    100.0  minimal            91.2
harbor                                        100.0  minimal            90.5
microsoft/azure-devops-mcp                    100.0  minimal            90.3
opal                                          100.0  minimal            90.2
GoogleCloudPlatform/agent-starter-pack        100.0  minimal            90.1
laravel/mcp                                   100.0  minimal            90.0
agentgateway/agentgateway                     100.0  minimal            89.8
QwenLM/qwen-code                              100.0  minimal            89.7
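All agents in the table are tied at a 100.0 compliance score, so the listing falls back to trust score as the ordering key. A hypothetical sketch of that ordering (not Nerq's actual code; scores taken from three of the rows above):

```python
# (name, compliance score, trust score) from three rows of the table above.
agents = [
    ("laravel/boost", 100.0, 91.2),
    ("harbor", 100.0, 90.5),
    ("williamzujkowski/strudel-mcp-server", 100.0, 92.9),
]

# Sort by compliance descending, breaking ties with trust score descending.
ranked = sorted(agents, key=lambda a: (-a[1], -a[2]))
print([name for name, _, _ in ranked])
```

With the sample rows above, this yields williamzujkowski/strudel-mcp-server first (trust 92.9), then laravel/boost (91.2), then harbor (90.5), matching the table's order.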

Frequently Asked Questions

What are the compliance requirements under OECD AI Principles?

The OECD AI Principles set out governance expectations for trustworthy AI, including transparency, robustness, and accountability. As voluntary principles they are not legally binding, but Nerq automatically checks AI agents against the applicable criteria.

How does OECD AI Principles classify AI risk?

The OECD AI Principles do not define a tiered risk classification; Nerq applies a principles-based model and maps each agent to an applicable risk class (minimal, limited, or high).

What are the penalties under OECD AI Principles?

The OECD AI Principles are voluntary, so there are no legal penalties for non-compliance. Use Nerq to identify gaps against the principles before binding regulations impose enforceable requirements.
