Strategic AI Governance Resource

CertifiedML

AI Conformity Assessment & Pre-Market Certification Hub

Navigating EU AI Act Article 43 conformity assessment pathways, notified body procedures, and CE marking requirements for high-risk AI systems

Article 43 Conformity Assessment | CE Marking for AI | Notified Body Procedures | CEN-CENELEC Standards

Strategic Safeguards Portfolio

11 USPTO Trademark Applications | 156-Domain Portfolio

USPTO Trademark Applications Filed

SAFEGUARDS AI 99452898
AI SAFEGUARDS 99528930
MODEL SAFEGUARDS 99511725
ML SAFEGUARDS 99544226
LLM SAFEGUARDS 99462229
AGI SAFEGUARDS 99462240
GPAI SAFEGUARDS 99541759
MITIGATION AI 99503318
HIRES AI 99528939
HEALTHCARE AI SAFEGUARDS 99521639
HUMAN OVERSIGHT 99503437

156-Domain Portfolio -- 30 Lead Domains

Executive Summary

Challenge: The EU AI Act requires pre-market conformity assessment for all high-risk AI systems under Article 43, yet CEN-CENELEC has published no harmonized standards (Q4 2026 at the earliest). As a result, the Article 42 "presumption of conformity" pathway is currently unavailable -- organizations must navigate conformity assessment without the standardized benchmarks the regulation was designed around. The European Commission has acknowledged: "These standards are not ready." The Digital Omnibus Act (COM(2025) 836) proposes a conditional backstop delay to December 2, 2027 for Annex III high-risk systems, but the Omnibus is still under negotiation in the European Parliament and the Council and is unlikely to be adopted before August 2, 2026.

Market Catalyst: ISO/IEC 42001:2023 -- the world's first certifiable AI management system standard -- provides the closest available proxy for conformity evidence, with hundreds certified globally and Fortune 500 adoption accelerating (Google, IBM, Microsoft, AWS, KPMG, Workday, Autodesk). Veeam's Q4 2025 acquisition of Securiti AI for $1.725B -- the largest AI governance acquisition ever -- and F5's September 2025 acquisition of CalypsoAI for $180M cash (4x funding multiple) validate enterprise AI governance valuations. The standards vacuum creates both risk and opportunity: organizations that establish robust internal assessment frameworks now will be positioned to demonstrate conformity when harmonized standards eventually arrive.

Resource: CertifiedML.com provides comprehensive guidance on navigating EU AI Act conformity assessment pathways, including self-assessment procedures (Annex VI), third-party notified body assessment (Annex VII), CE marking requirements, and quality management system integration. Part of a complete portfolio spanning governance (SafeguardsAI.com), high-risk classification (HighRiskAISystems.com), foundation models (ModelSafeguards.com), oversight (HumanOversight.com), risk management (RisksAI.com), and testing (AdversarialTesting.com).

For: Certification bodies, AI system providers, notified body candidates, conformity assessment professionals, quality management teams, and organizations subject to EU AI Act high-risk system requirements under Annex III.

The Conformity Assessment Challenge

0 Standards Published
CEN-CENELEC Harmonized Standards for AI Act -- Q4 2026 Earliest

The EU AI Act's conformity assessment framework (Article 43) was designed around harmonized standards that do not yet exist. Without published CEN-CENELEC standards, the Article 42 "presumption of conformity" pathway is unavailable. Organizations must navigate self-assessment (Annex VI) or third-party assessment (Annex VII) using internal documentation, ISO 42001, and best-practice frameworks as evidence.

Enterprise AI Governance Requires Complementary Layers

Governance Layer: "SAFEGUARDS" (Compliance Requirements)

What: Statutory terminology in binding regulatory provisions

Where: EU AI Act Chapter III (40+ uses across Articles 5, 10, 50, 57, 60, 81, Recitals), FTC Safeguards Rule (13 uses + title), HIPAA Security Rule (framework)

Who: Chief Compliance Officers, legal teams, audit functions, certification auditors

Cannot be substituted: Regulatory language is binding in compliance filings and certification documentation

Implementation Layer: "CONTROLS/GUARDRAILS" (Technical Mechanisms)

What: Auditable measures and technical tools

Where: ISO 42001 Annex A controls (38 specific controls), AWS Bedrock Guardrails, Guardrails AI validators

Who: AI engineers, security operations, technical teams

Market terminology: Often called "guardrails" in commercial products

Semantic Bridge: Organizations implement "controls" (ISO 42001, AWS, Guardrails AI) to achieve "safeguards" compliance (EU AI Act, FTC, HIPAA). Industry discourse naturally uses "safeguard" to describe the PURPOSE of technical controls. ISO 42001 creates a formal terminology bridge between regulatory mandates and operational frameworks.

Conformity Assessment Landscape

Article 43: Assessment Pathways

Self-Assessment (Annex VI)

Internal conformity control for most Annex III high-risk systems -- provider declares conformity through internal QMS and technical documentation review

Third-Party (Annex VII)

Mandatory notified body involvement for biometric AI (Annex III Section 1) and certain critical infrastructure systems -- external audit of full compliance

CE Marking

Required declaration of conformity and CE marking before placing high-risk AI system on EU market or putting into service

Standards Gap

CEN-CENELEC Status

No harmonized standards published. Q4 2026 at earliest. JTC 21 working groups active but behind schedule by 8+ months

Commission Position

Acknowledged standards are not ready. Digital Omnibus (COM(2025) 836) proposes conditional delay: backstop December 2, 2027 for Annex III requirements

CEN-CENELEC/FRA MoU

January 2026 MoU formally connects fundamental rights assessment to harmonized standards process -- rights considerations embedded from the outset

ISO 42001 Bridge

Conformity Evidence

40-50% overlap with EU AI Act requirements provides the strongest available starting point for conformity documentation

Fortune 500 Adoption

Hundreds certified globally -- Google, IBM, Microsoft, AWS, KPMG, Workday, Autodesk among early adopters

Microsoft SSPA Mandate

September 2024: ISO 42001 mandatory for AI suppliers with "sensitive use" -- procurement requirement accelerating adoption

Strategic Insight: The standards gap creates a window where organizations with robust internal assessment frameworks and ISO 42001 certification gain competitive advantage -- demonstrating conformity readiness before harmonized standards formalize the pathway.

Conformity Assessment Pathways

Framework: EU AI Act Article 43 establishes the conformity assessment procedures that high-risk AI system providers must complete before placing systems on the EU market. The pathway depends on system classification, with most Annex III systems eligible for internal self-assessment while certain categories require third-party notified body involvement.

Pathway Comparison

Aspect | Self-Assessment (Annex VI) | Third-Party (Annex VII)
Scope | Most Annex III high-risk systems | Biometric (Annex III Sec. 1) + certain critical infrastructure
Assessor | Provider (internal QMS) | Notified body (designated authority)
Documentation | Internal technical file + QMS records | Full technical file submitted to notified body
Outcome | Provider-issued declaration of conformity | Notified body certificate + provider declaration
CE Marking | Required before market placement | Required, with notified body identification number
Timeline | Provider-determined (internal cycle) | Subject to notified body availability and review
Cost Range | $50K-$150K (internal resources) | $150K-$500K+ (notified body fees + preparation)
Harmonized Standards | Not yet available (Q4 2026 earliest) | Not yet available (Q4 2026 earliest)
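The pathway selection in the table above can be sketched as a small decision helper. This is an illustrative sketch only: the classification flags, class and function names, and the simplified decision rule are assumptions for demonstration, not a legal determination of Annex III scope.

```python
from dataclasses import dataclass

@dataclass
class HighRiskSystem:
    """Hypothetical classification flags -- real Annex III scoping needs legal analysis."""
    name: str
    is_biometric: bool = False               # Annex III, Section 1
    is_critical_infrastructure: bool = False  # certain critical infrastructure systems

def assessment_pathway(system: HighRiskSystem) -> str:
    """Return the Article 43 pathway suggested by the comparison table (simplified)."""
    if system.is_biometric or system.is_critical_infrastructure:
        # These categories require mandatory notified body involvement
        return "Annex VII (third-party notified body assessment)"
    # Default for most Annex III high-risk systems
    return "Annex VI (internal self-assessment)"

print(assessment_pathway(HighRiskSystem("face-match", is_biometric=True)))
# -> Annex VII (third-party notified body assessment)
```

In practice the classification step itself (Article 6 and Annex III) is the hard part; a helper like this only makes the downstream branching explicit.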

Key Conformity Requirements

Quality Management System (Article 17)

Mandatory for all high-risk AI providers:

  • Documented QMS policies and procedures
  • Design and development verification controls
  • Testing and validation frameworks
  • Data management and governance systems
  • Post-market monitoring processes
  • Corrective action and incident reporting

ISO 42001 alignment: 38 Annex A controls map to QMS requirements, providing structured implementation foundation

Technical Documentation (Article 11)

Required for conformity declaration:

  • System description and intended purpose
  • Design specifications and architecture
  • Training data governance documentation
  • Risk management system records
  • Testing and validation results
  • Human oversight measure specifications

Standards gap impact: Without harmonized standards, documentation must demonstrate compliance through internal frameworks and best practices

Notified Body Framework (Articles 28-39)

Third-party assessment infrastructure:

  • Member state designation of notified bodies
  • Independence and competency requirements
  • Cross-border recognition procedures
  • Conformity assessment certificate issuance

Current status: Only 3 of 27 member states have fully designated competent authorities -- notified body infrastructure is still being established across the EU

Post-Market Monitoring (Article 72)

Ongoing conformity obligations:

  • Continuous system performance monitoring
  • Serious incident reporting requirements
  • Systematic logging and record retention
  • Corrective action when non-conformity detected

Lifecycle requirement: Conformity is not a one-time event -- providers must maintain compliance throughout system deployment

Digital Omnibus Act: Timeline Implications

The Digital Omnibus Act (COM(2025) 836 final) proposes modifications to EU AI Act implementation timelines that directly affect conformity assessment planning. Understanding the Omnibus status is critical for compliance roadmap decisions.

Timeline Scenarios

Scenario | High-Risk Deadline | GPAI Obligations | Planning Impact
Omnibus NOT adopted | August 2, 2026 (original) | August 2, 2026 | Prepare for both deadlines simultaneously
Omnibus adopted | December 2, 2027 (Annex III backstop) | August 2, 2026 (unchanged) | GPAI still urgent; high-risk gets 16-month extension

Current assessment (March 2026): The Digital Omnibus is under Parliamentary and Council negotiation. Adoption before August 2, 2026 is unlikely. Organizations should plan for the original August 2026 deadline while monitoring Omnibus progress. Note: the Omnibus does NOT delay GPAI obligations under any scenario -- only Annex III high-risk system requirements may be extended.
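The two scenarios reduce to a simple date lookup that a compliance roadmap tool could encode. A minimal sketch, assuming a single boolean for Omnibus adoption; the constant and function names are illustrative:

```python
from datetime import date

GPAI_DEADLINE = date(2026, 8, 2)        # unchanged under both scenarios
HIGH_RISK_ORIGINAL = date(2026, 8, 2)   # original Annex III deadline
HIGH_RISK_BACKSTOP = date(2027, 12, 2)  # proposed Digital Omnibus backstop

def applicable_deadlines(omnibus_adopted: bool) -> dict:
    """Map the Omnibus outcome to the deadlines in the scenario table above."""
    return {
        # The Omnibus does NOT delay GPAI obligations in either scenario
        "gpai": GPAI_DEADLINE,
        "annex_iii_high_risk": HIGH_RISK_BACKSTOP if omnibus_adopted else HIGH_RISK_ORIGINAL,
    }
```

The asymmetry is the point: planning logic that keys everything off a single "AI Act deadline" variable will silently mis-schedule GPAI work if the Omnibus passes.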

Conformity Assessment Readiness

Evaluate your organization's preparedness for EU AI Act conformity assessment. This assessment covers key requirements from Articles 43, 17, and 11 for high-risk AI systems, factoring in the current standards gap and Digital Omnibus timeline uncertainty.
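A readiness review of this kind can be organized as a weighted checklist. The items and weights below are illustrative placeholders drawn from the Article 17, 11, and 72 requirement lists above, not an official scoring scheme:

```python
# Illustrative checklist: keys are shorthand for the requirements listed earlier.
CHECKLIST = {
    "qms_documented_policies": 2,     # Article 17 QMS
    "technical_documentation": 2,     # Article 11 technical file
    "risk_management_records": 2,     # risk management evidence
    "testing_validation_results": 1,
    "human_oversight_measures": 1,
    "post_market_monitoring": 1,      # Article 72
    "iso_42001_certification": 1,     # closest available proxy evidence
}

def readiness_score(completed: set) -> float:
    """Return the weighted fraction of checklist items marked complete (0.0-1.0)."""
    total = sum(CHECKLIST.values())
    earned = sum(weight for item, weight in CHECKLIST.items() if item in completed)
    return earned / total

score = readiness_score({"qms_documented_policies", "technical_documentation"})
print(f"Readiness: {score:.0%}")  # -> Readiness: 40%
```

The weighting reflects one plausible judgment (QMS, technical file, and risk records carry the most conformity evidence); any real assessment would calibrate items and weights to the system's Annex III category.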


About This Resource

CertifiedML provides comprehensive guidance on EU AI Act conformity assessment: navigating the standards gap created by delayed CEN-CENELEC harmonized standards, and building conformity evidence through ISO 42001 certification and internal quality management systems. The resource demonstrates the two-layer architecture in which the governance layer ("safeguards" = regulatory compliance) sits above the implementation layer ("controls/guardrails" = technical mechanisms), with conformity assessment bridging both layers through documented evidence of safeguards implementation.

Complete Portfolio Framework: Complementary Vocabulary Tracks

Strategic Positioning: This portfolio provides comprehensive EU AI Act statutory terminology coverage across complementary domains, addressing different organizational functions and regulatory pathways.

Domain | Statutory Focus | EU AI Act Mentions | Target Audience
SafeguardsAI.com | Fundamental rights protection | 40+ mentions | CCOs, Board, compliance teams
ModelSafeguards.com | Foundation model governance | GPAI Articles 51-55 | Foundation model developers
MLSafeguards.com | ML-specific safeguards | Technical ML compliance | ML engineers, data scientists
HumanOversight.com | Operational deployment (Article 14) | 47 mentions | Deployers, operations teams
MitigationAI.com | Technical implementation (Article 9) | 15-20 mentions | Providers, CTOs, engineering teams
AdversarialTesting.com | Intentional attack validation (Article 53) | Explicit GPAI requirement | GPAI providers, AI safety teams
RisksAI.com + DeRiskingAI.com | Risk identification and analysis (Article 9.2) | Article 9.2 + ISO A.12.1 | Risk management, financial services
LLMSafeguards.com | LLM/GPAI-specific compliance | Articles 51-55 | Foundation model developers
AgiSafeguards.com + AGIalign.com | Article 53 systemic risk + AGI alignment | Advanced system governance | AI labs, research organizations
CertifiedML.com | Pre-market conformity assessment | Article 43 (47 mentions) | Certification bodies, model providers
HiresAI.com | HR AI/Employment (Annex III high-risk) | Annex III Section 4 | HR tech vendors, enterprise HR
HealthcareAISafeguards.com | Healthcare AI (HIPAA vertical) | HIPAA + EU AI Act | Healthcare organizations, MedTech
HighRiskAISystems.com | Article 6 High-Risk classification | 100+ mentions | High-risk AI providers

Why Complementary Layers Matter: Organizations need different terminology for different functions. Vendors sell "guardrails" products (technical implementation) that provide "safeguards" benefits (regulatory compliance)--these are complementary layers, not competing terminologies.

Portfolio Value: Complete statutory terminology alignment across 156 domains + 11 USPTO trademark applications = Category-defining regulatory compliance vocabulary for AI governance.

Note: This strategic resource demonstrates market positioning in AI conformity assessment and certification. Content framework provided for evaluation purposes--implementation direction determined by resource owner. Not affiliated with specific certification bodies or notified body organizations. Regulatory references reflect EU AI Act status as of March 2026.