This certification is intended for companies and institutions that deploy AI systems in their internal operations, services, or products. It supports responsible and compliant use of AI across various roles in the value chain — from development and integration to application and governance.

How to Qualify

To be eligible for this certification, organizations must demonstrate that they meet key requirements for responsible AI deployment within the value chain. The qualification process includes:

Conducting a Risk Assessment
Evaluate potential risks associated with each AI system you deploy — including its intended use, potential impact, and level of autonomy.

Ensuring User Transparency
Provide clear and accessible information to users about how the AI system functions, its limitations, and the implications of its decisions.

Complying with Data Privacy & Governance
Align with GDPR and other applicable regulations, ensuring secure data handling, access control, and ongoing data integrity.

Implementing Human Oversight
Establish mechanisms that allow for human review and the ability to override or contest critical decisions made by AI systems.

Detecting and Mitigating Bias
Put in place tools and procedures to regularly audit your AI systems for bias and ensure fairness in both data and decision-making.
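As one illustration of what a recurring bias audit could look like in practice, the sketch below computes a simple demographic parity gap over a model's predictions. It is a minimal example only: the function names, the sample data, and the 0.2 threshold are assumptions chosen for illustration, not values prescribed by the certification.

```python
# Minimal sketch of a periodic bias audit for a binary classifier,
# assuming predictions and a protected attribute are available as lists.
# All names, data, and the threshold below are illustrative assumptions.

from collections import defaultdict


def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}


def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    # Hypothetical audit data: model outputs and a protected attribute.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap = demographic_parity_gap(preds, grps)
    print(f"Demographic parity gap: {gap:.2f}")

    # Illustrative internal threshold; actual limits should follow from
    # the organization's own risk assessment.
    if gap > 0.2:
        print("Gap exceeds audit threshold - flag for human review.")
```

A check like this would typically run on a schedule (for example, after each retraining or data refresh), with results and any follow-up actions documented as part of the organization's audit trail.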

Certification Process

1. Self-Assessment

The organization conducts an internal risk assessment and documents compliance with applicable legislation.

2. External Evaluation

An independent auditor reviews the implementation of AI systems and checks compliance with the EU AI Act and GDPR.

3. Certification

Upon successful completion of the audit, the EU-AIA CDC certification is granted.

4. Periodic Review

The certification is reassessed every three years, or sooner if there are significant changes in the organization's use of AI.