Processing special categories of personal data under the GDPR and the AI Act

Introduction: A shift in mindset

The GDPR has long been the gold standard for safeguarding personal data in the EU. One of its most protective mechanisms concerns the processing of special categories of personal data: data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as genetic data, biometric data used to uniquely identify a person, data concerning health, and data concerning a person's sex life or sexual orientation. Under the GDPR, the processing of such data is prohibited unless one of the limited exceptions under Article 9(2) applies. Traditionally, this has created a culture of caution: data protection officers (DPOs) and privacy teams often advise against processing such data unless it is strictly necessary and legally justified. In many cases, these functions act as gatekeepers, erring on the side of minimisation and risk aversion.

However, with the emergence of the EU AI Act, a shift in mindset is becoming increasingly apparent. In certain contexts, such as bias monitoring, fairness auditing, and improving algorithmic transparency, processing special category data may be not only desirable but necessary to meet the AI Act's fairness and non-discrimination requirements. This shift puts the privacy and AI governance functions on potentially conflicting paths: while the GDPR framework encourages minimisation and risk mitigation, the AI Act may incentivise or even require the processing of sensitive attributes to ensure lawful and ethical AI development.

So what should organisations do when their DPOs and privacy teams raise red flags about such processing, while their AI governance teams argue that the data is essential to prevent discrimination? This article examines the legal and ethical challenges at the intersection of the GDPR and the AI Act, outlining the lawful bases for processing under the GDPR and offering practical guidance on reconciling the two frameworks.

1. The GDPR’s prohibition and exceptions: A quick recap

Under Article 9(1) of the GDPR, processing special category data is prohibited unless one of the exceptions under Article 9(2) applies, such as:

• Explicit consent (Art. 9(2)(a))

• Employment, social security, and social protection law (Art. 9(2)(b))

• Substantial public interest (Art. 9(2)(g))

• Archiving in the public interest, scientific or historical research, or statistical purposes (Art. 9(2)(j))

For bias mitigation or fairness assessment, controllers often look to the substantial public interest or research exceptions, though both come with strict conditions and depend on Union or Member State law and its national implementation.

2. The AI Act’s push toward fairness and non-discrimination

The AI Act introduces a legal framework aimed at ensuring that high-risk AI systems are transparent, explainable, and fair. In particular, Article 10 mandates appropriate data governance and management practices, including measures to detect, prevent, and mitigate bias in training, validation, and testing datasets. In practice, this may require the processing of special categories of personal data, such as information about ethnicity or gender, to determine whether an AI model is exhibiting unfair bias.
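
To make this concrete, the short Python sketch below illustrates the kind of group-level check such bias detection involves: it compares favourable-outcome rates across a protected attribute and computes a simple disparate impact ratio. The record structure, the field names ("ethnicity", "approved") and the idea of treating a low ratio as a trigger for review are assumptions made for illustration, not requirements of the AI Act, and any real implementation would operate only under the safeguards discussed below.

    # Minimal group-fairness check of the kind Article 10 contemplates (illustrative only).
    # Assumption: each record pairs a model decision (1 = favourable outcome) with a
    # voluntarily disclosed protected attribute; the field names are hypothetical.

    from collections import defaultdict

    def selection_rates(records, attribute_key="ethnicity", outcome_key="approved"):
        """Return the favourable-outcome rate for each group of the protected attribute."""
        counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
        for record in records:
            counts[record[attribute_key]][0] += record[outcome_key]
            counts[record[attribute_key]][1] += 1
        return {group: fav / total for group, (fav, total) in counts.items()}

    def disparate_impact_ratio(rates):
        """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
        return min(rates.values()) / max(rates.values())

    if __name__ == "__main__":
        # Fabricated example data, not real personal data.
        sample = [
            {"ethnicity": "group_a", "approved": 1},
            {"ethnicity": "group_a", "approved": 1},
            {"ethnicity": "group_a", "approved": 0},
            {"ethnicity": "group_b", "approved": 1},
            {"ethnicity": "group_b", "approved": 0},
            {"ethnicity": "group_b", "approved": 0},
        ]
        rates = selection_rates(sample)
        print(rates)                          # selection rate per group
        print(disparate_impact_ratio(rates))  # a low ratio is a prompt for review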

Article 10(5) specifically addresses this need. It allows providers of high-risk AI systems to process special categories of personal data where this is strictly necessary for detecting, preventing, or correcting bias, subject to specific safeguards. While this provision responds to the practical demands of fairness, it does not, on its own, provide a lawful basis under the GDPR. The AI Act does not override or amend the GDPR, and any processing of sensitive data must still comply with the conditions set out in Article 9(2) GDPR.

Recital 70 of the AI Act provides interpretative guidance, suggesting that such processing may fall under the "substantial public interest" exception provided in Article 9(2)(g) of the GDPR. However, it is important to remember that recitals are not legally binding. To rely on the substantial public interest exception, the processing must be based on Union or Member State law, be necessary and proportionate, and be accompanied by suitable safeguards.

3. Bridging the gap: Reconciling legal requirements

Ultimately, while Article 10(5) AI Act signals an acceptance that fairness in AI may require sensitive data, organisations must not interpret it as a free pass. Compliance with the GDPR remains paramount, and reconciling these two frameworks requires deliberate legal analysis, appropriate legal bases, and well-documented governance procedures.

To resolve this, organisations should consider the following:

  • Lawfulness first: Any processing of special category data for AI governance must still be grounded in one of the Article 9(2) exceptions. The fairness obligations under the AI Act do not themselves constitute a lawful basis.
  • DPIAs and risk-based dialogue: When relying on Article 9(2) to process special categories of personal data, it is essential to document the legal basis, safeguards, and mitigation measures in a Data Protection Impact Assessment (DPIA). The DPIA must clearly assign responsibility for each mitigation measure to a designated owner, who is accountable for its implementation, for monitoring its effectiveness, and for making necessary adjustments (a minimal, illustrative way of recording this is sketched after this list). The DPIA should be regularly reviewed and updated to reflect these changes.
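
Purely to illustrate the accountability point above, the sketch below shows one hypothetical way such mitigation measures could be recorded with a named owner and review date; the field names and entries are assumptions for the example rather than a format prescribed by the GDPR or the AI Act.

    # Hypothetical sketch of a DPIA mitigation-measure record with a named owner and
    # review date; the fields and entries are illustrative assumptions only.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class MitigationMeasure:
        risk: str          # risk identified in the DPIA
        measure: str       # safeguard or mitigation adopted
        owner: str         # role accountable for implementation and monitoring
        next_review: date  # when effectiveness is reassessed

    dpia_register = [
        MitigationMeasure(
            risk="Re-identification of individuals in the bias-testing dataset",
            measure="Pseudonymise records and restrict access to the fairness-audit team",
            owner="AI governance lead",
            next_review=date(2026, 1, 31),
        ),
        MitigationMeasure(
            risk="Retention of special category data beyond the audit",
            measure="Delete sensitive attributes once bias metrics have been produced",
            owner="DPO",
            next_review=date(2026, 1, 31),
        ),
    ]

    for m in dpia_register:
        print(f"{m.owner} owns '{m.measure}' (review by {m.next_review})")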

In this context, the DPIA takes on a more strategic and collaborative role. It is no longer just a compliance document for the eyes of the DPO—it becomes a platform for cross-functional dialogue, bringing together privacy, legal, AI governance, risk management, and technical teams to jointly assess and address the risks involved. This holistic approach is essential to ensure that both privacy and fairness obligations are meaningfully addressed.

4. Conclusion: Toward collaborative compliance

Rather than pitting privacy against fairness, organisations must foster a coordinated governance model. The AI Act and the GDPR may appear at odds, but their shared goal is ethical, lawful, and non-discriminatory data use. Where one regime seeks, for example, to protect individuals from intrusive profiling, the other seeks to ensure that the AI systems performing that profiling are fair and equitable for all. The key lies in transparent decision-making, interdisciplinary collaboration, and ongoing risk evaluation.
