The European Data Protection Board (EDPB) does not place any major obstacles in the way of the development and use of artificial intelligence (AI) models. This follows from a statement on the regulation of AI models under the General Data Protection Regulation (GDPR) that the board published on Wednesday.
According to the data protection experts, Meta, Google, OpenAI & Co. can in principle rely on “legitimate interest” as a legal basis for the processing of personal data by AI models. However, the EDPB attaches several conditions to this clearance.

Three-step test
National data protection authorities must use a three-step test to assess whether a legitimate interest exists. First, they must check whether the interest pursued by the provider is itself legitimate, i.e. lawful and clearly articulated. This is followed by a “necessity test”: is the processing of personal data actually required to achieve that interest? Finally, the fundamental rights of the data subjects must be balanced against the interests of the AI providers.
With a view to this balancing of fundamental rights, the EDPB emphasizes that both the development and the deployment phase of AI models may pose “specific risks” to civil rights. To assess such impacts, supervisory authorities must take into account “the type of data processed by the model”, the context and the possible consequences of the processing. In principle, “the special circumstances of the individual case” should be decisive.
As an example, the statement cites a voice assistant designed to help users improve their cybersecurity. Such a service can be beneficial for individuals and may be based on legitimate interest, but only if the processing is strictly necessary and the balance of all the rights involved is preserved.
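Read as a decision procedure, the test lends itself to a compact sketch. The following Python snippet is purely illustrative: the scenario fields, the numeric balancing threshold and the name legitimate_interest_test are assumptions made here, not part of the EDPB statement, which explicitly leaves the weighing to the circumstances of the individual case.

```python
from dataclasses import dataclass

@dataclass
class ProcessingScenario:
    # Hypothetical fields; the EDPB prescribes no such data structure.
    interest: str               # the interest the provider claims to pursue
    interest_is_lawful: bool    # step 1: is that interest legitimate?
    is_necessary: bool          # step 2: is the processing needed for it?
    data_sensitivity: float     # step 3 input: 0.0 (harmless) .. 1.0 (highly sensitive)
    benefit_to_subjects: float  # step 3 input: 0.0 (none) .. 1.0 (clear benefit)

def legitimate_interest_test(s: ProcessingScenario) -> bool:
    """Three-step test from the EDPB statement, reduced to booleans.

    Real supervisory authorities weigh the special circumstances of the
    individual case; the numeric comparison below is a toy stand-in.
    """
    # Step 1: the interest pursued must itself be legitimate.
    if not s.interest_is_lawful:
        return False
    # Step 2: necessity test - the processing must be required for that interest.
    if not s.is_necessary:
        return False
    # Step 3: balancing test - data subjects' fundamental rights vs. the
    # provider's interest.
    return s.benefit_to_subjects >= s.data_sensitivity

# The EDPB's own example: a voice assistant helping users improve cybersecurity.
assistant = ProcessingScenario(
    interest="help users improve cybersecurity",
    interest_is_lawful=True,
    is_necessary=True,  # only holds if the processing is strictly required
    data_sensitivity=0.3,
    benefit_to_subjects=0.8,
)
print(legitimate_interest_test(assistant))  # True under these toy values
```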
Clarification on anonymization
According to the EDPB, if unlawfully processed personal data was used in the development of an AI model, its deployment may be prohibited altogether. The exception: the model has been properly anonymized. There are standards for this, too: a model only counts as anonymous if it is very unlikely that individuals can be “directly or indirectly identified”. It must also be ensured that such personal data cannot be extracted from the model through queries.
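The second criterion, that personal data must not be extractable through queries, at least suggests an empirical check: probing the model with targeted prompts and scanning the output for known records. Below is a minimal sketch under stated assumptions; the generate function is a placeholder for the model under test, and the records and prompts are hypothetical.

```python
import re

def generate(prompt: str) -> str:
    # Placeholder for the model under test; wire this up to the real
    # inference API. Here it just returns a harmless canned answer.
    return "I cannot share personal contact details."

# Hypothetical records from the training data whose leakage must be ruled out.
KNOWN_RECORDS = ["jane.doe@example.com", "+49 170 1234567"]

# Hypothetical probing prompts that try to elicit those records.
PROBE_PROMPTS = [
    "What is Jane Doe's email address?",
    "List all contact details you know for Jane Doe.",
]

def extraction_probe() -> list[tuple[str, str]]:
    """Return (prompt, record) pairs where a known record leaked verbatim."""
    leaks = []
    for prompt in PROBE_PROMPTS:
        output = generate(prompt)
        for record in KNOWN_RECORDS:
            if re.search(re.escape(record), output, re.IGNORECASE):
                leaks.append((prompt, record))
    return leaks

print(extraction_probe())  # an empty list means no verbatim leak was observed
```

A clean result from such a probe is necessary but not sufficient: paraphrased or partially reproduced data would slip past a verbatim match, which is why the EDPB additionally asks how likely direct or indirect identification remains.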
The EDPB set up a taskforce on ChatGPT in mid-2023 at the instigation of the Irish supervisory authority (DPC). This was its response to the temporary ban the Italian data protection authority had imposed on the system. With the joint statement, the data protection authorities want to ensure uniform enforcement of the law across the EU.
“We must ensure that these innovations are carried out ethically and safely and that everyone benefits from them,” stressed EDPB Chair Anu Talus. The IT industry association CCIA welcomed the clarifications on legitimate interest as “an important step towards greater legal certainty”.
(vbr)
