Excerpt from Heise Online Article, Published on December 21, 2025

The EU Commission’s latest digital policy initiative is facing strong criticism from legal scholars and consumer protection groups, who warn it could establish what amounts to an “unlimited special legal zone” for artificial intelligence. According to experts, the proposal risks weakening long-standing privacy safeguards while offering disproportionate advantages to large technology companies operating at scale.

At the center of the debate is the draft digital omnibus package, which introduces a proposed Article 88c. This provision would allow broad exemptions for data processing when information is used to develop or train AI systems. Legal experts caution that the loosely defined concept of an “AI system” could enable organizations to bypass strict requirements set out in the General Data Protection Regulation (GDPR), effectively diluting one of Europe’s strongest privacy frameworks.

Analysts from the law firm Spirit Legal, working on behalf of the Federation of German Consumer Organisations (vzbv), argue that the proposal marks a fundamental shift away from GDPR’s technology-neutral principles. Instead, it introduces a technology-specific exception that prioritizes corporate innovation over individual rights. Under the draft rules, even highly sensitive data, including health information and political opinions, could be processed more freely if it is claimed to serve AI development purposes. Critics say this approach rewards large-scale data collection rather than placing meaningful limits on it.

Further concerns focus on enforcement. Essential protections, such as the right for individuals to opt out of data use, are reportedly confined to non-binding explanatory sections rather than embedded in enforceable law. This could leave regulators with limited authority, particularly in cases involving automated web scraping or opaque data collection practices.

In response, experts recommend clear legal requirements governing access to personal data for AI training, applicable unless anonymized or synthetic alternatives can achieve comparable results. They also urge mandatory technical safeguards to prevent AI systems from unintentionally reproducing personal information, a risk commonly referred to as data leakage.

Consumer advocates warn that without stronger safeguards, the proposal may erode public trust and create legal loopholes that benefit global technology giants at the expense of European citizens and businesses.
