Excerpt from Cybernews Article, Published on May 28, 2025.

Meta has officially begun training its AI models using public posts from EU-based Facebook and Instagram users, sparking intense scrutiny and privacy concerns. The tech giant’s move, which went into effect on May 27, 2025, allows it to access a vast repository of user-generated content across its platforms. While Meta argues this data is crucial to improving its AI capabilities, privacy advocates across the EU see it as a troubling precedent.

Despite opposition, Meta received a green light from key EU data protection authorities. Ireland’s Data Protection Commission did not object to the plan, and Germany’s Higher Regional Court of Cologne dismissed an attempt to halt Meta’s AI data collection. Even the Hamburg DPA backed away from challenging Meta directly, opting instead to await a broader EU-wide evaluation. However, experts believe this isn’t the end of the road. Critics argue that Meta’s opt-out model violates the EU’s GDPR, which requires clear and informed consent for data processing. Currently, EU users must actively find and submit forms to prevent their data from being used in AI training—something many believe should be opt-in by default.

Cybersecurity expert Dr. Ilia Kolochenko warned that Meta’s AI training raises serious questions around children’s data, correction and deletion rights, and the ingestion of harmful or illegal content. According to him, Meta may soon be forced by EU regulators to make fundamental changes, possibly switching to an opt-in model and enforcing stricter content screening. While Meta claims its AI developments will benefit millions in the EU, the backlash suggests growing discomfort with how tech giants handle user data. As AI grows more powerful, the EU’s push to defend digital rights could be the deciding force in how Meta—and others—navigate the ethical minefield of AI development in the region.

To delve deeper into this topic, please read the full article on Cybernews.