German Court Greenlights Meta Using Public User Data for AI Training
German court backs Meta's AI data use, tilting the balance between corporate 'legitimate interest' and user privacy.
May 24, 2025

A significant legal decision has emerged from Germany, where a higher regional court cleared Meta to use public data from Facebook and Instagram users to train its artificial intelligence systems. The Cologne Higher Regional Court dismissed an emergency lawsuit brought by a consumer protection group, Verbraucherzentrale NRW (Consumer Protection Organization of North Rhine-Westphalia), which sought to halt Meta's plans.[1][2][3][4][5][6] The judgment allows Meta to proceed, at least for now, with its strategy of harnessing user-generated content to enhance its AI capabilities within the European Union, a move that has ignited considerable debate over data privacy and the future of AI development.[1][5][7]
The legal challenge centered on Meta's announcement that it would use public posts, comments, and other interactions from adult users on platforms like Facebook and Instagram to train its AI models, including the Llama language models and the Meta AI chatbot.[8][2][6][9] Meta has stated that this data collection, set to commence formally around May 27, 2025, will not include private messages or data from users under the age of 18.[8][2][10][9] The company informed users of the intended data usage through in-app notifications and emails, providing a mechanism for them to object and opt out.[8][11][9] Verbraucherzentrale NRW contested this approach, arguing that it could violate the General Data Protection Regulation (GDPR), particularly regarding the potential use of sensitive personal information and the adequacy of an opt-out system versus explicit opt-in consent.[1][2][5] The consumer group also raised concerns that combining data from different platforms could violate the Digital Markets Act (DMA).[1]
In its ruling, the Cologne Higher Regional Court found that Meta's actions did not breach EU law, specifically the GDPR or the DMA, in this preliminary injunction context.[1][3][4] The court determined that Meta is pursuing a legitimate interest in using the data to train AI systems, a purpose that, it argued, cannot reasonably be achieved through less intrusive means given the vast amounts of data effective AI training requires.[3][4] The balance of interests tilted in Meta's favor, the court held, especially since the company intends to use only publicly available data (information that could also be found via search engines) and had implemented measures to mitigate the impact on users, such as clear communication and opt-out procedures.[3][4][6] The court acknowledged that even de-identified data could still be considered personal, but deemed Meta's business interests in AI development to outweigh user privacy concerns in this specific instance, referencing a December 2024 opinion from the European Data Protection Board (EDPB) that permits the use of publicly available adult data under certain conditions.[4][6] The court further stated that Meta's approach did not constitute an unlawful combination of personal data under the DMA, as it does not combine data relating to individual users in a prohibited manner.[1] This decision aligns with a positive assessment Meta reportedly received from the Irish Data Protection Commission (DPC), its lead EU regulator, after Meta improved its transparency notices and objection forms.[1][12]
The German court's decision carries substantial implications for the rapidly evolving AI industry in Europe and beyond. It provides a degree of legal affirmation for tech companies looking to leverage publicly accessible user data to train AI models, a practice common among major AI developers such as Google and OpenAI.[10][7] The ruling could set a precedent for how "public data" is interpreted in the context of AI training and may influence other regulators and courts grappling with similar issues.[5][7] The legal landscape nonetheless remains complex and contested. While Meta secured a victory in this specific German preliminary proceeding, the broader debate over data scraping, user consent, and the ethical boundaries of AI development is far from settled.[2][7] Other privacy advocacy groups, such as NOYB, led by Max Schrems, have also challenged Meta's plans, issuing cease-and-desist letters and threatening further legal action; they argue that "legitimate interest" is not a sufficient legal basis under the GDPR and that an opt-in consent system is necessary.[13][2][14][15] NOYB contends that users do not reasonably expect their historical public posts to be repurposed for training general-purpose AI, and that exercising data subject rights such as erasure becomes difficult once data is ingested into AI models.[13][15] The Hamburg Data Protection Commissioner has also reportedly initiated urgent proceedings against Meta, seeking to prohibit AI training on German data for a period, an indication that legal and regulatory scrutiny will persist.[1]
Looking ahead, this ruling is a significant, albeit potentially provisional, moment in the ongoing effort to balance innovation in artificial intelligence with the fundamental right to data privacy.[1][6] The decision underscores the importance of clear user notifications and accessible opt-out mechanisms, which the Cologne court treated as significant mitigating factors.[1][3][4] However, the core disagreement between tech companies relying on "legitimate interest" and privacy advocates demanding explicit "opt-in" consent for AI training data remains a central point of contention.[13][1][7] As AI technologies become more integrated into society, the legal and ethical frameworks governing their development will continue to be shaped by court decisions, regulatory actions, and public discourse. The European Union, with its comprehensive GDPR and emerging AI Act, remains a key battleground for these precedent-setting confrontations, whose outcomes will likely influence global standards for AI governance and data protection. The prospect of further legal challenges suggests that the path forward for AI data utilization will involve continuous negotiation and refinement of legal interpretations.