Tired of general newsletters that skim over your real concerns? Dastra Insights offers legal and regulatory monitoring designed specifically for DPOs, lawyers, and privacy professionals.
Each month, we go beyond a simple recap: we select about ten decisions, news items, or positions that have a concrete impact on your missions and your organizations.
🎯 Targeted, useful monitoring grounded in the real-world realities of data protection and AI.
Here is our selection for February 2026:
Risks related to AI‑generated images: the Belgian Data Protection Authority signs a joint statement
On 23 February 2026, 61 data protection authorities worldwide, including the Belgian Data Protection Authority (Autorité de protection des données, APD), published and signed a joint statement warning of the privacy and fundamental rights risks posed by images and videos generated by artificial intelligence.
This initiative was carried out under the auspices of the Global Privacy Assembly (GPA) and coordinated by the International Enforcement Cooperation Working Group (IEWG).
What are the main concerns?
The signatory authorities express serious concerns about AI systems capable of producing ultra‑realistic images and videos depicting identifiable persons, often without their knowledge or consent, notably:
- content that may infringe on the privacy or dignity of the persons depicted;
- the creation of intimate or defamatory deepfakes that could harm reputations or be exploited for malicious purposes;
- harms specific to children and other vulnerable groups.
Principles and recommendations
The statement does more than sound the alarm: it sets out fundamental principles that organizations developing or using image‑generation systems should respect:
- Implement robust measures to prevent abusive use or non‑consensual dissemination of personal data;
- Ensure meaningful transparency about systems’ capabilities and limitations, as well as permitted uses;
- Provide effective mechanisms allowing data subjects to request prompt removal of harmful content involving their data;
- Mitigate risks specific to children, including enhanced protections and information tailored to young people, their parents and educators.
Why this matters
The APD and its counterparts recall that generative AI technologies offer significant opportunities, but can also infringe fundamental rights (such as the right to privacy or human dignity) if deployed without appropriate safeguards. They therefore call on developers, vendors, platforms and users to cooperate with authorities to ensure that technological innovation does not come at the expense of people’s freedoms and rights.
Documentation: CNIL publishes the 2026 update of the Tables Informatique et Libertés
CNIL has published the 2026 edition of its Tables Informatique et Libertés, a key doctrinal resource compiling and structuring the essentials of case law and decisional practice on the protection of personal data, at both national and European levels.
The Tables Informatique et Libertés are an organized corpus of thematic summaries of major decisions from:
- French courts (e.g. the Conseil d’État or the Cour de cassation);
- European courts (notably the Court of Justice of the European Union);
- CNIL itself (decisions, corrective measures, doctrinal positions);
- the European Data Protection Board (EDPB).
Presented according to a detailed thematic classification (principles, legal bases, data subject rights, data security, international transfers, sanctions, etc.), they provide a coherent overall view of the doctrine applicable to the GDPR and the French Data Protection Act (Informatique et Libertés).
CNIL emphasizes that these Tables have a dual purpose:
- Internal: to ensure a consistent understanding and application of doctrine among CNIL staff, in the face of the constantly growing number of legal questions raised by the application of the GDPR.
- External: to make accessible to professionals, academics and legal practitioners doctrinal positions and legal reasoning that are not always published in individual decisions.
This document will be updated regularly to reflect ongoing developments in data protection practice, notably with regard to new technologies and case law.
Call for testers of a GDPR auditing tool for AI models
The French National Cybersecurity Agency (ANSSI), in partnership with CNIL, PEReN and the IPoP project of the Cybersecurity PEPR led by Inria, has launched a call for expressions of interest (AMI) to select stakeholders willing to test a new privacy auditing tool for artificial intelligence models. This initiative is part of the PANAME project (Privacy Auditing of AI Models).
The project's main objective is to develop a software library for technically auditing the privacy of AI models, in particular by running information‑extraction tests that can reveal personal data from training datasets. The aim is to help organizations assess the GDPR compliance of their models, especially when those models use, or are trained on, sensitive data.
This call for expressions of interest is open from 26 February to 28 March 2026 to all public or private entities established in the European Union: companies, startups, research laboratories, public administrations or any other organization using or developing AI models. Selected applicants will participate in a practical testing phase to validate and enrich the tool’s functionalities.
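To make the idea of an information‑extraction test concrete, the sketch below shows a toy loss‑based membership‑inference audit, one of the standard techniques for detecting whether a model has memorised training records. Everything here is illustrative: the function names, the stand‑in "model", and the threshold are assumptions for the example and do not correspond to the PANAME library's actual API, which has not been published.

```python
# Toy membership-inference audit: a model that has memorised a record
# typically assigns it a much lower loss than an unseen record. An
# auditor can flag suspiciously low losses as possible memorisation
# of personal data. All names here are illustrative assumptions.

def model_loss(record: str, training_set: set) -> float:
    """Stand-in for a real model's per-record loss: memorised
    (training) records get a low loss, unseen records a high one."""
    return 0.05 if record in training_set else 0.9

def membership_inference(records, training_set, threshold=0.5):
    """Flag records whose loss falls below the threshold -- a signal
    that the model may have memorised them from its training data."""
    return {r: model_loss(r, training_set) < threshold for r in records}

train = {"alice@example.com", "bob@example.com"}
candidates = ["alice@example.com", "carol@example.com"]
flags = membership_inference(candidates, train)
# A training record is flagged; an unseen record is not.
```

In a real audit the loss would come from the model under test, and the threshold would be calibrated on held‑out data; the point of a library like PANAME is precisely to standardise and industrialise such tests.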
United Kingdom: Information Commissioner’s Office (ICO) fines Reddit £14.47m for failures in protecting children’s data
On 24 February 2026, the Information Commissioner’s Office (ICO), the UK’s independent data protection authority, announced a sanction of £14.47 million (approximately €17 million) against Reddit, Inc., for unlawfully processing the personal data of children under 13 and failing to implement adequate age‑verification mechanisms.
According to the ICO:
- Reddit did not implement a robust age‑verification mechanism, relying until July 2025 on a simple self‑declaration by users, which was easily circumvented.
- The company did not carry out a Data Protection Impact Assessment (DPIA) before January 2025 to assess and mitigate risks related to the processing of children’s data, as required by the UK GDPR.
- The ICO concluded that many children under 13 were present on the platform without a lawful basis for processing their data, which could have exposed them to inappropriate or potentially harmful content.
This sanction is part of the ICO’s strengthened enforcement of the Children’s Code (Age Appropriate Design Code) and UK data protection legislation, which impose enhanced obligations on platforms likely to be accessed by children.
Croatia: real estate agency fined €100,000 for GDPR breaches
On 19 February 2026, the Croatian Personal Data Protection Agency (Agencija za zaštitu osobnih podataka – AZOP) imposed an administrative fine of €100,000 on a real estate agency acting as controller for multiple GDPR violations.
Grounds for the sanction
AZOP found that the agency breached several fundamental GDPR obligations, notably:
- Violation of the storage limitation principle: the agency retained personal data of 11,887 clients well beyond the time necessary for the processing purpose.
- Lack of a legal basis for certain data: copies of sensitive documents (for example 898 ID cards, 6 passports, 3 copies of bank cards, one health insurance card, one minor’s ID card, etc.) were stored in the archives without a clear legal justification.
- Insufficient technical and organizational measures: the agency had not ensured adequate supervision and training of staff accessing the data, contrary to the requirements of Article 32 GDPR.
During the inspection, AZOP noted that many brokerage contracts for property purchase or rental, concluded between 2010 and 2019, were still archived with attached data, even though these documents were no longer necessary for the original processing purpose.
Implications of this decision
This ruling highlights several key points:
- The data minimisation and storage limitation principles require that data be kept only for as long as necessary for the original purpose (Art. 5 GDPR).
- The lawfulness of processing requires a clear legal basis for each type of data stored, particularly for sensitive documents such as copies of identity documents or bank cards (Art. 5 GDPR).
- Organisational and technical security measures (staff training, access monitoring, clear processing rules) must be proportionate to the risk (Art. 32 GDPR).
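The storage‑limitation finding above boils down to a check any controller can automate: compare each record’s age against the retention period justified by the original purpose, and flag anything kept longer. The sketch below is illustrative only; the ten‑year retention period and the record structure are assumptions for the example, not figures from the AZOP decision.

```python
# Illustrative storage-limitation check (Art. 5(1)(e) GDPR): flag
# records retained beyond an assumed retention period. The period and
# record fields below are examples, not values from the AZOP decision.
from datetime import date

RETENTION_YEARS = 10  # assumed retention period for the sketch

def overdue_for_deletion(records, today):
    """Return records kept beyond the retention period.
    Note: replace() with a shifted year is a simplification that
    would need handling for 29 February in production code."""
    cutoff = today.replace(year=today.year - RETENTION_YEARS)
    return [r for r in records if r["concluded"] < cutoff]

contracts = [
    {"id": 1, "concluded": date(2012, 3, 1)},   # long past retention
    {"id": 2, "concluded": date(2024, 6, 15)},  # still within retention
]
stale = overdue_for_deletion(contracts, date(2026, 2, 19))
```

A periodic job of this kind, combined with a documented retention schedule per processing purpose, is exactly the sort of organisational measure that would have surfaced the 2010–2019 contracts AZOP found still in the archives.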
Poland: DPD Polska fined 11 million PLN (~€2.5 million) for subcontracting and security failures
The Polish Office for Personal Data Protection (UODO – Urząd Ochrony Danych Osobowych) imposed fines totalling 11,000,000 PLN on DPD Polska Sp. z o.o. following several serious breaches concerning the processing of customers’ personal data.
UODO found that DPD Polska:
- Had not concluded data processing agreements (DPAs) with the external processors acting on its behalf (notably carriers with access to shipping labels containing personal data), in breach of Article 28(3) GDPR (mandatory contractual terms with processors).
- Did not ensure that employees processed personal data only pursuant to appropriate authorisations (Art. 32(4) GDPR). The company’s internal system generating pseudo‑authorisations lacked essential elements such as name and signature, undermining traceability and the lawfulness of processing.
The 11 million PLN (~€2.5 million) sanction reflects the seriousness of the breaches: the company had neither properly secured its contractual relationships with external processors nor regulated internal processing by its employees, exposing customers’ data to uncontrolled risks.
European Union: CJEU annuls the General Court’s order in Ireland v. WhatsApp
On 10 February 2026, the Court of Justice of the European Union (CJEU) annulled an order of the General Court of the European Union that had declared inadmissible the action brought by WhatsApp Ireland Ltd against a decision related to proceedings initiated by the Irish data protection authority.
The Irish authority’s decision
Following the entry into force of the GDPR, the Irish Data Protection Commission (DPC) received several complaints concerning the transparency of WhatsApp’s processing, in particular regarding possible data sharing with other entities of the Facebook group (now Meta).
In its final decision, the DPC found that WhatsApp had breached:
- the transparency principle (Article 5(1)(a) GDPR);
- the information obligations set out in Articles 12 to 14 GDPR.
Pursuant to Article 58(2) GDPR, the authority imposed four administrative fines, totalling €225 million.
The General Court’s inadmissibility ruling
Several European supervisory authorities raised objections to the Irish draft decision, and the European Data Protection Board (EDPB) was consulted.
The EDPB adopted a binding decision under Article 65 GDPR, requiring the DPC to incorporate certain analyses and to revise specific elements, notably concerning the qualification of infringements and sanctions. The final Irish decision was therefore adopted taking the EDPB’s position into account.
Claiming that this binding decision directly affected its legal situation, WhatsApp brought an action for annulment before the General Court, challenging the legality of the EDPB’s decision.
The General Court had, however, declared the action inadmissible, finding that WhatsApp was not directly concerned by the contested EDPB decision, which was formally addressed to the Irish authority.
The CJEU’s annulment
The Court found, in essence, that the General Court’s analysis of the absence of direct concern was incorrect: the EDPB decision at issue produced binding legal effects capable of directly affecting WhatsApp’s legal situation, notably as regards the content of the final decision and the amount of the fines.
By setting aside the General Court’s order of inadmissibility, the Court of Justice opens the way for a substantive examination of the action brought by WhatsApp Ireland Ltd.
