Newsletter
by Eva Jarbekk and Sofie Axelsson
This month’s Privacy Corner highlights a familiar theme with renewed urgency: transparency remains one of the most persistent fault lines in GDPR compliance. Across enforcement actions, court decisions and coordinated regulatory initiatives, authorities continue to focus on whether individuals are genuinely informed about how their personal data is used — not in theory, but in practice.
Several of the cases covered this month underline how often transparency failures sit alongside deeper structural problems, from unclear legal bases and weak consent mechanisms to inadequate handling of data subject rights. As regulators prepare for the EDPB’s 2026 coordinated enforcement action on information obligations, the message is clear: privacy notices, consent flows and internal procedures will be scrutinised closely, and “paper compliance” will not be enough.
The newsletter closes with the CJEU’s decision in the Brillen Rottler case, which adds an important nuance to the right of access. While reaffirming that individuals do not need to justify their requests, the Court also confirms that data protection rights are not immune from abuse. Transparency, once again, sits at the centre — both as a safeguard for individuals and as a boundary that defines where legitimate rights end and bad faith tactics begin.
Sweden has decided to give its police real-time AI facial recognition. A new law, proposed to enter into force on 1 July 2026, would allow police to use real-time facial recognition to locate missing persons suspected of being victims of crimes such as abduction or human trafficking, and to prevent imminent threats to life or physical safety. Serious offences carrying a maximum sentence of at least four years' imprisonment would also fall within scope, both for investigative purposes and for the enforcement of sentences. Use requires authorisation from a prosecutor or court, though in time-critical situations police may begin using the system without prior authorisation and seek approval within 24 hours.
The government's case is straightforward: gang violence has declined, but the pressure cannot ease, and police need effective tools to address the problem.
What about the AI Act?
The AI Act, directly applicable across the EU since February 2025, generally prohibits real-time biometric identification in publicly accessible spaces for law enforcement purposes. Member states may, however, opt in to permit such use under strictly defined conditions, as Sweden has chosen to do. In its current version, the Swedish facial recognition law has been drafted to reflect the AI Act's own language on necessity, proportionality and the specific circumstances in which use is permitted.
However, that alignment did not come easily. During the legislative process, the proposal met with sharp criticism. The Council on Legislation, which reviews proposed legislation for compatibility with Swedish and EU law, and the Swedish Bar Association both took issue with the threshold for use, the standard of suspicion required, and the scope of individual authorisations. Many of these concerns were addressed in the final bill.
What was not addressed is the absence of public counsel in authorisation proceedings. The Swedish Bar Association argued that the significant interference with personal integrity warranted adversarial proceedings, and that without them the law risks conflict with the right to a fair trial under Article 6 of the European Convention on Human Rights and Article 47 of the EU Charter of Fundamental Rights. The government disagrees, drawing a distinction between real-time facial recognition as a policing method and the covert surveillance measures that trigger the public counsel regime.
Further reflections
Behind the legal detail lies a more fundamental question about what it means to live in a society where the state can scan your face in real time on a public street. As the government bill acknowledges, the technology does not only affect those who are sought. When the system is in use, the biometric data of every person passing the relevant camera is processed, including people who are neither suspected nor convicted of any offence. That processing of a potentially large number of people is not a marginal side effect. It is the mechanism by which the system finds the person it is looking for.
The law as proposed includes meaningful safeguards. It also reflects a broader trajectory: AI is moving from the margins of law enforcement to its centre, and the legal architecture must keep up. Sweden will not be the last jurisdiction to face these questions, and the answers it provides, in court, in parliament and in practice, will be watched closely.
You can read more about the matter here.
With the speed and ease of AI tools, manufactured imagery of real individuals can be used to inflict harm at a new scale. Data protection authorities around the world have noticed. And they are starting to coordinate.
Background
On 23 February 2026, a group of data protection and privacy regulators from across the globe issued a Joint Statement on AI-generated imagery, coordinated through the Global Privacy Assembly's International Enforcement Cooperation Working Group. The statement is a direct response to the proliferation of AI image and video generation tools that can produce non-consensual intimate imagery, defamatory depictions, and other harmful content featuring identifiable real people. The signatories are particularly concerned about the risks to children and other vulnerable groups, including cyber-bullying and exploitation.
The statement sets out a clear set of expectations for organisations developing or deploying AI content generation systems. Robust safeguards against misuse must be built in from the outset, not retrofitted after harm has occurred. Transparency about what a system can do, how it can be misused, and what the consequences of misuse are must be meaningful, not performative. Mechanisms for individuals to request removal of harmful content must be effective and accessible, and responses to such requests must be rapid. And where children may be depicted or affected, enhanced safeguards and age-appropriate information are required as a baseline, not an optional extra.
Why does this matter?
I think the most significant aspect of this statement is not what it says (much of it reflects obligations that already exist under data protection law) but who is saying it, and in what format. A coordinated joint statement from regulators across multiple jurisdictions signals an intention to treat this as a shared enforcement priority, not just a matter for individual national authorities to address in isolation. The statement explicitly notes that the creation of non-consensual intimate imagery already constitutes a criminal offence in many jurisdictions. Regulators are not waiting for new legislation. They are signalling that existing legal frameworks apply and that they intend to use them.
For organisations in the AI and social media space, the direction of travel is clear. Embedding privacy and dignity protections into AI content generation systems is no longer a matter of good practice or reputational management. It is a regulatory expectation, backed by an increasingly coordinated international enforcement community. The question worth asking internally is whether your current safeguards would withstand scrutiny from a regulator applying exactly the standards set out in this statement. If the honest answer is uncertain, that gap needs addressing now.
You can read more about the matter here.
The EDPB has just launched its 2026 Coordinated Enforcement Framework action. This time, the focus is on transparency and information obligations under Articles 12, 13 and 14 GDPR.
What is the CEF, and why does it matter?
The Coordinated Enforcement Framework is the EDPB's mechanism for running synchronised enforcement exercises across multiple EU and EEA jurisdictions simultaneously. The pattern is well established by now: a theme is selected, national authorities investigate organisations in their respective jurisdictions, findings are pooled, and a consolidated report is adopted by the EDPB, often followed by targeted enforcement action at both national and EU level. Previous CEF rounds have covered cloud services in the public sector, the designation and position of data protection officers, the right of access, and, most recently, the right to erasure.
Transparency is a logical next step. It is also, arguably, the area where the gap between legal obligation and operational reality is widest.
What will authorities be looking at?
Twenty-five data protection authorities across Europe will be participating. They will be contacting controllers from various sectors, either through formal enforcement actions or fact-finding exercises. Those contacted should not assume that a fact-finding exercise is the gentler option; the EDPB has been explicit that follow-up enforcement action may be initiated if the findings warrant it.
The focus will be on how well organisations actually inform individuals when their data is being processed. That means privacy notices, layered information, timing of disclosure, accessibility and clarity: the full picture of what Articles 12 to 14 require in practice. Given that last year's CEF on the right to erasure identified consistent and widespread failures across the EEA, there is little reason to expect the transparency findings to be more flattering.
What should your organisation do now?
The CEF timetable gives a window of opportunity. Authorities will be contacting controllers during the course of 2026, with findings shared in the second half of the year. That is not a long runway, but it is something.
The practical starting point is straightforward: read your privacy notices as if you were a data subject encountering them for the first time. Are the purposes clear? Is the legal basis explained in plain language? Is the information easy to find, or buried at the bottom of a page behind two further links? Are retention periods stated? If the honest answers to any of those questions are uncomfortable, the time to address them is now — not when a regulator's letter arrives.
Just as the EDPB turns its attention to transparency for 2026, it has published the results of last year's coordinated enforcement action. The findings make for sobering reading. Throughout 2025, 32 supervisory authorities across the EEA investigated how controllers handle the right to erasure. A total of 764 controllers responded to the investigation questionnaire. The overall compliance level was assessed as, at best, average.
Recurring failures
Seven recurring issues emerged from the exercise. Controllers struggled to determine appropriate retention periods and to delete personal data held in back-up systems. Some relied on anonymisation techniques that did not actually meet the standard. Many lacked internal procedures for handling erasure requests at all, and failed to provide data subjects with adequate information when responding. Others found it difficult to carry out the balancing tests required when erasure conflicts with other rights or legal obligations, a particularly common issue given that the right to erasure is not absolute.
Recommendations from the EDPB
Based on its findings, the EDPB has set out a number of practical recommendations for controllers in its report. These are worth taking seriously, particularly given that transparency, the focus for 2026, raises many of the same underlying organisational challenges.
Looking ahead
It is difficult not to read these findings as a preview of what the 2026 transparency action may uncover. The same organisational weaknesses (absent procedures, inadequate information to data subjects, unclear internal accountability) are precisely the ones that may surface when privacy notices and information obligations are scrutinised. Organisations that have not yet addressed the issues identified in last year's erasure action should treat the transparency sweep as a second opportunity to get their house in order, before a regulator does it for them.
You can read more about the matter here.
Seven years is a long wait. But the French Conseil d'État has now had the final word, and it is not good news for the Paris-based advertising platform Criteo: the €40 million fine imposed by the CNIL back in 2023 stands in full.
Background
Criteo is a large European ad-tech company, offering so-called behavioural retargeting services across thousands of websites. The model is straightforward: place tracking cookies on websites, analyse browsing habits, build a profile, serve targeted advertising. The company holds data on approximately 370 million people in Europe. The complaints that triggered this case were filed back in December 2018 and concerned Criteo's failure to give users a proper option to withdraw consent. From there, the CNIL's investigation broadened considerably, eventually uncovering failures on transparency, the right of access, and the right to erasure. The €40 million fine followed in 2023, approved by data protection authorities across Europe.
Criteo's appeal centred on a question that goes to the heart of how modern ad-tech operates: are pseudonymous identifiers, the kind used to track users across websites without directly naming them, actually personal data? Criteo argued that because it did not itself hold all the information necessary to re-identify a specific individual from a given identifier, the data fell outside the GDPR's scope entirely. The Conseil d'État disagreed. Data is only truly anonymous, it held, if the risk of re-identification is insignificant because re-identification is impracticable in practice. In Criteo's case, the very purpose of the processing, offering targeted advertising, involves cross-referencing vast amounts of information for a given identifier. Criteo itself had acknowledged that identification of certain individuals was not technically impossible. That was enough.
Why this matters now
The timing lends the decision an added edge. It lands in the middle of a debate over the European Commission's Digital Omnibus proposal, which includes a revised definition of personal data that would make the concept dependent on the subjective circumstances of the controller. In practice, that could narrow the GDPR's scope considerably — and potentially provide companies engaged in behavioural tracking with a legal argument that looks remarkably similar to the one Criteo just lost. Privacy advocates and legal experts have been vocal in their criticism of the proposal. The Conseil d'État's ruling, reaffirming that pseudonymous identifiers are personal data when re-identification is not genuinely impracticable, sits in direct tension with the direction the Commission appears to be heading.
I find it difficult to read the two developments together without pausing for thought. On one hand, a court has just confirmed that tracking identifiers tied to browsing behaviour constitute personal data and must be handled accordingly. On the other, the legislative process is moving towards a definition that might render that conclusion obsolete. Whether that tension resolves itself through the legislative process or through further litigation remains to be seen.
What does this mean in practice?
For any organisation operating in the ad-tech space, or relying on behavioural data for targeting purposes, this decision is a firm reminder that the current legal framework applies in full — regardless of where the Omnibus proposals eventually land. Pseudonymous identifiers are personal data. Consent must be valid. Transparency must be genuine. And the right to erasure must be honoured. Waiting to see how the legislative reform plays out is not a compliance strategy.
You can read more about the matter here.
This was one of the largest GDPR fines ever issued. Now Luxembourg's Administrative Court has annulled the penalty entirely, not because Amazon was found to have complied with the GDPR, but because the authority that issued the fine failed to follow the correct procedure in calculating it.
Background
The case has its roots in a collective complaint filed on 28 May 2018 by La Quadrature du Net, a French digital rights organisation, on behalf of over 10,000 individuals. The complaint targeted Amazon's privacy policy and its use of practically all user actions — searches, device fingerprints, location data, playlists — for behavioural analysis and advertising targeting. Crucially, the complaint noted that no document published by Amazon suggested it intended to base its behavioural analysis and advertising targeting on user consent. Nor, the complaint argued, could the processing be justified on the basis of legitimate interest, since its purpose was to profile users for advertising targeting, something that could not be authorised without prior consent.
The Luxembourg DPA, the CNPD, issued its decision on 16 July 2021. At the time, it was the largest GDPR fine ever issued: €746 million. The findings included a lack of legal basis under Article 6 for targeted advertising, failures under Articles 12, 13 and 14 on transparency and information, and an obligation to stop targeted advertising or obtain genuine consent.
Why has the fine now been annulled?
The Administrative Court confirmed that GDPR violations had indeed occurred. But it found that the CNPD had issued the penalty almost automatically, without conducting the assessment of fault (intentional or negligent) that CJEU case law explicitly requires before a fine can be imposed. The fine has been annulled and the CNPD ordered to reassess the case. Amazon, for its part, has expressed satisfaction and reiterated that it worked in good faith when the GDPR came into force in 2018 without, as it puts it, clear guidance on personalised advertising.
What does this mean in practice?
I find it difficult to read this as a vindication for Amazon. The court did not find that Amazon had complied with the GDPR; it found that the authority calculating the fine had skipped a procedural step. The CNPD will now reassess. A new fine is entirely possible.
The broader lesson from the original case remains valid: sanctions are rising, and DPAs are not the only actors driving compliance, as collective actions, joint complaints and processor audits all play a role. Cases take time, but the passage of time does not mean fines will not ultimately be imposed.
For organisations still relying on implied consent or contractual necessity to justify behavioural profiling and targeted advertising, the underlying message of the original decision has not changed: consent frameworks and privacy policies need to be reviewed, profiling without genuine consent carries real risk, and assumptions about what a contract "necessitates" must be assessed with precision. As the Norwegian DPA observed at the time, it is not uncommon for companies to define what customers want their data used for, without asking them, and often contrary to the evidence. And giving freedom of choice on paper while hiding the relevant settings so well that most people give up along the way does not constitute voluntary consent.
The fine may have been annulled. The underlying compliance questions have not.
You can read more about the matter here.
Few debates in European privacy law have generated as much heat as the question of whether websites can offer users a binary choice: pay for privacy, or hand over your data. The Dutch DPA has just broken its silence on the matter. And Meta is still very much at the centre of the storm.
Where does the law actually stand?
The Dutch DPA has, for the first time, stated its position explicitly: consent-or-pay models are "undesirable, but not forbidden." Privacy is a fundamental right, the authority argues, and that protection should not depend on whether someone can afford to pay for it. People with limited financial means are more likely to accept data collection — not because they want to, but because they have no realistic alternative. That is a structural problem. It is not, however, an automatic legal violation. The GDPR requires only that consent be freely given, specific, informed and unambiguous. If those conditions are met, and users genuinely have the option to withdraw consent, the model is not prohibited by definition.
Meta and the DMA: a fine, a tweak, and more questions
In April 2025, Meta was fined €200 million by the European Commission for breaching the DMA with its original consent-or-pay model. The model presented users with a stark choice between paying for an ad-free experience or accepting personalised advertising based on behavioural tracking. Following that fine, Meta introduced a revised model in November 2025, and then, somewhat quietly, announced a third model in January 2026, adding an option to see "less personalised" ads.
The sequence of events raises some questions. The second model was in place for barely two months before being replaced, and no formal assessment of its legality has been published. Whether the Commission has already evaluated the third model, or whether we are at the beginning of another lengthy investigation cycle, remains to be seen.
BEUC, the European consumer organisation, has already answered part of that question: the third model still does not comply. The "less personalised ads" option is not presented upfront alongside the other choices. Meta uses non-neutral language and interface design techniques that steer users towards the personalised ads option. And the wording of the prompts remains ambiguous enough to mislead. BEUC argues that violations of the DMA, the GDPR and the Unfair Commercial Practices Directive all persist.
What comes next — and when?
The honest answer is: unclear. The Commission said in December that it would monitor Meta's implementation of the third model to ensure it is effective and had previously warned that daily penalties could follow if the breach were not rectified. But no timeline has been confirmed, and no formal assessment of the third model has been published. The Commission and French authorities have also been investigating Meta's model through the EU Consumer Protection Cooperation Network since November 2023, with no conclusion yet.
The underlying question, whether any version of Meta's consent model produces genuinely free and informed consent, has not gone away. And for organisations watching from a distance, the Dutch DPA's position is a useful reference point: these models are not automatically unlawful, but they require careful design, genuine transparency, and a real choice. Anything less invites the kind of scrutiny Meta has been unable to escape.
Can a company satisfy a GDPR access request simply by pointing a user to their online account? Not always. The Austrian DPA has just drawn that line in a case involving a console gaming platform.
A user submitted an Article 15 access request asking for his full transaction history, information about gains and losses, and details about which group company had contracted with him. The controller responded by directing him to his user account, where transaction data was already visible, and declined to provide further information, suggesting the request was really about gathering evidence for a civil claim rather than exercising a genuine data protection right.
The DPA rejected that reasoning. A data subject does not need to justify an access request, and the fact that the information might later be used in litigation does not, by itself, make the request abusive. On the portal question, the DPA took a nuanced position. Providing access through a secure user account can be a perfectly lawful way to comply with Article 15, provided the information available there is complete, accessible and understandable. Where it is not, the controller must go further. In this case, the user account did not cover purposes of processing, categories of data, recipients, retention periods, data sources, or third-country transfer safeguards. The complaint was partly upheld and the controller ordered to provide a complete response within four weeks.
The practical takeaway is simple: a self-service portal is a legitimate compliance tool, but only if it actually answers the full scope of what Article 15 requires.
You can read more about the matter here.
Not every data breach gives rise to a damages claim under Article 82 GDPR. A German court has just reminded us of that, and added a twist that is worth paying attention to.
A software provider suffered a breach in 2020 in which email addresses, names and other user data were accessed by unknown third parties and later published online. The affected user sought damages of at least €3,000, citing emotional distress and loss of control over their personal data. The court dismissed the claim.
The reasoning on damages is straightforward enough: a GDPR infringement alone is not sufficient. The data subject must demonstrate actual damage and a causal link between the infringement and that damage. Discomfort and distress cannot be purely hypothetical. What makes this case particularly interesting, however, is the court's finding on loss of control. The same email address had already been exposed in a separate, earlier, and entirely unrelated data breach. The court held that the data subject had therefore already lost control over that data before this incident even occurred. This meant the breach in question could not be said to have caused the harm complained of.
It is a legally coherent conclusion, though one that raises questions. Does a prior breach by a different controller effectively insulate subsequent controllers from liability for their own security failures? That seems a generous result for negligent data handlers.
You can read more about the matter here.
Restructuring a business is one thing. Using customer data to decide who gets moved where, without telling them clearly, without asking them, and without a valid legal basis, is quite another. Italy's Garante has just fined Intesa Sanpaolo nearly €18 million for doing exactly that.
What happened?
Italy's largest bank transferred the accounts of 275,000 customers to its wholly-owned subsidiary, with plans to follow up with a further 2.1 million. The customers selected for transfer were identified as "predominantly digital", meaning the bank had used data points such as age and familiarity with online channels to profile them before deciding who would be moved. Five complaints triggered an investigation. The Garante found violations at almost every stage of the process.
Where did it go wrong?
The bank argued that the account transfer itself could rest on legitimate interest, and the Garante largely accepted that. Transferring accounts within a corporate group is not, in principle, an activity that requires explicit consent. But the profiling used to select which customers would be transferred was a separate processing activity entirely, with its own legal basis requirement. And that is where things unravelled.
The Garante found that identifying "predominantly digital customers" through automated systems fell squarely within the scope of Article 22 GDPR. The selection was based on behavioural and demographic characteristics and produced a decision with legal effects on those customers. The bank's attempt to rely on legitimate interest for that profiling failed because it had not carried out a proper balancing test. Without evidence or assessment of whether the processing fell within their reasonable expectations, it had simply asserted that the processing had no negative effects on data subjects. The Garante concluded that only consent could serve as a legal basis for that processing. Consent had not been obtained. The violation of Article 6(1) GDPR followed.
On transparency, the picture was no better. The privacy notice had been made available through the customers' online account or app, but without any alert or notification drawing their attention to it. Several complainants said they had simply never seen it. The notice itself referenced profiling only in the context of direct marketing, not account transfers. And the bank had implemented what the Garante described as "silent consent", setting a deadline by which customers had to object, with silence treated as acceptance. That is not how consent works under the GDPR.
What does this mean in practice?
For any organisation considering a corporate restructuring that involves moving customer data between entities, this case sets out the analysis clearly: the transfer and the selection process are separate activities, each requiring its own legal basis. Profiling customers to determine who is affected, even where the underlying transfer might be justified, triggers Article 22 if automated systems are involved and the outcome has legal or similarly significant effects. And if legitimate interest is to be relied upon, a genuine balancing test is required, not a bare assertion that no harm will result.
The transparency failures here are, if anything, even more instructive ahead of the EDPB's 2026 coordinated enforcement action on information obligations. Burying a privacy notice in an app without alerting users to it, drafting it in terms that do not accurately describe the processing, and treating silence as consent: each of those failures is exactly the kind of issue regulators will be looking for this year.
You can read more about the matter here.
An individual subscribes to a company's newsletter. Thirteen days later, a formal access request arrives. Shortly after that, a compensation claim follows. Repeat across dozens of companies. The CJEU has now considered whether that constitutes a legitimate exercise of data subject rights or something else entirely.
The case involved a German optician, Brillen Rottler, and an Austrian individual who had subscribed to its newsletter before promptly submitting an Article 15 access request. When the optician refused the request, pointing to publicly available reports suggesting the individual had done exactly the same thing to numerous other companies, the individual claimed at least €1,000 in compensation for the refusal itself. The local court in Arnsberg referred the questions to Luxembourg.
The CJEU's answer is carefully framed but practically significant. Even if it formally meets all the conditions the GDPR sets out, a first access request can, in certain circumstances, already be regarded as excessive and therefore abusive. The key question is whether the request was made to understand and verify the lawfulness of processing, or whether it was made with the sole purpose of artificially manufacturing the conditions for a compensation claim. The fact that publicly available information shows a pattern of access requests followed by compensation claims across multiple controllers is a factor that can be taken into account when assessing that intention.
On compensation, the Court reaffirmed that damage must actually have been suffered. It cannot be hypothetical. And crucially, a data subject cannot recover compensation where their own conduct is the determining cause of the damage. If you engineered the situation that produced the harm, you cannot then claim for it.
I find this a welcome clarification, though I wonder how easy it will be to apply in practice. Demonstrating abusive intent requires evidence, and most controllers will not have access to the kind of cross-company pattern data that Brillen Rottler was able to point to. For the vast majority of access requests, this changes nothing. The right of access remains fundamental, and controllers cannot refuse requests simply because they suspect an ulterior motive. But for those facing what looks like a coordinated compensation strategy, the CJEU has confirmed that the GDPR's abuse mechanism is available, and that a first request is not automatically immune from scrutiny.
You can read more about the matter here.