by Eva Jarbekk and Sofie Axelsson
In this edition, we highlight key takeaways from the EDPB’s draft research guidelines. They matter to more organisations than you might expect—especially where research has commercial goals. We also walk through a set of cases with clear spill-over value. Italy’s data protection authority is tightening the rules on sharing personal data between companies in M&A and similar transactions. Another case looks at whether portals can be used to handle access requests—or whether they can’t. Does LinkedIn track more about you than you realise? Will age verification on social media actually work? And does it matter which vendors your employer lets on-site and into workplace meetings? We wrap up with a quick note on the EDPB’s DPIA template—and whether it’s worth getting familiar with. Happy reading.
For years, researchers and legal teams across Europe have been working in something of a grey zone. The GDPR has always contained special provisions for scientific research, but what those provisions actually mean in practice has been left largely to interpretation, with national data protection authorities filling the gaps in quite different ways. A 2019 study commissioned by the EDPB confirmed the divergence. Interim guidance in 2021 acknowledged the problems but did not resolve them. So when the EDPB adopted Guidelines 1/2026 on 15 April, it was a genuinely significant moment.
The guidelines are now open for public consultation until 25 June 2026.
So, what counts as "scientific research"?
The obvious-sounding question has caused headaches in practice. The GDPR says the concept should be interpreted broadly, but how broadly? The EDPB's answer is a six-factor framework.
Tick all six boxes, and your activity is presumed to be scientific research. Miss a few, and you will need to argue your case. What is particularly useful, and commercially important, is the EDPB's confirmation that a profit motive does not disqualify an activity. A clinical trial run by a pharmaceutical company can qualify. An AI start-up conducting bias research in partnership with a university can qualify. Internal marketing analytics, on the other hand, does not, no matter what you call it internally. The key is how you conduct the research, not how you label it.
Consent: a long-overdue recalibration
One of the most discussed aspects of the guidelines is the treatment of lawful bases, and for good reason. The EDPB has long taken a sceptical view of consent as a lawful basis in clinical research, on the grounds that patients cannot freely consent when there is a power imbalance between them and their treating clinician. The guidelines now dial that back significantly. The position is that a patient's status as a healthcare recipient does not, in itself, prevent freely given consent; the concern arises only where a person's capacity to consent is genuinely and severely affected.
It is a welcome shift. The previous position always felt paternalistic, and it created practical difficulties for research sponsors trying to design legally sound consent frameworks. The guidelines also give a clearer green light to broad consent, where specific research purposes are not yet known at the time of data collection, provided that ethical standards are observed and additional safeguards are put in place. Dynamic consent, where individuals are asked to consent to specific projects as purposes crystallise, is also endorsed. A combination of the two is possible, and the two can even be presented together in a single information pack.
On legitimate interests, the guidelines bring good news for those cautious about relying on consent, not least because consent can be withdrawn at any time. The EDPB confirms that scientific research can constitute a legitimate interest under Article 6(1)(f) GDPR, and that it can carry significant weight in the balancing test, particularly where robust Article 89(1) safeguards are in place. The guidelines also clarify that the public interest pathway under Article 6(1)(e) is not limited to public bodies: private companies may also rely on it where EU or Member State law authorises their research activities. Where special category data is involved, a separate Article 9 condition must still be identified — legitimate interests alone will not suffice.
What else does this change in practice?
Quite a lot, actually. On further processing, the guidelines confirm that using personal data for scientific research is presumed compatible with the original purpose of collection, meaning controllers do not need to run the standard compatibility test under Article 6(4) GDPR. They must still check that the original legal basis remains suitable, but the presumption removes a significant compliance burden.
On data subjects' rights, the guidelines clarify that the rights to erasure and to object actually can be restricted in a research context, but only on a case-by-case basis. Blanket refusals are explicitly ruled out. Each request must be assessed individually, with documented reasoning. That requires proper processes to be in place, and organisations that have not yet built them should treat this as a prompt to do so.
Reflections
It is hard to overstate how much this guidance has been needed. The fragmentation across Member States has been a genuine obstacle to pan-European research collaboration, particularly in life sciences. Whether the final guidelines will resolve all the outstanding questions remains to be seen — the interaction with the upcoming European Health Data Space, the treatment of AI-enabled research, and the continued divergence in national Article 9 derogations are all areas to watch. And organisations with something to say should use the consultation period: it closes on 25 June, and the EDPB has explicitly invited feedback.
Practical recommendations right now:
You can read more about the matter here.
Data protection fines most commonly rest on the usual suspects: lack of legal basis, consent failures, security breaches. So it is worth noting when a DPA reaches for a different tool. The AEPD recently fined Spanish energy supplier Gaolania Servicios €30,000, citing Article 5(1)(d) GDPR, the data accuracy principle, as the primary basis for the sanction. The clear message was that controllers cannot rely blindly on information provided by third parties.
The facts were as follows. A third party provided Gaolania with an incorrect unique code assigned to an electricity supply point in Spain. Without verifying the code, Gaolania switched the electricity provider of the wrong person. The data subject had no relationship with Gaolania and found out only after the switch had already happened. Gaolania argued the error originated with the third party — the Spanish DPA was having none of it.
The relevant identifier is 20–22 characters long and inherently susceptible to error, exactly the kind of foreseeable risk that should have triggered a verification step.
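What might such a verification step look like? Below is a minimal sketch in TypeScript, assuming the identifier in question is the Spanish CUPS supply point code (a country prefix, sixteen digits and two check letters, with an optional border-point suffix). The checksum logic reflects the commonly documented CUPS algorithm and is illustrative only, not a definitive implementation.

```typescript
// Illustrative only: a pre-switch validation step for a third-party-supplied
// supply point identifier. The format and checksum are those commonly documented
// for the Spanish CUPS code; verify against the official specification before
// relying on them in production.
const CHECK_LETTERS = "TRWAGMYFPDXBNJZSQVHLCKE";

function isPlausibleSupplyPointCode(code: string): boolean {
  // Expected structure: "ES" + 16 digits + 2 check letters (+ optional border-point
  // suffix), i.e. 20 to 22 characters in total.
  const match = /^ES(\d{16})([A-Z]{2})(\d[FPRCXYZ])?$/.exec(code.trim().toUpperCase());
  if (!match) return false;

  const [, digits, expected] = match;

  // The two check letters are derived from the 16-digit body modulo 529, mapped
  // through a 23-letter table. BigInt avoids precision loss on 16-digit numbers.
  const remainder = Number(BigInt(digits) % 529n);
  const computed =
    CHECK_LETTERS[Math.floor(remainder / 23)] + CHECK_LETTERS[remainder % 23];

  return computed === expected;
}

// A provider switch should only proceed once the identifier passes a check like
// this and, ideally, a cross-check against the customer's own account details.
```

Nothing about such a check is onerous, which is rather the point: a foreseeable data quality risk was left entirely unmitigated.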
The fine reflected negligent rather than intentional conduct, but it is a reminder that Article 5 has teeth. If your organisation processes data based on identifiers or reference data supplied by third parties, ask yourself: when did you last check whether your verification procedures are actually fit for purpose?
You can read more about the matter here.
GDPR compliance is a standard item on the M&A due diligence list, and Italy's data protection authority has just provided a €1.25 million illustration of what happens when it is not taken seriously enough.
What happened?
When Alitalia sold assets to its successor ITA Airways in 2021, ITA needed staff quickly. Alitalia was Italy's historic national carrier, founded in 1946 and for decades the country's flagship airline. After years of financial difficulties and repeated government bailouts, it was placed into receivership and wound down. ITA Airways was set up by the Italian government as a leaner successor to take over parts of Alitalia's operations, though, crucially, not its workforce.
The practical solution was to hand over the personal data of Alitalia's entire aviation division workforce: names, contact details, salary information, marital status, professional qualifications, and employment history. No legal basis was identified. No privacy notices covered the transfer. A former employee and union representative eventually filed access requests with both companies. ITA confirmed it was not currently processing his data: technically true, but entirely silent on the past processing he had explicitly asked about. Alitalia did not respond at all until the regulator intervened.
Three legal bases, three rejections
In their defence, the controllers worked through the standard toolkit, and each argument failed.
Contract under Article 6(1)(b) was rejected because Alitalia had shared data on its entire workforce, not just those who had applied to ITA. Transferring more data than any legal basis can cover is a classic transaction risk.
Legal obligation under Article 6(1)(c) failed because the relevant legislative decree authorised the asset sale but said nothing about transferring personal data. A legal framework for the transaction does not automatically create a legal basis for the data flows that accompany it.
Legitimate interest under Article 6(1)(f) collapsed immediately: neither controller had carried out a legitimate interest assessment. A properly documented assessment might not have saved them, but not having one at all left them with nothing to argue.
The structural question beneath the surface
The data subject was not simply exercising his rights out of curiosity; he was looking for evidence. Former Alitalia employees have argued in court that the asset sale was in substance a disguised transfer of an entire business unit, which under Italian law would have obliged ITA to take over existing employment contracts. A bulk HR data transfer between the two entities was precisely the kind of indicator supporting that argument. The DPA noted that if the transaction had indeed been a business unit transfer, a legal obligation basis might have been available, but the controllers could not invoke that argument whilst simultaneously maintaining in litigation that it was an asset sale. Data protection analysis and commercial structuring had pulled in opposite directions.
Reflections
The legal basis for transferring personal data always needs to be identified and documented before the transfer takes place, not reconstructed after a regulator starts asking questions. For anyone working on deals involving employee data, the checklist is straightforward: What data is being transferred? Is the scope proportionate? Have privacy notices been updated? Have the relevant assessments actually been carried out and documented?
A €1.25 million fine for getting those questions wrong is a fairly compelling argument for asking them early.
You can read more about the matter here.
Handling subject access requests by redirecting data subjects to a self-service portal and charging them for the second attempt is not GDPR compliance. Finland's data protection authority has just made that point in a reprimand against credit information agency Dun & Bradstreet.
What happened?
When data subjects emailed Dun & Bradstreet requesting access to their personal data, the company replied with a standard email directing them to its OmaData portal. No confirmation of what measures would be taken. No acknowledgement of the request. Just a link. The DPA found that this left data subjects reasonably believing that the portal was their only option, effectively making it harder to exercise their rights in breach of Article 12(2) GDPR.
The company also charged €9.90 for any access request made more than once within a 12-month period, justifying this under national credit information legislation. The DPA rejected the argument: the GDPR right of access and the statutory right to a credit information extract serve different purposes, and a controller cannot sidestep GDPR's free access rules by relabelling an Article 15 request as a paid extract. Critically, the company applied the fee automatically, without any individual assessment of whether the repeat request was genuinely repetitive, manifestly unfounded, or excessive. In credit information specifically, data can change multiple times a year, meaning a second request within 12 months may be entirely justified.
Two reminders
Two practical reminders here. First, a self-service portal can be a legitimate way to fulfil access requests, but it cannot replace a proper response. Controllers must confirm what action they are taking, and data subjects must have a meaningful alternative if they choose not to use the portal. And of course, you must make sure the portal actually has all relevant information. Second, fees for excessive requests require individual assessment every time. An automatic charging policy, applied without looking at the specific request, will not survive regulatory scrutiny.
You can read more about the matter here.
According to research published by the trade association Fairlinked e.V., LinkedIn injects a JavaScript snippet into users' browsers that operates silently in the background. The script is allegedly capable of checking for over 6,000 known browser extensions, collecting dozens of device attributes (CPU cores, memory, screen resolution, time zone) and combining that information to generate a unique device fingerprint linked to the user's LinkedIn profile. None of this, it is claimed, is disclosed through consent pop-ups or explained in the privacy policy.
What makes it particularly pointed from a GDPR perspective is that some of the extensions the script scans for may reveal sensitive information — religious beliefs, political preferences, or health conditions. The script also reportedly detects job search tools and identifies software from competing providers such as Apollo, Lusha, and ZoomInfo. It does sound rather intrusive.
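For readers less familiar with the technique, the sketch below shows, in TypeScript, how generic browser fingerprinting works. It is an illustration of the technique in general, not LinkedIn's actual script, and the handful of attributes collected here is a small fraction of what real fingerprinting scripts gather.

```typescript
// Generic illustration of device fingerprinting, not LinkedIn's actual script.
// A few standard browser attributes are concatenated and hashed into a single
// identifier; real scripts combine far more signals (fonts, canvas, extensions...).
async function deviceFingerprint(): Promise<string> {
  const attributes = [
    navigator.userAgent,
    navigator.language,
    String(navigator.hardwareConcurrency ?? ""),              // CPU cores
    String((navigator as any).deviceMemory ?? ""),            // approximate RAM (Chromium only)
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // screen resolution
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // time zone
  ].join("|");

  // Hash the concatenated attributes into a stable, compact identifier.
  const bytes = new TextEncoder().encode(attributes);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Each attribute is unremarkable on its own; combined, they are often enough to single out an individual device, which is exactly why the GDPR's transparency obligations bite here.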
LinkedIn's defence is that the system is necessary to detect scraping and bot activity that violates its terms of service. That may well be true, but necessity is not the same as lawfulness under the GDPR, and a legitimate aim certainly does not remove the obligation to be transparent about how it is pursued.
This is not LinkedIn's first encounter with European regulators. In 2024, the Irish Data Protection Commission fined the platform €310 million for GDPR violations related to its use of behavioural data for targeted advertising. A pattern of findings does not look good when the next investigation comes around.
The processing of data that reveals special category information, even indirectly, through the scanning of browser extensions, requires either explicit consent or a clearly identified legal basis under the GDPR. Silent, background profiling of users' devices and software environment sits uncomfortably with that requirement, whatever the underlying justification. I wonder whether LinkedIn's privacy notices, if examined closely, would withstand that scrutiny. The Irish DPC may soon have occasion to find out.
You can read more about the matter here.
What if the most capable cybersecurity threat on the planet was not a state-sponsored hacking group, but an AI model sitting in a lab in San Francisco? That is not a hypothetical anymore. Anthropic recently announced that its newest model, Claude Mythos, is so capable at finding and exploiting software vulnerabilities that the company has decided not to release it publicly, at least not yet. The EU welcomed the decision, but the threat was never just about one model from one company.
What can Mythos do?
Quite a lot, and quite quickly. Independent evaluation by the UK's AI Security Institute found that Mythos solves 73 per cent of expert-level cyber tasks and completed a 32-step simulated network attack entirely on its own, work that would take a human expert around 20 hours. It has already identified thousands of previously unknown vulnerabilities across major operating systems and browsers, some of which had gone undetected for over 20 years. These are zero-day vulnerabilities: flaws unknown to the vendor, who has therefore had zero days to prepare a fix before they can be exploited.
Anthropic has shared the model with selected cybersecurity firms and major tech companies through its Project Glasswing initiative. The EU's AI Office welcomed the staged rollout, noting that Anthropic, as a general-purpose AI model provider under the EU AI Act, is obliged to ensure an adequate level of cybersecurity protection.
The problem with "just don't release it"
Here is the uncomfortable truth: withholding Mythos is sensible, but it is not a security mechanism. It buys time, not safety.
Researchers at OsloMet and NTNU have demonstrated exactly why. Using a swarm of small, freely available open-source language models, the kind that run on an ordinary laptop, they tested their framework against a sample application with nine planted vulnerabilities. The swarm found all nine. In under two minutes. At essentially zero cost.
The underlying capability that makes Mythos dangerous already exists in models anyone can download for free. Any motivated actor with basic programming skills can replicate this. I find that deeply sobering. The conversation about Mythos risks focusing on the wrong thing: the question is not whether Anthropic releases this particular model responsibly, but what happens when equivalent capabilities are available to anyone, without asking permission.
The regulatory picture
Anthropic operates under the EU AI Act and has signed up to the EU's code of practice, committing to address risks from enabling large-scale cyberattacks. That framework exists, but it applies to Anthropic's own conduct, not to the open-source ecosystem where similar capabilities are already proliferating.
What this means for your organisation
For organisations in healthcare, finance and public administration, the practical implication is clear: do not assume someone else is solving this. The short-term basics of cybersecurity (updated software, robust access controls, comprehensive logging) remain essential. But the longer-term challenge is building independent capacity to test AI systems before deployment, capacity that does not depend on asking the companies building the systems to evaluate them themselves. That is not a criticism of Anthropic specifically. It is a structural problem, and one that regulators and governments need to take seriously before the next Mythos moment arrives.
Giving a supplier access to internal systems is perfectly normal. Most large IT projects require it. But not every supplier carries the same baggage. The reaction from NHS staff to the news that Palantir engineers have been granted NHS email accounts tells you something important about why vendor identity matters in data-sensitive environments.
What has happened?
Engineers from Palantir have been given NHS.net email accounts as part of the Federated Data Platform rollout. Palantir won the £300 million contract in 2023 to help NHS England connect patient records held across different systems. Those email accounts come with access to a staff directory containing the contact details of up to 1.5 million NHS employees, as well as internal SharePoint and Microsoft Teams systems. Some NHS staff discovered they had been in virtual meetings with Palantir engineers who had joined under NHS email accounts, without knowing whom they were speaking to.
Palantir's response is straightforward: this is standard practice for government suppliers, and government guidance actually encourages the use of government systems over suppliers' own infrastructure for security reasons. NHS England has confirmed that all data access remains under NHS control and is governed by strict contractual confidentiality obligations. The access arrangement may well be entirely lawful and consistent with NHS mail policy.
Not just any contractor
This is where I think the nuance matters. Palantir's founders include the controversial Peter Thiel as well as Alex Karp, who has boasted that the company's surveillance technology helps clients "scare" and "kill" enemies. The company's software is already used by UK police forces and the Ministry of Defence, and critics have raised concerns about the interoperability of its systems enabling broader data sharing between health and immigration enforcement.
One resident doctor put it plainly: he did not want his personal number and email being accessible to someone working for Palantir on the NHS today who might be working on systems for drone strikes next month.
From a GDPR perspective, the key question is not whether access was technically permitted, but whether NHS staff were adequately informed that this contractor would have access to their personal data. Transparency is a core GDPR principle, and staff discovering Palantir engineers in their Teams meetings without prior disclosure suggests communication may have fallen short of that standard.
Beyond the contract
The access arrangement itself is not unusual, quite the opposite. In many cases giving suppliers access to internal systems is the right approach from a security standpoint.
But the values, activities and affiliations of a supplier matter too. Particularly when staff and service users have not been informed of that supplier's involvement in their data environment. That tension deserves an honest conversation, not just a technical defence.
You can read more about the matter here.
The European Commission has declared its age verification app technically ready and urged online platforms to start using it. The political ambition is clear. The reception, however, has been more complicated.
What the Commission is proposing
The app is designed to allow users to prove their age through government-approved systems (a passport, a national ID, or trusted providers such as banks or schools) without revealing any additional personal information to the platform checking it. Commission President Ursula von der Leyen has framed it in unambiguous terms: "Online platforms can easily rely on our age verification app so there are no more excuses."
The Commission expects Europe-wide versions to be available for download within weeks, developed by companies it will verify. EU countries are expected to launch their own national versions later this year. The app has already been tested in France, Denmark, Greece, Italy, Spain, Cyprus and Ireland, and the Commission's stated ambition is for it to become a global standard, similar to the Covid vaccination certificate.
The broader context is one of mounting pressure. In 2022, 96 percent of 15-year-olds in the EU used social media daily, with 37 percent spending more than three hours per day on platforms. A survey of 40,000 adolescents across four EU countries found that nearly half of 15-year-olds struggle with depression, with higher social media use linked to worse outcomes. At least ten EU member states have proposed or are close to proposing social media bans for minors, and Australia set a global benchmark in late 2025 by banning under-16s from major platforms outright.
The security problems
The Commission's announcement was followed almost immediately by uncomfortable news. Within hours of the app being declared technically ready, hackers had found holes in the software. The Commission responded by releasing an updated version, but critics were not convinced. The data minimisation principle under GDPR sits at the heart of the concern: existing age assurance methods, which often require scanning IDs or biometrics into third-party databases, have already proven vulnerable, with one breach of a Discord third-party service previously exposing government-issued photo IDs of more than 70,000 users. The Commission's app is designed to improve on that, but whether the updated version has adequately addressed the vulnerabilities identified remains contested.
The Commission has said it is open to feedback and that the version showcased was a demonstration build rather than the final product. But the episode has not helped confidence, among either security experts or member states.
Member state scepticism
The rollout has also run into political headwinds. A number of member states have indicated reluctance to adopt the EU app, with several preferring their own existing national solutions instead. Estonia was among the most direct, with its minister stating that the security and privacy concerns raised in April were "a red flag" that further reduced the already unlikely prospect of adoption. The broader tension is one the Commission has struggled with throughout: many countries have been building their own digital identity infrastructure for years and see the EU app as secondary to, or in competition with, those efforts. As Greece's digital minister put it, there will be "many wallets"; the question is whether the EU's version will be among the ones that actually get used.
Deeper concerns
Beyond the specific security flaws, some experts argue the problems are more fundamental. Belgian cryptographer Bart Preneel, one of over 400 scientists who have called for a halt to age verification measures, warns that the objections are "much more fundamental than a bug in an app." His concern is that a system designed to verify identity online could erode anonymity more broadly, potentially allowing governments to identify people who criticise them anonymously. He also raises the risk of digital exclusion. People without official documentation, such as refugees and migrants, may be locked out of online services entirely.
Others question whether age verification is simply too easy to bypass to be effective. VPNs, fraudulent apps and the migration of younger users to less regulated platforms are all cited as likely responses to stricter verification. Some critics go further, arguing that the focus on age verification misses the point entirely. As one senior policy researcher put it, the solution cannot only be age assurance; the more effective lever may be targeting the algorithms and addictive design features that cause harm in the first place, using tools already available under the DSA.
An open question
The goal here is legitimate and the urgency is real. But an app announced as technically ready, found to have critical flaws within hours, and met with scepticism by a significant number of the member states it is intended for, has some way to go before it can credibly be called a solution. The Commission's instinct to harmonise is the right one. A patchwork of national bans risks both legal fragmentation and collision with existing EU law. Whether this particular app is the vehicle for that harmonisation remains an open question.
Meta is no stranger to legal trouble in Europe. But a decision handed down by a Milan court this week adds a new chapter. This time, the door has been opened for what could become one of the largest privacy class actions in Italian history.
A Milan court has accepted a class action brought by the Italian consumer association CTCU against Meta, arising from a data scraping incident that affected approximately 533 million Facebook users globally. The incident occurred between January 2018 and September 2019 and was not disclosed by Meta until 2021. In Italy, around 35 million users may have been affected. The CTCU is now seeking compensation on behalf of those who lost, or feared losing, control over their personal data in breach of the GDPR.
Meta's response was immediate and predictable: the ruling is procedural only, no finding of any legal violation has been made, and the claim will ultimately be dismissed. Of course, Meta is correct that the ruling is on admissibility only.
However, the court's decision to admit the class action is significant. Italy is not alone in developing collective redress mechanisms for privacy violations, and cases like this signal that consumer organisations are increasingly willing to use them. The question of whether 35 million Italian Facebook users can recover compensation for a data breach that occurred eight years ago is one that will take some time to answer. For now, the answer to the threshold question ("Can they even try?") is yes.
This one is far from over; we will be following it closely.
You can read more about the matter here.
On 3 April, the temporary derogation from the EU's ePrivacy Directive, widely known as "Chat Control", expired. The European Parliament declined to extend it. The result of Parliament's inaction is a regulatory gap that puts tech companies, child protection advocates and privacy lawyers in an uncomfortable position simultaneously, with no resolution currently in sight.
Since 2021, the derogation had permitted companies to voluntarily scan their platforms for child sexual abuse material (CSAM), grooming and sextortion using automated detection technologies. It was a carve-out from the general prohibition on scanning private communications under the ePrivacy Directive, a narrow, time-limited permission that allowed what would otherwise have been unlawful. When Parliament chose not to vote on an extension, that permission lapsed.
The legal tension that remains
The expiry creates a legal dilemma. Under the ePrivacy Directive, scanning private communications is now, for practical purposes, prohibited. But under the Digital Services Act, platforms remain liable for illegal content hosted on their services. Companies that continue to scan risk breaching ePrivacy. Companies that stop scanning risk failing to detect, and therefore failing to remove, illegal content in breach of the DSA. Google, Meta, Snap and Microsoft have announced they will continue to scan voluntarily, but the legal basis for doing so is, at best, unclear.
The permanent legislative framework that was intended to resolve this tension, the proposed CSAM Regulation, which has been under negotiation since 2022, remains unfinished. Parliament has stated that work on it is ongoing but has offered no timeline.
Not a straightforward privacy question
Privacy advocates have long argued that automated scanning of private communications, particularly where end-to-end encrypted services are involved, amounts to a form of mass surveillance incompatible with fundamental rights. That concern is not without legal foundation: scanning encrypted communications necessarily requires either breaking the encryption or scanning content locally on the device before encryption occurs, both of which raise serious questions under EU data protection law and the Charter of Fundamental Rights.
At the same time, child protection organisations point to the concrete consequences of detection gaps. A similar legal gap in 2021 was followed by a 58% fall in CSAM reports from EU-based accounts over an 18-week period. The technology used for detection relies on hash values, unique digital fingerprints of known illegal images, rather than open-ended content analysis, which its proponents argue is meaningfully different from generalised surveillance.
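To illustrate the distinction the proponents are drawing, here is a minimal TypeScript sketch of hash-list matching. It is a simplification: real deployments use perceptual hashes (PhotoDNA-style fingerprints that survive minor edits) rather than the cryptographic hash used here, and the hash list itself comes from verified sources. Only the matching logic is shown.

```typescript
import { createHash } from "node:crypto";

// Hypothetical store of fingerprints of already-identified illegal images.
// Only the fingerprints are held, never the images themselves.
const knownHashes = new Set<string>([
  /* populated from a verified hash list */
]);

function fingerprint(imageBytes: Buffer): string {
  // A cryptographic hash stands in for the perceptual hash used in practice.
  return createHash("sha256").update(imageBytes).digest("hex");
}

function matchesKnownMaterial(imageBytes: Buffer): boolean {
  // No open-ended analysis of the content takes place: the only question asked
  // is whether this image's fingerprint appears on the list of known material.
  return knownHashes.has(fingerprint(imageBytes));
}
```

Whether that technical narrowness is enough to reconcile scanning with the ePrivacy Directive and the Charter is, of course, exactly the question the lapsed derogation leaves open.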
What this means in practice
For now, the law offers no clean answer. The legislative process must produce a permanent framework that reconciles the obligations under the DSA with the protections of the ePrivacy Directive — and does so in a way that is compatible with the right to privacy and the integrity of encrypted communications. That is a genuinely difficult legal and political challenge, and the current gap is a direct consequence of the fact that no such framework yet exists.
For organisations navigating compliance obligations in this space, the picture is uncertain. The public debate is likely to intensify in the coming months as the consequences of the lapse become more visible, and as pressure mounts on Parliament to move the permanent legislation forward.
You can read more about the matter here.
On 14 April, the EDPB adopted a harmonised template for Data Protection Impact Assessments. The aim is straightforward: help organisations structure and document their DPIAs in a consistent way across Europe. The template is accompanied by an explainer document designed to walk controllers through the key concepts and address common knowledge gaps — which, if you have ever stared at a blank DPIA form, is a welcome addition.
Use of the template is not mandatory, and controllers remain free to apply whatever methodology they prefer. But that framing may be somewhat misleading. Following a public consultation closing on 9 June, all national data protection authorities will take steps to adopt the template either as their sole standard or as a meta-template to which national templates will align. In practice, this template is likely to become the benchmark against which supervisory authorities measure the quality of a DPIA — mandatory or not.
It is worth reviewing the template and considering whether your current processes would hold up against it. The public consultation also offers an opportunity to influence the final product before it becomes the standard. That window closes on 9 June.
You can read more about the matter here.