
Privacy Corner

by Eva Jarbekk, Trygve Karlstad and Maja Helene Christiansen Steien


Happy New Year and welcome to the year's first newsletter!

New year, new rules – or rather: new clarifications of "old" rules. Because the end of 2025 has already given us a good portion of privacy and technology news to digest.

In this newsletter, you'll find everything from online marketplaces that suddenly become controllers for others' content, to payment services that know a bit too much about what you're buying. We'll take a look at what happens to data transfers to the US when Trump declares documents invalid via Truth Social, and why Sweden is now tightening procurement rules to keep antagonistic states at bay. You'll also find out why the EU Commission has faced fierce criticism for its "omnibus" proposal, what happens when users actually get a real choice about tracking (spoiler: they'd rather avoid it), and why even American Express can fail at cookie implementation.

And of course: Elon Musk, X and the DSA fine that has become a full-blown geopolitical conflict. Plus a handful of rulings showing that privacy rules are actually being enforced – even when it comes to lawyers, data breaches and facial recognition in exams.

There's a lot to take in, but I promise it's worth it. So grab yourself a cup of coffee and settle in. Happy reading!

When online marketplaces become controllers - a new ruling from the EU Court of Justice

Are you running an online marketplace or working with a client who does? Then you should take note of a recent ruling from the EU Court of Justice that could change the rules of the game for how platforms handle user-generated content.

Background

The case started with something quite unpleasant: An anonymous user posted a fake advertisement on the Romanian marketplace publi24.ro, owned by Russmedia Digital. On the website, users can post advertisements, either free of charge or for a fee. The advertisement falsely presented a woman as a sex worker, with her real photographs and phone number, without her consent.

Russmedia removed the advertisement relatively quickly, within approximately an hour of being notified. However, the damage was already done. The advertisement was copied and spread to other websites. The woman sued Russmedia for breach of the GDPR, and the case ended up in the EU Court of Justice.

The Court's conclusion:

The Court reached a decision that could have major consequences for platform operators. Here are the main points:

1. Russmedia was a joint controller:

Even though Russmedia didn't publish the advertisement themselves and removed it quickly, they were deemed to be a joint controller together with the user. The Court's reasoning for this was that the platform's design made publication possible, and Russmedia had their own purposes for the service. The Court considered that Russmedia had a decisive influence on how the information was made available online. The platform determined the framework for publication, including who could see the advertisements, how they were presented, how long they remained online and how they were categorised. By allowing anonymous advertisements, Russmedia made it possible to publish personal data without consent.

In addition, Russmedia had extensive usage rights to the content in their terms and conditions, which the Court also emphasised in its assessment. According to the terms, Russmedia had the right to distribute, modify and remove content from the platform.

2. The E-Commerce Directive does not exempt platforms from GDPR liability

Furthermore, the Court concluded that when user-generated content contains personal data, the GDPR still applies in full. In this way, one cannot invoke any of the liability exemptions in the E-Commerce Directive to escape GDPR obligations.

3. Marketplaces must implement security measures

Platforms that are controllers must ensure that personal data is processed lawfully before publication. For special category data, identity verification and consent are required. This can make anonymous use impossible in practice.

In addition, platforms must implement technical measures that make it difficult to copy and spread unlawful content. The Court clarified that operators are not strictly liable for all dissemination; however, they must be able to document that their measures are sufficient.

What does this mean in practice?

A natural interpretation of the ruling is that platform operators face stricter obligations when handling user-generated content. For many, this will entail prior review of content – which challenges the principle of no general monitoring obligation. Notice-and-take-down is not enough; proactive monitoring becomes necessary to reduce the risk of GDPR breaches.

Many have pointed out an interesting dilemma in the wake of the ruling. The EU Court of Justice establishes that platforms must review content for special categories of personal data before it is published. But how can one do that without going through all advertisements? It sounds like general monitoring – something the E-Commerce Directive and the Digital Services Act expressly prohibit.

The Court claims this is not a general monitoring obligation, but a specific obligation to prevent publication of special category data without consent. Nevertheless, it's difficult to see how this can be implemented in practice without checking all content.

It is also unclear how extensive the checks must be. Are simple text filters enough, or must platforms use advanced technology and AI to analyse both text and images? How thorough must identity verification be? And what happens if something still slips through, even though the platform has done its best?

We will likely see more cases going forward. In the meantime: Conduct a specific assessment of how this affects your platform – and seek advice if you're in doubt. Some recommendations from me:

  • Review terms and conditions – particularly clauses about usage rights.
  • Assess platform design – functionality can affect liability.
  • Implement systems to identify special categories of personal data (see the sketch after this list).
  • Establish identity verification where necessary.
  • Introduce technical measures against copying and dissemination.
  • Consider the need for a joint controller agreement – it doesn't have to be very complicated.
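
As a purely illustrative sketch of the third recommendation above, the snippet below flags advertisements that may contain special categories of personal data before publication. The `Advertisement` type, the keyword list and the `screenAdvertisement` function are all assumptions made for the example – a real system would need far richer detection and, in practice, human review.

```typescript
// Minimal sketch: flag ads that may contain special categories of personal
// data (GDPR Article 9) before they go live. All names and the keyword list
// are illustrative assumptions, not a description of any real platform.

interface Advertisement {
  id: string;
  title: string;
  body: string;
  imageUrls: string[];
}

interface ScreeningResult {
  requiresManualReview: boolean;
  matchedIndicators: string[];
}

// Very rough textual indicators of special-category data (health, sex life,
// religion, etc.). A production system would use far more sophisticated checks.
const SPECIAL_CATEGORY_INDICATORS = [
  "escort", "sex worker", "diagnosis", "medication", "religion", "hiv",
];

function screenAdvertisement(ad: Advertisement): ScreeningResult {
  const text = `${ad.title} ${ad.body}`.toLowerCase();
  const matchedIndicators = SPECIAL_CATEGORY_INDICATORS.filter((keyword) =>
    text.includes(keyword),
  );
  return {
    // Images cannot be keyword-scanned, so ads with images are also held back.
    requiresManualReview: matchedIndicators.length > 0 || ad.imageUrls.length > 0,
    matchedIndicators,
  };
}

// Usage: hold flagged ads back until identity and consent have been verified.
const result = screenAdvertisement({
  id: "ad-123",
  title: "Apartment for rent",
  body: "Contact me for a viewing",
  imageUrls: [],
});
console.log(result.requiresManualReview); // false – nothing flagged
```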


You can find the ruling here.

What's happening with EU-US data transfers?

Donald Trump has, via Truth Social, declared all documents signed with Biden's so-called "autopen" invalid. It may sound like a bizarre social media storm, but for everyone transferring personal data to the US, this could be the start of complete chaos.

Most EU-US data transfers are based on the Data Privacy Framework (DPF). The problem? The entire system rests on fragile legal foundations. The DPF relies heavily on Biden's executive order EO 14086, and Trump has now expressed that all documents signed with "autopen" are invalid. It is currently unclear whether EO 14086 was actually signed with an autopen, or whether Trump's Truth Social post has any formal legal effect. However, Trump could also follow up with a formal executive order.

At the same time, the US Supreme Court will soon decide whether independent supervisory bodies like the Federal Trade Commission (FTC) are even in line with the US Constitution in the case Trump v. Slaughter. The case concerns President Trump's removal of FTC Commissioner Slaughter in March 2025, justified on the grounds that her continued service was "inconsistent with [the] Administration's priorities" – without meeting the statutory requirement of "inefficiency, neglect of duty, or malfeasance in office".

Trump thus argues that such independent supervisory bodies are unconstitutional, relying on the "unitary executive" theory – a disputed constitutional doctrine in the US which claims that the US President has absolute authority over the entire executive branch of government. Slaughter argues that Trump lacked authority to remove her from the FTC, and points to 90 years of precedent (Humphrey's Executor v. United States from 1935). The decision is expected by June-July 2026 at the latest. FTC independence is crucial for the DPF, since the EU Charter requires that compliance with data protection rules is overseen by an independent authority.

The so-called "Data Protection Review Court" – which is meant to give Europeans remedies against US surveillance – has also been established solely through EO 14086, not by statute. If the US Supreme Court establishes that independent bodies are unconstitutional, the court's independence may also fall.

Read more about the case here:

Trump's statement on Truth Social
EU-US Data Transfers: Time to prepare for more trouble to come

Sweden tightens procurement laws and introduces a new provision on breach of confidentiality

The Swedish government has received a report on how countries outside the EU can participate in public procurement. Today, Swedish procurement laws make no explicit distinction between suppliers from the EU and third countries – everyone can participate and is entitled to equal treatment. However, the EU Court of Justice has, in two rulings, established that suppliers from countries without a free trade agreement with the EU do not have the same rights.

The report therefore proposes that procurement laws be amended so that they no longer apply to suppliers from states lacking a free trade agreement with the EU. This gives public authorities the option to choose whether these suppliers should be allowed to participate in procurements, and increases opportunities to take into account suppliers' nationality and exclude those from, for example, antagonistic states. Civil Minister Erik Slottner says the proposal is intended to reduce the risk of important sectors such as IT, infrastructure and energy being infiltrated by hostile states. The report's proposals will now be processed further in the Government Offices.

At the same time, Sweden is working on increased information exchange between authorities in order to prevent, avert or detect crime, to investigate offences, and to prevent, detect or investigate incorrect payments, fraud and various other types of violations. This is done by introducing a new provision stating that such information can be shared without it constituting a breach of confidentiality. The new provision entered into force on 1 December 2025.

Read more about the case here, here and here.

Critical views on the "omnibus" proposal

As most are aware, the EU is considering major changes to key digital regulations, including the GDPR and the AI Act, through the so-called omnibus proposals. Among the most controversial proposals are the simplification of consent for cookies, as well as the possibility for AI developers to process personal data based on legitimate interests without explicit consent. It is also proposed that pseudonymised data in certain cases be exempted from the scope of the GDPR.

These changes have faced fierce criticism from various privacy advocates. Among them, over a hundred organisations, associations and defenders of the public interest have joined together to urge the EU Commission to prevent what they call "the greatest rollback of fundamental digital rights in EU history".

It's not surprising that GDPR amendments provoke strong reactions. The regulations affect both fundamental rights and major economic interests, so disagreement is natural. It's also too early to say how this will end. The proposals must go through consideration in the European Parliament and the Council, and an intense lobbying campaign is expected. The final result may be quite different from the current draft.

Justice Commissioner Michael McGrath dismisses the criticism and believes the proposals are "appropriate" and will "make a positive difference". He emphasises that the Commission has been careful not to go too far, and acknowledges that further significant changes could jeopardise the high standard of privacy protection for which the EU is known. When the EU's Justice Commissioner says they have stretched as far as possible without weakening privacy, it's a signal that should be taken seriously.

Read more here.

Users don't want to be tracked if they can avoid it

A new survey commissioned by the privacy organisation Noyb shows something that is perhaps unsurprising: Given a real choice, people would rather not be tracked online.

The background for the survey is the so-called "Pay or Okay" model that has become popular among European internet companies. Meta, media companies and others offer users two alternatives: Pay a fee, or accept tracking. Almost everyone, more precisely 99.9%, ends up accepting tracking. But is this because people actually want to be tracked? No.

The survey shows that when people are asked if they want to share data with companies, only 20% say yes. When the same people encounter the "Pay or Okay" choice, however, 90% accept tracking. The difference? In the second case, they lack an alternative. When a third option is introduced – advertising without tracking – 70% choose it.

Unsurprisingly, many media houses and platforms disagree with the EDPB that websites must offer three choices instead of two. Meta has attempted to sue the EDPB over this, while media actors claim they need the current model to survive. I have previously written – and still believe – that it's quite remarkable that the EDPB can require businesses to offer free services without it being subject to ordinary democratic scrutiny.

The conclusion from the survey is nevertheless clear: Users accept advertising as a financing model; they just don't want to be tracked around the internet. This applies regardless of platform, whether it's social media or news media.

Read more about the case:

‘Pay or Okay’ study: Users prefer a tracking-free “third option”
The EDPB on "pay or okay": Users cannot be pressured into consenting | Datatilsynet

EU maps its own AI systems

The European Data Protection Supervisor (EDPS) established a new AI unit in October 2024 and has since been preparing for the AI Act's entry into force. Early in 2025, the EDPS launched a voluntary mapping of high-risk AI systems among EU institutions to help them prepare for the new rules. A full 87% of institutions participated, and around 186 AI systems were reported.

Many institutions reported all their AI systems, not just those that could potentially be high-risk. Here are some interesting findings:

  • As much as 45% of institutions use only ready-made AI systems, while only 16% develop their own
  • Over half of the systems (53%) use generative AI
  • Many systems in the pipeline – around 43% of systems are either under development or in pilot phase
  • Most institutions didn't know where their AI systems were hosted (!)


The institutions themselves identified which systems could fall under the Annex III categories for high-risk, and the main findings are set out below:

  • The systems covered areas such as recruitment, personnel management and access to self-employment, as well as migration, asylum and border control
  • Generative AI is not the dominant technique among high-risk systems; other machine learning dominates
  • A full 41% of high-risk systems are already operational, while 43% are under development


The most important thing to remember is that "high-risk" doesn't mean the systems can't be used – it just means that higher safety standards and accountability are required to ensure benefits while minimising potential harm.

You can find the report here.

The consequences of poor cookie practices

The French subsidiary of American Express has received a fine of 1.5 million euros from the French data protection authority CNIL for unlawful use of cookies. The case shows that even though there are proposals to amend cookie rules, the "old" rules still apply as of today.

In January 2023, CNIL conducted several inspections of American Express's French website and offices. The result was disappointing, and three matters were particularly criticised.

Firstly, cookies were installed as soon as the user visited the website – before the user had the opportunity to accept or reject them. The rules are clear: consent must be given before cookies are placed. Secondly, advertising cookies were placed even though the user had said no. And finally, cookies continued to collect data even after the user had withdrawn their consent. This undermines the entire point of the right to withdraw consent.

Why was the fine so high? CNIL emphasised that cookie rules are well known and have been communicated for a long time. This is not new or unclear regulation. At the same time, they took into account that American Express corrected the errors along the way, which likely reduced the fine.

Cookie rules are being enforced and breaches have consequences. It's not enough to have a nice banner; the implementation must also work. Test your systems and remember that ignorance is not an excuse.
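
As a purely illustrative sketch – not a description of American Express's actual setup – the snippet below shows the pattern the CNIL decision points towards: non-essential cookies are only set after consent has been recorded, and are removed again when consent is withdrawn. The function names and the consent store are assumptions for the example.

```typescript
// Illustrative browser-side sketch: only set non-essential cookies after
// consent, and stop/remove them when consent is withdrawn. Names are hypothetical.

type ConsentCategory = "analytics" | "advertising";

function hasConsent(category: ConsentCategory): boolean {
  // Consent state persisted by the consent banner (assumed storage format).
  const stored = localStorage.getItem("cookie-consent");
  if (!stored) return false;
  const consent: Record<string, boolean> = JSON.parse(stored);
  return consent[category] === true;
}

function setTrackingCookie(name: string, value: string, category: ConsentCategory): void {
  if (!hasConsent(category)) return; // never before, or against, the user's choice
  document.cookie = `${name}=${encodeURIComponent(value)}; path=/; max-age=2592000`;
}

function withdrawConsent(category: ConsentCategory, cookieNames: string[]): void {
  const stored = localStorage.getItem("cookie-consent");
  const consent: Record<string, boolean> = stored ? JSON.parse(stored) : {};
  consent[category] = false;
  localStorage.setItem("cookie-consent", JSON.stringify(consent));
  // Expire cookies already placed for this category, so collection actually stops.
  for (const name of cookieNames) {
    document.cookie = `${name}=; path=/; max-age=0`;
  }
}
```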

Read more here.

Elon Musk, the USA and the DSA

Sometimes enforcement and compliance with regulations can have geopolitical ripple effects. This is the case for X (formerly Twitter) and the EU Commission. The case has evolved from a compliance question into a full-blown political conflict between the EU and the USA. Here's what has happened.

In December, the EU Commission gave X a fine of 120 million euros for breach of the Digital Services Act (DSA). This was the first fine under the DSA, which in itself is a historic milestone. The Commission held that X had violated three key transparency requirements, namely misleading use of the blue verification checkmark, an inadequate advertising register and obstacles to researchers' access to public data. In short: X has made it difficult for users to know who is behind accounts, and for researchers to uncover fraud and information operations.

The fine is significant and interesting in itself. Yet it seems that for Elon Musk it is more symbolic than economic. As expected, Musk reacted strongly and called the fine an attack on freedom of expression. American politicians followed up with the same rhetoric, and Secretary of State Marco Rubio claimed that "the days of censoring Americans online are over". The fine itself, however, is not about censorship, but about lack of transparency and platform design.

Then the real turning point came: The US responded with sanctions, more specifically entry bans to the US, against five European citizens, among them former EU Commissioner Thierry Breton – the architect behind DSA. American authorities claim the EU discriminates against American companies and suppresses freedom of expression, and threaten further countermeasures such as tariffs and restrictions. This has made the case high politics. EU parliamentarians call the sanctions "intimidation" and ask the EU to resist the pressure and reduce dependence on American technology.

The conflict shows how geopolitical the regulation of technology has become. It's no longer merely a matter of law and technology, but of power and values. The EU defends its rules as necessary to protect users, while the US sees them as protectionism. The Trump administration has even signalled that it will support nationalist forces in Europe to weaken the EU.

The question now is what happens next. We are now within the 60-day period X has to rectify the use of blue checkmarks, and the 90-day period to deliver a plan to fix the advertising register and researcher access. If X doesn't follow up, new sanctions may come. At the same time, the political conflict will likely escalate.

This case shows that DSA actually has "teeth", and that the EU is willing to enforce the regulations, even against significant actors. It also shows how heated the balance between freedom of expression and platform responsibility has become. And most seriously: The US is now using sanctions against European officials, which could have long-lasting consequences for relations between the EU and the US.

I'm afraid this is only the beginning. Technology regulation has become a central part of the political climate. For those of us working with technology regulation, it means we must keep up – not just with legal changes, but also with the political game.

There are daily comments in the media about this. Some posts can be found here, here and here.

When the lawyer requests access

An Austrian lawyer asked an online platform for user data after a client's name and address appeared in an advertisement the client had never posted.

It turned out that another person had used the client's address in their profile, and the lawyer presented the information as evidence in a trial.

The person who had created the profile claimed the lawyer had violated privacy rules and complained to the data protection authority in Austria. But the Austrian data protection authority disagreed: The lawyer was a controller, and the processing was lawful based on legitimate interests under GDPR Article 6(1)(f). The purpose was to uncover unauthorised use of the client's identity, and there was no less intrusive way to achieve this.

The balancing of interests fell in the lawyer's favour, and the complaint was dismissed. So sometimes privacy wins – even when it's about others' privacy.

Read more about the case here.

When can you get compensation for data breaches?

What does it actually take to get compensation after a data breach? A recent ruling from the Regional Court in Vienna provides some answers, and perhaps some surprises.

The case started with a data breach in 2020, where a hacker posted large amounts of data from the Austrian population register for sale. The data subject, whom we can call X, first learned in 2023 that his personal data was among the leaked data. He requested access under GDPR Article 15, but the company acting as controller took almost three months to respond, which is far beyond the deadline.

X took legal action and claimed 200 euros in material damage for legal fees and 200 euros in non-material damage for worry and annoyance. The first instance court dismissed both claims. Legal fees were considered a cost one must cover oneself, and no real emotional harm had been proven.

The court of appeal partly disagreed. It held that legal fees could in certain cases be considered "rescue expenses" and thus material damage, and sent the case back to the first instance court for reassessment. Regarding non-material damage, the court upheld the dismissal. A breach of the GDPR does not automatically give the right to compensation – specific harm and causation must be proven. General annoyance or worry is not enough.

For companies, the ruling may involve some relief, namely that not all GDPR breaches automatically create liability for compensation. For data subjects, it means one must document the harm well – for example with medical certificates or other specific documentation. The ruling also shows that legal fees can be compensable in special cases, which may have significance going forward.

Read more here.

CNIL is active on cookies

France's data protection authority (CNIL) has cracked down on the website VanityFair.fr, which ignored users' cookie choices. On 20 November 2025, the French company Les Publications Conde Nast – which publishes Vanity Fair magazine – was fined €750,000.

Three things went wrong. Firstly: the moment you opened vanityfair.fr, cookies were already tracking you – before you even saw the banner where you were supposed to make a choice. Secondly, some cookies were marked as "strictly necessary" – that is, the kind that doesn't require consent. However, these had nothing to do with technical necessity; they were meant to track users and show advertising. Finally: when people clicked "Reject all", nothing happened. New cookies were still planted, and those already there continued to function. "Reject all" was quite simply a button with no function.

This case is yet another clear example that data protection authorities actually enforce cookie rules.
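
One way to act on both this case and the American Express case above is to test the consent flow automatically. The sketch below is a hypothetical example: it assumes Playwright is available, and the URL, the reject-button selector and the allow-list of essential cookies are all invented for illustration. It simply checks that no non-essential cookies exist before any choice is made, and that none remain after "Reject all" is clicked.

```typescript
// Sketch of an automated cookie-consent check using Playwright.
// The URL, selector and allow-list below are assumptions for the example.
import { chromium } from "playwright";

const ESSENTIAL_COOKIES = new Set(["session_id"]); // assumed allow-list

async function auditCookieConsent(url: string): Promise<void> {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();

  await page.goto(url);
  const beforeChoice = await context.cookies();
  const unexpected = beforeChoice.filter((c) => !ESSENTIAL_COOKIES.has(c.name));
  if (unexpected.length > 0) {
    console.warn("Cookies set before any consent choice:", unexpected.map((c) => c.name));
  }

  // Hypothetical selector for the banner's reject button.
  await page.click("button#reject-all");
  const afterReject = await context.cookies();
  const stillThere = afterReject.filter((c) => !ESSENTIAL_COOKIES.has(c.name));
  if (stillThere.length > 0) {
    console.warn("Non-essential cookies present after 'Reject all':", stillThere.map((c) => c.name));
  }

  await browser.close();
}

auditCookieConsent("https://example.com").catch(console.error);
```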

Can your payment service know too much? New German ruling on special categories of personal data

Have you ever thought about what your payment service actually knows about you? Not just how much you spend, but what you buy? A recent ruling from a German court shows this can be more problematic than many think.

The case concerns Paydirekt GmbH, a payment service that stored detailed information about what their customers bought online. We're not just talking about amounts and dates – but the actual goods. Eye drops from an online pharmacy. Skincare products. And yes, products from an online shop for sex toys.

A user, represented by noyb, reacted and complained to the data protection authority in Hessen. The argument was simple: This is sensitive information revealing health and sex life, and the controller has no lawful basis to store it.

Initially, the Data Protection Authority in Hessen did not agree with this. They believed Paydirekt could process the information based on a legitimate interest, namely to prevent people from cancelling payments mid-transaction, and to prevent fraud. Moreover, the authority held that information about purchased goods was not sensitive information at all. The user appealed the case to the court and asked for the authority's decision to be overturned.

Then something interesting happened. The EU Court of Justice issued a ruling in another case (C-21/23, the Lindenapotheke case), and the data protection authority then changed its mind. They admitted that some of this information actually could be sensitive. But they gave no proper explanation of what this meant in practice.

The Court's decision

In the meantime, Paydirekt went bankrupt and shut down operations in 2025. The company claimed to have deleted the user's data. Although the case technically became moot because the company no longer existed, the court took the trouble to assess the outcome – among other things to determine who should cover legal costs. The Court's conclusion was clear, namely that the complaint would have prevailed.

The Wiesbaden court identified several weaknesses in the Data Protection Authority in Hessen's decision of 22 July 2022. According to the ruling, the authority had not conducted a sufficient assessment of the proportionality of using legitimate interest as legal basis for processing detailed information about the user's shopping basket.

The court held that it's not sufficient to justify storage of purchase information by wanting to minimise abandoned payments. This is not covered by the provision on legitimate interests in GDPR Article 6(1)(f). The court expressed "serious doubt" whether displaying complete product names during payment confirmation could be justified by business interests in reducing abandoned transactions. According to the ruling, reduction of abandoned payments does not automatically outweigh individuals' fundamental right to self-determination in the balancing of interests required under Article 6(1)(f) GDPR.

The data protection authority had acknowledged that displaying shopping baskets for payment confirmation does not require long-term storage, and therefore required the payment service to reduce the storage period to 48 hours after a system update in spring 2023.

The court also doubted whether the processing could be justified by fraud prevention, since there are less intrusive ways to achieve the same.

According to the court documents, the payment service had only given "general statements" about the need for fraud prevention without demonstrating that less intrusive alternatives would be insufficient. The court noted that fraud detection could potentially be achieved through access to the number of products or item numbers instead of descriptive product names, although the data protection authority had not required the payment service to explore such alternatives.

Reflections

In the wake of the ruling, one can ask the question: What could the payment service actually store?

Based on the ruling and the fundamental privacy principles, a payment service should only store what is strictly necessary to carry out the payment transaction. This would presumably be:

  • Amount
  • Date and time
  • Place of purchase/recipient (the online shop's name)
  • Transaction ID
  • Payment status

What Paydirekt did wrong was to store detailed information about the specific goods that were purchased – namely eye drops, skincare products, sex toys etc. This goes beyond what is necessary to complete a payment. One can, however, ask how far this extends. Could one imagine that it would have been acceptable if the number of items purchased had been stated, but not which specific items? For some businesses, one can imagine that this too would be considered special categories of personal data, if the business primarily offers goods that may reveal, for example, information about sex life or sexual orientation.
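
As a hedged illustration of the data-minimisation point above, the sketch below contrasts a full shopping-basket record with a reduced record that keeps only the fields listed earlier, plus an optional item count. The type and function names are assumptions for the example, not a description of how any real payment service works.

```typescript
// Illustration of data minimisation for a payment service. All type and
// function names are hypothetical.

interface BasketLine {
  productName: string;   // e.g. "eye drops" – potentially special category data
  quantity: number;
  unitPrice: number;
}

interface FullTransaction {
  transactionId: string;
  merchantName: string;
  timestamp: string;
  amount: number;
  status: "authorised" | "captured" | "cancelled";
  basket: BasketLine[];  // the problematic part in the Paydirekt case
}

interface MinimalTransactionRecord {
  transactionId: string;
  merchantName: string;
  timestamp: string;
  amount: number;
  status: "authorised" | "captured" | "cancelled";
  itemCount?: number;    // even this may be sensitive for some merchant types
}

function toMinimalRecord(tx: FullTransaction): MinimalTransactionRecord {
  return {
    transactionId: tx.transactionId,
    merchantName: tx.merchantName,
    timestamp: tx.timestamp,
    amount: tx.amount,
    status: tx.status,
    itemCount: tx.basket.reduce((sum, line) => sum + line.quantity, 0),
  };
}
```

Whether even the merchant name and item count are unproblematic will, as discussed above, depend on the kind of shop involved.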

Another central question is what a relevant legal basis could be if shopping basket information is special category data under Article 9. This would likely require explicit consent under Article 9(2)(a). The problem is that explicit consent in this context would be difficult to obtain in a lawful manner, since it must be voluntary – and if the payment service requires consent to process special category data that is not necessary for the service, it will hardly be voluntary according to the standard in GDPR Article 7.

The ruling may have broader implications for payment services and e-commerce platforms. Payment service providers typically process transaction information to facilitate purchases, prevent fraud and generate business analytics. The Wiesbaden court's finding that such processing may involve special categories of personal data under Article 9 GDPR may create challenges for many payment services.

Payment services that gain access to shopping basket content must identify applicable Article 9 exceptions or implement alternative approaches that avoid processing special categories of data. Technical solutions based on product identifiers or purchase amounts without access to product names could provide less privacy-intrusive alternatives. As mentioned, however, this too can present problems.

Payment platforms have a rather special position in e-commerce. Unlike merchants who directly sell products to customers, payment service providers typically process transaction information as processors rather than controllers. However, court documents from Wiesbaden suggested that Paydirekt GmbH maintained shopping basket information for its own business purposes beyond merely facilitating transactions, potentially establishing independent controllership.

It can be argued that these payment platforms have their own purposes for processing, including for fraud prevention purposes. Unusual purchasing patterns, mismatches in transaction value or suspicious product combinations may indicate fraudulent activity that generic transaction data might not reveal. The Wiesbaden court did not take a position on whether such fraud detection justifications meet the necessity requirements when processing involves special categories of data under Article 9.

In the wake of the case, industry analysts have pointed out that payment providers often do not differentiate between merchant categories when designing data processing systems. Solutions that process transactions for general retailers, pharmacies and sex shops alike often use similar routines for collection and storage, regardless of how sensitive the purchase is. The Wiesbaden ruling indicates that payment actors may need to implement category-specific controls to meet Article 9 requirements for transactions related to pharmacies and sex shops.

The data protection authority's requirement for a 48-hour storage period in spring 2023 was seen as an attempt to balance functionality with the principle of data minimisation. The court wrote, however, that even such short storage may contravene Article 9 if it encompasses special categories of data. This suggests that time limitation alone is not sufficient for GDPR compliance when the underlying legal basis is unclear.

It's important to emphasise that the decision of 28 November does not create binding precedent outside the Wiesbaden Administrative Court's jurisdiction. Nevertheless, it may have some significance for interpretation of the provision in other jurisdictions as well. At the same time, the ruling builds on the EU Court of Justice's principles from the Lindenapotheke case, which is binding for all member states. National courts must interpret the GDPR in line with EU law, and the Wiesbaden court's application of these principles to payment solutions provides a model that other courts may follow in similar cases.

Payment providers may therefore need to reassess their transaction practices and differentiate according to the kind of shop where the purchase was made.

You can find more about the case here.

Fine upheld by the Data Privacy Board

The Data Privacy Board has upheld the Data Protection Authority's decision on an administrative fine of 1.5 million NOK to the American company Argon Medical Devices. The decision has a clear message: Deadlines for notification of personal data breaches must be observed – regardless.

In July 2021, Argon discovered a serious security breach affecting employees' personal data. Not until September, two months later, did they notify the Data Protection Authority. The company claimed they needed full oversight before notification and had built their routines on this.

The Data Protection Authority disagreed. In their view, the 72-hour deadline runs from when you discover the breach, not once all investigations are complete. The Data Privacy Board is clear that companies cannot wait for detailed investigations before notifying. The Data Protection Authority therefore imposed a fine of 2.5 million NOK in March 2023.

Argon challenged the decision, and the case was referred to the Data Privacy Board. The Board agreed with the Data Protection Authority and, like the authority, emphasised the guidelines from the European Data Protection Board (EDPB).

Due to the long processing time before consideration by the Data Privacy Board, the fine was reduced from 2.5 million to 1.5 million NOK.

The Board also established indicative rates for when long processing time should result in fine reduction. Processing of 1 year gives 0-10% reduction, 1-2 years gives 10-20% reduction, while 3-5 years gives 30-50% reduction.

The case is discussed here.

Spanish university fined for facial recognition in exams

A Spanish university has received a fine of €650,000 for forcing students to use facial recognition during digital exams online. Universidad Internacional Valenciana (VIU) required all students to use a remote proctoring system based on facial recognition, without offering any alternative. "Remote proctoring" is a service that mimics the role of a physical exam invigilator by verifying the candidate's identity and ensuring exam integrity – for example through internet-based surveillance. The system conducted continuous biometric facial verification, monitored students through cameras and took screenshots of activity on students' computers. The university had used this technology since 2017 and claimed they had legal basis in consent, contract performance and legitimate interests. They also argued that cheating in online education had increased.

But the Spanish data protection authority AEPD established that facial recognition processes biometric data to uniquely identify persons, which makes it special categories of personal data. The students had not given genuine consent – they had no other exam options. The authority also believed that continuous biometric surveillance was not necessary to ensure exam integrity, and that less intrusive measures could have worked just as well with lower risk. The authority suggested as an example "remote-proctoring without biometric processing, or in-person options" – but it's unclear to me how that could actually happen. Perhaps the idea is that a person should sit and watch to make sure it's really the student writing – but I doubt that's much of an improvement.

The result was two fines: €300,000 for breach of Article 9 GDPR (special categories) and €350,000 for breach of the data minimisation principle. The conclusion is hardly very surprising. Even though cheating is a problem, one cannot just scan students' faces for hours without solid legal basis.

Read more about the case here.

LastPass fined – when the password manager itself needs protection

LastPass UK has received a fine of 1.2 million pounds from the British data protection authority ICO after a data breach that affected 1.6 million people. Ironically enough, this is about a password manager that failed to protect itself.

The password manager had not implemented sufficiently robust technical and security measures, which made it possible for a hacker to break into the company's backup database. In August 2022, a hacker managed to get hold of a business laptop belonging to one of LastPass's employees and then gain access to the company's development environment. The attacker failed to steal personal data, but got away with encrypted company information. LastPass thought the encryption keys were safe because they were stored in a part of the network the hacker didn't have access to.

But the attacker didn't stop there. The attacker then targeted a senior employee with access to the decryption keys, exploited a known vulnerability in a streaming service, installed a keylogger and bypassed two-factor authentication by using a trusted device cookie. With the employee's master password, the hacker gained access to both personal and business LastPass vaults, including AWS access keys and decryption keys, and could thus extract the contents of the backup database with personal data on 1.6 million people – names, emails, phone numbers and saved web addresses.

ICO's investigation found no evidence that encrypted passwords and other credentials could be decrypted by the hacker. This was due to LastPass's use of a 'zero-knowledge' encryption system, where the master password required to access the password vault is stored locally on the customer's own device and never shared with LastPass. As ICO Commissioner John Edwards put it: "LastPass customers had a right to expect the personal information they entrusted to the company would be kept safe and secure. However, the company fell short of this expectation, resulting in the proportionate fine being announced today."

The point is simple: Password managers are invaluable tools, but they're only as good as the security behind them.
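
To illustrate the "zero-knowledge" idea in general terms – this is a generic sketch, not LastPass's actual implementation – the snippet below derives an encryption key from the master password on the user's own device and encrypts the vault locally, so only ciphertext ever needs to reach the provider. The iteration count, key size and function names are assumptions.

```typescript
// Generic client-side ("zero-knowledge") vault encryption sketch using Node's
// built-in crypto module. All parameters are illustrative, not LastPass's.
import { pbkdf2Sync, createCipheriv, createDecipheriv, randomBytes } from "crypto";

const KDF_ITERATIONS = 600_000; // assumed iteration count
const KEY_LENGTH = 32;          // 256-bit key for AES-256-GCM

function deriveKey(masterPassword: string, salt: Buffer): Buffer {
  // The master password never leaves the device; only derived ciphertext does.
  return pbkdf2Sync(masterPassword, salt, KDF_ITERATIONS, KEY_LENGTH, "sha256");
}

function encryptVault(masterPassword: string, vaultJson: string) {
  const salt = randomBytes(16);
  const iv = randomBytes(12);
  const key = deriveKey(masterPassword, salt);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(vaultJson, "utf8"), cipher.final()]);
  const authTag = cipher.getAuthTag();
  // Only these values would ever be synced to the provider's servers.
  return { salt, iv, authTag, ciphertext };
}

function decryptVault(masterPassword: string, blob: ReturnType<typeof encryptVault>): string {
  const key = deriveKey(masterPassword, blob.salt);
  const decipher = createDecipheriv("aes-256-gcm", key, blob.iv);
  decipher.setAuthTag(blob.authTag);
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]).toString("utf8");
}
```

As the ICO findings above indicate, it was this kind of client-side design that kept customers' stored passwords out of the attacker's reach even though the backup database itself was taken.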

Read more about the case here.

Helpful guidance on the Data Act from the EU

As part of the EU's Data Union strategy, the EU Commission has launched a legal helpdesk for the Data Act. The goal is simple: to make it easier for businesses, public authorities and other organisations to understand what the rules mean in practice.

The helpdesk is particularly aimed at small and medium-sized enterprises, which rarely have large legal departments to decipher complicated legislation. Here, everyone can get guidance on the requirements, rights and obligations that apply to them under the Data Act.

Read more here.

EU launches first draft of guidelines for labelling and categorising AI-generated content

On 17 December, the EU Commission published the first draft of a code of conduct for labelling AI-generated content, with the aim of finalising the code by June 2026. Article 50 of the AI Act requires providers to label AI-generated or manipulated content in a machine-readable format, while users who deploy generative AI systems professionally must clearly label deepfakes and AI texts on matters of public interest.

The draft consists of two parts: The first covers rules for labelling and identifying AI content for providers of generative AI systems, while the second concerns labelling of deepfakes and certain AI-generated or manipulated text for users of such systems.

The Commission is requesting feedback on the draft by 23 January, and the second draft should be ready by mid-March 2026.

The code sets out a multi-layer solution for labelling. Until an EU-wide icon is developed, actors can use a temporary icon consisting of a two-letter abbreviation for artificial intelligence, which can also be letters from the translation into member states' languages (e.g. AI, KI, IA).
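
The draft does not prescribe exactly what the machine-readable marking must look like. Purely as a hypothetical illustration – every field name below is an assumption, not taken from the draft code – a label could accompany AI-generated content as structured metadata along these lines:

```typescript
// Hypothetical structure for a machine-readable AI-content label.
// Field names are illustrative only; the draft code does not prescribe them.
interface AiContentLabel {
  aiGenerated: boolean;
  labelAbbreviation: "AI" | "KI" | "IA"; // temporary two-letter icon per the draft
  provider: string;                      // provider of the generative AI system
  generatedAt: string;                   // ISO 8601 timestamp
  contentType: "image" | "audio" | "video" | "text";
}

const exampleLabel: AiContentLabel = {
  aiGenerated: true,
  labelAbbreviation: "AI",
  provider: "Example GenAI Provider",
  generatedAt: "2026-01-15T10:30:00Z",
  contentType: "image",
};

// In practice the label would be embedded in or attached to the content's
// metadata so that other systems can detect it automatically.
console.log(JSON.stringify(exampleLabel, null, 2));
```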

The rules for transparency about AI-generated content apply from 2 August 2026 – so now we need to stay alert!

Read more about the case and the first draft itself here.

Do you have any questions?