Newsletter

Privacy corner

by Eva Jarbekk

Published:


February is a short month, but much has happened since the last newsletter. I have seen increasing discussion about the future of the Data Privacy Framework (DPF) for transfers to the USA. While there are no definitive answers, the best advice remains to ensure exit strategies for the systems in use and to prepare for increased use of standard contractual clauses and related third-country assessments.

Pseudonymization plays a crucial role in this context. In addition to the draft guidelines on pseudonymization, many are awaiting the ruling in the so-called SRB case. I have serious doubts that this case, regardless of the outcome, will change our perspective on personal data, as the main argument seems to be that the necessary assessments of the possibility of re-identification must be made. For those who were around before the GDPR, the arguments in the so-called Breyer case are still relevant. More on this below.

Furthermore, it is pleasing to see that the EU is refraining from regulating certain areas. Both the AI Liability Directive and the ePrivacy Regulation have been withdrawn. Does this mean there are no rules and it's a free-for-all? Absolutely not! At the same time, the Commission is working on the new Digital Fairness Act. That will be really interesting, as it targets some of the most important consumer rights.

One of the most important pending cases is the so-called SRB matter

On February 6, 2025, Advocate General Spielmann released his opinion in case C-423/23 P, EDPS v SRB, between the European Data Protection Supervisor (EDPS) and the Single Resolution Board (SRB). This case revolves around whether pseudonymized personal data shared by the SRB with its consultant, Deloitte, should be considered personal data for Deloitte.

The Advocate General concluded that the coded opinions expressed by individuals and shared by the SRB with Deloitte are indeed personal data relating to those individuals. However, the Advocate General also stated that the EDPS should not have automatically concluded that the pseudonymized data shared by the SRB with Deloitte is personal data for Deloitte. Instead, the EDPS should have verified whether Deloitte had the means to identify the individuals concerned. 

In my opinion, the view of the Advocate General is quite in line with the original decision. And as far as I can see, the Advocate General is not taking a stance on whether the data actually was anonymous in the hands of Deloitte.

The Advocate General emphasized that data can only escape classification as personal data if the risk of identification is non-existent or insignificant. Additionally, the SRB should have informed individuals about the disclosure of their pseudonymized data to Deloitte, even if that data may be anonymous for Deloitte. 
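
To make this more concrete, here is a minimal sketch of keyed pseudonymization in Python, under my own assumptions (the key, names, and data are invented for illustration and are not taken from the case). The party holding the secret key can re-identify individuals, while a recipient who receives only the coded data cannot do so directly – which is exactly why the identification risk must be assessed separately for each recipient.

  # Illustrative sketch only - not the scheme used in the SRB case.
  import hmac, hashlib

  SECRET_KEY = b"held-only-by-the-disclosing-party"  # hypothetical key, never shared

  def pseudonymise(identifier: str) -> str:
      # Replace a direct identifier with a keyed code that cannot be reversed
      # without access to the secret key.
      return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

  # Coded opinions as a recipient would see them: no direct identifiers.
  comments = {"alice@example.com": "I disagree with the valuation."}
  shared_with_recipient = {pseudonymise(k): v for k, v in comments.items()}
  print(shared_with_recipient)

Only the discloser, who keeps the key and the original mapping, can link the codes back to named individuals; whether the recipient has other realistic means of identification is the factual question the Advocate General says must be examined.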

The Court of Justice of the European Union (CJEU) will now render its judgment in light of the Advocate General's opinion. Historically, the Court has tended to follow the Advocate General's opinion in the majority of cases, but it is not bound by it. Going forward, I believe the really interesting debate will be the threshold for what counts as a non-existent or insignificant probability of identification. I doubt this case will resolve that question. That, in turn, may lead us back to the old Breyer case, where even a fairly theoretical possibility of identification (via a third party) was considered sufficient. If so, the SRB case will not be as liberal an interpretation of what constitutes personal data as some like to present it. Anyway – this is to be continued!

An important case for companies using automated decisions

In the case of CK v Dun & Bradstreet Austria (C-203/22), the Court of Justice of the European Union (CJEU) addressed the issue of what information a data controller must provide to a data subject under Article 15(1)(h) of the General Data Protection Regulation (GDPR). This article grants data subjects the right to obtain meaningful information about the logic involved in automated decision-making, including profiling.

The case arose when an Austrian mobile phone operator refused to conclude a €10 per month phone contract with the applicant, citing insufficient financial creditworthiness. The applicant requested information from the credit assessment provider to understand her credit rating. The information provided was minimal and inconsistent with the refusal of the phone contract. Dun & Bradstreet Austria refused to provide further information, leading the applicant to initiate the case.

The CJEU ruled that Dun & Bradstreet Austria violated Article 15(1)(h) GDPR by not disclosing sufficient information about the logic involved in the automated decision-making process. The Court emphasized that data subjects have the right to receive detailed information about the criteria and methods used in automated decision-making, including the factors taken into account and their relative importance. This ensures transparency and allows data subjects to understand and challenge decisions affecting them.

The Court also addressed the issue of trade secrets in this context. Dun & Bradstreet Austria argued that providing detailed information about the automated decision-making process would reveal trade secrets. The CJEU acknowledged the importance of protecting trade secrets but stated that this does not automatically exempt the controller from providing the required information. Instead, the controller must provide the allegedly protected information to the competent supervisory authority or court, which will then balance the rights and interests at issue to determine the extent of the data subject’s right of access to that information.

The Court did not agree with Dun & Bradstreet Austria's argument that trade secrets should completely shield them from disclosing the logic involved in automated decision-making. The Court emphasized that the protection of trade secrets must be balanced against the data subject's right to access information. The Court concluded that the mere invocation of trade secrets cannot justify a blanket refusal to provide the required information.

As a result, Dun & Bradstreet Austria must provide comprehensive information about the automated decision-making process, including the logic, criteria, and methods used to assess the applicant's creditworthiness, while trade secrets can be protected through the involvement of supervisory authorities or courts.

Implications for Companies

This judgment has significant implications for companies that rely on automated decision-making processes. Companies must be prepared to provide detailed information about the logic and criteria used in these processes to data subjects, even if this involves disclosing sensitive information. To protect their trade secrets, companies should be prepared to submit the allegedly protected information to supervisory authorities or courts, which will balance the competing rights and interests. Additionally, companies should consider implementing measures to anonymize or pseudonymize data to minimize the risk of revealing trade secrets while still complying with transparency requirements.
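
To give a feel for what "criteria and their relative importance" might look like when communicated to a data subject, here is a small, purely hypothetical Python sketch of a controller generating a plain-language explanation from a weighted scoring model. The factor names, weights, and threshold are invented; this is not how Dun & Bradstreet's system works, only an illustration of the kind of transparency the Court describes.

  # Hypothetical scoring factors and their relative importance (weights sum to 1.0).
  FACTOR_WEIGHTS = {
      "payment_history": 0.40,
      "existing_debt": 0.30,
      "income_stability": 0.20,
      "address_history": 0.10,
  }

  def explain_decision(applicant_scores: dict, threshold: float = 0.6) -> str:
      """Return a plain-language summary of the factors behind an automated decision."""
      total = sum(FACTOR_WEIGHTS[f] * applicant_scores.get(f, 0.0) for f in FACTOR_WEIGHTS)
      lines = [f"Overall score {total:.2f} (approval threshold {threshold}):"]
      for factor, weight in sorted(FACTOR_WEIGHTS.items(), key=lambda kv: -kv[1]):
          lines.append(f"- {factor}: weight {weight:.0%}, your value {applicant_scores.get(factor, 0.0):.2f}")
      lines.append("Decision: " + ("approved" if total >= threshold else "refused"))
      return "\n".join(lines)

  print(explain_decision({"payment_history": 0.5, "existing_debt": 0.4,
                          "income_stability": 0.9, "address_history": 1.0}))

The point is not the scoring method itself, but that the factors taken into account and their relative importance can be expressed in a form the data subject can actually understand and challenge.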

Overall, this case underscores the importance of balancing transparency and the protection of trade secrets, and companies must navigate this balance carefully to comply with GDPR while safeguarding their proprietary information.

New complaint from NOYB in Sweden

Swedbank has come under scrutiny for refusing to be transparent as to the logic behind its automatic interest rate calculation. The bank claimed that the logic used in these calculations constitutes a trade secret, which they argued should exempt them from disclosing this information to data subjects under the General Data Protection Regulation (GDPR). 

Swedbank's refusal to offer transparency has significant implications for customers. Without access to detailed information about the logic behind the automatic interest calculations, customers are left in the dark about how their interest rates are determined. Noyb underlines that customers are unable to challenge or appeal decisions effectively, as they do not have the necessary information to understand the basis of the calculations.

Swedbank's argument hinges on the protection of trade secrets. The bank contends that revealing the detailed logic and criteria used in their automated interest calculations would expose proprietary information that could be exploited by competitors. Trade secrets are considered valuable intellectual property. However, the GDPR mandates that data subjects have the right to obtain meaningful information about the logic involved in automated decision-making processes, including profiling.

The European Data Protection Board (EDPB) and other regulatory bodies have emphasized that the protection of trade secrets does not automatically exempt companies from their transparency obligations under the GDPR. Instead, a balance must be struck between protecting trade secrets and ensuring data subjects' rights to transparency. This means that companies may be required to disclose sufficient information to allow data subjects to understand and challenge automated decisions, while still protecting the core elements of their trade secrets.

In conclusion, while protecting trade secrets is important, companies must carefully navigate their transparency obligations under the GDPR to avoid regulatory and legal challenges. Striking the right balance between transparency and trade secret protection is crucial for maintaining compliance and building trust with customers. This particular case will likely have a long life in the judicial system before a final decision is reached.

The Withdrawal of the AI Liability Directive

The European Commission recently decided to withdraw the AI Liability Directive from consideration due to a lack of agreement and pressure from the technology industry for simpler regulations. Initially conceived in 2022, the directive aimed to establish uniform rules for non-contractual civil liability for damage caused by AI systems. 

The decision to abandon the proposal has been met with mixed responses. Some experts, including German Member of the European Parliament Axel Voss, argued that the directive was necessary to address harms caused by AI systems. The Commission explained that it would assess whether another proposal should be tabled or whether a different approach should be chosen. This withdrawal reflects a potential shift in the EU's approach to digital regulation, aiming to reduce administrative burdens and foster more opportunities, innovation, and growth for businesses and citizens.

The withdrawal of the AI Liability Directive has several implications for companies. Without a unified approach, businesses will now have to navigate the varying legal frameworks of 27 individual EU Member States regarding AI liability. This could lead to increased complexity with different national regulations and standards.

The Withdrawal of the ePrivacy Regulation

Additionally, on February 11, 2025, the European Commission decided to withdraw the proposal for the ePrivacy Regulation. The proposal, initially put forward to complement the General Data Protection Regulation (GDPR) by regulating the protection of communications data, faced two main challenges that led to its withdrawal.

Firstly, there was a lack of agreement between the Council of Member States and the European Parliament, making it difficult to reach a consensus. Secondly, the proposal was considered outdated in terms of both the technological and legislative landscape, and there were many issues of overlap with the GDPR. After eight years of negotiations, it was deemed no longer relevant.

Without the ePrivacy Regulation, businesses will continue to operate under the old ePrivacy Directive and a considerable body of relevant case law.

But they are preparing the Digital Fairness Act

The EU Digital Fairness Act (DFA) is a planned legislative initiative by the European Commission aimed at addressing various consumer protection issues in the digital space. Public consultations on the planned act will take place in the coming months. These consultations will seek input on key consumer protection issues, exploring legal and technical uncertainties surrounding "fairness by design" and addressing regulatory gaps left by existing legislation. The act is expected to be proposed in 2026.

The Digital Fairness Act will tackle issues such as dark patterns, personalization, contracts, and influencer marketing. The European Commission has stated that a 12-week public consultation will start in spring 2025.

For more information, see Review of EU consumer law - European Commission

New guidelines on the definition of an AI System under the AI Act

The Commission Guidelines on the definition of an AI system were published on 6 February 2025. They provide a framework for identifying which AI systems fall within the scope of the regulation. According to the guidelines, an AI system is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment, and which infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The guidelines are intended to evolve over time and will be updated as necessary, particularly in light of practical experiences, new questions, and use cases that arise.

For companies, this definition means that any software or system that meets these criteria will be subject to the regulatory requirements of the AI Act. This includes obligations related to risk management, data quality, transparency, human oversight, and accuracy. Companies will need to ensure that their AI systems comply with these requirements to avoid penalties and ensure the safe and ethical use of AI technologies.

An example of an AI system within the scope of the AI Act is a predictive maintenance system used in industrial machinery. This system uses machine learning algorithms to analyze data from sensors and predict when a machine is likely to fail. Since it operates autonomously, adapts based on new data, and influences physical environments by triggering maintenance actions, it falls within the definition of an AI system under the AI Act.

Conversely, a simple rule-based system that automates basic tasks without any adaptiveness or autonomy, such as a basic email filter that sorts emails based on predefined rules, would not fall within the scope of the AI Act. This is because it does not exhibit the level of autonomy or adaptiveness required by the definition.
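
As a rough illustration of where that line might be drawn – my own simplified sketch, not an example taken from the guidelines – compare a fixed, rule-based filter with a system that learns from examples. Only the latter "infers from the input it receives" and can adapt after deployment in the sense of the definition.

  # A purely rule-based filter: fixed, human-authored rules, no inference or adaptiveness.
  def rule_based_filter(subject: str) -> str:
      return "spam" if "free money" in subject.lower() else "inbox"

  # A toy system that builds a model from examples and can be retrained on new data.
  from collections import Counter

  class NaiveSpamClassifier:
      def __init__(self):
          self.word_counts = {"spam": Counter(), "inbox": Counter()}

      def train(self, subject: str, label: str) -> None:
          self.word_counts[label].update(subject.lower().split())

      def predict(self, subject: str) -> str:
          scores = {label: sum(counts[w] for w in subject.lower().split())
                    for label, counts in self.word_counts.items()}
          return max(scores, key=scores.get)

  clf = NaiveSpamClassifier()
  clf.train("free money now", "spam")
  clf.train("meeting agenda attached", "inbox")
  print(rule_based_filter("Free money inside"), clf.predict("free money offer"))

Whether a real system crosses the threshold will of course depend on the full definition and the guidelines, not on a toy example like this.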

These definitions will be increasingly important going forward. Also note that the Swedish data protection authority (IMY), in a recent guideline, suggests that it may not be necessary to conduct DPIAs for simple and well-tested AI systems.

Sometimes governments are effective

The Norwegian government has proposed several key changes to the regulation of December 18, 2024 on data centers. Regulations are not often changed only weeks after being implemented, so this is unusual. The backdrop for the changes is the need to meet increasing demands for security, privacy, and sustainability. Frequent data attacks and an increased focus on the environment and privacy make it necessary to update the rules to ensure that data centers operate in a safe and sustainable manner.

The proposal is yet another 50 pages to read and understand. The main change is that a data center is obliged to know who its customers are and where in the data center each customer's equipment is located.

This information shall, when needed, be shared with Nasjonal kommunikasjonsmyndighet (Nkom), NSM, Politiets sikkerhetstjeneste (PST), as well as the police and prosecution authorities.

New Norwegian law for privacy in sports

The Norwegian government has proposed a new law on the processing of personal data in the Norwegian Olympic and Paralympic Committee and Confederation of Sports (NIF) and their organizational units to prevent, detect, and respond to sexual abuse, harassment, and violence. The main points of this proposal include:

  1. Permission to process special categories of personal data: The law allows the processing of sensitive personal data and information about criminal convictions and offenses when necessary to prevent, detect, or respond to sexual abuse, harassment, or violence in sports.
     
  2. Sharing of personal data within sports organizations: Provisions are proposed for sharing personal data between different organizational units in sports to ensure a comprehensive approach to preventing and handling such cases.
     
  3. Information security: To ensure information security, provisions on access control and confidentiality are proposed.
     

A specific law was necessary to ensure that the Norwegian Olympic and Paralympic Committee and Confederation of Sports (NIF) and their organizational units have clear guidelines for processing personal data in connection with the prevention, detection, and response to sexual abuse, harassment, and violence. This is important to protect those involved and ensure that sensitive information is handled safely and legally. 

Digital Services Act is coming to Norway as well

The government has initiated work on a new law aimed at making the internet safer and strengthening consumer rights. The law will implement the EU's Digital Services Act (DSA) and will be particularly important for providing safer online environments for children. The main rules are the following:

  • There will be a ban on behavioral, or targeted, advertising towards minors. Particular emphasis is placed on protecting children and young people from harmful content and preventing them from developing addictions. 
     
  • It will be prohibited to display behavioral advertising based on someone's sensitive personal data, such as information about sexual orientation, ethnicity, and religion. 
     
  • It will become easier to remove illegal content, products, and services online.
     
  • The DSA includes rules that advertisements must be recognizable, and it must be disclosed why a particular advertisement is being shown. 
     
  • The DSA establishes a ban on so-called manipulative design, which aims to prevent platforms from "tricking" users into consenting to something they do not actually want through the design of websites.
     
  • The largest platforms are required to conduct risk assessments for the spread of illegal content and for fundamental rights such as privacy and freedom of expression, including freedom of information and press freedom. Platforms must also conduct risk assessments related to election manipulation, the spread of disinformation, and the physical and mental health of users.
     

The Minister of Digitalization and Public Administration, Karianne Tung, has stated that stricter requirements will be imposed on major technology giants such as Google, Meta, Temu, and Amazon. This work is unlikely to be particularly important for many Norwegian businesses but is aimed at providing greater protection for consumers online.

With the same goal, the government will present a parliamentary report on safe digital upbringing later this spring.

The Norwegian Communications Authority (Nkom) will have the main responsibility for overseeing the compliance with the EU's new rules on digital services in Norway. Nkom will also take on the role of national DSA coordinator and be responsible for administrative tasks that ensure information flow, enforcement, and uniform application of the regulation. The Norwegian Media Authority, the Consumer Authority, and the Data Protection Authority will be designated as competent authorities in their respective areas.

Age verification – important for all services needing to know the age of customers

The European Data Protection Board (EDPB) recently adopted a statement on age assurance during its February 2025 plenary meeting. This statement aims to create a uniform approach to age verification across the EU, ensuring the protection of children's rights in the digital world. The statement is based on the principles outlined in the General Data Protection Regulation (GDPR) and provides guidance on how to handle personal data for age verification purposes.

To legally verify the age of users, companies must follow the guidelines set forth by the EDPB in this statement. The EDPB emphasizes that age verification methods must comply with data protection principles and ensure the protection of children's rights in the digital environment.

Key elements in Age Verification are:

  1. Lawfulness, Fairness, and Transparency: Companies must ensure that the age verification process is lawful, fair, and transparent to the users. This includes providing clear information about why age verification is necessary and how the data will be used.
     
  2. Purpose Limitation: Data collected for age verification should only be used for that specific purpose and not for any other unrelated purposes.
     
  3. Data Minimization: Companies should collect only the minimum amount of data necessary to verify the user's age. This helps to reduce the risk of data breaches and misuse.
     
  4. Accuracy: The data used for age verification must be accurate and kept up to date to ensure that the verification process is reliable.
     
  5. Storage Limitation: Personal data collected for age verification should be stored only for as long as necessary to achieve the purpose of verification.
     
  6. Integrity and Confidentiality: Companies must implement appropriate security measures to protect the data collected during the age verification process from unauthorized access and breaches.
     
  7. Accountability: Companies must be able to demonstrate compliance with these principles and ensure that their age verification processes are in line with data protection regulations.
     

Examples of Age Verification Methods:

  • Document Verification: Users may be required to upload a government-issued ID or other official documents to verify their age. The company must ensure that the data is securely stored and used only for age verification purposes.
     
  • Third-Party Verification Services: Companies can use third-party age verification services that comply with data protection regulations to verify users' ages without directly handling sensitive personal data.
     
  • Parental Consent: For younger users, companies may require parental consent as part of the age verification process. This involves verifying the identity of the parent or guardian and obtaining their consent for the child's use of the service.
     

Failure to comply with these age verification requirements can result in significant consequences. Companies must implement age assurance mechanisms that balance the need to verify age with data protection principles such as data minimization, transparency, and security.
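
As a small, hypothetical illustration of the data minimization principle from the list above, the sketch below keeps only the attribute the service actually needs (an "over 18" flag) and discards the birth date used for the check. All names and structures are invented; in practice, age assurance will often rely on a verified source or a third-party provider rather than a self-declared birth date.

  from datetime import date

  def is_over_18(birth_date: date, today=None) -> bool:
      # Compute age from the birth date; "today" can be fixed for testing.
      today = today or date.today()
      age = today.year - birth_date.year - ((today.month, today.day) < (birth_date.month, birth_date.day))
      return age >= 18

  def verify_and_store(user_id: str, birth_date: date, store: dict) -> None:
      """Store only the minimal result of the check; the birth date itself is not retained."""
      store[user_id] = {"over_18": is_over_18(birth_date)}

  records = {}
  verify_and_store("user-42", date(2005, 3, 1), records)
  print(records)  # e.g. {'user-42': {'over_18': True}}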

AI literacy demands in effect now

The EU AI Act, which came into force on August 1, 2024, includes specific requirements on AI literacy for providers and deployers of AI systems. These requirements became applicable on February 2, 2025.

Providers and deployers of AI systems are required to ensure that their staff and other individuals involved in the operation and use of AI systems have a sufficient level of AI literacy. This involves taking measures to educate and train their employees about AI systems, considering their technical knowledge, experience, education, and the context in which the AI systems will be used. The goal is to ensure that those handling AI systems understand the basic notions and skills related to AI, including the different types of products and uses, their risks, and benefits.

I see that much is written about this on the web and on LinkedIn. It is true that many companies should start preparing to train their employees. I do not believe this will be like the GDPR-2018 rally at all. However, if you do (or probably should) use AI, it is time to think about training programs and educational initiatives for your workforce. This will be crucial for ensuring compliance with the AI Act and for the safe and ethical use of AI technologies – and remember that you must be able to demonstrate compliance with these requirements.

Important decision – mostly for the Data Protection Authorities

In 2018, three people from Belgium, Germany, and Austria filed complaints with their local Data Protection Authorities (DPAs), represented by noyb. The complaints were against Facebook Ireland Ltd and WhatsApp Ireland Ltd (now Meta) for allegedly processing data illegally, including sensitive data.

Since Meta is based in Ireland, the Irish DPA (DPC) was in charge of handling these complaints. After investigating, the DPC shared its draft decisions with other concerned DPAs. The DPC believed that the data subjects did not prove that Meta couldn't rely on Article 6(1)(b) GDPR. However, several DPAs disagreed, and the matter was referred to the European Data Protection Board (EDPB).

On December 5, 2022, the EDPB issued several binding decisions and disagreed with the DPC's view that Meta could rely on Article 6(1)(b) GDPR. The EDPB instructed the DPC to exclude the finding that user consent was not necessary for the data processing and to find certain GDPR infringements, especially of Article 6(1) GDPR. The EDPB also found the DPC's investigation too narrow and ordered the DPC to conduct a new investigation to determine whether Meta's data processing involves special categories of data and whether such data is used for targeted advertising or marketing.

The DPC disputed the EDPB's authority to impose these measures, arguing that the EDPB had exceeded its powers. Most importantly, the EU General Court decided that the EDPB can require the lead Data Protection Authority to carry out further investigation, and it stated that the scope of an investigation is not a procedural matter but relates to the substance of the case, as it determines what must be examined to assess GDPR compliance.

The court dismissed the DPC's argument that this interpretation allows the EDPB to completely direct the lead DPA whenever a relevant and reasoned objection is raised. The court clarified that such objections can only relate to substantial GDPR compliance, not the conduct of the investigation itself.

It is also interesting to note that the court rejected the DPC's argument that recognizing the EDPB's power to direct a lead DPA is inconsistent with the purpose of the 'one-stop-shop' mechanism. The court held that disagreements between DPAs should be resolved within the EDPB, as the GDPR establishes the framework for cooperation between DPAs for this purpose.

A second look at a decision from 2024

I commented on this decision last fall, but I recently read an analysis that merits revisiting the case. The Irish Data Protection Commission (DPC) last fall fined Meta Platforms Ireland Limited (MPIL) €91 million for violating several GDPR provisions. The DPC found that MPIL's inadvertent storage of user passwords in plaintext on its internal systems was a "personal data breach" under Article 4(12) GDPR. This storage was contrary to MPIL's internal security policies, which require passwords to be encrypted, and the plaintext passwords could have been accessed by MPIL employees. This means that a breach of security, combined with potential access to personal data by unauthorized employees, may constitute a "personal data breach" under the GDPR. Not surprisingly, MPIL is appealing the decision, so it remains to be seen what the court concludes.

MPIL was also fined for failing to notify the personal data breach within the 72-hour timeframe (because MPIL did not regard the incident as a security breach) and for failing to document the incident.

This decision shows the DPC adopting a broad interpretation of a "personal data breach" when they conclude that potential access by unauthorized staff and/or a loss of control of personal data, constitutes a "personal data breach."

The decision serves as a warning that a personal data breach may meet the threshold for notifying the authorities, even if the personal data is only accessible internally to trusted but unauthorized employees bound by confidentiality agreements. The case is a fair reminder that businesses must thoroughly examine internal security incidents to determine if a "personal data breach" has occurred and whether it is reportable to the DPA and/or data subjects.

You can read the full analysis here.

Excessive subject requests?

In the last newsletter, C-416/23 was mentioned, stating that intentional misuse of access rights may be used as an argument for not giving the data subject such rights. This case has now been used as reasoning in another case. 

On October 14, 2022, a person claimed that their right to access under Article 15 GDPR was violated by the controller. However, the Austrian Data Protection Authority (DSB) rejected the complaint on October 18, 2022, explaining that the person had filed their first complaint on September 3, 2018, and since then, they had filed 73 other complaints. The DSB argued that while Article 57(4) GDPR does not define "excessive," a high number of complaints can be considered excessive. 

The person appealed the DSB's decision to the Austrian Federal Administrative Court (BVwG), arguing that the DSB had not provided a legal justification for rejecting the complaint. The BVwG paused its assessment until the conclusion of the above-mentioned case C-416/23. Based on C-416/23, the BVwG then found that the DSB had not investigated whether the person had abusive intentions and concluded that there were no indications of such intentions. Therefore, the BVwG ruled that the rejection of the complaint was unlawful and sent the case back to the DSB for a proper assessment.

So – if you contemplate rejecting a data subject request – make sure you do the proper assessments. 

Do you have any questions?