Newsletter

Privacy Corner

by Eva Jarbekk and Trygve Karlstad


[Image: security cameras mounted on a wall]

August is here and most of us are back at work. As usual, there is a steady stream of news for those of us working with tech law to take into account.

In many ways the winds of change are blowing around the GDPR, and it's not as simple as it sounds. After years of treating the regulation as untouchable, there's growing pressure to ease the burden on Small and Medium Enterprises (SMEs), particularly around those dreaded Article 30 Records of Processing Activities requirements. We covered this in an earlier newsletter. But now 120 civil society organizations have pushed back hard, arguing that simplification could "roll back key accountability safeguards" and send a worrying message that "people's rights are expendable when economic interests are at stake". They're particularly concerned that companies might avoid keeping records even when handling sensitive data, simply based on staff headcount.

The debate highlights a fundamental tension we've seen before – balancing practical compliance burdens with robust privacy protection. While the proposed changes only affect record-keeping obligations (other GDPR requirements remain fully in place), critics worry this is just the beginning of broader deregulation efforts.

If the GDPR is weakened, the ripple effects could extend far beyond Europe's borders. A diluted regulation would make it easier for the European Commission to recognize more countries as "adequate" for data transfers and accept weaker cross-border transfer mechanisms. This could fundamentally alter the global privacy landscape that many countries have been working to align with. 

At the same time, we see increasingly vocal criticism from the USA of European digital safeguards. We are living in interesting times.

Below is a short description of some of the most important new matters. AI is of course on the radar, as are topics such as what constitutes sensitive data, what counts as an automated decision, how to control and audit your distributors, how to purchase data from brokers, and much more. Happy reading!

EDPB launches new initiative to simplify GDPR compliance

Further to the simplifications mentioned above, the European Data Protection Board, EDPB, has unveiled an ambitious new initiative designed to make GDPR compliance more accessible and practical for businesses across Europe. Following a high-level meeting in Helsinki, the EDPB adopted what it calls "The Helsinki Statement on enhanced clarity, support and engagement" - a comprehensive plan to address longstanding complaints about regulatory complexity and inconsistency.

The statement emphasizes a "fundamental rights approach to innovation and competitiveness," suggesting regulators are increasingly aware of the need to balance privacy protection with business realities. At the heart of the new approach is enhanced stakeholder engagement. The EDPB committed to "proactive and early engagement" with businesses and other stakeholders to identify specific areas requiring support and clarification. This marks a departure from the traditional regulatory approach of issuing guidance after problems emerge, instead promising ongoing dialogue to prevent compliance issues before they arise.

One of the most practical elements involves developing ready-to-use templates for organizations. Building on work already done at national levels, the EDPB plans to harmonize these resources across member states, potentially ending the current situation where businesses face different requirements and interpretations in different EU countries.

The initiative specifically targets micro, small and medium organizations, recognizing that these businesses often lack the resources for complex compliance programs. The EDPB promised "timely and concise guidance that is accessible, easy to understand, and practical" for smaller organizations, aligned with GDPR's risk-based approach.

The Board also announced plans for a common template for data breach notifications across Data Protection Authorities. This standardization could significantly ease the burden on organizations operating across multiple EU jurisdictions, who currently must navigate different notification requirements and formats in each member state.

To enhance consistency in GDPR application and enforcement, the EDPB will collect positions taken by national authorities on priority issues, including guidance documents, decisions, and court judgments. These will be compiled into "case law"-style publications to help organizations understand concrete positions taken by different authorities.

We must hope that they succeed in this.

EU stands firm on AI Act timeline despite industry pressure for delays

The European Union has rejected mounting pressure from major technology companies to delay implementation of its landmark AI Act, despite growing concerns about readiness and industry lobbying for extended timelines. The Commission's firm stance comes as critical deadlines approach and supporting guidance remains incomplete.

The initial excitement around the EU's AI Act has indeed worn off, replaced by implementation challenges and doubts about effectiveness. EU Digital Chief Henna Virkkunen had initially signalled openness to delays, stating that "if we see that the standards and guidelines are not ready in time, we should not rule out postponing some parts of the AI Act." This comment, made during a meeting with EU digital ministers in Luxembourg, sparked speculation about potential implementation delays.

The pressure campaign intensified when CEOs from more than 40 European companies, including ASML, Philips, Siemens, and Mistral, sent a public letter to the European Commission requesting a "two-year clock-stop" on key AI Act obligations. The letter argued this would allow "both for reasonable implementation by companies, and for further simplification of the new rules." Major US companies, including Google's parent Alphabet and Meta, also increased their lobbying efforts, pushing for a "stop-the-clock mechanism" that would postpone implementation dates if supporting standards weren't ready. This approach would tie regulatory deadlines to the availability of practical guidance rather than fixed calendar dates.

However, the Commission ultimately took a firm stance against delays. During a July 4 press briefing, Commission spokesperson Thomas Regnier ended speculation with unambiguous language: "Let me be clear as possible: there is no stop the clock, there is no grace period, there is no pause." He emphasized that legal deadlines established in the legislation must be upheld.

Regnier noted that the first provisions took effect in February 2025, that obligations for general-purpose AI models apply from August 2025, and that obligations for high-risk AI systems take effect in August 2026. The Commission's position reflects its commitment to maintaining regulatory credibility and ensuring that the EU's first-mover advantage in AI regulation isn't undermined by extended delays.

Read more about this crucial development in EU AI regulation here.

Norway prepares for AI Act

Norway is getting ready to regulate AI, with a new act that will implement the EU’s AI Act into Norwegian law. The draft was sent out for consultation on June 30, and the law is expected to take effect in late summer 2026.

This marks a big shift in how Norway handles AI. Privacy is a key part of the act. It will work alongside the GDPR, and many of the requirements—like impact assessments—will feel familiar to companies already used to GDPR.

So, if your business already has good privacy routines, you’re ahead of the curve. One standout feature is the AI sandbox, which lets companies test AI solutions in a controlled environment with guidance from authorities.

You can find the draft here.

European Commission asks for input on high-risk AI systems implementation

The European Commission has launched a comprehensive public consultation to gather practical insights on implementing the AI Act's rules for high-risk AI systems, demonstrating its commitment to evidence-based regulatory guidance as the landmark legislation moves toward full implementation. The consultation, which ran until July 18, 2025, represented a crucial opportunity for stakeholders to shape how Europe's AI regulation will work in practice.

Read more here.

Norway implements DSA

Apparently Norwegian legislators don't take summer holidays. Just two days after sending the AI Act proposal out for consultation, on July 2, the Ministry of Digitalisation and Public Governance sent another major piece of digital legislation out for public consultation - the Digital Services Act law, designed to implement the EU's DSA into Norwegian legislation.

The law aims to create a well-functioning and safe internet environment while protecting fundamental rights including freedom of expression, privacy protection, non-discrimination, and children's rights. It also seeks to combat illegal content and promote greater transparency in content moderation and marketing practices on internet platforms.

The Norwegian Data Protection Authority has been designated to supervise privacy-related matters, specifically focusing on behavioural advertising and protection of children online. As Director Line Coll notes: "We have long called for regulation of surveillance-based marketing. This draft law is therefore an important first step in the right direction to protect both children and other consumers' privacy online."

The regulatory framework establishes a multi-authority approach with the Norwegian Communications Authority as national coordinator, while the Data Protection Authority, Media Authority, and Consumer Authority handle their respective areas of expertise. For the largest platforms, enforcement will occur at the EU level.

The law represents a comprehensive approach to digital platform governance, addressing content moderation, algorithmic transparency, and protection of democratic values in digital spaces. The emphasis on children's rights protection reflects growing awareness of how digital platforms impact young users.

Click here to read more.

The hidden score that controls your life in Austria

If you live in Austria, there's likely a score about you that you've never heard of – but it might determine whether you can get a mobile phone contract or electricity service. The Austrian credit agency CRIF has quietly built what amounts to a private national register containing data on almost everyone in Austria, using it to calculate creditworthiness scores with very real consequences.

Here's the concerning part: for 90% of people, this score is based solely on address, age, and gender – not actual payment history. Companies like Magenta, Drei, Verbund, and Volksbank Wien use these scores to decide whether to offer you contracts. If your score is too low, you might face automatic rejection for essential services, often without knowing a CRIF score was involved.

The system's validity is questionable. The same person can receive scores varying by 150 points (out of 750 possible) simply by using different addresses. Even Austria's former richest man, Dietrich Mateschitz, had a below-average score.

Privacy group noyb, led by Max Schrems, believes CRIF's practices likely violate the GDPR and is investigating potential class action lawsuits. If successful, affected individuals could receive €200-€1,000 compensation. With millions affected, this could become Austria's largest class action lawsuit.

Read more about this potential class action lawsuit here.

Buying data without valid consent

The French data protection authority CNIL recently fined marketing company CALOGA €80,000 for processing personal data purchased from data brokers without a proper legal basis. The case serves as a stark reminder that when you're buying data from third parties, you can't simply trust their assurances about consent – you need to verify it yourself.

CALOGA operated as both a marketing company and data broker, organizing its processing across four databases. The company offered email marketing campaigns on behalf of other businesses, using data collected by data brokers through online game contests and product test entry forms. The problem? These forms used what we now recognize as classic dark patterns – the buttons to accept data use were much more prominent than those to reject it, essentially misleading users into giving consent.

When CNIL investigated in 2022, they found multiple violations. CALOGA had been relying on prior consent supposedly given to the original data collectors via online competitions but hadn't verified that this consent was actually valid. Even worse, users couldn't withdraw their consent as easily as they had given it – a violation of Article 7(3) GDPR.

The authority was particularly critical of CALOGA's approach to consent verification. As the decision notes, "it is not sufficient for the controller to rely on the declarations of the data seller and the existence of a contractual clause to consider that the data has been lawfully collected and can be reused." This echoes what we've seen in other contexts – controllers must take active steps to ensure the data they're processing has a valid legal basis.

CNIL found that the consent CALOGA was relying on wasn't freely given, specific, informed, or unambiguous, as required by Article 4(11) GDPR. The use of dark patterns meant users weren't making a genuine choice about their data use. The authority also criticized the withdrawal process – while users could subscribe to multiple CALOGA databases with a single click, unsubscribing required separate actions for each database or contacting the company's data protection officer.

The case also involved problematic data retention practices. CALOGA retained prospect data for up to four years, even after users became inactive. It also used email openings as a trigger to reset retention periods, which CNIL found inappropriate.

Interestingly, CNIL didn't find a violation of Article 32 GDPR regarding data security. CALOGA used the outdated MD5 hashing algorithm for password storage. However, CNIL did not find sufficient evidence that this compromised personal data, so no breach of Article 32 was established.
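For readers wondering why MD5 matters here: it is a fast, unsalted hash, so a leaked password database hashed with it can be cracked cheaply. The sketch below is purely illustrative and not taken from the CNIL decision; it contrasts MD5 with a salted key-derivation function from Python's standard library, and the iteration count and other parameters are assumptions chosen only for the example.

```python
# Illustrative sketch only - not from the CNIL decision.
# Contrasts fast, unsalted MD5 with a deliberately slow, salted KDF.
import hashlib
import os

password = b"correct horse battery staple"

# Legacy approach: MD5 is fast and unsalted, so identical passwords produce
# identical hashes and leaked tables can be brute-forced cheaply.
weak_hash = hashlib.md5(password).hexdigest()

# More robust approach: a per-user random salt plus many PBKDF2 iterations
# makes attacking each individual hash far more expensive.
salt = os.urandom(16)
strong_hash = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print("MD5:   ", weak_hash)
print("PBKDF2:", salt.hex(), strong_hash.hex())
```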

This case reinforces several important principles. First, that controllers can't simply rely on contractual assurances from data brokers about consent validity – they need to conduct their own due diligence. Second, that dark patterns in consent collection can invalidate the entire legal basis for processing. And third, that withdrawal mechanisms must be as easy as the original consent process.

Link to summary of the case.

Federal administrative court finds error messages from credit agencies trigger GDPR Article 22 protections

Austria's Federal Administrative Court has delivered a significant ruling that clarifies when credit scoring systems fall under GDPR's automated decision-making restrictions. The case demonstrates how technical errors in credit databases can have far-reaching legal consequences for both credit agencies and data subjects.

The dispute began when a data subject's loan application was rejected after banks received an error message stating "Score value 0 - no calculation possible" from a credit reporting agency. This technical glitch occurred because the agency couldn't properly process information about the individual's out-of-court debt settlement from 2019, which had been successfully completed with regular payments over 36 months.

When banks received this cryptic error message instead of a numerical credit score, they interpreted it negatively and denied the loan application. One institution explicitly stated they "cannot provide us with a scoring value" due to "technical problems," directly referring the applicant back to the credit agency for resolution. Despite having secure employment earning approximately €2,200 monthly and successfully concluding two leasing agreements since the settlement, the data subject was unable to obtain the requested loan.

The credit agency argued this wasn't their responsibility since banks make the final lending decisions. However, the court rejected this defense, applying recent European Court of Justice precedent from the SCHUFA case to conclude that credit scoring itself constitutes a "decision" under Article 22 GDPR, regardless of who makes the final lending choice.

The court's analysis focused on whether the credit agency's processing met the three cumulative conditions for Article 22(1) GDPR: existence of a decision, automated processing including profiling, and legal effects or similar significant impact on the data subject. The judges found that the error message "0 - calculation not possible" was interpreted by banks as having sufficiently negative connotations that directly influenced their rejection decisions.

Crucially, the court emphasized that what matters isn't how scoring results are presented, but whether they significantly affect the data subject. The ECJ had established in SCHUFA that probability value determination falls under Article 22(1) GDPR when it plays a decisive role in third-party decisions, and the Austrian court applied this reasoning to error messages that effectively prevent credit approval.

The court also addressed the risk of circumventing Article 22 protections through narrow interpretations. As the ECJ noted, limiting automated decision-making rules only to final decision-makers would create legal protection gaps, particularly regarding data subjects' rights to explanation under Article 15(1)(h) GDPR. Banks receiving credit scores typically don't possess the specific information required to explain how agencies reached their conclusions.

The court found no valid exceptions under Article 22(2) GDPR applied to this automated processing. The decision wasn't necessary for contract performance between the data subject and credit agency, no appropriate national law safeguards existed, and there was no explicit consent for automated decision-making.

Additionally, the court determined the credit agency violated information obligations under Articles 13(2)(f) and 14(2)(g) GDPR by failing to adequately inform data subjects about the automated decision-making and profiling taking place. These provisions require controllers to provide meaningful information about the logic involved and the significance of such processing.

The ruling doesn't ban credit scoring entirely. The court clarified that numerical credit scores can still be used as supportive tools in creditworthiness assessments, following the ECJ's guidance in SCHUFA that such processing must play a supportive rather than decisive role. Credit agencies must now ensure their scoring systems comply with Article 22 GDPR's restrictions and provide proper transparency about automated processing.

Read more about this important development in credit scoring regulation here.

Finnish pharmacy fined €1.1 million for sharing health data with tech giants

Finland's Data Protection Authority has imposed a substantial €1.1 million fine on Yliopiston Apteekki (University Pharmacy), the country's largest pharmacy operator, for transmitting sensitive health-related data to major tech companies through website tracking services.

The investigation began after a whistleblower alerted authorities that the pharmacy's online platform was sharing detailed information about customers' medicine purchases and browsing behavior with Google, Meta, and New Relic. Despite the pharmacy's claims that data was "masked" and anonymized, the DPA found that customers remained identifiable through various tracking mechanisms.

University Pharmacy operates Finland's largest online pharmacy, where customers can order both prescription and over-the-counter medicines. Like many e-commerce sites, they used multiple tracking technologies to monitor website performance and enable targeted advertising. However, the sensitive nature of pharmaceutical data created unique privacy risks that the company failed to adequately address.

The tracking setup was extensive, spanning several years and multiple platforms. Google Analytics ran from May 2015 to April 2022, while Google Tag Manager operated from May 2015 to September 2022. Meta Pixel was active from either October 2020 or March 2021 until September 2022, and New Relic Browser Agent monitored performance from June 2019 to September 2022. Additional services like Videoly for YouTube embedding operated from December 2017 to September 2022.

The data transmitted to these third parties was very detailed. Google and Meta received purchase events specifically labeled as "PRESCRIPTION DRUG" or "SELF-HEALTH DRUG," along with shopping cart totals, currency information, and browser details. While product IDs were supposedly masked as "99999" or "88888," URL paths containing actual medicine names were still transmitted in GET requests - revealing information like "risperdal-1-mg-kalvopaallysteinen-tabletti" directly to tracking services.
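To illustrate the mechanism: even when the product ID field is replaced with a placeholder, a tracking request that includes the full page URL still carries the medicine name. The sketch below is purely hypothetical – the endpoint and parameter names are invented for illustration and are not taken from the DPA's decision.

```python
# Hypothetical illustration - endpoint and parameter names are invented.
# Shows how a tracking request can leak a medicine name via the page URL
# even when the product ID field itself is "masked".
from urllib.parse import urlencode

def build_tracking_request(page_url: str) -> str:
    params = {
        "ev": "purchase",   # event label, e.g. a purchase event
        "pid": "99999",     # product ID replaced by a placeholder value
        "dl": page_url,     # but the full URL of the product page is sent too
    }
    return "https://tracker.example/collect?" + urlencode(params)

print(build_tracking_request(
    "https://pharmacy.example/risperdal-1-mg-kalvopaallysteinen-tabletti"))
# The GET request still contains the medicine name in plain text.
```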

New Relic received IP addresses, device information, and visited URLs, while other services like Klarna, Giosg, Videobot, and Frosmo obtained information about medicines browsed or added to shopping carts. This created a comprehensive picture of customers' health-related activities across multiple platforms.

The pharmacy argued that no actual medicine names or purchase information was transmitted, claiming that browsing data couldn't reveal what customers actually bought or their health conditions. They insisted that any transmitted data was limited and masked, and that browsing data shouldn't be considered personal data under GDPR.

The DPA rejected these arguments comprehensively. They found that website service providers received IP addresses and other identifiers that could be used to identify data subjects. When combined with data from multiple website visits, this information enabled large-scale automated identification and profiling of individuals. The authority concluded that browsing, purchase intention, and medicine purchase data transmitted to service providers constituted personal data of identifiable individuals under Article 4(1) GDPR.

More significantly, the DPA classified this information as health data under Article 4(15) GDPR and Recital 35. Even over-the-counter medicine purchases were considered health-related, as they reveal information about individuals' health conditions, symptoms, or medical concerns. This classification triggered GDPR's special protections for sensitive personal data categories.

The authority identified multiple violations. Under Articles 5(1)(f) and 32 GDPR, the pharmacy failed to ensure adequate security and confidentiality when processing personal data. They didn't implement sufficient safeguards to prevent unauthorized access by external tracking service providers and failed to conduct regular testing or assessment of their technical measures.

The DPA also found violations of data minimization principles under Articles 5(1)(c) and 25(2) GDPR. Data transmitted to New Relic exceeded what was necessary for website performance monitoring purposes. The pharmacy's default settings allowed transmission of sensitive data without effective minimization or pseudonymization measures.

Perhaps most seriously, the processing violated Article 9 GDPR's restrictions on special categories of data. Health data was processed without explicit consent or other valid legal basis from the restrictive list of exceptions for sensitive data processing.

The €1.1 million fine reflected several aggravating factors: the seriousness of the breach involving health data, the extended duration of approximately four years, the massive scale affecting millions of visitors and orders, and the highly sensitive nature of pharmaceutical information. The DPA also considered the pharmacy's negligence in implementation and oversight, plus potential financial benefits from tracking and targeted marketing activities.

Read more about this health data privacy decision here.

Legitimate interest justifies data transfer for fraud prevention

Germany's Oberlandesgericht Koblenz has delivered an important ruling that reinforces telecommunications companies' ability to share customer contract information with credit agencies like SCHUFA, even without explicit customer consent. The decision provides clarity on when legitimate interest can justify data transfers in the financial services ecosystem.

The case originated when a customer concluded a postpaid mobile phone contract in September 2021 that included a discounted purchase of an iPhone. The telecommunications provider transmitted the customer's contract registration data to SCHUFA as part of its standard fraud prevention procedures. This data included the customer's name, address, date of birth, contract start and end dates, and the contract number. It was considered “positive data”, meaning it reflects the existence and completion of a contract rather than defaults or negative payment behaviour.

When the contract terminated in September 2023, SCHUFA deleted the customer's data the following month. However, the customer had already initiated legal proceedings, arguing that the data transmission violated GDPR requirements. They claimed the provider lacked explicit consent and couldn't rely on any other lawful basis under Article 6 GDPR, while also asserting inadequate information about the data sharing.

The customer sought comprehensive relief from the court: non-material damages for the unauthorized data transmission, an injunction preventing future positive data sharing with SCHUFA, a declaratory judgment of liability under Article 82 GDPR, and reimbursement of legal expenses as immaterial damages. They also requested referral to the Court of Justice of the European Union for clarification on the legal issues involved.

After the lower court rejected these claims, the customer appealed to the Oberlandesgericht Koblenz in 2024. The appeals court comprehensively dismissed all aspects of the customer's case, providing detailed reasoning that strengthens the legal foundation for industry data-sharing practices.

The court's central finding was that the telecommunications provider lawfully transmitted customer data to SCHUFA under Article 6(1)(f) GDPR's legitimate interest provision. The judges emphasized that fraud prevention and ensuring reliable creditworthiness assessments serve broader socio-economic interests of the telecommunications industry, not just individual company benefits.

Crucially, the court stated that "the rights of the data subject in this case do not outweigh the rights and interests of the defendant in participating in the solidarity-based fraud prevention system described in order to protect against economic damage." The decision explicitly rejected the notion that consent was required for such data sharing. The court found that customers could "easily recognize from the contract documents" that their contract data might be transmitted to SCHUFA after conclusion, as this was clearly stated in the contract. This foreseeability factor played a crucial role in the legitimate interest balancing test, suggesting that transparent disclosure in contract terms can strengthen the legal basis for data processing.

The court also denied the request for CJEU referral, finding no need to revisit settled European law. Given that the customer's data had already been deleted and the legal basis was well-established, the judges saw no unresolved questions of EU law requiring higher court interpretation.

The judgment also pushes back against trends in some regional German courts that have required consent for nearly all credit agency transmissions. By taking a more permissive reading of GDPR for financial institutions and telecom operators, the Koblenz court suggests a pragmatic approach that recognizes the economic necessity of information sharing for fraud prevention.

Read more about this significant development in telecommunications and credit reporting law here.

Norwegian DPA launches enforcement actions on tracking pixels across six websites

The Norwegian Data Protection Authority has delivered a significant blow to tracking pixel practices across multiple sectors, conducting comprehensive investigations into six websites that were unlawfully sharing visitors' personal data with third parties. The enforcement action, which resulted in one fine of NOK 250,000, demonstrates the DPA's commitment to protecting user privacy in an era of pervasive online tracking.

The investigation encompassed a diverse range of websites, each serving vulnerable populations or handling sensitive information. These included 116111.no (a public service for children in vulnerable situations), apotekfordeg.no (an online pharmacy), bibel.no (a Christian website), drdropin.no (a medical services website), ifengsel.no (a counseling service for children with incarcerated parents), and nhi.no (a medical information website).

As Section Chief Tobias Judin noted: "All the websites made visitors' personal data available to third parties without legal basis. We also found breaches of the information obligation." The investigations revealed how tracking pixels - technology that automatically sends information about website visitors to third parties - can inadvertently expose highly sensitive personal information.

The DPA found that a person's browsing history, alone or through data compilation from various sources, often makes it possible to infer private or sensitive personal information. The investigations revealed examples where websites shared information that could indirectly reveal details about visitors' health, sexual life, and religion. Several websites also shared personal data about children in vulnerable situations.
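To make the inference point concrete, here is a purely hypothetical sketch – all identifiers, domains, and visits are invented – of how a third party that only receives page URLs and a visitor identifier can compile browsing history into a profile suggesting sensitive attributes.

```python
# Purely hypothetical - all identifiers, domains, and visits are invented.
# A tracker that only sees page URLs plus a visitor ID can still build a
# profile suggesting health or religious attributes.
from collections import defaultdict

pixel_events = [
    ("visitor-42", "https://health-site.example/conditions/diabetes-type-1"),
    ("visitor-42", "https://health-site.example/conditions/diabetes-diet"),
    ("visitor-42", "https://pharmacy.example/insulin-pens"),
    ("visitor-7",  "https://religious-site.example/reading-plan"),
]

profiles = defaultdict(list)
for visitor_id, url in pixel_events:
    profiles[visitor_id].append(url)

for visitor_id, urls in profiles.items():
    # Repeated visits to condition-specific pages support an inference about
    # the visitor's health or beliefs, even though no name was ever shared.
    print(visitor_id, urls)
```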

The case of nhi.no, the medical information website, exemplifies these concerns. The DPA held that information about visits to subpages containing content on particular medical issues constitutes sensitive data under GDPR Article 9. The authority reasoned that if users visited pages about specific health conditions, it was likely they suffered from those conditions - establishing a relatively low threshold for what regulators consider health-related sensitive data.

The enforcement action revealed several problematic practices across the investigated websites. Visitors received incorrect information claiming they were anonymous when they weren't. Special categories of personal data about visitors were unlawfully made available to third parties. Personal data about children was unlawfully shared with third parties. Visitors were "nudged" into giving consent through dark pattern design. Information provided to visitors was misleading, difficult to understand, or failed to explain the consequences of giving consent.

The DPA made clear that future enforcement could be much stricter, stating that "this is the first time the Data Protection Authority has conducted this type of investigation, and the main purpose of the investigations is increased awareness. In the future, the reactions can be much stricter."

To support compliance efforts, the DPA has developed guidance based on experiences from the investigations. The guidance explains the findings, legal requirements for tracking tools, and expectations for websites going forward.

Read more about this important development in Norwegian data protection enforcement here.

Vodafone fined a total of EUR 45 million

The German Federal Commissioner for Data Protection and Freedom of Information has delivered a substantial blow to Vodafone Germany, imposing fines totaling €45 million for significant GDPR violations spanning both partner oversight and technical security measures.

The enforcement action represents one of the most comprehensive telecommunications sector penalties in recent years, addressing systemic failures across multiple aspects of Vodafone's data protection framework. The company has accepted the fines and paid them in full, while implementing extensive remedial measures to address the identified deficiencies.

The findings in the case may be relevant for many companies, as some of the practices involved may not be all that uncommon.

The first fine of €15 million was imposed for Vodafone's failure to adequately supervise and audit partner agencies operating under its brand. These agencies, which operate local shops and handle customer contracts on Vodafone's behalf, had access to extensive personal data including sensitive contract information and account details. The investigation revealed that some partner agency employees had misused customer data to create fictitious contracts or make unauthorized changes to existing ones. In some cases, this involved adding expensive contract components to customer accounts without consent, while in others, employees created entirely fake contracts to obtain unauthorized commission payments.

The German DPA found that Vodafone lacked effective processes for selecting, auditing, and continuously monitoring these partners, despite their access to substantial amounts of personal data. This violated Article 28(1) GDPR, which requires controllers to ensure that processors offer sufficient guarantees for GDPR-compliant processing.

The larger fine of €30 million addressed security deficiencies in Vodafone's customer authentication processes, specifically involving the integration between its "MeinVodafone" online portal and customer hotline services. These vulnerabilities enabled unauthorized third parties to gain access to customer accounts and eSIM profiles through combination attacks exploiting weaknesses in both systems. The authentication flaws had serious practical implications, potentially allowing attackers to take over mobile phone numbers and misuse them for other digital services, including two-factor authentication and payment transactions.

While the German DPA did not specify the exact GDPR provision underlying this fine, it emphasized the serious practical risks to data subjects' rights, particularly regarding digital identity protection. The authority also issued a separate warning under Article 58(2)(b) GDPR for additional vulnerabilities in Vodafone's distribution systems that violated Article 32(1) GDPR's security requirements.

You can read more about the Vodafone fines here.

Italian DPA reaffirms ban on AI chatbot

The Italian Data Protection Authority (Garante) has reaffirmed its ban on the Replika chatbot, imposing a €5 million fine after finding persistent violations of GDPR requirements. The April 2025 decision represents the culmination of a two-year enforcement saga that began when the Garante first restricted Replika's operations in February 2023. Despite multiple opportunities to address identified deficiencies, San Francisco-based developer Luka Inc. failed to implement adequate safeguards for European users, particularly minors accessing the emotionally charged AI companion service.

Replika markets itself as "The AI companion who cares" that is "always here to listen and talk," positioning the large language model chatbot as an emotional support tool offering conversational engagement. Users can configure their AI companion to assume various roles, including romantic relationships as "boyfriend" or "girlfriend," raising immediate concerns about psychological impact and age-appropriate content.

The Garante's initial February 2023 enforcement action identified fundamental compliance failures across multiple GDPR provisions. The authority found that Replika posed significant risks to minors while lacking effective age verification mechanisms - requiring only basic information like name, email address, and gender for registration. The service also failed to meet transparency obligations under Articles 5, 6, 8, 9, and 25 of the GDPR.

Critically, the regulator determined that Replika's processing of personal data was unlawful when dealing with minors, who cannot legally enter binding contracts under Italian law. This invalidated the company's reliance on contractual necessity as a legal basis for data processing involving underage users.

Following the initial enforcement action, the Garante issued a temporary processing limitation in June 2023, allowing Replika to continue operations while implementing required corrective measures. These included updating privacy notices, adding age-gate mechanisms to registration pages, and implementing other user protection measures. However, the April 2025 decision reveals that Luka's remediation efforts were fundamentally inadequate. The company's privacy notice remained deficient, lacking sufficiently granular descriptions of legal bases for processing personal data and failing to link valid legal grounds with specific processing operations. Notably, the company failed to identify LLM development as a processing purpose until February 2023, demonstrating ongoing transparency failures.

The Garante's investigation uncovered multiple ongoing violations that undermined user protection. Luka's privacy policy remained accessible only in English, including for Italian minors, and contained references to U.S. Children's Online Privacy Protection Act compliance - irrelevant for European operations and demonstrating a fundamental misunderstanding of applicable regulatory frameworks.

Perhaps most concerning, Replika's age-gating mechanism contained significant implementation flaws that rendered it essentially ineffective. Users could circumvent age restrictions by initially submitting false ages over 18 and subsequently editing their profiles without any oversight or secondary confirmation by Luka. This design flaw effectively nullified the primary safeguard intended to protect minors from inappropriate content.

Read about the case here.

Unlawful alcohol testing

The Swedish Data Protection Authority (IMY) has imposed a SEK 75,000 (approximately €6,700) fine on Stockholm's public transport authority for unlawfully processing ferry captains' health data through systematic breath alcohol testing.

Between October 2021 and August 2022, ferry captains operating commuter services in the Stockholm region were required to perform breath alcohol tests before each departure using alcohol meters installed on vessels. While the test results did not directly include names, they contained timestamps and vessel identifiers that could be cross-referenced with duty rosters to identify individual crew members. The system automatically stored results both locally on vessels and on a server accessible to both the transport authority and its subcontractor. This systematic collection and retention of testing data continued for months without clear deletion procedures, creating a substantial database of health-related information about ferry operators.
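IMY's identifiability reasoning can be illustrated with a simple, entirely made-up example (the vessel names, dates, and roster below are invented): records that contain no names can still be linked to individuals by cross-referencing timestamps and vessel identifiers with a duty roster.

```python
# Entirely made-up illustration of the re-identification reasoning.
# Test records hold no names, but joining them with a duty roster on
# (vessel, date) points each record to a specific captain.
from datetime import date

breath_tests = [
    {"vessel": "Ferry-3", "date": date(2022, 3, 14), "result": 0.0},
    {"vessel": "Ferry-5", "date": date(2022, 3, 14), "result": 0.0},
]

duty_roster = {
    ("Ferry-3", date(2022, 3, 14)): "Captain A",
    ("Ferry-5", date(2022, 3, 14)): "Captain B",
}

for test in breath_tests:
    captain = duty_roster.get((test["vessel"], test["date"]))
    # The "anonymous" test record now points to a named person, which is why
    # IMY treated the results as personal data relating to health.
    print(captain, test["result"])
```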

A ferry captain employed in the system filed a complaint with IMY, arguing that the breath alcohol test results constituted sensitive health data under Article 9 GDPR and that the routine, systematic storage of test results was disproportionate and unnecessary for ensuring transport safety. The complainant also highlighted the lack of clear time limits for data retention and questioned whether valid legal grounds existed for the processing.

Storstockholms Lokaltrafik attempted to justify the processing under multiple GDPR provisions, citing legitimate interests under Article 6(1)(f), public interest under Article 6(1)(e), and the public interest exception for health data under Article 9(2)(g). The transport authority referenced Swedish maritime law, including the Swedish Maritime Code and Ship Safety Act, arguing that the processing was necessary to ensure public safety and meet maritime safety obligations.

However, IMY systematically rejected each of these justifications. The authority found that the processing was not strictly necessary and disproportionately interfered with data subjects' rights, particularly given that less intrusive alternatives could have achieved the same safety objectives. The DPA specifically noted that non-recorded testing or alcohol lock systems could have served the same purpose without creating permanent data records.

Critically, IMY determined that Swedish law did not provide a sufficiently clear and specific legal basis for such systematic data collection and retention.

More significantly, the authority classified all breath alcohol test results as health data under Article 4(15) GDPR, regardless of whether any positive results were recorded. This broad interpretation reflects the DPA's view that physiological readings inherently relate to health status, even when they show normal or negative results.

Beyond the fundamental legal basis issues, IMY found that storing test results for several months violated core GDPR principles of data minimization and storage limitation under Articles 5(1)(c) and (e). The transport authority only implemented precise data retention policies in August 2022, nearly a year after beginning the testing program.

Despite finding clear GDPR violations, IMY imposed a relatively modest fine of SEK 75,000, considering several mitigating factors. The breach was limited to a single complainant, no actual harm was identified, and the controller implemented corrective measures including deletion routines and ending systematic result storage.

Read more about this important development in workplace privacy and health data protection here.

EDPB's 2024 report: Cross-border cooperation improves, but enforcement varies wildly

The European Data Protection Board (EDPB) has published its 2024 Annual Report, offering fresh insights into how GDPR enforcement is evolving across the EU.

Notably, for the first time since 2020, the EDPB didn’t issue any binding decisions to resolve disputes between national data protection authorities. This could be a positive sign, suggesting that regulators are becoming more aligned in their interpretations and enforcement practices.

Cross-border cooperation was a major theme this year, with 350 cases handled across jurisdictions and 982 procedures processed under the one-stop-shop mechanism. The EDPB also issued 28 consistency opinions, including key guidance on AI-related data processing and the increasingly controversial “consent or pay” models.

Enforcement statistics, however, reveal significant differences between member states. Ireland, for example, issued just seven fines—but with an average penalty of €93 million, largely due to a few high-profile cases involving major tech firms. In contrast, Latvia’s average fine was just €439. These disparities are less about enforcement intensity and more about the nature of the cases handled.

Germany and Spain took a different approach, issuing hundreds of smaller fines (416 and 281 respectively), reflecting a broader but less financially impactful enforcement strategy. Interestingly, 2024 marked the first decline in enforcement activity after seven consecutive years of growth—both in the number and total value of fines.

The report also highlights the EDPB’s focus on emerging technologies, particularly AI and “consent or pay” models, which continue to raise complex compliance questions. Stakeholder events and targeted opinions were part of the Board’s efforts to address these challenges.

Perhaps most importantly for businesses, the EDPB is increasingly coordinating with other EU regulatory frameworks, including the Digital Markets Act, Digital Services Act, and AI Act. This signals a shift toward more integrated oversight, where data protection is no longer a standalone issue but part of a broader regulatory landscape.

The takeaway? While cooperation among authorities is improving, enforcement remains uneven—and companies need to be prepared for a multi-layered compliance environment.

Read more about the report here.

UK data adequacy status under threat

The United Kingdom's data adequacy status with the European Union faces unprecedented challenges as civil society organizations press the European Commission to reconsider the arrangement due to mounting privacy and data protection concerns. The campaign, led by European Digital Rights (EDRi) and over 50 European NGOs, highlights a troubling pattern of legislative developments that could fundamentally undermine the data protection framework that secured the UK's adequacy decision in June 2021. The timing is particularly significant, as the adequacy decision recently received a six-month extension from the European Data Protection Board, providing a critical window for addressing these concerns.

The most significant concern centers on the UK Data (Use and Access) Bill, which has passed both Houses of Parliament and awaits Royal Assent. This legislation builds on the UK's GDPR-based framework but introduces modifications that could threaten parity with EU standards. The bill includes sweeping new exemptions allowing law enforcement and government agencies expanded access to personal data, loosened regulations governing automated decision-making, and weakened restrictions on data transfers to countries the EU considers inadequate. Perhaps most concerning for data adequacy purposes, the bill would increase the UK government's power to interfere with the regular operations of the UK Data Protection Authority. This threatens the independence that data protection authorities require under GDPR principles, potentially undermining one of the fundamental requirements for adequacy status.

The UK Border Security, Asylum and Immigration Bill presents additional challenges, having passed the House of Commons and currently before the House of Lords. This legislation would broaden intelligence agency access to customs and border control data while exempting law enforcement agencies from UK GDPR requirements - a direct contradiction of the comprehensive protection that adequacy decisions require.

The civil society complaints highlight practices that would extend UK surveillance capabilities beyond national borders, potentially affecting EU citizens' data. The UK's Investigatory Powers Act of 2016, known colloquially as the "Snooper's Charter," has become particularly problematic in its recent interpretation and application.

A leaked report from February revealed that this law had been invoked to order Apple to provide encryption backdoors for government access to iCloud backups. This demand would provide UK authorities with access not just to UK residents' encrypted storage, but to Apple customers worldwide - a clear example of how domestic surveillance laws can have extraterritorial privacy implications. Apple's response was telling: the company disabled its Advanced Data Protection feature for UK customers rather than comply with the backdoor demand.

Beyond legislative threats, the civil society groups cite significant enforcement failures by the UK Information Commissioner's Office (ICO). Despite receiving 25,582 privacy complaints in 2024, the ICO took regulatory action on just one case, addressing most others with non-binding measures lacking legal force. The proposed changes to ICO governance compound these concerns. The UK Data Bill would give the government expanded ability to hire, dismiss, and adjust compensation for all ICO board members, potentially compromising the independence that effective data protection supervision requires.

Read more about this developing challenge to international data protection cooperation here.

Do you have any questions?