
Newsletter
by Eva Jarbekk
There will be a fair bit about AI in this newsletter. One of the things I read most often is that many court decisions will be needed to clarify the content of the regulations, and that compliance programmes will have to adjust over time. It sounds like GDPR all over again. Yet perhaps we can make wiser choices now than many did in 2018, knowing precisely that things will evolve. In 2018, many thought that the "GDPR project" was "finished" when the project ended, but now more people are probably realising that AI compliance is going to be a living project going forward.
But before we get into AI – first some news about good old 7-year-old GDPR. Maybe you know someone who plays Assassin's Creed or Prince of Persia?
Ubisoft is the company behind these games and many more. In order to play them, even in single-player mode, you need to connect to the internet – even though the games supposedly don't have any online features.
The reason for this is said to be that Ubisoft can record the player's interaction with the game. One user complained about this and invoked the data minimisation principle. Ubisoft was reportedly unable to explain why it wants to know when a player starts and stops a game and how long a session lasts. The user found that during a 10-minute gaming session, Far Cry sent messages to remote servers 150 times. Otherwise, the claims are pretty much the same as usual – too much data is collected, legitimate interest is used as a basis where NOYB believes it is not necessary, information from the company is too difficult to obtain, etc. NOYB has complained to the Austrian Data Protection Authority on behalf of the user and has, as usual, asked for fines. Ubisoft's turnover is said to be over EUR 2 billion, which could result in a fine of around EUR 92 million. It's unlikely to land there.
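To put the number in context, here is a minimal sketch of the arithmetic, assuming the EUR 92 million figure refers to the GDPR's Article 83(5) ceiling – the higher of EUR 20 million or 4% of total worldwide annual turnover. The turnover figure used below is an assumption for illustration only:

```python
# Article 83(5) GDPR: fines are capped at the higher of EUR 20 million
# or 4% of total worldwide annual turnover of the preceding year.
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# An assumed turnover of roughly EUR 2.3 billion (illustrative, not an
# official figure) gives approximately the EUR 92 million mentioned above.
print(f"EUR {gdpr_max_fine(2_300_000_000):,.0f}")  # EUR 92,000,000
```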
The next case is Norwegian and concerns access to medical records. A and B are the parents of a child with a kidney disorder, and the conflict started when an out-of-hours medical service sent a note of concern to child welfare services after a visit in 2018. A and B requested access to the medical record, including the name of the healthcare professional who had written the note. They were eventually given the record, but only with the initials of the person who had written it, and the municipality refused to disclose the full name. The County Governor ruled that A and B were entitled to know the identity of the healthcare professional, but the municipality did not follow up on the decision.
The District Court found in favour of A and B and ruled that the municipality had to disclose the name, while the municipality appealed the case further. The municipality argued that the medical record only had to contain information about the child, and that the identity of the healthcare professional was not part of the record itself. They also pointed out that working environment considerations and the need to protect employees could justify exemptions from access. For their part, A and B argued for the importance of knowing who had written the medical record, in order to assess qualifications and possible conflicts of interest, among other things.
In a decision from May, the Court of Appeal judges were divided. The majority found that Section 5‑1 of the Norwegian Patient and User Rights Act does not provide an independent right to know the name when the medical record only shows initials, and that the request for access had therefore been fulfilled. The minority, on the other hand, found that the law requires the full name of the person who keeps the medical records. The result was that the municipality was acquitted of the claim to disclose the name, and each party had to cover its own legal costs.
The Court of Appeal referred to the GDPR and the European Court of Justice's case C-579/21 ("Pankki"), which concerns the right to access one's own personal data. The judgment states:
"The right of access corresponds to the right under the Patient and User Rights Act and the Health Personnel Act. There is therefore no conflict between Norwegian law and EU law, apart from the fact that Norwegian law appears to provide wider access to logs, cf. Section 14 of the Patient Records Regulations compared with C-579/21 (Pankki). This falls outside the scope of this case."
This is the only place where the judgment refers to the Pankki case. The Pankki case is complex (and it is linked to the Austrian Post case as well), but simply put, it establishes that even if an employee is not considered a "recipient" under Article 15(1), the data subject has the right to know the identity of the employees who carried out the consultations under the authority and in accordance with the instructions of the data controller, where that information is essential for the effective exercise of the data subject's rights under the GDPR and provided that the employees' rights and freedoms are taken into account.
It would have been nice if the Court of Appeal had considered these criteria, regardless of the outcome of the case itself. Obviously, in Norway we must give weight to our own legal sources, but we also have an obligation to take into account and interpret practice that is relevant to the GDPR.
The Irish Data Protection Commission (DPC) is not completely toothless, although many have thought so. As widely reported in the media, they recently fined TikTok approximately EUR 530 million. The case concerns whether or not TikTok transferred personal data to China. It turns out that they did.
TikTok initially claimed that it did not transfer users' personal data to China, but eventually it emerged that this had happened. Personal data about Europeans had been sent to servers in China. The DPC was of the opinion that TikTok had not taken sufficient measures to protect the information in accordance with Chapter 5 on transfers in the GDPR. Nor did the privacy policy describe the transfer, which it should have done. Here at home, our Minister of Digitisation has spoken out strongly in the media and said that this is serious.
Not many of us transfer large amounts of personal data to China, so the issue is probably not relevant to very many. But expect more attention to be paid to third-country transfers in the future.
Trump is not making things any easier in this area. Earlier this year he sanctioned Chief Prosecutor Karim Khan of the International Criminal Court (ICC) because Trump does not like a case Khan is litigating. Recently, Khan's professional Outlook account became unavailable to him. The two facts are somehow connected, but few know exactly how. Microsoft is not saying much about the incident, and it is presently unclear whether it was Microsoft that shut Khan out – or someone else. Maybe we will never know. But the prosecutor no longer has a professional ICC Outlook account. He has apparently switched to a Proton Mail address.
I don't have an overview of all the legal issues in the case of Khan, Microsoft and Trump/USA, and they are unlikely to become public. I assume that this is not a Cloud Act issue, but a sanctions measure where Microsoft's legal room for manoeuvre is probably small. Microsoft says it cannot comment, but it does state that it has not shut down the emails of the ICC as a whole. Which is good, in a way. At the same time, it is highly unusual that such a clarification is even relevant.
Preventing lawyers from doing their job and sanctioning them for it is in stark contrast to how most Europeans view fundamental values in a democratic society.
However, this is probably an extreme case and not something that Trump intends to extend to other areas, so the average company is unlikely to have to worry about losing its access to services from US suppliers. I think we can tone down the risk for most companies, but there is reason to be vocal about democratic principles and freedom of speech. It also complicates which factors will have to be included in our GDPR assessments going forward. It is no longer only about lack of confidentiality, but also about the availability of one's data.
It is no wonder that the topic of exit strategies is being addressed by more and more people. And the interest in having a back-up of one's own data within the EU is said to be growing strongly.
The Khan case has been talked about in the IT community for some time, but has only recently featured in other media. See an article from Olav Lysne and others at Simula in Dagens Næringsliv here.
Another comment on the matter can be found here.
One piece of good news on the transfer front is that the UK has retained its status as an approved country to transfer to. The approval has been extended until the end of the year, pending a review of the UK's data protection legislation. How the UK will be assessed after this is still quite unclear, however, as laws have been introduced that give the authorities wide access to citizens' data.
There are many good reasons why you might want to simplify the GDPR. At the same time, it has been written that "there is little appetite" in the EU to reopen the GDPR. Yet proposals for simplifications have now been made anyway.
When the GDPR was launched, one of the EU's slogans was that it would "cut red tape" – in that companies would not have to deal with each member state's data protection authority. Few would probably argue that the GDPR, as written and as practised, has not generated plenty of red tape of its own. Now the slogan is reappearing – at a very general level, the EU wants to simplify many of its regulations. We don't need to go into the geopolitical situation, but there is a broad backdrop here and concern about European competitiveness. Several so-called Omnibus simplification packages have been launched, with an overall goal of reducing administrative costs by 25%. That's quite a lot. The simplification of the GDPR is linked to Omnibus IV.
On 21 May, the Commission presented the content of Omnibus IV, which creates a new category of so-called Small Mid-Cap companies (SMCs). These will receive the same exemptions that are currently granted to small and medium-sized enterprises. The category is defined as companies with 250 to 750 employees and either up to EUR 150 million in turnover or up to EUR 129 million in assets. The EU estimates that this applies to 38,000 businesses in the EU. The package applies not only to the GDPR, but also to other sets of rules that we won't go into here. As regards easing of the GDPR, only the obligation to maintain an Article 30 record is affected. Many would have liked the Commission to go much further.
The new proposal also limits the application of the Article 30 record to cases where the processing in question is "likely to result in a high risk to the rights and freedoms of data subjects". This means that SMCs will only be required to keep an Article 30 record where processing activities have a potentially high risk, such as large-scale processing of special categories of personal data or other activities that typically require risk analysis or a DPIA.
It has previously been pointed out that although Article 30 contains an exemption for SMEs with fewer than 250 employees, in practice the exemption has not helped many, because it also required that the processing posed no risk and did not involve special categories of personal data.
This may now change, because a "high risk" is required before the record obligation applies. It is also proposed to clarify that the processing of special categories of data in employment relationships should not in itself be considered high risk. This is a very real difference and a significant simplification.
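As an illustration of how the proposed logic might be read, here is a minimal sketch. The thresholds and the "high risk" test are simplifications of the proposal, and the function names are my own – this is a sketch of one reading, not the proposal's wording:

```python
# Illustrative reading of the Omnibus IV proposal - a sketch, not the rule text.
def is_smc(employees: int, turnover_eur: float, assets_eur: float) -> bool:
    """Small Mid-Cap: 250-750 employees and either turnover up to
    EUR 150 million or assets up to EUR 129 million (simplified)."""
    return 250 <= employees <= 750 and (
        turnover_eur <= 150_000_000 or assets_eur <= 129_000_000
    )

def article_30_record_required(employees: int, turnover_eur: float,
                               assets_eur: float, high_risk: bool) -> bool:
    """Under the proposal, SMEs and SMCs would only need an Article 30
    record where the processing is likely to result in a high risk;
    larger organisations keep the obligation as today."""
    if employees < 250 or is_smc(employees, turnover_eur, assets_eur):
        return high_risk
    return True
```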
The proposal has been submitted for consultation to the EDPB and the EDPS, which have expressed a favourable opinion. They write that they give preliminary support to the simplification initiative, considering that it would not affect the obligation of data controllers and data processors to comply with other GDPR obligations.
Read more about this here.
At the same time, the Commission emphasises that other GDPR obligations – such as accountability, legitimate basis for processing and information security requirements – still apply in their entirety, and that the changes only affect the actual keeping of processing records.
It's tempting to ask whether it will really be easier for companies to comply with their other obligations without the overview provided by the processing record. I don't think so. Many organisations benefit from their (incomplete) records, while the format and content of the record could probably be simplified. Still, it's clear that many data protection officers don't want to have to worry about not having an up-to-date Article 30 record. And that's nice.
Proposals have also been made to simplify the rather impractical rules on industry standards, but this is not happening in the way one might have hoped. The proposed simplification involves taking into account the special needs of SMCs. In practice, however, the reason very few industry standards have been adopted since the GDPR entered into force is not a lack of attention to SMCs, but the requirement that a supervisory body be appointed and financed for each industry standard. It would have been more efficient to adjust this requirement and leave the supervisory function to the data protection authorities. The simplification is therefore rather uninteresting, even though it looks good on paper.
The final proposal is to simplify the provisions on certifications in the GDPR. Here, too, the needs of SMCs are to be taken into account, but these rules are so rarely used that the change means little in practice.
Is this going to cut 25% of the administrative costs of GDPR? Absolutely not.
The proposed changes are now being considered in the EU's legislative process, where the European Parliament and the Council will give their views during the negotiations leading up to the final decision. There may be adjustments along the way and there are many views on the matter.
Read more about the proposal here.
It all started when a lawyer (B) requested access to personal data which he believed had been published in the closed Facebook group "Lawyers not recommended", administered by A. B pointed to a screenshot in which he was described as "charged" and to a link concerning his lawyer's licence. A rejected the request, pointing out that B had previously been granted limited access but had not used it. After repeated enquiries, the Norwegian Data Protection Authority decided in May 2024 to order A to comply with the access request.
A complained about the Data Protection Authority's decision and argued that the group was closed and should be protected by freedom of expression, and that it was outside the scope of the Norwegian Personal Data Act because it was allegedly for personal purposes. The Privacy Appeals Board found that the group, which had around 5,000 members, could not be considered "purely personal or family-related", as the information reached an unlimited number of people. Furthermore, the Privacy Appeals Board found that A was the data controller and had an obligation to provide access under Article 15 of the GDPR.
The Privacy Appeals Board also rejected the argument that freedom of expression should prevent access, referring to the fact that the right of access does not limit people's ability to express themselves in the group. They concluded that B was nevertheless entitled to access personal data relating to himself, but not information about others, and pointed out that time restrictions or requirements for a medical certificate could not be introduced to undermine the obligation to provide access. The Data Protection Authority's order was thus upheld.
FATCA is a U.S. law that requires foreign banks to report account information about U.S. citizens, including the so-called "Accidental Americans" – persons who happen to have U.S. citizenship without close ties to the United States. That's a pretty extensive disclosure of information to the US.
In April 2021, the EDPB asked EU/EEA member states to consider, and possibly revise, bilateral agreements that require the transfer of personal data to third countries for taxation purposes.
In a letter to the EDPB, the Association of Accidental Americans (AAA) has pointed out that a very long time has passed without any action being taken by the member states. In 2023, a complaint was filed in Belgium that has since taken a long road through the Belgian legal system, with several appeals. Decisions authorising the transfer have gradually been annulled, and there is now a further decision from the Data Protection Authority stating that the transfers are unlawful.
The background is that several American/Belgian citizens living in Belgium were notified by their bank that it had to disclose their bank accounts, including deposits and other assets, due to the FATCA agreement. This agreement requires banks to disclose to local tax authorities the bank accounts that US citizens have opened abroad. The local authorities then report to the US.
The Belgian Data Protection Authority concluded that the transfer violated several provisions of the GDPR, inter alia because the purpose of sharing the data was formulated too generally and did not fulfil the data minimisation requirement. Many account holders were reported only because they were US citizens, regardless of whether there was an actual tax risk. In addition, the audit found that the legislation did not provide sufficient guarantees for how long the data can be stored, or to ensure access and appeal rights.
Even though the FATCA agreement was entered into before the GDPR entered into force, the Belgian Data Protection Authority concluded that the agreement had to comply with the GDPR. There was no valid transfer mechanism in place, neither according to the GDPR's standard rules for third-country transfers nor according to any exemption provisions. The Belgian Data Protection Authority therefore asked the institution to rectify the breaches within one year and better inform those affected, but did not immediately stop the data transfer itself out of consideration for Belgium's international obligations.
In 2023, I wrote that Norway also signed the FATCA agreement with the US, and there were some discussions about privacy when it was signed. So far, I can't see that there have been discussions similar to the one in Belgium.
I mention the next case because it emphasises (again) how important it is to verify that consent is actually in order when you rely on it.
A seller of hearing aids sent letters to potential customers whose names it had obtained from a "data broker". One recipient objected to the data broker and asked for their data to be deleted. The data broker forwarded the deletion request to the hearing aid company, but did not delete the data itself.
The Norwegian Data Protection Authority found that the hearing aid company could not rely on legitimate interest and that it had to make sure that the data broker had obtained sufficient consent covering the resale of the addresses. An English translation reads: "…it is not sufficient for the controller to rely on the declarations of the data seller and the existence of a contractual clause to consider that the data has been lawfully collected and can be reused."
The data broker was of the opinion that it was not the data controller – it was just a broker. It did not succeed with this argument.
Read more about the case here.
The German consumer organisation VZ NRW recently attempted to stop Meta's use of Facebook data to train its LLM through an interim injunction in Cologne, but the court rejected the claim, so the case must go through ordinary court proceedings. VZ NRW believes that Meta's AI training violates data protection rules. The court held that Meta could rely on legitimate interest for the training, and is thus in line with the Irish Data Protection Commission (DPC).
However, the refusal to grant an interim injunction does not mean that Meta will necessarily succeed in a final decision; the main case is still being processed in the judicial system, where the evidentiary requirements are different from those for an interim injunction.
The Irish DPC has given Meta the go-ahead to start processing European users' data to train artificial intelligence (AI). This is despite the fact that several data protection organisations, especially NOYB, continue to believe that Meta's plan violates data protection rules. According to the DPC, Meta's proposal now includes clearer information for users, an expanded opportunity to object to data sharing and better data security. Among other things, the company has made a form available that makes it easier for users to request that their data not be processed for AI purposes.
Although the Irish DPC believes that Meta now complies with the requirements of the GDPR, many are still sceptical. Some believe that Meta, in its role as data controller, does not have a sufficient legal basis for collecting huge amounts of user data and allowing data processors to use it to train AI models. Critics point out that Meta has previously lost a case about using legitimate interest for targeted advertising – and argue that this is no different. We will probably end up with a court decision on this.
NOYB, led by Austrian lawyer Max Schrems, is pursuing legal action against Meta's plans and has sent a class action notice and a cease-and-desist letter. At the same time, the Hamburg Data Protection Authority (Hamburg DPA) has initiated an urgency procedure under Article 66 of the GDPR, demanding that the Irish DPC intervene against Meta's use of data to train AI, even if the Irish DPC takes a different view.
If Meta's practices end up being recognised as unlawful, the company could risk liability towards millions of European users. The case illustrates the increased awareness of technology companies' duty to safeguard privacy, also in connection with the development of AI.
As we realise the importance of LLMs for the future, it is perhaps not surprising that voices are now saying that LLMs should be publicly owned. There is an article about this in the Guardian, see link below. It argues that public, openly accessible LLMs can be built – trained on curated, multilingual, historically grounded corpora from libraries, museums and archives. Such models could be transparent, academically governed and supported by public funding.
Is this possible? Well, it's possible, but it doesn't seem very likely that it will happen.
Read more here.
According to Article 4 of the AI Act, both providers and users of AI systems must ensure that employees and others who handle AI systems on their behalf have sufficient AI literacy to understand how the systems work, how they should be used, and what risks may arise.
In the EU, Article 4 entered into force earlier this year, so for businesses operating in the EU, this is already relevant material. For Norway as an EEA country, it's good to prepare; the rules are coming.
This applies in particular to those who actively develop, implement or manage AI solutions, but in a broader sense also to employees who are affected by, or work closely with, these systems. The European Commission has issued a FAQ on the topic that highlights key requirements. Below are some key points.
Level of knowledge
The level of knowledge varies depending on the complexity of the AI system and the person's role in the organisation. Technicians and developers typically need a deeper understanding of algorithm design, dataset quality and ethical implications, while non-technical roles typically need to know the basic principles, possibilities and limitations of AI. The main goal is for all relevant players to be able to use AI systems safely and correctly, and have enough insight to identify any challenges or errors.
Training measures
Training to ensure AI literacy can include workshops, e-learning courses and continuous training on new developments in the AI field. Among other things, organisations can integrate hands-on exercises, real-life use case demonstrations and cross-functional collaboration between IT departments and other business areas. Since AI technology is constantly evolving, there will often be a need for continuous updating of employees' expertise, preferably with external speakers or internal professional groups that keep up with research.
Requirement for certification
There is no explicit requirement in Article 4 of the AI Act for an official, centralised certification for AI knowledge. However, some industries or organisations may consider adopting voluntary certification schemes to ensure that employees achieve a certain level of knowledge.
Risk-based approach is OK
The AI Act is built around a risk-based approach, which means that the requirements for security and documentation are adapted to how critical the AI system in question is to health, safety or fundamental rights. Similarly, businesses can adapt the scope and content of their training programmes based on the risk associated with the system or its use. A high-risk AI system will require more thorough training and clearer routines, while less complex or low-risk systems may require a simpler training programme.
So what does this mean in practice? Many of us have different versions of nanolearning for information security and data protection. It might be a good idea to do something similar for AI.
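Purely as an illustration of what such role- and risk-adapted training could look like in practice, here is a small sketch. The risk tiers reflect the AI Act's general risk-based approach, while the specific activities are my own examples, not taken from the FAQ:

```python
# Hypothetical mapping from system risk to training depth (illustrative only).
TRAINING_BY_RISK = {
    "high":    ["role-specific workshops", "hands-on exercises with the system",
                "routines for human oversight and error reporting"],
    "limited": ["e-learning on capabilities and limitations", "use-case demonstrations"],
    "minimal": ["short nanolearning module on basic AI literacy"],
}

def training_plan(risk_level: str) -> list[str]:
    # Fall back to the minimal programme if the risk level is unknown.
    return TRAINING_BY_RISK.get(risk_level, TRAINING_BY_RISK["minimal"])

print(training_plan("high"))
```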
The FAQ itself can be found here.
Fortunately, many of the examples of prohibited AI are not relevant to most people. Few in our part of the world work with social scoring and the like. Emotion recognition in the workplace, however, is a type of AI that is on the market, although probably not widely used in Norway. Nevertheless, it is important to know what counts as prohibited emotion recognition in the workplace – this is probably the type of prohibited AI we come closest to.
The guideline is 140 pages of typical EU text, so here is a selection of what I think is most practical for many. If your company does anything that might be borderline, read the guideline carefully; it can be found here.
The guideline states that the following is not emotion recognition, where I have highlighted things I think are particularly worth noting:
At the same time, the guideline indicates that the following is emotion recognition:
Furthermore, the prohibition is linked to "workplace" and the understanding of what constitutes a workplace is important. The guideline has the following practical examples:
The rules are similarly applied to educational institutions, and the guideline provides the following examples:
The AI Act defines an AI system as follows in Article 3:
'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
As the AI Act has extensive provisions for systems covered by the definition, it's obviously important to know when you're inside or outside the regulations.
There are seven criteria in that sentence in Article 3, and the guideline goes into detail on all of them. It is too long to go through here, but to summarise: a very large number of systems can be covered. The guideline says that the system's architecture and functionality are crucial for whether something is covered, and the authors also write that they are unable to create an exhaustive list of what falls within the definition. At the same time, simpler systems that technically fall within the definition may still be exempt from the strict requirements.
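As an informal aide-mémoire – not a legal test – the elements of the Article 3 definition can be listed as a checklist to walk through for a given system. The wording below paraphrases the definition quoted above; note that adaptiveness is phrased as "may exhibit" and is therefore not a strict requirement:

```python
# Informal checklist of the elements in the Article 3 definition of an
# "AI system" - an aide-memoire, not a legal assessment.
AI_SYSTEM_ELEMENTS = [
    "machine-based system",
    "designed to operate with varying levels of autonomy",
    "may exhibit adaptiveness after deployment (not a strict requirement)",
    "operates for explicit or implicit objectives",
    "infers from the input it receives how to generate outputs",
    "outputs are predictions, content, recommendations or decisions",
    "outputs can influence physical or virtual environments",
]

def elements_present(answers: dict[str, bool]) -> list[str]:
    """Return the definitional elements a given system appears to satisfy."""
    return [element for element in AI_SYSTEM_ELEMENTS if answers.get(element, False)]
```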
It's complicated. There needs to be an overview of typical systems that are exempt from the requirements, but I haven't seen a sensible one yet. We'll have to come back to that. You can find the guideline here.