Newsletter

Privacy Corner

by Eva Jarbekk

Published:


Hope everyone has had a great summer and is ready for a wonderful and colourful autumn! In terms of privacy, July was a bit quieter than many other years – there were no groundbreaking decisions from the EU in July, as there have been so many times before, but there are still more than enough relevant issues to be aware of.

I have spent part of the summer finishing the new edition of the introductory book on GDPR that I have written together with Simen Sommerfeldt. A new print edition is just around the corner. That will be exciting. Going forward, I and a team at Schjødt will update the commentary edition of the GDPR in Karnov – and after that, I will start on a commentary edition for the AI Act. There's plenty to do.

The AI Act came into force in the EU on 1 August and will likely be incorporated into Norwegian law by 2025. We'll get there soon. I see that almost all privacy events currently deal with AI. Our next network meeting will include a session on the procurement of AI, and we will hear more about suppliers' use of customer data. Below is a selection of other relevant issues. There could have been even more, but I felt I had to stop when I reached 10 pages. Happy reading!

Even more facial recognition

In the previous newsletter, I wrote that I was surprised by how many new cases there are now where facial recognition is allowed. Here is yet another case. 

In England, Amazon AI cameras have been used for two years to monitor and analyse passengers' emotions. The system has been tested at eight train stations. It is a combination of smart CCTV cameras and traditional cameras where the video streams are connected to cloud-based analytics software. The system records age, gender, and various emotions. Facial expressions and other non-verbal signals are interpreted to identify passengers who may be distressed or upset – and then officials can prevent conflicts or emergencies.

This raises questions both about whether it is ethically acceptable and about whether the analysis accurately interprets passengers' emotions.

The system uses machine learning to detect individuals going onto the tracks, predict crowded platforms and identify what they call "anti-social behaviour".

The London Underground has analysed what can be achieved with real-time data insights into passenger behaviour. In addition to the above, they believe real-time alerts can be generated for ticket cheating, people leaning over the tracks, smoking, sitting on benches for extended periods or unfolding electric scooters. On the one hand, they write that facial recognition is not necessary; on the other hand, they write that facial recognition would be able to reveal repeat offenders.

I must say that I am a bit sceptical about this, and I doubt whether it would be allowed within the EEA. The UK has a slightly different tradition of privacy than the EU. It will be interesting to see what is allowed in the future.

You can read more about this case here.

Not just facial recognition – what about neurodata and “brain fingerprinting”?

Europe’s data protection authority has begun to look at neurotechnologies. The background is products that collect and process neural data, such as headbands that record information from the brain. This can be used in wellness products for meditation, for mindfulness training or to control a machine. 

The field is rapidly evolving in many areas such as medicine, entertainment, marketing, wellness, safety, employee monitoring, and more. 

At the beginning of June, the EDPS and the Data Protection Authority in Spain released a separate report on this: "TechDispatch on Neurodata". The report describes different neurotechnologies and privacy challenges around this. It is quite clear that neurodata about individuals can be sensitive information. 

The report goes into more detail on this and should definitely be read by those who are considering using these types of products – whether as a consumer or as a data controller. I have glanced at the report and have learned a completely new term – “brain fingerprinting” – that is, reconstructing the thoughts present in a brain. 

The brain is active 24 hours a day with thoughts, dreams and nightmares. Reproducing all such activity for use in the real world does not sound entirely positive. They write on page 16:

“As a rule, the EDPS considers that the processing of data such as ‘brain fingerprinting’ should only occur for healthcare purpose, accompanied by all data protection conditions and safeguards. It would be alarming for any controller, other than a provider of healthcare, to use neurodata to detect or infer an individual’s health information (in particular very sensitive information that is possibly not yet known to that individual themselves, e.g. about psychological disorders or a neurodegenerative disease).”

It makes sense to suggest that this should be done with great restraint. 

You can find the report here.

Legal basis for AI development? On Meta, OpenAI and more

In the early summer, there was a debate in Norway about the development of large language models (LLMs). Some believe that consent is the best legal basis, not legitimate interest. The Italian Data Protection Authority has, as we know, spent a lot of time on OpenAI's ChatGPT. This has resulted in a separate taskforce within the EDPB looking into the use of personal data in LLMs, and in ChatGPT in particular. It issued a preliminary report in May.

When it comes to developing AI models, companies have typically used legitimate interest as a legal basis. This is understandable, as it can be challenging to obtain consent from each individual when information from many people is needed to train a model. The report from the EDPB’s taskforce opens up the possibility of using legitimate interest, but it requires a thorough assessment by the controller, which also takes into account how the LLM is used for the benefit of the general public. This means that the assessment may vary from case to case.

The taskforce focuses on the fact that ChatGPT/OpenAI has relied on “web scraping”. Scraping involves third-party data, but the report does not address whether there is a difference between using data from third parties and from first parties, nor whether it matters if the data is public or private. It is conceivable that a contractual relationship between the data controller and the user (as there is for first-party data) makes it easier to ensure transparency and information than with “scraping” from third parties.

However, the report is only a preliminary one, and it will be interesting to see what the final version will look like. I wrote an article about this together with my colleague Anna Olberg Eide, which was published in IAPP this summer – you can find it here.

Meanwhile, the Norwegian Data Protection Authority has told Meta that it must use consent. It seems that the Irish Data Protection Authority supports this. If this becomes a general requirement for all AI, it will probably limit the opportunities to develop good AI tools in the EU. One could probably go a long way by recognising legitimate interest and providing an easy way to opt out. 

Meta, for its part, has concluded both that it is pausing the roll-out of its new Llama model in Europe (see links here and here) and that it will not train Llama on information from Facebook. There are many people in Norway who use Llama, so it will be interesting to follow this further. 

Meta has challenges in Africa as well

It is not often that we have privacy news from Africa, but now the Nigerian Competition and Consumer Protection Authority has looked into Facebook and WhatsApp. There are many Nigerian users of these services. Nigeria also has a data protection law which, among other things, requires the data controller to have a dedicated data protection organisation and to submit a data protection audit regularly. Meta has reportedly done none of this – in addition, there are said to be a number of other violations of Nigerian privacy law. This has now resulted in a fine of no less than $220,000,000. 

The case is discussed here.

Complaints about Microsoft – what can a supplier use customers’ information for?

At the beginning of June, NOYB filed two complaints against Microsoft’s (MS) use of personal data, on behalf of two schoolchildren in Austria. The information comes from the Microsoft 365 Education school tool. NOYB may raise such matters on behalf of individuals.

The father of one of the students had sent inquiries to both MS and the school to clarify how the child’s personal data was processed. MS answered that it was the school that was responsible for the processing. That is not entirely unreasonable, given that the student has no direct relationship with MS and the school has acquired the software. The school, on the other hand, replied that the only information it was responsible for was the student’s email address. So, at first glance, the actors do not fully agree on who is responsible for the processing.

I have not read the documentation from MS myself, but NOYB writes that the terms of use and privacy statements are a "labyrinth" that is difficult to navigate and understand. That is not unusual. 

The second case also concerns a school student in Austria. There, NOYB claims that MS installed cookies that, according to Microsoft’s own documentation, analyse user behaviour, collect browser data and are used for advertising. NOYB claims this can be used for very intrusive profiling, and criticises that it happens without the student’s school even being aware of it. They believe it is likely that the company tracks all minors using its educational products without a valid legal basis for the processing.

We have touched on this topic in the network before. What can an IT provider take responsibility for when it is a data processor and the personal data comes from the customer’s employees or students? One of the best-known cases on this is the Danish Google Chromebook/Workspace case, which started in 2019 and led to Helsingør municipality having to temporarily stop using Chromebooks in schools in 2022.

The Danish Data Protection Authority subsequently reviewed practices in the area across 53 municipalities. The main point was that the municipalities and schools did not have the authority to "give" students' personal data to Google. As a data processor, Google cannot assume responsibility for the processing of students’ personal data. Freely translated, the Data Protection Authority writes the following: 

“Before using a tool, as the data controller, you must get an overview of how you process personal data in it, and you must be able to document it. This requirement applies to all organisations. But when it comes to public authorities – where we as citizens cannot opt out of our data being processed – the Data Protection Authority has a special expectation that the necessary analyses will be carried out and documented,” says Allan Frank, IT security specialist and lawyer at the Data Protection Authority, and continues:

“Most standard IT products today have a very complex contract structure that not only includes many possibilities for variations in the processing of personal data but also has a relatively high frequency of changes. This makes it more difficult than necessary for responsible businesses and governments to comply with the General Data Protection Regulation (GDPR), because you easily lose track of what happens to the data. We at the Data Protection Authority therefore encourage the contracts to be made more transparent - not only with regard to the processing structure, but also with regard to the consequences when the conditions surrounding the delivery change."

It is easy to agree with this. Furthermore, it is practically important that the authority writes that it has concluded that the schools "have legal authority to disclose the students' information for the purpose of providing the services, improving the safety and reliability of the services, communicating with, among other things, the municipalities, and complying with legal obligations".

At the same time, they believe that the Public School Act does "not give legal basis for municipalities to pass on students’ information for the maintenance and improvement of Google Workspace for Education service, ChromeOS and Chrome browser, or for the measurement of performance and development of new features and services in ChromeOS and Chrome browser".

Against this background, the Data Protection Authority ordered the municipalities to bring their processing into compliance with the rules by ensuring that there is a legal basis for all processing that occurs. This was in January 2024. This can be done, for example, by:

  • The municipalities no longer disclosing personal data to Google for these purposes. This will likely require Google to develop a technical way of cutting off these data streams.
  • Google itself refraining from processing the data for these purposes.
  • The Danish Parliament establishing a sufficiently clear legal basis for disclosure for these purposes.

The municipalities were given a deadline of 1 August 2024 to ensure this. During July, 52 municipalities confirmed that they will no longer disclose such personal data to Google. The contracts with Google are also to be amended so that personal data will only be processed on instructions from the responsible municipality (except where processing is required under applicable EU regulations or national legislation). The Data Protection Authority says that there are still some outstanding questions in the case.

The municipalities have also had to confirm that they will refrain from using services where personal data is processed in third countries that do not offer equivalent protection of data subjects’ rights. This also applies to the supplier’s maintenance of infrastructure, where personal data processed on behalf of the responsible municipalities may be involved.

The Danish Data Protection Authority has asked the EDPB for an opinion on the data controller's documentation obligation regarding the data processor’s use of sub-processors. When this opinion is available, the Data Protection Authority expects to make a final assessment of the sub-processor chain in the municipalities' use of Google's products.

Without knowing the cases in detail, it seems to me that NOYB is trying to initiate a similar process against Microsoft to the one the Danish Data Protection Authority has run against Google. The Danish Data Protection Authority achieved relatively large changes without threatening fines, although it probably cost Helsingør municipality considerable resources when it was ordered to stop using Chromebooks in schools. NOYB is (of course) going the opposite way and demanding that Microsoft be fined. 

The first case from NOYB against MS can be found here.

The second case from NOYB against MS is here.

Latest developments from the Danish Data Protection Authority in the Google case can be found here.

Access requests for security logs?

Denmark has significantly nuanced its view on whether security logs can be exempted from access requests. For a long time, the position was that personal data in a security log was not subject to access requests, but it is no longer that simple. On the basis of CJEU case C-579/21, the Data Protection Authority changed its opinion in a specific case where the individual requesting access wanted to find out who had “subscribed” to information about them from the civil registration system (CPR). I have not found the reason for the request, but it is not uncommon for citizens to want to know which public officials have looked up their information. This information could be derived from the security log in the CPR system. 

In case C-579/21, the CJEU emphasized that if a log shows information to which the data subject is entitled, the individual shall receive it. This means that one must look at what the log actually shows. In the specific case before the CJEU, the individual wanted to know who had looked up his/her information. 

Going forward, the Danish Data Protection Authority will assume that the data controller must provide a copy of log information when it displays information to which the individual in question is entitled. This could be, for example, information about searching for their data and the date (and purpose) of those searches.

If it is necessary for the data subject to exercise his or her rights effectively, the data controller must also state who carried out the search. This may be relevant if the data subject suspects that his or her information has been the subject of an unauthorized search. 

This means that anyone who requests access may receive more types of information if they state that the reason for the request is a suspicion of "snooping" on their information. 
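To illustrate, here is a minimal Python sketch of how such an access-request extract could be produced from a security log. The log format and all field names are hypothetical; the point is simply the one described above – the data subject receives the dates and purposes of the look-ups, and the identity of the person who searched only where that is necessary for the data subject to exercise their rights effectively (for example, suspected snooping).

```python
# Illustrative sketch only: hypothetical log format and field names.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class LogEntry:
    timestamp: datetime
    employee_id: str   # who performed the look-up
    subject_id: str    # whose data was looked up
    purpose: str       # recorded purpose of the look-up


def access_request_extract(log, subject_id, include_employee=False):
    """Return the log information the data subject is entitled to receive."""
    extract = []
    for entry in log:
        if entry.subject_id != subject_id:
            continue
        record = {
            "date": entry.timestamp.date().isoformat(),
            "purpose": entry.purpose,
        }
        # Only include who searched when it is needed for the data subject
        # to exercise their rights, e.g. suspected unauthorised look-ups.
        if include_employee:
            record["performed_by"] = entry.employee_id
        extract.append(record)
    return extract
```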

Tougher times for adtech - NOYB goes against Xandr

Xandr is a very large adtech company that offers a so-called Real Time Bidding (RTB) platform. Microsoft owns Xandr. The service works in such a way that when a user visits a website, an algorithmic auction takes place to determine which company can show the user an advertisement. 

The information about the user can be enriched with other information that the platform has about them. Often, the more information available, the more valuable the ad placement is because it becomes more targeted.
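To make the mechanism a bit more concrete, here is a minimal Python sketch of the auction idea. All names, segments and prices are hypothetical, and this is not how Xandr's platform is actually implemented; real platforms use standardised protocols (such as OpenRTB) and far richer profile data.

```python
# A simplified, hypothetical sketch of a real-time bidding (RTB) auction.

def run_auction(bid_request, bidders):
    """Ask each bidder to price the ad slot and pick the highest bid."""
    bids = []
    for bidder in bidders:
        bid = bidder(bid_request)          # each bidder prices the impression
        if bid is not None:
            bids.append(bid)
    return max(bids, key=lambda b: b["price"]) if bids else None


# The bid request describes the user visiting the page. The more profile
# "segments" the platform can attach, the more precisely advertisers can
# target the user - which is exactly what the complaint is about.
bid_request = {
    "page": "https://example-news-site.test/article",
    "user_segments": ["interested_in_travel", "age_30_39"],  # hypothetical segments
}

def travel_advertiser(request):
    if "interested_in_travel" in request["user_segments"]:
        return {"advertiser": "travel_ads_inc", "price": 2.40}
    return None

def generic_advertiser(request):
    return {"advertiser": "generic_ads_ltd", "price": 0.80}

winner = run_auction(bid_request, [travel_advertiser, generic_advertiser])
print(winner)  # the winning advertiser gets to show its ad to the user
```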

NOYB writes that Xandr collects and shares large amounts of personal data to create profiles of users and enable targeting. Some of the information is also said to be purchased from external parties. They believe Xandr holds hundreds of sensitive profiles of Europeans containing information about their health, sex life or sexual orientation, political or philosophical opinions, religious beliefs or economic status. Specific segments include things like 'french_disability', 'pregnant', 'lgbt', 'gender_equality' and 'jewishfrench'.

The interesting thing about the case is that Xandr discloses internal statistics on how it handles access requests from data subjects – and states that it never provides access or respects deletion requests, on the grounds that it cannot identify the person.

NOYB filed a complaint against Xandr with the Italian Data Protection Authority, Garante, in July. I would think this is going to result in a comprehensive mapping of how RTB platforms and Xandr work, and what possibilities they have to delete data about individuals. 

Compilation of requirements for consent banners

There has been greater clarity on what is required of consent for cookies. Although the European countries have adopted a much more uniform practice, there are still some differences. NOYB has, as we know, worked a lot on this and has made an overview of what the different countries think. Among other things, there are different attitudes to the use of colour differences on yes and no buttons. I would assume that the overview may need to be updated quickly because there is still a high rate of change in this area, but it may still be useful for some. 

You can find it here.

Consent texts must be consistent

The Latvian Data Protection Authority carried out an inspection of AQUAPHOTO in May 2023. AQUAPHOTO offers photo services to visitors at a water amusement park and uses facial-recognition software to identify customers and provide them with their images. The company informed about the data processing in four different ways, but the information was inconsistent. 

The amusement park's website stated that visitors could be photographed by approaching the photographers. In the "Rules for Visitors" section of the website, it was written that visitors accompanying minors could be photographed unless they had a special sign showing that they did not want to be photographed, and that the sign was distributed from AQUAPHOTO's stand. However, AQUAPHOTO's own terms stated that such a sign was distributed by the park itself, or possibly by the photographers. There was also an information sign at the cash register in the park that said: “Don’t be afraid of the AQUAPHOTO photographer. You will not be photographed in red. Green bracelets give lots of great pictures.” I understand that the colour of a bracelet was supposed to tell the photographers whether one wished to be photographed or not.

The company believed they had obtained consent – a green bracelet to be photographed, a red bracelet to avoid photography. 

The Data Protection Authority found that AQUAPHOTO’s practice did not meet the company’s own standards. Employees at the park did not inform visitors about the processing, and there was no clear procedure for choosing bracelets. Furthermore, pictures were taken in situations where the bracelets were not visible, e.g. in the attraction "Unique Tornado".

The Data Protection Authority imposed a fine of €1,000 on AQUAPHOTO for breach of the GDPR, including the articles on transparency, consent and the general processing of personal data. This wasn’t a huge fine, but the privacy risk wasn’t great either. Regardless, the case is a good reminder that different forms of privacy statements and terms must be consistent, and that actual practice must match what is stated. 

The case is discussed here.

EU postpones controversial proposal for backdoors in chat communication

There has been a lot of criticism of the upcoming chat control legislation, even though it is justified by the intention to prevent sexual abuse of children. Academics, privacy activists, journalists and politicians have warned that mandatory backdoors into chat services will mean the end of encrypted online communication.

The proposal would mean that companies like WhatsApp and Signal would have to embed a backdoor into their services allowing authorities to scan messages for child sexual abuse material (CSAM). This is an important goal, of course. However, if such a scanning system is built into an application, it can be used for other purposes as well, for example monitoring journalists or politicians. There are a number of technical discussions around this, but the main point is that authorities can access information that is intended to be protected, whether or not the information is illegal. It could well have a chilling effect on the exchange of opinions.

The EU member states were due to vote on the proposal on 19 June, but the vote was postponed because the proposal is so controversial. This does not mean that the proposal will not come back, however.

The proposal is discussed here.

AI on mobile phones presents a number of challenges

Apple and OpenAI want to integrate ChatGPT into Apple’s operating systems, but this has been criticized. 

One issue is possible competition law problems. Many are concerned that large technology companies will dominate AI through acquisitions and partnerships. Another issue is the uncertainties surrounding access to third-party apps and users’ privacy. Here, there may be challenges both in relation to the Digital Markets Act and GDPR.

According to what is known from Apple, ChatGPT will be integrated into iPhone, iPad and Mac operating systems later this year. Apple made this announcement at the Worldwide Developer Conference 2024 on 8 June when they introduced the iOS18 operating system.

To provide better answers, “Siri” is to have access to ChatGPT as well as users’ messages and emails. In addition, writing tools such as Notes and Pages are to have access to ChatGPT 4 to help users create written and visual content.

OpenAI is not to store the data that is used, and users’ (mobile phone) IP addresses are to be hidden. There will likely be a comprehensive discussion about whether this is technically correct. Supposedly, the whole thing is based on consent. It will be interesting to see how they intend to implement such consent – if it has to happen before each use of the AI, it can be cumbersome. If they do not do it this way, they risk that the consent is not considered valid in the EU. Apple has probably taken this into account. 

Although Apple has traditionally been considered a company that values privacy highly, several critical voices are now being heard. It probably doesn't help that OpenAI has a somewhat more unstable relationship with the regulations – cf. the processes initiated by the Italian Data Protection Authority. The result of this so far is that the EDPB has set up a group that works on privacy in ChatGPT. 

The interesting thing is that Apple finds the EU to be so regulatorily challenging that they choose to postpone the introduction of these services in the EU, see link here. The project is also postponed in China, but this time because China does not allow ChatGPT. It seems that suppliers will have to take into account different countries’ regulations to a greater extent going forward. This is going to be challenging.

Some cases on compensation

There were two new decisions from the CJEU in June on compensation for non-material damage where actual misuse of personal data was not proven.

Just the fact that there are so many cases about this from the CJEU shows that this is an area that national courts find complicated. Here I cannot help but comment that Norwegian courts have so far been extremely reluctant to request clarification from the EFTA Court in cases of principle, cf. the Grindr case, where the district court itself decided what constitutes special categories of personal data. It’s perhaps typically Norwegian to think one knows best oneself.

Regardless, both of the new cases are similar to the “Österreichische Post” decision and other cases on the same topic. 

In the first case, an application from Scalable Capital had been hacked and users' personal data were taken. The CJEU confirmed that the purpose of Article 82 is compensatory and not punitive, and that the criteria for assessing the compensation to be awarded are to be determined by the legal system of each member state. The Court also ruled that damage caused by personal data breaches is no less significant than physical damage, but that a national court may award lower compensation if the damage is not severe. The CJEU was also asked to clarify the term “identity theft”, which is used in recitals 75 and 85 as an example of a type of damage. The CJEU confirmed that the mere theft of personal data does not in itself constitute identity theft, but emphasized that compensation cannot be limited to cases where the data theft later led to identity theft. Data theft in itself may be sufficient. These are interesting considerations in paragraphs 47-58 of the decision.

The decision is relatively short and can be found here.

The second decision concerned a tax return containing information about disabilities and religious affiliation. This was sent to the wrong address. The applicants requested compensation of €15,000 for loss of control over their personal data, without being able to determine whether third parties had read the data. The CJEU confirmed that the fear a data subject experiences with respect to possible misuse of his or her personal data may constitute “non-material damage”. It referred to previous decisions and wrote that weight must be given to whether the fear can be considered well-founded. The CJEU writes that "a person’s fear that his or her personal data have, as a result of an infringement of that regulation, been disclosed to third parties, without it being possible to establish that that was in fact the case, is sufficient to give rise to a right to compensation, provided that the fear, with its negative consequences, is duly proven."

This case is also relatively concise and can be found here.

In the January newsletter this year, I gave a brief summary of what it takes to obtain compensation for non-material damage – I think it still holds up. 

New Swedish review of the role of the Data Protection Officer

IMY has reviewed the independence of Data Protection Officers (DPOs) in several public and private businesses, see here.

The DPOs held other roles in the business in addition to being DPO. Were they sufficiently independent? This is often a difficult issue; is it really the case that a business must pay for a DPO who is not involved in decisions that may be relevant to personal data? Yes, that is the answer. 

Several of the companies avoided criticism or fines even though the DPO also had some responsibility for compliance, risk/security and legal counsel.

The Västerbotten region was criticised for allowing the DPO to also hold a role as a lawyer. The Social Committee in Örebro was criticised for not providing the DPO with sufficient resources (there were not enough resources to be able to report) and for not having procedures ensuring that reports were made to the top level. The consolation may be that no one was fined.

Designating a DPO is not easy. However, it is quite important to place the role correctly and to allow for reporting all the way up to management. It’s also quite a good idea to have annual reports from the DPO showing that something is being done. And management must then follow up on the deviations the reports identify, but that is another matter.

IMY is also interested in Meta

Or rather, they are interested in businesses that use the Facebook pixel. IMY has imposed a fine of SEK 15 million on Avanza Bank AB. In June 2021, IMY received a breach notification from the bank showing that personal data had been incorrectly transferred to Facebook during the period from 15 November 2019 to 2 June 2021. 

The transfer occurred because the bank used the Facebook pixel for marketing purposes. Two functions of the analysis tool were unintentionally activated by the bank, resulting in the transfer to Meta of personal data relating to a large number of individuals who were logged in to Avanza Bank’s website or app. The personal data transferred included, among other things, social security numbers and financial information about loans and account numbers – sometimes in plain text, sometimes in hashed format. 
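As a purely illustrative sketch – this is not Meta's actual pixel code, and all field names are hypothetical – the Python example below shows how an "automatic" capture feature in a tracking tool can end up sending more than intended, and why hashing an identifier such as a social security number does not make the value anonymous.

```python
# Hypothetical sketch of what a tracking snippet might assemble and send to an
# ad platform when automatic data capture is (accidentally) switched on.
import hashlib
import json


def build_tracking_event(page_url, form_fields, auto_capture_forms=True, hash_ids=True):
    """Assemble the payload a tracking snippet might send to a third party."""
    event = {"event": "PageView", "url": page_url}
    if auto_capture_forms:                      # the kind of switch that can be left on by mistake
        captured = dict(form_fields)
        if hash_ids and "national_id" in captured:
            # Hashing does not make the value anonymous: the same input always
            # gives the same hash, so it can still single out the person.
            captured["national_id"] = hashlib.sha256(
                captured["national_id"].encode()
            ).hexdigest()
        event["form_data"] = captured
    return event


payload = build_tracking_event(
    "https://bank.example.test/loans?application=approved",
    {"national_id": "19900101-1234", "loan_amount": "250000"},
)
print(json.dumps(payload, indent=2))  # this is what would leave the bank's page
```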

IMY determined that the bank did not implement adequate technical and organisational privacy measures. The information was also subject to statutory confidentiality. IMY believes that loss of control over this type of bank information poses a high risk to the freedoms and rights of data subjects. 

You can find the case here.

We have briefly touched on APRA before.

If you don’t remember what it stands for, it is worth learning: the American Privacy Rights Act – the Americans’ GDPR. It does not exist yet, but a draft is now being outlined and discussed. The content is quite similar to the GDPR and, of course, it has sparked major protests in some circles. 

The Association of National Advertisers and the American Association of Advertising Agencies have jointly sent letters to the House of Representatives and the Senate urging changes to the proposal because they believe it will decimate the modern advertising industry and destroy small and medium-sized businesses that depend on advertising to reach their customers. It is also claimed that the rules will deprive individuals of access to the products, services, information and resources they enjoy and rely on. There are undoubtedly divided opinions on whether this is correct. 

On the other hand, the Heritage Foundation’s Tech Policy Center has praised the APRA draft for reducing incentives to exploit U.S. consumers.

It is certainly a good while before this becomes final legislation, but it is good that Americans are discussing this. And when the bill progresses further, we will take a closer look at what the consequences of APRA will be. 
