Privacy corner

by Eva Jarbekk



The autumn is well underway, and the data protection authorities in Europe are producing a stream of new decisions. Ireland has issued another big fine, this time to TikTok. There are so many decisions now that many struggle to keep up with what is important, so below I have selected some that I think are worth knowing about. The boundary between agreement and consent remains in focus. It is also significant that legal action has been initiated against the DPF, the legal basis for transfers of data to the United States. There has been progress on a more general data protection level as well – the companies affected by the DMA have been identified, there has been renewed criticism of how the automotive industry handles data protection, and much more. Happy reading!

There are more activists than Schrems

The new EU-US Data Privacy Framework (DPF) breaches both the GDPR and the EU's Charter of Fundamental Rights, according to Philippe Latombe, a member of the French Parliament. As is well known, it was in early July that the European Commission adopted a new legal framework for the safe transfer of personal data between the EU and the US. Latombe believes that the new data transfer agreement does not provide sufficient guarantees for respecting privacy, and he asks the European Court of Justice to put it on hold. Many thought it would be Schrems who would attack the DPF, but this was not the case.

An interesting element is that he is also employed by the French Data Protection Authority, CNIL. He has nevertheless filed the complaint as an individual – anything else would be quite remarkable, since the supervisory authorities are bound by the EU's decision to approve the DPF. He is said to have chosen a legal procedure that moves faster than the route Schrems took, though supposedly with a somewhat greater risk that the case will be dismissed.

The complaint has not yet been made public, so I have so far found no comment on how likely it is that he will succeed. We will certainly find out as soon as more is known, because this issue is important to many.

Read more here and here.

However, Schrems and NOYB are active as well

Meanwhile, Max Schrems and NOYB (None of Your Business) have named their next target in the battle for digital privacy: the fitness watch Fitbit. NOYB has filed complaints against the manufacturer of Fitbit in Austria, the Netherlands, and Italy for violating data protection rules.

NOYB claims that the watch becomes virtually useless if the user does not consent to the sharing of personal data. Among other things, it is criticized that Fitbit does not inform the user about how personal data is used, and that the only way to withdraw consent is to delete the user account. Maartje de Graaf, a lawyer at NOYB, writes: "First, you buy a Fitbit watch for at least 100 euros. Next, you sign up for a paid subscription, only to find that you are forced to 'freely' accept sharing your data with recipients around the world. Five years into GDPR, Fitbit is still trying to enforce a 'take it or leave it' approach." NOYB believes that such consent is neither voluntary, informed nor specific – meaning it does not meet the GDPR's requirements.

The criticism is in line with much else that is happening now, where what is necessary in order to deliver a service is in focus and there is increased attention to the difference between agreement and consent.

You can read more about the case here.

Rough weather for TikTok

The Irish Data Protection Authority (DPC) has fined TikTok €345 million for violations of the GDPR in its processing of children's personal data.

According to the DPC, the TikTok violations involved the following:

  • By default, profile settings for child accounts were set to public, meaning anyone could see the content, even without a TikTok account.
  • The "Family Pairing" feature allowed an adult to pair their account with a child's account without verifying that they were the child's parent or guardian, which among other things made it possible to enable direct messaging for child users aged 16 and 17.
  • There was too little transparency in the information provided to child users.
  • TikTok used so-called "dark patterns" to nudge users towards less privacy-friendly options during registration and when posting videos.

Of the takeaways from this case, perhaps the most practical for many is the focus on manipulative design. There are not yet many decisions dealing with this, although it is much discussed. There may therefore be reason to place somewhat more emphasis on "fair" design when services and interfaces are developed in the future.

You can find the DPC's decision here.

NYPD wants to use drones to monitor backyard parties

The New York Police Department (NYPD) plans to deploy drones to monitor various events, including backyard parties. The plan has sparked protests from several activists who believe that such drone surveillance is an invasion of privacy.

About 1,400 police departments in the United States use drones in one form or another, according to a recent report by the American Civil Liberties Union. Until now, however, the use of drones has largely been limited to the operator's field of view and to emergency situations. According to the report, the use of drones is "poised to explode" among law enforcement agencies in the United States.

Read the full article here.

Data protection in the UK is not quite like in Europe

The UK's Home Office is accused of secretly lobbying on behalf of Facewatch, a company that offers artificial intelligence facial recognition technology. Critics argue that the technology violates human rights and is inaccurate and biased, especially against people with darker skin. Despite widespread concern, the technology has already been introduced in hundreds of stores in the UK to identify shoplifters.

The Information Commissioner's Office (ICO) concluded on 31 March that no action was required against Facewatch, as "the company has a legitimate purpose in using personal data to detect and prevent crime". However, it was recently revealed that the Home Office wrote to the ICO, making it clear that the use of facial recognition technology to combat retail crime was an important policy agenda for Chris Philp, the UK's policing minister. The Home Office also wrote that Philp would contact the ICO if its conclusion was not positive. The apparent threat came two days after a closed-door meeting on 8 March between Philp, senior Home Office officials and Facewatch.

Several activists have now demanded an independent inquiry into the Home Office's influence on the ICO. The ICO, the Home Office and Philp all deny any undue influence on the decision.


Privacy in cars is going to be an issue

A survey conducted by Mozilla shows that none of the major car brands meet the most basic data protection and security standards in their new Internet-connected models. I am not particularly surprised, and I believe this is a field that will receive more attention in the future. None of the 25 brands examined passed Mozilla's test. Some automakers, such as Nissan and Kia, have also been criticized for collecting and processing highly sensitive personal data, including genetic and health information, for marketing purposes. Some cars collect information such as "race, facial expressions, weight, health information, and where you drive" – a pretty powerful mix of information.

"Every new car today is a privacy nightmare on wheels collecting massive amounts of personal data", said Jen Caltrider, program director for the Privacy Not Included project.

Mozilla also discovered that many car brands practice "privacy washing", i.e., they give consumers the impression that they do not have to worry about privacy when the reality is completely different. The report also criticises what the companies consider to be consent. For example, Subaru says that simply by being a passenger in the car, you are considered a "user" who has consented to the company collecting information about you. That is an interesting angle on what consent should be. Mozilla also points out that a number of car brands say it is the driver's responsibility to inform passengers about the car's privacy policy. I doubt many drivers are aware of how the information is used at all. It is probably good that this will receive more attention.

Read more here.

Does the motive behind a freedom of information request matter? Not this time

A German court found that a subject access request is not "excessive" even if it pursues purposes other than data protection. In this case, an insurance company had increased the insured's premium, which the insured believed was unlawful. The insured therefore requested access to information related to the premium increases, but the company refused.

The insured challenged the legality of the premium increase in court and asked the judge to order the company to provide access to information about the premium increases. The German court concluded that the insured had a right of access and noted that such information constitutes personal data within the meaning of Article 4(1) GDPR. Furthermore, the court held that the motivation behind the subject access request was irrelevant. It therefore could not see that the claim was "excessive" under Article 15 GDPR, and the motion was granted.

Read the decision here.

The supervisory authorities are concerned with consents

On 18 July 2023, the Italian Data Protection Authority fined an Italian company, Compara Facile, €40,000 for conducting marketing calls by telephone without obtaining informed consent from data subjects.

The Italian Data Protection Authority found that Compara Facile, after purchasing personal data from a company in Moldova, contacted people to ask if they were interested in receiving commercial offers, and then sent them a text message with a link to a website where they could give their consent. The first telephone contact thus took place without the individual's consent, and without the individual receiving sufficient information about the processing of their personal data. The authority accordingly found several breaches of the GDPR.

Read the full decision here.

In a similar case, which also concerned consent to digital marketing, the Italian Data Protection Authority again found a breach of the GDPR. In this case, the Italian Data Protection Authority found that Tiscali Italy provided insufficient information to its users, as Tiscali Italy did not disclose a time limit for the storage of personal data for marketing purposes. The Italian Data Protection Authority also noted that Tiscali Italy had carried out so-called "soft spam" activities, sending text messages to over 160,000 customers who had not consented to receive advertising. The fine was set at €100,000.

Read the decision here.

Information security is important

The Swedish Data Protection Authority (IMY) fined Trygg-Hansa SEK 35 million.

Some 650,000 customers of the Swedish insurance company Trygg-Hansa had their personal data openly accessible online. By changing a few digits in a URL, customers could open other customers' documents. According to IMY, the problem was so fundamental that Trygg-Hansa should have discovered it even before the system was launched. Security experts I have spoken to confirm that this was a very elementary mistake.

The exposure lasted from October 2018 to February 2021 and included personal identity numbers, financial information, contact details, and health information.
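The flaw IMY describes is a textbook insecure direct object reference (IDOR): the document identifier in the URL was effectively the only thing controlling access. A minimal sketch of the server-side ownership check that prevents it (all names and data below are hypothetical, not Trygg-Hansa's actual system):

```python
# Hypothetical sketch: never serve a document based on its ID alone;
# always verify that the authenticated user is allowed to see it.

DOCUMENT_OWNERS = {
    "doc-1001": "customer-A",
    "doc-1002": "customer-B",
}

def fetch_document(document_id: str, authenticated_user: str) -> str:
    """Return a document only if the requesting user owns it."""
    owner = DOCUMENT_OWNERS.get(document_id)
    if owner is None:
        raise KeyError("no such document")
    if owner != authenticated_user:
        # Guessing another customer's ID in the URL now fails here.
        raise PermissionError("not authorized for this document")
    return f"contents of {document_id}"
```

With a check like this, changing digits in the URL yields an authorization error instead of someone else's insurance documents.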

The case can be found here.

Some good privacy news towards the end

The Norwegian technology company Secure Practice has been awarded NOK 29 million in EU funds to strengthen cyber security throughout Europe. This comes less than two years after the company tested its ideas in the Norwegian Data Protection Authority's regulatory sandbox. Secure Practice estimates that 85% of security breaches are due to human error and strives to reduce this risk through personalized training delivered via an AI-based cloud solution. As the first Norwegian company to receive support through Digital Europe, Secure Practice is in a rare position to contribute to our digital future. With the EU as its client, the company can build significant networks and cooperation across Europe.

Read more here.

A little more from the UK

The UK government has made changes to its controversial Online Safety Bill, showing signs of softening its stance on end-to-end encryption (E2EE). The bill, which aims to combat harmful content on the internet, initially raised concerns about compromising the security of private messages on digital communication platforms.

E2EE is an encryption method that ensures that communications remain safe by keeping the decryption keys hidden, even from platform providers. The proposed bill threatened to undermine this security by giving Ofcom (the UK's regulatory and competition authority for the communications industry) the power to require the use of "accredited technology" for content moderation, which would require the identification and removal of illegal content.
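To illustrate the principle at stake: with E2EE, the platform only ever relays ciphertext, so without the endpoints' key it has nothing readable to scan. The toy sketch below (a simple SHA-256 keystream XOR – NOT real cryptography; real E2EE uses vetted protocols such as Signal's) shows that the party in the middle sees only ciphertext:

```python
import hashlib

# Toy illustration of the end-to-end principle: the two endpoints share
# a key; the platform relaying the message cannot decrypt it.

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a SHA-256-based keystream derived from `key`.
    Applying it twice with the same key restores the plaintext."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

shared_key = b"negotiated directly between the two endpoints"
plaintext = b"see you at noon"

ciphertext = keystream_xor(shared_key, plaintext)  # all the platform sees
decrypted = keystream_xor(shared_key, ciphertext)  # receiver's side
assert decrypted == plaintext
```

The point of contention in the bill is precisely this: any "accredited technology" that scans message content must either run on the endpoints or break this property.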

Major tech companies such as WhatsApp, Apple and Signal raised objections, warning that they would rather leave the UK than compromise user privacy.

Now, however, it seems that the UK government has taken the criticism to heart, and it has announced that tech companies will not be required to automatically scan digital communication between users. In reality, however, there is little to suggest that the government's view has changed. The controversial part of the bill will not be removed, meaning a backdoor undermining end-to-end encryption can still be introduced. The only change is that the government has said it will not, for now, require tech companies to actually use such scanning.

It remains to be seen whether the UK's bill will be implemented or if it will be dropped altogether. Meanwhile, the government's decision could have a major impact on similar legislation being negotiated in the EU.

The EU has identified who will be DMA gatekeepers

The EU has now identified 22 digital platforms that will be designated as so-called gatekeepers under the EU Digital Markets Act (DMA). These tech giants will be subject to specific (and stricter) obligations under the DMA from March 2024. The rules do not apply to ordinary businesses in Norway, as they only target the largest platforms; for most of us, they are relevant chiefly in our capacity as users of those platforms. As you know, the DMA is intended to ensure fair competition and the rights of end users. The EU expects the new rules to create new opportunities and innovation, and to limit unfair practices by gatekeepers. Among the measures adopted, gatekeepers will be required to allow third parties to interoperate with the gatekeeper's own services in specified situations. Gatekeepers will also be prohibited from ranking their own goods and services more favourably than similar goods and services offered by third parties on the gatekeeper's platform.

To be designated a gatekeeper, a company must have over 45 million monthly active users in the EU, an annual turnover of over €7.5 billion in the last three years, or a market capitalization exceeding €75 billion. The list therefore includes a number of technology giants and their services, including Amazon, Apple, TikTok, Meta, Instagram, Google and Microsoft.

If companies violate the new legislation, they may face penalties of up to 10 percent of global annual turnover, or up to 20 percent in case of serious offenses.
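For a sense of scale, those caps are easy to compute. A hypothetical sketch (illustrative figures only, not any real company's turnover):

```python
def dma_fine_cap(global_annual_turnover_eur: int, serious: bool = False) -> int:
    """Maximum DMA fine: 10% of global annual turnover,
    or 20% for serious/repeated infringements (hypothetical helper)."""
    percent = 20 if serious else 10
    return global_annual_turnover_eur * percent // 100

# A gatekeeper with a hypothetical €100 billion global annual turnover:
first_offence_cap = dma_fine_cap(100_000_000_000)           # up to €10 billion
serious_offence_cap = dma_fine_cap(100_000_000_000, True)   # up to €20 billion
```

In other words, for the largest gatekeepers the ceiling runs into the tens of billions of euros, which is why the DMA is taken so seriously.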

Do you have any questions?