Data Protection update - June 2024
Welcome to the Stephenson Harwood Data Protection update, covering the key developments in data protection law from June 2024.
In:
- Data protection and privacy news: (i) the Data Protection Commission ("DPC") announced that Meta will not process EU and EEA user data for its artificial intelligence model; (ii) noyb filed complaints against Microsoft and Google; and (iii) Meta's elections tools were banned in Spain.
- Cybersecurity news: (i) the National Cyber Security Centre ("NCSC") published guidance for organisations considering payment in ransomware incidents; (ii) ransomware hackers stole records of 300 million patient interactions with the NHS; and (iii) the Information Commissioner's Office (the "ICO") and the Canadian data protection authority (the Canadian "DPA") launched a joint investigation into the 23andMe data breach.
- Other news: (i) the High Court made a significant judgment in relation to data subject access requests ("DSARs"); and (ii) the ICO released its final decision on its investigation into Snap.
We are also pleased to announce the first edition of the Neural Network! Beginning next month, we will be releasing a monthly AI newsletter. Each issue will be packed with the latest regulatory updates, key enforcement actions, and groundbreaking technology developments within the AI sector. For those already subscribed to our Data Protection update, you'll be pleased to know that you'll automatically receive this new AI newsletter as part of your subscription. To opt out, please click the "Unsubscribe" link in the newsletter email.
Data protection
- Meta's use of data for training AI curbed by DPC
- noyb files complaints against Microsoft and Google
- Meta's EU election tools banned in Spain
Cybersecurity
- NCSC and ABI publish cyber ransom payment guidance
- Ransomware hackers steal records of 300 million patient interactions with NHS
- ICO and the Canadian DPA launch joint investigation into 23andMe data breach
Enforcement and civil litigation
- High Court rules you don't always need to disclose identities of individual recipients in response to a subject access request
- ICO determines that Snap did not violate UK GDPR
- Round-up of enforcement actions
Data protection
Meta's use of data for training AI curbed by DPC
The DPC has announced that Meta will stop the processing of EU and EEA user data for undefined "artificial intelligence techniques." This decision, announced on Friday 14 June 2024, follows 11 complaints filed by noyb (Max Schrems' privacy rights organisation) with the DPC and other European DPAs.
Initially, the DPC approved Meta’s AI rollout in the EU and EEA. However, after pressure from various EU data protection authorities and public reactions to noyb's complaints, the DPC reversed its stance. The DPC now welcomes Meta's decision to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU and EEA, a decision that came after intensive engagement between the DPC and Meta.
noyb and other organisations, such as the Norwegian Consumer Council, were pivotal in this outcome. Schrems continues to call for an official decision regarding the ongoing cases noyb has filed.
Meta's response highlighted its discontent, framing the pause as a loss for EU and EEA users who would miss out on AI services. Critics argue that Meta could still deploy AI in Europe by seeking valid opt-in consent from users, as allowed under the EU GDPR. The DPC will continue to engage with Meta on the issue, in cooperation with other EU DPAs.
noyb files complaints against Microsoft and Google
On 4 June 2024, noyb filed two complaints against Microsoft Corporation with the Austrian data protection authority ("DSB"), alleging that Microsoft 365 Education software violated the EU GDPR. On the same day, noyb also filed a complaint against Google over its "Privacy Sandbox" feature in Chrome, arguing that it deceptively tracks user data.
- Complaint against Microsoft for breaching schoolchildren's data rights
noyb's complaints against Microsoft focus on the use of Microsoft 365 in schools. The first complaint alleges a breach of the information obligation, claiming that Microsoft misled students and schools about who controls their data. Microsoft claimed that schools were the data controllers, yet schools lacked control over Microsoft's systems, violating EU GDPR Articles 5(1)(a) and 12 to 15.
The second complaint centres on unlawful processing of personal data through cookies, claiming Microsoft violated Article 6(1) in conjunction with Article 5(1)(a) of the EU GDPR. noyb asserts that Microsoft failed to provide adequate information on cookie and tracking technologies used in Microsoft 365 Education and did not obtain valid consent for its use of tracking cookies. Furthermore, noyb alleges that Microsoft processed personal data beyond the scope of the processing order, violating Article 28(3)(a) of the EU GDPR. noyb has urged the DSB to impose a fine on Microsoft.
- Complaint against Google over their "Privacy Sandbox" feature
In parallel, noyb's complaint against Google targets the new “Privacy Sandbox” feature in Chrome, which collects users’ browsing data to generate targeted advertising topics. noyb argues that Google’s method of obtaining user consent for this feature is misleading and manipulative, utilising “dark patterns” to trick users into enabling tracking.
The complaint highlights that Google's pop-up message, which asks users to "turn on ad privacy feature," conceals the fact that it initiates tracking for ad targeting. noyb claims this approach violates the EU GDPR’s requirement for specific, informed, and unambiguous consent.
Schrems criticised Google's practices, stating that users were deceived into believing they were opting for enhanced privacy while actually consenting to invasive tracking. noyb urged the DSB to halt data processing based on invalid consent and impose a significant fine on Google.
For more insights into noyb’s ongoing efforts to uphold data privacy, read our article on a recent complaint over ChatGPT's alleged "hallucinations" filed last month.
Meta's EU election tools banned in Spain
Spain's data protection authority (the "AEPD") has ordered the provisional suspension of two election-related tools planned by Meta for deployment on Instagram and Facebook. The decision, announced on Friday 31 May 2024, affects Meta's "Election Day Information" ("EDI") and "Voter Information Unit" ("VIU") tools, which were set to be used in the upcoming European election.
The AEPD stated that these tools would violate Spanish data protection regulations, particularly the data protection principles of lawfulness, data minimisation and storage limitation. Although Meta has cooperated with the AEPD's request, it disagrees with the AEPD's assessment of the case.
Meta intended for all eligible Instagram and Facebook users in the EU to receive notifications from EDI and VIU reminding them to vote. However, the AEPD highlighted several issues with Meta's approach, such as the identification of eligible voters based on data including city of residence and IP addresses, which could exclude EU citizens living abroad and include non-EU citizens residing in Europe. The AEPD criticised Meta's data collection methods as "unnecessary, disproportionate, and excessive," noting that the company had not justified the need to store this data post-election.
Additionally, the AEPD pointed out that there was no reliable mechanism to verify users' self-reported ages and that the treatment of interaction data was "totally disproportionate" to the goal of informing about the elections.
Cybersecurity
NCSC and ABI publish cyber ransom payment guidance
In collaboration with the Association of British Insurers ("ABI") and other members of the cyber insurance industry, the NCSC has published guidance for organisations considering payment in ransomware incidents.
The NCSC highlighted that payment of a ransom does not guarantee data recovery, nor will it protect an organisation against future attacks. In addition, it does not fulfil an organisation's regulatory obligations: the ICO does not consider a payment to criminals to be a risk mitigation factor, so such a payment will not reduce any penalty imposed for the resulting personal data breach.
As ransomware attacks become increasingly common, organisations should ensure that they have strategies in place so that they are prepared for these incidents. Key takeaways from the guidance include:
- Consider alternatives to payment – there may be other ways to recover the data that has been stolen, for example by restoring from available backups.
- Undertake a risk assessment - before deciding to pay the ransom, organisations should carry out a risk assessment on the impact of payment across the organisation.
- Consult experts – where possible, consulting law enforcement, the NCSC or cyber incident response companies can improve the quality of an organisation's decision-making.
- Consider the legal and regulatory requirements for your organisation – some payments may not be lawful, for instance payments to a jurisdiction or entity that is under UK sanctions.
Ransomware hackers steal records of 300 million patient interactions with NHS
In a major cybersecurity breach, Qilin, a cyber-criminal group, has stolen records of 300 million patient interactions with the NHS. The hackers targeted Synnovis, a company which provides services such as blood tests and blood transfusions to the NHS. NHS England confirmed that the stolen data includes patients' personal data, such as names, dates of birth, NHS numbers and descriptions of blood tests.
The hack has led to disruption in some NHS hospitals, with over 3,000 hospital and GP appointments being delayed as a result of the ransomware attack. The NHS, the National Crime Agency ("NCA") and the NCSC are investigating to clarify exactly how many patients were affected by the data breach and to verify the data included in files published by Qilin.
A 2023 report by the Joint Committee on the National Security Strategy found that the healthcare sector is a growing target across Europe for ransomware attacks, and that the NHS is particularly vulnerable, as many of their IT systems are outdated and they lack the funds to upgrade their systems.
Cyber defence capabilities should be monitored and reviewed routinely to ensure that attackers cannot target vulnerabilities in old systems. Additionally, before sharing personal data with other companies, it is important to ascertain whether they have sufficient cybersecurity measures in place. This is particularly significant when sharing sensitive personal data, such as health data.
ICO and the Canadian DPA launch joint investigation into 23andMe data breach
On 10 June 2024, the ICO and the Canadian data protection authority launched a joint investigation into the 23andMe data breach.
The company, which provides direct-to-consumer genetic testing, suffered a data breach in October 2023, where hackers accessed 0.1% of user accounts (an estimated 14,000 individuals). Users who had reused usernames and passwords across different websites were those impacted. By accessing these accounts, hackers were able to view a "significant number of other files containing profile information about other users' ancestry", with approximately 7 million individuals (almost half of 23andMe's customers) affected as a result.
The ICO and Canadian DPA are investigating:
- the scope of information that was exposed by the breach and potential harms to affected people;
- whether 23andMe had adequate safeguards to protect the highly sensitive information within its control (23andMe processes sensitive personal data, including data related to people's health, ethnicity, location and family trees); and
- whether the company provided adequate notification about the breach to the two regulators and affected people as required under Canadian and UK data protection laws.
While the onus is also on individuals to use secure passwords and not reuse them across different websites, companies should ensure that they have strong cybersecurity policies in place in order to mitigate the risks of a data breach. The more sensitive the data being processed, the greater the measures put in place to protect customers' data should be.
Enforcement and civil litigation
High Court rules you don't always need to disclose identities of individual recipients in response to a subject access request
On 7 June 2024, a significant data protection judgment was handed down in the High Court case of Harrison v Cameron and ACL. This case highlights three key issues for organisations to consider when handling data subject access requests ("DSARs"):
- individual directors may be under an obligation to respond to a DSAR (as well as their organisation);
- requesters may be entitled to be informed of the specific identities of the recipients of their personal data; and
- the "rights of others" exemption can take into account the motive of the requester and the wellbeing and safety of other parties.
Please see our article here for an analysis of this case, including its key takeaways.
ICO determines that Snap did not violate UK GDPR
The ICO has released its final decision on its investigation into Snap.
The investigation was opened in 2023, as the ICO was concerned that Snap had not adequately assessed the data protection risks associated with "My AI". Snap had also failed to consult with the ICO prior to processing personal data in connection with "My AI", even though its data protection impact assessment showed that this processing would result in "a high risk to the rights and freedoms" of children and teenagers using the chatbot.
Following the announcement of the ICO's investigation, Snap carried out a revised data protection impact assessment. It also implemented "child-specific mitigatory measures" to address the risks posed to children using the chatbot. For example, it (i) amended the wording of the "My AI" just-in-time privacy notice to be accessible to 13- to 17-year-olds; and (ii) attributed an "age range" to each user, which was shared with My AI so that the chatbot would respond in a safe and age-appropriate manner when discussing sensitive topics.
This decision highlights the importance of carrying out detailed data protection impact assessments from the outset when developing or using AI tools. It is also important to be clear about the specific steps and internal policies you have in place for addressing the risks posed by the data processing, particularly if any sensitive data is being processed.
Round-up of enforcement actions
Company | Authority | Fine | Comment |
Meta Platforms | Italian Competition Authority | €3.5 million | The Italian Competition Authority fined Meta for breaches of the Italian Consumer Code. This included a failure to inform Instagram users that their data would be collected and used for commercial purposes. |
BBVA | AEPD | €200,000 (reduced to €120,000) | BBVA was fined for violating the accuracy principle under the GDPR. |
Unnamed company | Belgian DPA | €172,431 | An individual had received unsolicited direct marketing from a company and exercised their rights of objection and erasure. The company was fined for failing to comply with the individual's requests. |
Azienda Ospedale – Padova University | Italian DPA | €75,000 | The healthcare company was fined as it did not have sufficient security measures in place, so its employees had unauthorised access to patient health data. |
National Institute of Social Security ("INPS") | Italian DPA | €20,000 | The INPS was found to have violated the GDPR as it published people's personal information on the INPS website without the individuals' consent. |