Data Protection update - July 2023
Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from July 2023.
This month, the European Commission adopted its adequacy decision in respect of the EU-US Data Privacy Framework. The framework provides a new lawful basis for flows of data from the EU to US recipients that certify to adhere to the framework's principles.
The framework was made possible by a number of changes to US laws, which placed limitations on signals intelligence activities and created a new Data Protection Review Court to provide individual redress in respect of surveillance activities. The European Commission believes these changes provide EU citizens with adequate privacy safeguards and effective redress rights. However, some consider that the adequacy decision does not do enough to deal with the concerns set out by the European Court of Justice in Schrems II, making it likely that the framework will be subject to legal challenge in the near future.
Elsewhere, the House of Lords' Communications and Digital Committee has launched a new inquiry into large language models, to assess how the UK can respond effectively to AI's opportunities and risks over the next 1-3 years.
In this month's issue:
Data protection
- From Safe Harbor to Privacy Shield to Data Privacy Framework: EU announces third EU-US adequacy decision
- Big Tech notifies gatekeeper status under the Digital Markets Act
- UK finalises first law enforcement adequacy decision
Artificial intelligence
- China updates approach to AI regulation
- UK steps up inquiry into how to regulate AI
- Meta and Google develop ChatGPT competitors
Cyber security
- Microsoft suffers cyber attack
- SEC adopts rules on cyber security
Civil litigation and enforcement
- The intersection of competition and data protection: CJEU rules competition authorities can consider GDPR breaches and rules out contract and legitimate interest bases for certain personalised advertising
- Controllers may refuse to comply with access requests unrelated to data protection
- Round-up of notable enforcement action
Data protection
From Safe Harbor to Privacy Shield to Data Privacy Framework: EU announces third EU-US adequacy decision
On 10 July the European Commission ("EC") adopted its adequacy decision (the "Decision") on the EU-US Data Privacy Framework ("DPF"). The DPF is the long-awaited replacement for the Privacy Shield, which was invalidated by the Court of Justice of the European Union ("CJEU") in the Schrems II decision in July 2020. The DPF aims to address concerns about the ability of US public authorities to access the personal data of EU citizens.
The Decision had immediate effect and provides a new mechanism to send personal data from the EU to US data importers that self-certify compliance with the DPF, without further transfer safeguards being required.
As with the Safe Harbor and Privacy Shield adequacy decisions, the Decision is distinct from other EC adequacy findings, in that it applies only to transatlantic data transfers that are made to certified entities, not to all transfers to US recipients.
Changes to US laws
The Decision was preceded by changes to US intelligence gathering operations, made possible after US President Joe Biden issued Executive Order 14086 on Enhancing Safeguards for United States Signals Intelligence Activities ("EO") in October 2022, together with regulations issued by the Attorney General's office. The EO strengthens the conditions and limitations that apply to all signals intelligence activities regardless of where they take place and establishes principles-based safeguards that draw on EU law concepts of necessity and proportionality.
The EC's conclusion
The EC conducted a detailed assessment of the legal changes introduced and concluded that the new safeguards address the concerns raised in Schrems II. The EC also concluded that the newly established Data Protection Review Court, which will handle complaints about how EU citizens' data is handled by security authorities, ensures that individuals enjoy effective redress rights. The functioning of the DPF is set to be subject to periodic review by the EC – together with representatives of European data protection authorities ("DPAs") and competent US authorities – and the EC may decide to suspend, amend or repeal the adequacy decision or limit its scope.
How does the DPF work for US importers?
Data importers in the US that wish to rely on the DPF to receive data from the EU must self-certify their adherence to a set of principles (the "DPF Principles"). The DPF Principles are an updated version of the principles under the Privacy Shield framework. Consequently, US data importers that were certified under the Privacy Shield are well-positioned to self-certify under the DPF. Only US organisations that are regulated by the Federal Trade Commission ("FTC") or the Department of Transportation may certify to the DPF. So, for example, insurance, banking and telecommunications organisations may not currently participate.
There are some minor changes to the Privacy Shield certification process. For example, there are some differences in the way that annual fees are paid and the information required to be provided for self-certification or continued certification in the event of changes to the organisation. The DPF Principles also introduce a more granular procedure for ending certification under the DPF, specifically in relation to what happens to personal data upon withdrawal. In order to benefit from the DPF, an eligible organisation must update its privacy notices to refer to the DPF Principles within 3 months.
While the obligations under the DPF are arguably lighter touch than those set out in the EU Standard Contractual Clauses ("SCCs"), US importers will no doubt be aware of the increased scrutiny and enforcement action in relation to data handling practices from their regulators recently, notably the FTC. It will therefore be crucial that they can stand by the commitments they make if they certify to the DPF.
More information on certifying is available on the DPF website published by the International Trade Administration within the US Department of Commerce, or from your usual Stephenson Harwood contact.
What do EU exporters need to do?
Organisations that wish to transfer EU personal data under the DPF will need to check, prior to transferring the data, whether the US recipient is certified under the DPF and whether the data transfer in question is covered by their DPF certification. If the intended recipient is not certified under the DPF, the transfer will need to rely on a different transfer safeguard such as the EC's SCCs or Binding Corporate Rules.
EU data exporters should also update their privacy notices to take account of the requirements of Articles 13 and 14 of the GDPR, to the extent that such exporters wish to rely on the DPF as the legal basis for US transfers.
Are TIAs still required?
Transfer impact assessments ("TIAs") for EU-US transfers will no longer be required for transfers falling within the scope of the DPF, as the Decision effectively replaces the adequacy assessment which would ordinarily be conducted through the TIA.
However, TIAs will still be necessary for EU-US transfers that rely on other transfer mechanisms. That said, the Decision's conclusions on the protections now available under US law will be helpful in supporting the TIA process wherever the DPF does not apply. In particular, any TIA can now reflect the EC's positive assessment of the changes introduced by the EO. As a result, EU data exporters should be able to conclude more confidently that US laws offer essentially equivalent protection to EU data subjects.
What about UK-US transfers?
Following the announcement in June that the UK and US governments had reached an agreement in principle on a US-UK data bridge, a "UK Extension" to the DPF to facilitate UK-US transfers is now expected to be established swiftly.
The US Department of Commerce has confirmed that organisations certifying under the DPF can already self-certify under the UK Extension too. However, UK businesses should be aware that even where a US importer has self-certified under the DPF plus UK Extension, transfers will still not be legitimised by the framework until the UK government has issued its own adequacy decision in respect of the DPF and UK Extension.
Potential challenges
Austrian privacy campaigner Max Schrems has referred to the Decision as a "third attempt to pass largely the same unlawful decision" and said that, now that the first companies have started relying on the DPF, the door is open to a new legal challenge. Particular concerns have been raised in relation to the Data Protection Review Court, because its processes do not allow data subjects to be heard themselves but instead require the involvement of the applicable supervisory authority.
Despite these indications that challenges to the DPF may arise fairly quickly, US officials have expressed confidence in the ability of the DPF to survive a legal challenge. So, whilst the Decision brings much anticipated change to legitimate EU-US transfers, its future as a robust transfer mechanism remains to be seen.
Big Tech notifies gatekeeper status under the Digital Markets Act
Back in May, we reported that potential "gatekeepers" – Big Tech platforms large enough to fall within the remit of the Digital Markets Act ("DMA") – were required to notify their core platform services to the EC by 3 July if they meet the DMA's thresholds.
Earlier this month, Google, Apple, Microsoft, Amazon, Meta, Samsung and TikTok owner ByteDance notified the EC that they fall within the scope of the DMA. The EC now has until 6 September to determine whether each platform is a "gatekeeper"; platforms designated as such will be subject to a number of obligations centred on data combination, self-preferencing, interoperability and advertising transparency. After designation, gatekeepers will have until March 2024 to comply with the DMA's rules.
UK finalises first law enforcement adequacy decision
On 7 July, the UK used new powers gained since leaving the European Union to make its first law enforcement adequacy decision, finding the data protection legislation of the Bailiwick of Guernsey adequate. The UK government has concluded that the jurisdiction has strong privacy laws which will protect data transferred to Guernsey for law enforcement purposes, while upholding the rights and protections of UK citizens. The finding will enable personal data to be transferred more freely from UK law enforcement to authorities in the Bailiwick of Guernsey for law enforcement purposes. The UK is now progressing law enforcement adequacy assessments of the Bailiwick of Jersey and the Isle of Man.
You can read the Home Office's release for more information here.
Artificial intelligence
China updates approach to AI regulation
On 13 July, the Cyberspace Administration of China ("CAC") published the final version of its interim measures on generative AI, expected to come into force on 15 August.
The rules will apply to publicly available AI models that generate text, images and other content. Compared to an earlier draft, the final measures soften the approach to KYC requirements and to the requirements on the quality of training data. The new rules also state that only providers planning to offer services to the public will be required to submit security assessments, suggesting that providers with business-facing products will escape this requirement.
However, other commentators have deemed the final measures to be equally, if not more, strict than the earlier draft. The CAC still plans to create a system under which companies must obtain a licence before they release generative AI models. This reflects the wider struggle facing global legislatures: how to balance the promotion of AI innovation with the desire to regulate generative AI providers.
Unlike the EU's approach of introducing comprehensive horizontal legislation via the AI Act, China will delegate the regulation of generative AI to the relevant sectoral authorities. Despite this delegation, the measures still emphasise that content generated by AI products in China must be in line with China's core socialist values.
The CAC's press release is available in Chinese here.
UK steps up inquiry into how to regulate AI
On 4 July, Rishi Sunak insisted the UK could 'do a lot without legislation'. However, following growing criticism of the AI White Paper published in March, a new inquiry has been established.
The White Paper focused on safety features and guardrails, in contrast to the EU's AI Act, which establishes a prescriptive risk-based system. The Equality and Human Rights Commission, the UK's equalities watchdog, described the UK's light-touch approach as inadequate, warning that regulators will be left under-resourced and unable to keep emerging technology in check.
In the midst of this criticism, and as global legislatures race to regulate AI, the House of Lords' Communications and Digital Committee announced its inquiry into large language models. The inquiry will evaluate the work of governments and regulators and consider what needs to happen over the next 1–3 years to ensure the UK can respond to AI's opportunities and risks.
Also in July, a policy advisor for the UK's Office for AI confirmed that any gaps that occur from the UK's sectoral approach to AI governance will be handled by the Office for AI. The Office for AI will conduct horizon scanning and risk assessment, identifying where more guidance is needed on specific AI risks.
Meta and Google develop ChatGPT competitors
OpenAI's ChatGPT has dominated AI headlines since its release. As OpenAI becomes the subject of increased regulatory attention and, this month, a new class action, other tech giants are joining the race to develop and release generative AI models.
On 13 July, Google released Bard in the EU. As reported in June, the launch of this AI tool had been delayed after the Irish Data Protection Commission ("DPC") requested more privacy information. In response to these privacy concerns, Google incorporated further data protection measures into Bard, including a privacy 'help hub', a more prominent warning notice and a webpage with a privacy notice. In a sign that these improvements may not yet be enough to satisfy the DPC, Google has also agreed to send a report to the DPC in three months. This illustrates how businesses involved with AI may be expected to engage with regulators on a continuing basis, rather than simply obtaining a green light before launch.
Also competing with OpenAI, Meta announced in July that it will soon release a commercial version of its AI model. Meta released its open-source language model 'LLaMA' to researchers in February. The commercial release will enable businesses to build custom software on top of the AI technology. In support of Meta's open-source approach, Meta's global affairs chief commented that "openness is the best antidote to the fears surrounding AI".
Stay tuned to see how the attention of privacy regulators will be divided between these tech giants.
Cyber security
Microsoft suffers cyber attack
On 11 July, Microsoft confirmed that it had been the target of a cyber attack by a threat actor based in China. The threat actor gained access to Outlook Web Access ("OWA") and Outlook.com enterprise and personal email accounts, affecting approximately 25 organisations, including several Western government agencies. The US Department of Commerce confirmed that Microsoft had notified it of the attack.
Microsoft stated that the motivation of the threat actor was espionage, focused on gaining access to email systems for intelligence collection and to data residing in sensitive systems, while US officials have described the attack as narrowly targeted at high-value individuals. Such activities are often associated with state-backed and nation-state directed threat actors. The Chinese state has denied any involvement.
The threat actor gained access to the Microsoft-operated email accounts using forged Azure Active Directory digital authentication tokens, mimicking the tokens that Microsoft's systems ordinarily require to verify a person's identity. Microsoft discovered that the breaches began in mid-May. Microsoft has since announced that the issue arose from a validation error in its source code: the threat actor was able to produce the forged tokens using an inactive Microsoft signing key it had acquired, which security researchers have claimed dated from 2016.
In response to the attack, Microsoft stated that it had implemented substantial automated detections for known indicators of compromise associated with the attack. As of 11 July, Microsoft had identified no further access by the threat actor, and it confirmed that it had contacted the affected customers. In its press release, Microsoft emphasised that it was sharing the details of the attack publicly because the industry needs transparency around cyber incidents so that businesses can learn and improve.
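To make the mechanism concrete, the sketch below illustrates the class of check at stake: a token service must verify not only that a token's signature is valid, but also that the signing key itself is still trusted. This is a hypothetical illustration using the PyJWT library and symmetric HS256 keys for brevity – not Microsoft's implementation, which uses asymmetric signing keys.

```python
import datetime

import jwt  # PyJWT (pip install PyJWT)

# Hypothetical key registry: key ID -> (key material, date the key was retired).
# A real token service would hold asymmetric public keys; HS256 secrets are
# used here only to keep the sketch self-contained.
TRUSTED_KEYS = {
    "key-2023": ("demo-secret-2023", None),                         # active
    "key-2016": ("demo-secret-2016", datetime.date(2016, 12, 31)),  # retired
}

def validate_token(token: str) -> dict:
    """Verify a token's signature AND that its signing key is still active."""
    kid = jwt.get_unverified_header(token).get("kid")
    if kid not in TRUSTED_KEYS:
        raise ValueError("token signed with an unknown key")
    secret, retired_on = TRUSTED_KEYS[kid]
    # Omitting this check is the kind of validation error described above:
    # a token forged with an old, inactive key would still verify.
    if retired_on is not None and datetime.date.today() > retired_on:
        raise ValueError("token signed with a retired key")
    return jwt.decode(token, secret, algorithms=["HS256"])

# A forged token signed with the retired 2016 key carries a valid signature,
# but must be rejected because the key itself should no longer be trusted.
forged = jwt.encode(
    {"sub": "target@example.gov"},
    "demo-secret-2016",
    algorithm="HS256",
    headers={"kid": "key-2016"},
)
try:
    validate_token(forged)
except ValueError as exc:
    print(exc)  # -> token signed with a retired key
```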
This episode highlights the ongoing threat of supply chain attacks, in which the compromise of a single provider – here, Microsoft's email services – can impact multiple victims. It also highlights the reliance that customers place on their providers to respond to threats appropriately. Microsoft has received some criticism: customers could only establish whether they had been victims of the attack if they subscribed to more expensive logging options, while those on cheaper tiers did not have access to the services and logs needed to confirm whether they had been impacted. Microsoft has since committed to making certain critically important logs available to licensees of lower cost cloud services. Further, security experts and US lawmakers have raised concerns as to how the inactive key was acquired by the threat actor and why it was still functional.
It also shines a light on threat actor motivations beyond the more well-known money theft and ransomware/extortion cases. The threat actor in question (designated Storm-0558) is believed to target US and European diplomatic, economic and legislative governing bodies, individuals connected to Taiwan and Uighur geopolitical interests, and media companies, think tanks, and telecommunications equipment and service providers.
SEC adopts rules on cyber security
On 26 July, the Securities and Exchange Commission ("SEC") adopted rules on cyber security risk management, strategy, governance and incident disclosure by US public companies (the "Rules").
The Rules were first proposed in March 2022 and will require registrants to disclose to investors any material cyber security incident, and to describe the material aspects of the incident's nature, scope and timing, as well as its material impact or reasonably likely material impact on the registrant. The disclosure is ordinarily required within four business days after a registrant determines that a cyber security incident is material, although disclosure may be delayed if the US Attorney General determines that immediate disclosure would pose a substantial risk to national security or public safety and notifies the SEC of this in writing. The SEC has suggested that material incidents are those which a public company's shareholders would consider important "in making an investment decision."
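By way of illustration only, the four-business-day window can be expressed as simple date arithmetic. The helper below is a hypothetical sketch that counts business days by skipping weekends; it does not model US federal holidays or the Attorney General's national security delay mechanism.

```python
import datetime

def disclosure_deadline(determination_date: datetime.date,
                        business_days: int = 4) -> datetime.date:
    """Date by which disclosure falls due, counting forward the given number
    of business days (weekends skipped; public holidays not modelled)."""
    day = determination_date
    remaining = business_days
    while remaining > 0:
        day += datetime.timedelta(days=1)
        if day.weekday() < 5:  # Monday (0) to Friday (4)
            remaining -= 1
    return day

# A materiality determination made on Thursday 14 December 2023 would make
# disclosure due on Wednesday 20 December 2023.
print(disclosure_deadline(datetime.date(2023, 12, 14)))  # 2023-12-20
```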
The Rules will further require registrants to include in their annual reports descriptions of their processes for assessing, identifying and managing material risks from cyber security threats, as well as the material effects or reasonably likely material effects of risks from cyber security threats. Registrants must also describe the board of directors' oversight of risks from cyber security threats and management's role and expertise in assessing and managing material risks from cyber security threats.
Comparable disclosures by foreign private issuers are also required by the Rules for both material cyber security incidents and for cyber security risk management, strategy and governance.
The Rules will take effect 30 days after publication of the adopting release in the Federal Register, with most registrants required to comply with the incident disclosure requirements from December 2023. Although it is for registrants to determine what constitutes a "material" cyber incident, the Rules seem to be indicative of a shift in the US towards more robust expectations for cyber security governance. Commentators have also speculated that the changes may provide a new source of securities class action filings in the US.
You can read the SEC's press release here.
Civil litigation and enforcement
The intersection of competition and data protection: CJEU rules competition authorities can consider GDPR breaches and rules out contract and legitimate interest bases for certain personalised advertising
In February 2019, the Bundeskartellamt – Germany's national competition authority – found that Meta's (then Facebook's) data policy amounted to an abuse of its dominant position. Meta had made use of the Facebook platform conditional on the collection of user data from multiple sources outside the platform and its subsequent combination with data from users' Facebook accounts. The Bundeskartellamt found that such data processing would only be lawful under the EU GDPR where users explicitly consent, and prohibited Meta from engaging in the practices, deeming them an exploitative abuse of dominance under competition law. Meta appealed the Bundeskartellamt's decision to the Higher Regional Court of Düsseldorf, which in turn referred questions to the CJEU.
On 4 July, the CJEU ruled that EU competition authorities can consider a company's compliance with relevant regulations, including EU data protection rules, when assessing whether it has abused its dominant position. That said, competition authorities are not compelled to assess GDPR breaches as part of their investigations: where privacy practices and data protection infringements are the sole form of abuse assessed, the CJEU said that it may be more appropriate for the issue to be resolved by a national DPA. The CJEU also emphasised that, in order to ensure consistency in the GDPR's application, where a competition authority is called upon to examine whether an undertaking's conduct is consistent with the provisions of the GDPR, it must "consult and cooperate sincerely with the national supervisory authorities concerned".
Importantly, the CJEU clarified that the fact that a controller, such as Meta, holds a dominant position on the social network market does not, of itself, prevent the users of that social network from giving their consent freely, despite the potential imbalance of power.
In relation to the legal basis for personalised advertising based on the combined data, the CJEU noted that Meta's processing activities were "particularly extensive" relating to "potentially unlimited data" having a "significant impact on the user", potentially giving rise to data subjects feeling that their "private life is being continuously monitored". Although the CJEU held that processing personal data for the purposes of personalised advertising could theoretically rely on the legitimate interests basis, it found that Meta's processing did not satisfy the balancing test. Meta's processing therefore could not be justified in the absence of the data subject's consent. The CJEU also held that "it does not appear" that Meta's processing is "strictly necessary for the performance of the contract" between Meta and Facebook users. For the CJEU, necessity in this context requires objective indispensability for a purpose integral to the contract and the controller must be able to demonstrate how the main subject matter of the contract cannot be achieved if the processing in question does not occur.
The judgment of the CJEU is significant in that it paves the way for competition authorities across the EU to assess GDPR compliance in competition investigations, although competition authorities must still be prepared to justify why GDPR breaches are relevant to their competition law investigations. The judgment is also important in narrowing the opportunities to rely on legitimate interests or contract for more intrusive forms of personalised advertising. A Meta spokesperson confirmed that the tech giant is evaluating the CJEU's decision and will have more to say in due course.
You can read the judgment of the CJEU here.
Controllers may refuse to comply with access requests unrelated to data protection
The Brandenburg Higher Regional Court has ruled that a controller may refuse to comply with access requests under Article 15 GDPR where such requests are aimed exclusively at achieving objectives not related to data protection.
The data subject, the plaintiff, had objected to premium increases made by the defendant controller, a private health insurer. The data subject made an access request under Article 15(1) GDPR relating solely to the triggering factor for the recalculation of the premium at issue, in order to demonstrate whether the increase was unlawful. The triggering factor is the percentage threshold by which the cost of insurance services must have changed in order for the insurer to be able to increase premiums for health insurance policies.
Because the value of the triggering factor was a mere calculation parameter from which it was not possible to identify the data subject, the court ruled that it did not constitute personal data as defined by Article 4(1) GDPR.
However, the court went on to consider the position if it was accepted that the triggering factor was considered personal data. Pointing to the language of Article 12(5), which provides that frequent repetition is an example of where a request by a data subject may be "excessive", the court held that the use of the words "in particular" made it clear that the provision is also intended to cover other abusive requests and is not exhaustive in this respect. When interpreting what constitutes an abuse of rights in this sense, the court noted that the protective purpose of Article 15 GDPR as recognised by recital 63 – to enable the data subject to understand the processing of personal data concerning him or her and whether that data is processed in a lawful manner – must be considered. The purpose of the data subject's access request was exclusively to review the lawfulness of premium adjustments, which did not align with the protective purpose of Article 15 GDPR and was therefore abusive.
Despite this, other German courts have ruled differently. For these courts, data subjects can legitimately use access requests to reduce information asymmetries between themselves and the controller in order to protect their rights and freedoms.
While this ruling is specific to a German court's interpretation of the EU GDPR in a particular context, it addresses the issue of data subject rights being abused for purposes unrelated to data protection objectives and suggests potential scope to argue that access requests should not be used to "fish" for information that does not serve data protection purposes.
Round-up of notable enforcement action
Each month, we bring you a round-up of notable data protection enforcement action.
Company | Authority | Fine | Comment |
Bonnier News | Swedish DPA | 13 million Swedish krona (approx. €1,130,000) | The company illegally profiled customers and web visitors without consent for targeted advertising, telemarketing and postal marketing. |
Icelandic Directorate of Health | Icelandic DPA | 12 million Icelandic krona (approx. €82,000) | The directorate had failed to implement sufficient data security measures, resulting in unauthorised access to sensitive health data. |
Creditinfo Lánstraust, eCommerce 2020, AIC ehf | Icelandic DPA | Creditinfo Lánstraust: 37.86 million Icelandic krona (approx. €258,000); eCommerce 2020: 7.5 million Icelandic krona (approx. €51,000); AIC ehf: 3.5 million krona (approx. €24,000) | Creditinfo Lánstraust, a financial information agency, had illegally registered defaulted small loans, which were reported to it by loan provider eCommerce 2020 and debt collector AIC ehf. |
Telecommunications company | Hellenic DPA | €150,000 | The company sent unsolicited advertising messages, did not respond to an access request and did not facilitate the data subject's objection to processing. |
Piraeus Bank S.A. | Hellenic DPA | €100,000 | The bank mistakenly included a customer's personal data in a list of debtors and did not respond promptly to an access request. |
Caixabank | Spanish DPA | €25,000 | The bank provided a customer's account data to the wrong person. |