Data Protection update - July 2024

Welcome to the Stephenson Harwood Data Protection update, covering the key developments in data protection and cyber security law from July 2024.

In:

  • recent news, the King's Speech unveiled plans for new UK data and cyber security Bills;
  • data protection news, a global privacy sweep found that most websites use deceptive design patterns, and Meta's plans to utilise users' personal data for AI training purposes were suspended in Brazil, the EU and the UK;
  • cyber security news, the European Commission (the "Commission") launched a consultation on draft Implementing Regulations on requirements for cyber security risk management measures under the NIS 2 Directive, AT&T announced that hackers had stolen records of millions of customers, and CrowdStrike pledged to enhance its software testing procedures following the global IT outage on 19 July;
  • enforcement news, Microsoft's Xandr was accused of EU privacy breaches and a school in Essex was reprimanded for using facial recognition in its school canteen; and
  • civil litigation news, Meta challenged the European Data Protection Board's ("EDPB") "pay or consent" opinion, the Commission appealed the European Data Protection Supervisor's ("EDPS") decision regarding the Commission's use of Microsoft 365, the English courts heard a data protection case that may have interesting implications for whether damages for reputational harm are available as a remedy in claims other than defamation, and the European Court of Justice ("ECJ") ruled that a failure to provide fair processing information can enable consumer protection associations to bring legal proceedings.

Data protection

King's Speech unveils proposal for new data protection Bill

On 17 July, the King's Speech unveiled proposals for 40 new Bills, including the Digital Information and Smart Data Bill (the "Bill"), a bill focused on data reform.

The full text of this Bill is not yet available, but a summary of the proposed legislation sets out the Bill's main proposals and states that it is intended to "harness the power of data" to promote economic growth and enable new and innovative uses of data to improve people's lives. The Bill resurrects some key themes from the defunct Data Protection and Digital Information Bill ("DPDI"), such as reforms to data protection obligations related to scientific research and a reorganisation of the Information Commissioner's Office ("ICO").

Harnessing data for economic growth

The Bill will reportedly outline proposals that will allow data to be used to accelerate innovation and productivity across the UK, including:

  1. Digital verification services – the proposal suggests that the introduction of a digital identity verification service will reduce the time and costs that businesses spend verifying an individual's identity as part of transactions such as pre-employment checks and property purchases. The Government suggests that the economic benefit of using secure digital identities across the UK is estimated at around £600 million a year.
  2. Smart data schemes – this would involve the secure sharing of customer personal data, at the customer's request, with authorised third-party providers in order to offer customers innovative services that improve decision-making and engagement in a market. Open Banking is an existing example of a comparable scheme.

Scientific research

The Bill also intends to make it easier for scientists to use data for research purposes. For instance, scientists will be able to seek broad consent for scientific research, and researchers will be permitted to carry out scientific research for commercial purposes. These proposals appear to be taken from the DPDI and are likely to be welcomed by research businesses.

Regulatory changes

The Bill will also propose changes to the ICO in order to modernise the regulator and strengthen its powers. The ICO will be restructured so that it has a CEO, a board and a chair, and it will be given "new, stronger powers".

The Bill also mentions making "targeted reforms to some data laws … where there is currently a lack of clarity impeding the safe development and deployment of some new technologies"; however, there is currently no further detail on what these reforms are likely to entail or whether they will track the DPDI's provisions, for example through the use of "recognised legitimate interests".

There is no information currently available on when the Bill will be introduced.

Global privacy sweep finds most websites use deceptive design patterns

The Global Privacy Enforcement Network ("GPEN") conducted its annual global privacy sweep of more than 1,000 websites and apps between 29 January and 2 February 2024 and found that nearly all the platforms it examined employed one or more deceptive design patterns.

A deceptive design pattern is a design feature that aims to steer users into selecting options that allow for the collection of more of their personal data than they would like, or that makes it difficult for users to object to the collection of their personal data.

The sweep evaluated the websites and apps using five criteria identified by the OECD as being characteristic of deceptive design patterns:

  • Complex and confusing language – particularly in privacy policies, which make it difficult for individuals to understand what their rights are.
  • Interface interference – for instance, making the least privacy protective option the easiest for users to select.
  • Nagging – such as asking users to reconsider their intention to delete their account.
  • Obstruction – for example, when the design of the website or app makes it difficult for users to find privacy settings or delete their account.
  • Forced action – when users are forced to disclose more personal information when deleting their account than they had to provide when they opened it.

Website design and making it easy for consumers to make informed privacy choices are key components of the data protection principles of fairness and transparency. Organisations should therefore review the design patterns they use: deceptive designs risk infringing individuals' rights and could lead to enforcement action.

Meta's plans to utilise users' personal data for AI training purposes suspended in Brazil, the EU and the UK

On 2 July, Brazil's national data protection agency ("ANPD") announced that it had taken the preventative measure of suspending Meta's updated privacy policy covering the use of user data to train its generative AI system. This followed a similar announcement on 14 June from the Irish Data Protection Commission ("DPC") that Meta had decided to pause its plans to use users' posts on Instagram and Facebook across the EU and EEA to train its AI system.

Please see our article from the July edition of the Neural Network, Stephenson Harwood's new AI-focused newsletter, for further details on this story.

Google drops plans to end third-party cookies and revises Sandbox

On 22 July 2024, Google and the Competition and Markets Authority ("CMA") separately announced Google's revised approach to its "Privacy Sandbox" feature. Instead of removing third-party cookies ("TPCs") from Chrome, Google will introduce a user-choice prompt for retaining TPCs.

This change follows the CMA’s acceptance of Google’s commitments in February 2022 to address competition concerns under the Competition Act 1998. Google had pledged not to remove TPCs until these concerns were resolved. The CMA now seeks public input on Google’s new plan by 12 August 2024, via privacysandbox@cma.gov.uk. Due to this development, the CMA will not release its quarterly update on Google’s compliance, which was due at the end of July.

ICO Deputy Commissioner Stephen Bonner expressed disappointment at Google's change of approach, viewing the blocking of TPCs as positive for consumers. The ICO says it "will monitor how the industry responds and consider regulatory action where systemic non-compliance is identified for all companies including Google". Please read our article here on noyb's complaint against the "Privacy Sandbox" feature.

Cyber security

New Cyber Bill in King's Speech (the UK's NIS 2)

During the King's Speech on 17 July 2024, alongside the proposal for the Digital Information and Smart Data Bill, Labour also revealed plans to introduce a new Cyber Security and Resilience Bill (the "Bill"). The Bill aims to fortify the UK's cyber defences and heighten protection of essential digital services amidst a rising threat environment and evolving global cyber security standards.

The new Bill is set to update the existing Network and Information Systems Regulations 2018 ("UK NIS"), which are based on the EU's Network and Information Security Directive ("NIS"), a piece of legislation that imposes cyber security and incident reporting obligations on operators of essential services and digital service providers. The UK government is following the lead of the EU, where NIS has been updated by the NIS 2 Directive, due to take effect in October 2024.

While the full text of the Bill is not yet available, the background briefing paper accompanying the King’s Speech provides a summary of the proposed legislation, setting out the Bill's main proposals, including:

  1. Expansion of scope: The Bill will extend the UK NIS regime to encompass more digital services and supply chains.
  2. Strengthening regulatory powers: The Bill aims to place regulators on a stronger footing, potentially through cost recovery mechanisms and powers to proactively investigate vulnerabilities.
  3. Increased incident reporting: The nature and type of incidents that need to be reported will be expanded to improve the UK's understanding of current threats and provide early warning of potential attacks.

The proposed Bill is intended to safeguard the UK's critical infrastructure across key sectors such as transport, health and energy. With cyber threats rising, as evidenced by recent attacks on London hospitals and the Ministry of Defence, and costing the UK billions each year, enhanced regulation is seen as crucial.

Commission consults on draft requirements for risk management measures and significant incidents under NIS 2

On 27 June 2024, the European Commission launched a consultation on a draft Implementing Regulation (the "Regulation") outlining technical and methodological requirements for the cyber security risk management measures referred to in Article 21(2) of the NIS 2 Directive. This includes operational requirements in areas such as incident handling, and minimum requirements for risk management frameworks and network security policies.

The Regulation also establishes criteria for determining when an incident is significant for the purposes of Article 23(3) of the NIS 2 Directive, triggering reporting requirements. These include incidents causing financial losses exceeding EUR 100,000 or 5% of annual turnover. Specific criteria are also provided for different types of entity.

Relevant entities covered by the Regulation would include cloud computing service providers, managed service providers, providers of online marketplaces, search engines, and social networking platforms. The Regulation also proposes repealing Commission Implementing Regulation (EU) 2018/151, which sets current requirements for digital service providers under the original NIS Directive ((EU) 2016/1148), with the repeal effective from 18 October 2024.

The consultation closed on 25 July 2024 and the Commission plans to adopt the Regulation in the third quarter of 2024.

Hackers steal records of millions of AT&T customers

On 12 July 2024, AT&T revealed that hackers had stolen six months' worth of call and text message records from nearly every AT&T cellular network customer, leaving millions of Americans susceptible to having their sensitive information exposed. The threat to AT&T users is heightened by a separate breach, announced in March, in which some AT&T customer names were released.

An internal investigation in April found that hackers "unlawfully accessed and copied" call logs stored on a third-party cloud platform. The records include phone numbers but not the content of the calls and messages or other personal information. Although names were not included, phone numbers can often be traced back to individuals using publicly available tools.

The Justice Department, FBI, and Federal Communications Commission are investigating the breach alongside AT&T. In response, AT&T has strengthened its cyber security measures and assured customers it is working to apprehend the hackers. The company noted that one person has been apprehended and that the stolen data is not publicly available.

CrowdStrike to improve testing after worldwide IT outage

CrowdStrike has pledged to enhance its software testing procedures following a global IT outage caused by a faulty content update. On 19 July 2024, a bug in the update for Windows systems led to widespread disruptions, affecting banks, hospitals, and airlines, and causing "blue screens of death" on 8.5 million computers worldwide.

In its review published on 24 July 2024, CrowdStrike revealed that the outage stemmed from a flaw in the system responsible for validating software updates. This error allowed problematic content to go undetected, triggering the massive outage. Cybersecurity experts criticised the company for not having adequate safeguards and for deploying updates to all customers simultaneously without phased testing.

The outage reportedly caused an estimated $5.4 billion in financial losses among the top 500 US companies, with only a fraction covered by insurance. The situation has drawn congressional attention, with CrowdStrike's CEO being summoned to testify about the incident.

Enforcement

Microsoft's Xandr accused of EU privacy breaches

On 9 July 2024, Max Schrems' privacy rights organisation noyb filed a complaint with the Italian data protection authority (the "Garante") against Xandr, Microsoft's advertising and analytics subsidiary. Xandr operates a Real Time Bidding ("RTB") platform, collecting and sharing personal data for targeted advertising.

The complaint accuses Xandr of violating multiple GDPR provisions, alleging that it fails to be transparent about the sensitive data it collects (e.g. health information, sexual orientation and financial status) and fails to give effect to data subject rights. It follows Xandr's failure to comply with a data subject's access and erasure request in February 2024.

Noyb is asking the Garante to investigate the matter fully and to order Xandr to comply with the complainant's data access and erasure request. It also demands that Xandr align its data processing operations with the GDPR principles of data minimisation and accuracy, and that it facilitate the exercise of data subject rights. Additionally, noyb recommends a substantial fine in light of the continuing GDPR violations and the sensitivity of the data processed.

Chelmer Valley High School reprimanded for using facial recognition in school canteen

On 22 July 2024, the UK Information Commissioner’s Office ("ICO") announced that it had reprimanded Chelmer Valley High School (the "School") in Chelmsford, Essex, for unlawfully using facial recognition technology in its canteen. The School started using the technology in March 2023 without conducting a proper risk assessment or fully consulting parents or students, thus violating the UK General Data Protection Regulation ("UK GDPR").

The ICO found that the School relied on assumed consent, sending letters to parents with an opt-out slip rather than seeking explicit consent. "Most students were old enough to provide their own consent," the ICO stated, noting that the approach deprived students of the ability to exercise their rights and freedoms.

The ICO's head of privacy innovation, Lynne Currie, highlighted the significance of properly handling personal information in schools, stressing that any new technology deployment must be thoroughly assessed for data protection risks. The School has been given recommendations to improve its data protection practices. The incident follows the ICO’s 2021 advice against using facial recognition in schools, which promoted less intrusive methods instead.

Civil litigation

Meta challenges EDPB's "pay or consent" opinion

Meta Platforms is reportedly challenging the EDPB's "pay or consent" opinion, published in April 2024, which stated that in "most cases", the pay or consent model will not allow large online platforms (such as Meta) to comply with the requirements for valid user consent.

While the EDPB's opinion is not binding on organisations, European data protection agencies (each a "DPA") can take it into account when investigating complaints or making decisions.

For further details on the EDPB's "pay or consent" opinion, please see our insight here.

Meta is asking the General Court to annul the EDPB's opinion and grant an injunction against it being used in any active case. If the injunction is granted, this could impact current investigations into organisations' use of pay or consent models, as DPAs would no longer have to take the EDPB's opinion into account.

Commission appeals EDPS' decision regarding use of Microsoft 365

On 17 May (published in the Official Journal on 1 July), the Commission filed a claim against the EDPS in the EU General Court, challenging the watchdog's decision in March that the Commission's use of Microsoft 365 had breached Regulation (EU) 2018/1725 (the "EUDPR"), the framework governing the protection of personal data processed by Union institutions, bodies, offices and agencies, which is broadly equivalent to the EU GDPR. Microsoft has also challenged the EDPS' decision.

For further details on the EDPS' decision, please see our insight from March.

In the claim, the Commission argues that the EDPS misinterpreted several aspects of the EUDPR and made errors of fact when applying the law. For instance, the Commission alleges that the EDPS made an "erroneous application" of the EUDPR in assuming that direct transfers of personal data took place between the Commission and Microsoft Corporation in the United States. It also submits that the EDPS infringed the principle of proportionality in the corrective measures outlined in its decision.

Can data protection claims provide a remedy to reputational harm?

A recent case, Pacini & Anor v Dow Jones Inc., may provide some useful insights into the relationship between data protection laws and defamation.

Two Wall Street bankers (the "Claimants") alleged that in 2017 and 2018 the Wall Street Journal, published by Dow Jones, released two articles suggesting that XIO Group, an investment firm of which the Claimants were senior executives, had been involved in fraudulent activities.

In their claim, the Claimants argued that by publishing the articles, Dow Jones had breached the UK GDPR, as it had published inaccurate information and failed to process the Claimants' personal data fairly. The Claimants also argued that the personal data was criminal offence data, which can only be processed in limited circumstances, and that Dow Jones had failed to comply with those restrictions.

In response, Dow Jones applied to strike out the claim, arguing that the UK GDPR claim was an abuse of process. It submitted that the claim was purely tactical, and that it was a "defamation complaint dressed up as a claim in data protection" (defamation claims must be brought within one year of the defamatory statement being made). The presiding High Court judge highlighted that a key aspect of this issue was whether a claim for damage to reputation can be made in data protection proceedings.

The Judge dismissed the strike out application, stating that "the state of the law on the recoverability of damages for injury to reputation in non-defamation claims is uncertain and in flux", and therefore it would be wrong to dismiss a claim for reputational damages caused by "the processing of inaccurate data in a data protection claim" on the grounds that it was an abuse of process.

If the case proceeds to trial, this issue will likely be considered in greater depth. If damages for reputational harm become available as a remedy in data protection claims, claimants who have missed the one-year defamation deadline may begin proceedings under data protection law instead.

ECJ rules that failure to provide fair processing information can enable consumer protection associations to bring legal proceedings

In a recent preliminary ruling involving Meta Platforms Ireland Ltd and the Federal Union of Consumer Organisations and Associations, the ECJ ruled that Article 80(2) of the EU GDPR allows not-for-profit entities to initiate legal actions without a mandate where they assert that a data subject's rights have been infringed by a controller's failure to provide fair processing information. In this case, Meta had failed to provide clear and accessible processing information to users of free games in its App Centre.

The EU GDPR requires data controllers to provide concise, transparent and intelligible information about data processing purposes and recipients at the time of data collection. The ECJ emphasised that personal data processing must meet transparency requirements, in line with the data protection principles of lawful and fair processing (Article 5(1)(a)) and purpose limitation (Article 5(1)(b)). The validity of a data subject's consent depends on their receiving the information necessary to make an informed decision.

This ruling highlights the importance of transparency in data processing and sets a precedent that empowers consumer protection associations to act on behalf of data subjects, ensuring robust protection of data rights across the EU.

Ofcom fines TikTok for failure to comply with information request

On 23 July 2024, Ofcom, the UK’s communications and online safety regulator, issued its final decision to TikTok Information Technologies UK Limited ("TikTok"), imposing a financial penalty of £1,875,000 for breaching its obligations under the Communications Act 2003. This follows TikTok's disclosure that it had provided inaccurate data when responding to an Ofcom information request about its "Family Pairing" parental control, issued to inform Ofcom's Child Safety Report.

Please see our recent blog post for further details on this story.

Round-up of enforcement actions

  • Grindr – Oslo District Court – €5.65 million: The Oslo District Court confirmed that Grindr had violated the EU GDPR by sharing users' personal data with advertisers without a valid legal basis. Of particular concern was that the data revealed who was using Grindr, a strong indication of users' sexual orientation, which is special category personal data and merits a higher level of protection under the EU GDPR.
  • Eni Plenitude S.p.A – Italian DPA – €6.4 million: The company was fined because it lacked a suitable legal basis to carry out telemarketing activities and had not enacted appropriate security measures. Eni Plenitude had made promotional calls to individuals without their consent and to individuals whose numbers were on a registry indicating that they did not want to receive unsolicited advertising calls.
  • Vinted – Lithuanian DPA – €2.38 million: The Lithuanian DPA was forwarded complaints by the French and Polish DPAs that Vinted had not sufficiently responded to data subject erasure and access requests. Another key issue in the investigation was the company's use of "shadow blocking", whereby users were banned from the platform (without their knowledge) for allegedly violating Vinted's terms of service while Vinted continued to process their personal data.
  • Mapfre Inversión – Spanish DPA – €300,000 (subsequently reduced to €180,000): The company was fined because it had carried out transactions under a wealth management contract without the consent of the other parties to the contract. The Spanish DPA found that it had unlawfully processed personal data.
  • Cappello Giovanni and Figli Srl – Italian DPA – €120,000: The Italian DPA fined the company because it had processed employees' personal and biometric data without their consent. The company had been monitoring employees and regulating their access to the workplace using a facial recognition system.