Data Protection update - August 2020
Welcome to our data protection bulletin, covering the key developments in data protection law from August 2020.
Data protection
- The aftermath of Schrems II
- ICO's children's code enters into force on 2 September 2020
- Covid track and trace app update
- ICO launches guidance page for the data issues arising from exam results
- ICO launches guidance on AI and data protection
- GDPR's dispute resolution mechanism triggered for the first time in an ongoing case against Twitter
Cyber security
Enforcement
Regulatory enforcement
- ICO releases 2019-2020 annual report
- French data protection authority issues first fine as lead supervisory authority
- Capital One bank fined for 2019 data breach
- Twitter to pay compensation for unlawfully using personal data
- ICO issues fines for unlawful marketing practices
- Danish Data Protection Agency files a data breach notification against itself
Civil litigation
- UK Court of Appeal found police use of live facial recognition technology unlawful
- Class action claim commenced against various companies in the Marriott group arising from data breach
- Uber drivers commenced Dutch proceedings to obtain data
- Salesforce and Oracle face class action lawsuit for tracking cookies
Data protection
The aftermath of Schrems II
In the wake of the Schrems II decision, which invalidated the Privacy Shield as a data transfer mechanism (as reported in our July 2020 bulletin), the Austrian civil rights group Noyb, led by Max Schrems himself, has filed a total of 101 complaints against EU-US data transfers alleging violations of the GDPR. Companies across 30 European countries, as well as Google and Facebook in the US, are the subject of these complaints. In a statement, Noyb explains that many companies are still using Google Analytics or Facebook Connect despite both companies being subject to US surveillance laws. It appears some are justifying the international data transfers by their use of the Standard Contractual Clauses (“SCCs”) as an alternative to relying on the Privacy Shield; however, the use of SCCs is not a suitable solution where the recipient country’s laws do not provide adequate protection for EU citizens’ personal data.

The group is frustrated that companies appear to be ignoring the ruling handed down by the Court of Justice of the European Union (“CJEU”) on 16 July, and is calling on data protection authorities to take action, referring to their obligations under the GDPR to enforce the law, especially when receiving a complaint. In the Schrems II ruling, the CJEU made it clear that it is the responsibility of the data protection authorities to take the necessary action against companies in breach of the GDPR. Noyb’s statement suggests that further legal action is planned, not just against companies transferring data in breach of the GDPR but also against data protection authorities who continue to take a back seat in the fight against monopolising US tech companies.
Meanwhile, it appears the US and the EU are engaging in discussions on a replacement for the Privacy Shield. The US Department of Commerce and the European Commission have issued a joint statement recognising the “vital importance” of data protection and cross-border data transfers, and the US International Trade Administration has issued FAQs on the subject.
It is clear the CJEU’s decision has created a multitude of issues for companies and data protection authorities alike, and we hope more of those questions will be answered by the relevant authorities as we move into the autumn.
ICO’s children’s code enters into force on 2 September 2020
Under the Data Protection Act 2018, the Information Commissioner is required to produce a code of practice on standards of age appropriate design. The Age Appropriate Design Code (also known as the Children’s Code) will come into force on 2 September 2020, with a 12-month transition period.
The ICO has released a preface to the code explaining that it is not a new law but a set of 15 flexible standards designed to build protection for children into online services, for example by requiring settings to be set to “high privacy” by default and by limiting the collection and retention of data to the minimum necessary.
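For readers wanting a concrete picture of what two of those standards mean in practice, the sketch below shows “high privacy by default” and data minimisation as a developer might implement them. It is a minimal illustration in Python; the field names and settings are our own assumptions, not drawn from the code itself.

```python
from dataclasses import dataclass

# Purely illustrative sketch of two of the code's standards: "high privacy
# by default" and data minimisation. All names and fields are hypothetical.

@dataclass
class ChildAccountSettings:
    profile_visibility: str = "private"            # not discoverable by default
    geolocation_enabled: bool = False              # location off by default
    personalised_ads: bool = False                 # no behavioural advertising
    share_data_with_third_parties: bool = False    # no onward sharing

# Data minimisation: collect and retain only what the service needs.
REQUIRED_FIELDS = {"display_name", "age_band"}

def minimise(signup_data: dict) -> dict:
    """Drop every field the service does not strictly need."""
    return {k: v for k, v in signup_data.items() if k in REQUIRED_FIELDS}

raw = {"display_name": "Alex", "age_band": "under 13", "phone": "07700 900000"}
print(minimise(raw))             # {'display_name': 'Alex', 'age_band': 'under 13'}
print(ChildAccountSettings())    # every setting starts at its most protective value
```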
It is envisaged that the code will ensure children and young people have a safe space to learn, explore and play. The Information Commissioner, Elizabeth Denham, suggests this will be a welcome comfort for parents and insists that keeping children safe online should be considered just as important as keeping them safe in other areas of life.
Covid track and trace app update
After months of delay, the UK contact-tracing app may finally be making progress, as trials began this month. The app will be based on Apple and Google’s decentralised model. It is thought the app will have a number of features in addition to symptom checking and alerts, including QR check-in at venues, the ability to book a free test and an isolation countdown timer to remind people how long they must remain in quarantine. While the decentralised model offers more comfort than the original centralised model in relation to data privacy, the government has yet to publish details of how the app will handle personal data and comply with data privacy laws, and there has been no hint of a privacy policy being released.

As with the last version of the app, the ICO continues to work with the Department of Health and Social Care and has released a statement in support of the UK tracing app, saying it will “continue to offer guidance during the life of the app as it is further developed, rolled out more widely and when it is no longer needed.” The ICO also faced criticism from MPs this month for failing to hold the government accountable for failures in the track and trace programme, including its failure to carry out a data protection impact assessment, as required under the GDPR. A letter signed by 22 MPs from four different parties called on the ICO to consider fining the government for this breach of data protection laws. As far as we are aware, the ICO has not issued a response.
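To illustrate why the decentralised model is considered more privacy-protective, the sketch below loosely mirrors the Apple/Google design: phones broadcast short-lived random identifiers, and matching against the keys of diagnosed users happens on the device rather than on a central server. This is a simplified illustration only; the real protocol uses dedicated key-derivation functions and Bluetooth, and none of the names below are taken from the actual app.

```python
import hashlib
import os

# Simplified sketch of decentralised exposure notification. Each device
# generates its own random daily key and derives short-lived broadcast
# identifiers from it; other devices only record what they hear locally.

def rolling_ids(daily_key: bytes, intervals: int = 144) -> list:
    """Derive the day's rotating broadcast identifiers from a daily key."""
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

key_a = os.urandom(16)                      # device A's private daily key
heard_by_b = set(rolling_ids(key_a)[:3])    # device B was near A for 3 intervals

# If A tests positive, only A's daily key is published. B re-derives A's
# identifiers on-device and checks for overlap - the server never learns
# who met whom, which is the core privacy property of the model.
published_keys = [key_a]
exposed = any(rid in heard_by_b
              for key in published_keys
              for rid in rolling_ids(key))
print("Exposure detected:", exposed)        # True
```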
It goes without saying that the ICO’s actions in the development of this version of the app will be under close scrutiny by MPs and privacy activist groups alike as data privacy once again comes to the forefront of the fight against Covid-19.
ICO launches guidance page for the data issues arising from exam results
This month witnessed yet another Covid-19 crisis, as the pandemic prevented students from sitting exams and an algorithm was used to calculate final A-level grades instead. Although the government ultimately decided against the use of the algorithm after complaints of significant inconsistencies and prejudice, the ICO continues to probe its data privacy implications. Under Article 22 of the GDPR, a data subject has the right not to be subject to a decision “based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” The Office of Qualifications and Examinations Regulation, Ofqual, has issued a statement declaring that automated decision-making does not take place when the standardisation model is applied and that human checks were in place to make the decisions.
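To see why the standardisation approach raised Article 22 concerns, the heavily simplified sketch below assigns grades purely from a centre’s historical grade distribution and the teachers’ rank order. Ofqual’s actual model was far more complex; the distribution and names here are invented for illustration.

```python
# Hypothetical, heavily simplified standardisation: grades fall out of the
# centre's past distribution, not the individual student's work - which is
# why critics characterised the output as a decision "based solely on
# automated processing" under Article 22.

historical_distribution = [("A", 0.2), ("B", 0.3), ("C", 0.5)]  # assumed

def assign_grades(ranked_students):
    """Map a teacher-supplied rank order onto the historical distribution."""
    grades, n, i = {}, len(ranked_students), 0
    for j, (grade, share) in enumerate(historical_distribution):
        # The last grade absorbs any rounding remainder.
        quota = n - i if j == len(historical_distribution) - 1 else round(share * n)
        for student in ranked_students[i:i + quota]:
            grades[student] = grade
        i += quota
    return grades

students = ["amy", "ben", "cara", "dan", "eve", "fay", "gus", "hal", "ida", "joe"]
print(assign_grades(students))
# A strong student ranked 3rd at a centre that historically awarded 20% As
# is capped at a B, however good their own work was.
```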
Meanwhile, the legal campaign group Foxglove is gathering evidence ahead of a potential judicial review of the decision, with breach of the GDPR among the proposed grounds. Foxglove’s letter raises data protection concerns around profiling, fairness, accuracy and automated decision-making. It remains to be seen whether the judicial review will be pursued in light of the government’s U-turn on exam grades; however, the episode has caused sufficient commotion that the ICO has felt it necessary to continue to monitor the situation and has launched an exam guidance page to help reassure students and parents.
ICO launches guidance on AI and data protection
As artificial intelligence (AI) continues to dominate digital transformation, now is the time for companies and their senior management to focus on the risks of AI use and, specifically, the risks around personal data. Simply put, AI is a collection of technologies that combine data, algorithms and computing power. Private companies and governments are encouraged and excited by its potential value, while regulatory authorities are struggling to balance its rapid development against the still relatively unknown risks it brings.

This month, the ICO published guidance on AI and data protection. The guidance sets out detailed advice for companies on the accountability and governance implications of AI; how to ensure lawfulness, fairness and transparency in AI; how to assess security and data minimisation in AI; and how to ensure individual rights are protected in AI. Elizabeth Denham emphasises the importance of considering data protection issues in the early stages of AI development in order to mitigate risks at the design phase, explaining that the ICO will “continue to focus on AI developments and their implications for privacy by building on this foundational guidance and continuing to offer tools that promote privacy by design”.

The guidance will be welcomed by many and follows the white paper on artificial intelligence released by the European Commission earlier this year, which looked at how to manage AI risks and ensure the use of AI remains transparent and fair in order to gain the public’s trust. This will certainly be something to pay close attention to as the use and development of AI continues to increase.
GDPR’s dispute resolution mechanism triggered for the first time in an ongoing case against Twitter
Twitter has been facing investigation by the Irish Data Protection Commission (the “DPC”) since 2019, when it notified the DPC of a data breach under the GDPR’s 72-hour notification requirement. What appeared to be a relatively simple case took a turn as EU counterparts began to weigh in, and failure to agree on the best course of action has resulted in significant delays in settling the case. This month, the Irish DPC was forced to trigger the GDPR’s dispute-resolution mechanism for the first time. The mechanism, set out in Article 65 of the GDPR, applies where one regulator is leading an EU-wide investigation and other EU regulators do not agree with its approach, and gives those other national regulators the power to have a say in the final outcome. It was triggered here after other supervisory authorities objected to the draft decision against Twitter which the Irish DPC published in May 2020.

It is thought some of the other big tech cases pending in Ireland may meet the same fate. The Irish DPC has 23 live investigations into multinational tech companies, including WhatsApp, Instagram and Facebook, and has faced scrutiny from privacy advocates for taking too long to resolve them. The DPC remains in the spotlight after the Schrems II judgment, so it will be interesting to see how this might affect its strategy. The outcome of the dispute-resolution mechanism in the Twitter case will shed crucial light on how EU regulators are likely to deal with privacy violations going forward.
Cyber security
Garmin hit by ransomware attack
On 24 July 2020, Garmin, the GPS and smartwatch company, was forced to close down its call centres, website and some online services following a ransomware attack on its internal network and some production systems. One of the services taken offline was the Garmin Connect service, which allows Garmin watch owners to synchronise their sporting activities. It is unclear if any personal data was accessed during the cyber-attack.
Since then, Garmin has restored services to its customers. Garmin reportedly paid a ransom of $10 million via a third party. Researchers at the cybersecurity firm NCC Group claimed that Garmin was hit by the ransomware known as WastedLocker, which was developed by a Russian cybercriminal gang known as Evil Corp.
This is a strong reminder to readers to ensure that their security systems are up to date. Firms are also encouraged to provide regular training to employees on vigilant cyber practices, particularly given that a large part of the UK workforce is still working from home amidst the Covid-19 pandemic.
EU Council imposes first sanctions for cyber-attacks
On 30 July 2020, the Council of the European Union imposed sanctions against six individuals and three entities connected to various cyber-attacks, including the “WannaCry” and “NotPetya” ransomware attacks. The sanctions imposed include a travel ban and asset freeze, in addition to a prohibition on EU persons and entities from making funds available to the named persons.
This is the first time the EU has imposed sanctions against cyber-attackers. It follows the EU's establishment in June 2017 of the Framework for a Joint EU Diplomatic Response to Malicious Cyber Activities. The Framework allows the EU and its Member States to use various tools, including sanctions, to deter and penalise cyber-attacks.
These sanctions are a sign of the EU's growing concern over the impact of increasing cyber-attacks on individuals and entities at all levels. It is likely that they will be coupled with greater regulatory focus on requiring firms, particularly large ones, to improve their cyber-security measures in the future.
Enforcement
Regulatory enforcement
ICO releases 2019-2020 annual report
On 20 July 2020, the ICO published its 2019-2020 annual report. The Information Commissioner explained that 2019-2020 was a “transformative period” in the UK's digital history, owing to the focus on privacy as a “mainstream concern”.
Key statistics in relation to 2019-2020 revealed in the annual report include:
- The ICO received 38,514 data protection complaints, down from 41,611 in 2018-2019.
- The ICO resolved more than 39,860 complaints from the public, up from 34,684 in 2018-2019. However, only 80% of complaints were resolved within the 12-week target, although over 98% were resolved within six months.
- The ICO conducted 2,100 investigations.
- The ICO took 236 regulatory actions in response to regulatory breaches, including 54 information notices, 8 assessment notices, 7 enforcement notices, 4 cautions, 8 prosecutions and 15 fines.
- In 2019-2020, complaints against the local government, healthcare and internet sectors made up almost one-quarter of the total complaints received. Almost half of the complaints related to subject access requests.
In terms of protecting the public, the ICO pointed out that it had produced the Age Appropriate Design Code to protect children's data privacy (as reported in the Data protection section above). Further, the ICO supported the Gambling Commission's efforts to protect the data of vulnerable consumers in the gambling sector. In the political sphere, the ICO launched its “#BeDataAware” campaign during the 2019 European elections, which explained to the UK public how political campaigners may use data analytics to micro-target voters.
The annual report highlights the likelihood of stronger regulatory supervision by the ICO in the future as data protection becomes a growing concern across sectors and for the public.
French data protection authority issues first fine as lead supervisory authority
On 5 August 2020, the French data protection authority (the “CNIL”) imposed a fine of EUR 250,000 on Spartoo, a French online shoe retailer, for GDPR breaches. Spartoo specialises in online shoe sales through its website, which is accessible in 13 EU countries.
After an on-site visit by the CNIL in May 2018, the CNIL decided to take regulatory action against Spartoo for data protection breaches. The CNIL informed other relevant supervisory authorities that it intended to act as Lead Supervisory Authority in the investigation into Spartoo's cross-border processing of personal data belonging to Spartoo's existing and prospective customers.
Following consultation with the other supervisory authorities, including the German, Italian and Portuguese authorities, the CNIL found that Spartoo had committed several breaches of the GDPR. In particular, Spartoo should not have recorded and stored customers' card details for over-the-phone orders; its collection of Italian customers' health card information, allegedly to combat fraud, was excessive; and it had failed to delete customer data after an appropriate period of inactivity, having retained the personal data of more than 25 million prospective customers who had been inactive for more than three years.
Capital One bank fined for 2019 data breach
In July 2019, Capital One, a US-based company that offers credit cards and other financial products, suffered a data breach affecting approximately 100 million individuals in the US and 6 million customers in Canada. A hacker, Paige Thompson, accessed and copied data from Capital One's server relating to its customers who had applied for credit cards from 2005 to 2019. Approximately 140,000 Social Security numbers were accessed, in addition to names, addresses and dates of birth. Capital One only discovered the hack after a whistleblower directed it to Thompson's GitHub page where she had posted about the hack.
On 6 August 2020, the US Office of the Comptroller of the Currency, which regulates US banks, announced that it had imposed an $80 million civil money penalty against Capital One in relation to the data breach. The Comptroller highlighted Capital One's failure to establish effective risk assessment processes prior to migrating significant information technology operations to the public cloud environment and its failure to correct the deficiencies in a timely manner. Nevertheless, the Comptroller gave credit to Capital One's customer notification and remediation efforts. Capital One consented to the fine.
Twitter to pay compensation for unlawfully using personal data
In a regulatory filing on 3 August 2020, Twitter disclosed that it had received a draft complaint from the US Federal Trade Commission regarding alleged improper use of users' personal data to improve targeted advertising.
This follows an investigation launched by the Federal Trade Commission into Twitter's activities in October 2019. The investigation concerned Twitter linking a database of its users' personal information entered for two-factor authentication, such as phone numbers and email addresses, to a system used by its advertising partners. Twitter found that when companies uploaded their marketing lists to Twitter's tailored audiences program, the program matched the users on the list to their registered phone numbers and email addresses. This may have violated a 2011 agreement that Twitter signed with the Commission, under which Twitter agreed that it would not mislead users about the measures it took to protect their security and privacy.
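As a rough illustration of the matching process described above, the sketch below compares hashed contact details from an uploaded marketing list against the contact details a platform holds for its users. The names and hashing scheme are our own assumptions; the point is that data supplied for security (two-factor authentication) ended up feeding an advertising match.

```python
import hashlib

# Illustrative list matching. The platform indexes its users by hashed
# contact detail; the sensitive part in Twitter's case was that these
# details had been supplied for two-factor authentication, not advertising.

def h(value: str) -> str:
    """Normalise and hash a contact detail."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

platform_index = {
    h("alice@example.com"): "user_1",   # email given for 2FA
    h("+44 7700 900123"): "user_2",     # phone number given for 2FA
}

def match_audience(uploaded_list):
    """Return platform user ids that match an advertiser's contact list."""
    return [platform_index[h(c)] for c in uploaded_list if h(c) in platform_index]

print(match_audience(["alice@example.com", "bob@example.com"]))  # ['user_1']
```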
According to the regulatory filing, Twitter expects that a payment of between $150 million and $250 million is likely to be required to resolve the investigation.
ICO issues fines for unlawful marketing practices
Between July and August 2020, the ICO issued monetary penalty notices under the Data Protection Act 1998 against two UK companies for breaches of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (“PECR”). The first fine, of £90,000, was issued on 1 July 2020 to Decision Technologies Limited for sending almost 15 million unsolicited marketing emails between July 2017 and May 2018, in breach of Regulation 22 of the PECR.
The second fine of £500,000 was issued in August 2020 for a breach of Regulation 21 of the PECR. The ICO found that Rain Trading Ltd had made unsolicited marketing telephone calls to 270,774 individuals registered with the Telephone Preference Service (“TPS”), the service which allows individuals to opt out of unsolicited live sales and marketing calls.
The fines serve as a reminder to firms to ensure that customers' explicit consent to direct marketing communication is obtained and recorded to avoid enforcement action by the ICO.
Danish Data Protection Agency files a data breach notification against itself
This month, the Danish data protection agency proved that anyone and everyone is at risk of data breaches when it filed a breach notification against itself. The breach related to non-compliant handling of physical documents, which could have contained sensitive data about citizens and were not destroyed as required under the GDPR. Not only did the agency have to notify itself of its own breach, it also missed the 72-hour deadline for submitting a data breach notification. This serves as a real reminder to organisations not only to check their breach notification processes, but also to bear in mind that even well-informed and sophisticated organisations can slip up if they do not give data protection law the attention it requires.
Civil litigation
UK Court of Appeal found police use of live facial recognition technology unlawful
On 11 August 2020, the Court of Appeal delivered a significant judgment, R (on the application of Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058, which found that the use of automated facial recognition technology (“AFR technology”) by the South Wales Police Force (the “SWP”) was unlawful.
AFR technology can automatically detect faces and compare them with a database of facial images, for example to identify faces which have been placed on a “watchlist”. The SWP deployed AFR technology on at least two occasions: on 21 December 2017 at Cardiff city centre and on 27 March 2018 at the Defence Exhibition at the Motorpoint Arena in Cardiff. Both deployments were subject to the challenge.
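The matching step at the heart of AFR technology can be pictured with the short sketch below: each detected face is reduced to a numeric embedding and compared against watchlist embeddings, with a similarity threshold deciding whether an alert fires. The random embeddings and threshold are invented; real systems derive embeddings from trained face-recognition models.

```python
import numpy as np

# Illustrative watchlist matching. Real AFR systems compute embeddings with
# trained neural networks; random vectors stand in for them here.

rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def check_face(embedding, threshold=0.8):
    """Return a watchlist identity if similarity exceeds the threshold."""
    best = max(watchlist, key=lambda name: cosine(embedding, watchlist[name]))
    return best if cosine(embedding, watchlist[best]) >= threshold else None

# A passer-by matching no one still has their biometric data processed in
# the comparison - which is what engaged Article 8 in Bridges.
passer_by = rng.normal(size=128)
print(check_face(passer_by))  # None (random 128-d vectors are near-orthogonal)
```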
The Court of Appeal found that there was no clear guidance on where and how AFR technology could be used, nor on who could be put on the watchlist. The SWP's use of AFR technology on the two occasions therefore breached Mr Bridges' right under Article 8(2) not to have public authorities interfere with his private life except in accordance with the law. Consequently, the Court of Appeal found that the data protection impact assessment (the “DPIA”) conducted by the SWP was invalid: it was written on the basis that Article 8 was not infringed when in fact it was, and it had therefore failed to properly address the measures required to mitigate the risks to data subjects' rights, in breach of section 64 of the Data Protection Act 2018.
Interestingly, in response to this decision, the Surveillance Camera Commissioner said that the “Home Office and Secretary of State have been asleep on watch and should reflect upon the comments of the court and now act in the public interest”. There may therefore be renewed guidance on the use of AFR technology in the future following the Court of Appeal's findings.
This is the first successful legal challenge to AFR technology. It also highlights the importance of ensuring that DPIAs are carried out properly, to avoid future legal challenges to data processing by firms.
Class action claim commenced against various companies in the Marriott group arising from data breach
In November 2018, Marriott discovered that the personal data of approximately 339 million Starwood hotel group guests had been unlawfully accessed by hackers between July 2014 and September 2018. The ICO issued a notice of intention to fine Marriott as a result of this data breach (as we reported in our July 2019 bulletin), which Marriott subsequently challenged.
As set out in our April 2020 update, the final outcome of the ICO's investigation remains pending, although it now appears that any fine which Marriott ultimately faces will be substantially lower than the £99,200,396 in the original notice of intent to fine. However, Marriott now has to deal with another consequence of the data breach. On 18 August 2020, a class action was filed against Marriott in the High Court, alleging that the data breach resulted in breaches by various companies in the Marriott group of their obligations under the GDPR and the UK's Data Protection Act 1998. The claim is brought by Martin Bryant as a representative on behalf of an estimated 7 million affected hotel guests domiciled in England and Wales at the time.

The claim raises a number of interesting questions, including: (1) the Court's approach to liability where a data breach arises out of a hack (see the recent collapse of the group claim against Equifax in this regard); (2) whether such claims are permitted pursuant to CPR 19.6, which will fall for consideration by the Supreme Court in Lloyd v Google; and (3) whether the financial exposure arising from civil litigation over data breaches will ultimately outweigh the much-vaunted fines under the GDPR. Marriott is also currently facing legal proceedings by consumers relating to the same data breach in the US and Canadian courts.
Uber drivers commenced Dutch proceedings to obtain data
A group of UK Uber drivers launched proceedings against Uber in the Amsterdam district court on 20 July 2020 seeking an order requiring Uber to disclose the algorithm it uses to allocate rides to the drivers. In particular, the drivers are requesting that Uber disclose detailed driver profiles, in a bid to find out how Uber's system allocates tags on their profile (for example, “navigation – late arrival/missed ETA”), and how these tags are used to allocate rides to the drivers.
The UK drivers are supported by various union groups and non-profit organisations, including the App Drivers and Couriers Union (the “ADCU”), the International Alliance of App-based Transport Workers and Worker Info Exchange.
According to the ADCU, the drivers have made several subject access requests to Uber for their detailed profiles, but Uber has allegedly failed to provide the requested information, in breach of the GDPR.
The case was commenced in the Dutch courts because Uber BV, the corporate entity that controls the ride allocation algorithm and driver data, is based in Amsterdam.
Salesforce and Oracle face class action lawsuit for tracking cookies
Salesforce and Oracle are facing a class action in the Netherlands, and are expected to face similar proceedings in England and Wales, arising out of their use of tracking cookies. The proceedings are brought by The Privacy Collective, a non-profit organisation set up in the Netherlands for the purpose of bringing the lawsuits. The Privacy Collective claims that Oracle and Salesforce failed to obtain customers' consent to collect and share the personal data gathered through embedded cookies, in contravention of the GDPR. In particular, The Privacy Collective alleges that the companies' respective products, BlueKai and Krux, were used to track, monitor and collect personal data across a host of websites, including Amazon, Booking.com, Dropbox, Reddit and Spotify, with a view to that personal data being used for the purposes of real-time bidding, without providing adequate information to data subjects or obtaining their informed consent as required by the GDPR and PECR.
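By way of illustration of the mechanics alleged, the sketch below shows how a single third-party cookie identifier, read on every site that embeds the tracker, lets browsing activity accumulate into one profile that can then be attached to real-time bid requests. The mechanics are simplified and assumed; the site names simply reuse examples from the claim.

```python
from collections import defaultdict

# Illustrative cross-site tracking. Because the embedded tracker reads the
# same cookie id on every participating site, visits accumulate into a
# single profile keyed to that id.

profiles = defaultdict(list)            # cookie id -> pages seen across sites

def on_page_load(cookie_id, site):
    """Called by the tracker embedded on each participating site."""
    profiles[cookie_id].append(site)

def bid_request(cookie_id):
    """Profile data attached to the ad auction sent to advertisers."""
    return {"cookie_id": cookie_id, "interests": profiles[cookie_id]}

for site in ["amazon.com", "booking.com", "reddit.com"]:
    on_page_load("abc123", site)        # one user, one cookie, three sites

print(bid_request("abc123"))
# {'cookie_id': 'abc123', 'interests': ['amazon.com', 'booking.com', 'reddit.com']}
```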
The outcome of the proceedings could significantly affect how real-time bidding is operated for online advertising, if this has not already occurred as a result of ICO enforcement action in relation to adtech (as to which see our May 2020 update).