Data Protection update - May 2024
Welcome to the Stephenson Harwood Data Protection update, covering the key developments in data and AI laws from May 2024.
This month the EU AI Act was adopted by the EU Council, and the Data Protection and Digital Information Bill (the "DPDI Bill") fell in the parliamentary "wash-up" period ahead of the UK General Election.
In AI news, Max Schrems' data privacy group noyb filed a complaint with the Austrian Data Protection Authority ("DPA") regarding ChatGPT's "hallucinations", the AI Seoul Summit took place and the European Data Protection Board's (the "EDPB") ChatGPT taskforce released its interim findings.
In other news, the ICO reported on cybersecurity pitfalls and the EU Commission opened formal proceedings against Meta under the Digital Services Act (the "DSA").
In ICO actions, the Police Service of Northern Ireland was fined £750,000 following a data breach, and the ICO concluded its investigation into Snap.
Data protection
- DPDI Bill falls in Parliament "wash-up" period
AI
- EU AI Act passed
- GDPR complaint filed over alleged AI chatbot 'hallucination'
- EDPB's ChatGPT taskforce publishes interim report
- AI Seoul Summit
- ICO AI updates
- AI safety updates from DSIT and AISI
Cybersecurity
- ICO calls for cybersecurity boost and details common security pitfalls
- PSTIA enforcement guidance
Enforcement and civil litigation
- ECJ ruled that a national authority can access civil identity data linked to IP addresses
- Upper Tribunal rules on Experian case
- EU Commission takes action on social media child protection
- ICO concludes its investigation into Snap
- Round-up of enforcement actions
Data protection
DPDI Bill falls in Parliament "wash-up" period
With the Prime Minister having called a UK General Election for 4 July, any legislation that had not been passed by Parliament by the end of the "wash-up" period before Parliament was prorogued fell away.
The DPDI Bill was among the bills that did not pass by the end of the wash-up period, reportedly due to disagreement over late-stage amendments proposed by the Department for Work and Pensions.
The Bill was intended to clarify certain aspects of the UK GDPR to make data protection compliance simpler for businesses to implement. For instance, it introduced a new definition of "scientific research" and clarified when legitimate interests would be an appropriate lawful basis for businesses to rely upon when processing personal data. We wrote about the changes proposed by the DPDI Bill here and here.
As the Bill has fallen, it will only proceed if the next government actively chooses to reintroduce it after the General Election has taken place. It is possible that some version of its provisions will be brought forward by the new government.
AI
EU AI Act passed
On 21 May, the EU Council approved the EU AI Act (the "Act"). The Act will come into force 20 days after it is published in the Official Journal of the EU.
The Act aims to ensure that AI is safe and does not infringe upon people's fundamental rights. It puts in place different regulatory frameworks depending on an AI system's level of risk, with some rules for general-purpose AI, others for high-risk AI and a lighter-touch regime for lower-risk AI systems. There are also certain categories of AI that are prohibited outright, such as any AI system used for "social scoring" that results in detrimental treatment. Please see our article here for details on the main provisions of the Act.
The Act is not only relevant to EU businesses: any company that places AI on the market or deploys AI in the EU is likely to need to comply with the Act.
The majority of the Act's provisions will come into force in 2026, but certain provisions, such as those covering prohibited AI practices, will apply within six months of the Act's entry into force. Companies that deal with AI in the EU should therefore begin reviewing their practices to ensure that they comply with the Act.
We will be producing a suite of materials on the detail of the Act's provisions and what they mean for your business – keep up to date with these in future data protection bulletins.
GDPR complaint filed over alleged AI chatbot 'hallucination'
Max Schrems' data privacy group noyb filed a complaint with the Austrian DPA regarding ChatGPT's "hallucinations", alleging that the chatbot provides false information that OpenAI is unable to correct.
noyb's complaint originated when the complainant (a public figure) asked ChatGPT to list some information about them, including their birthday. In response to this request, ChatGPT provided them with an incorrect date of birth.
The complainant submitted an access request to OpenAI, but the company was only able to provide the complainant with their account data (rather than all the data about the complainant that had been used to train the chatbot's algorithm). OpenAI also allegedly refused to disclose any information about the data it had processed or the sources of that data.
In their complaint, noyb detailed how OpenAI claimed that there was no way to prevent ChatGPT from displaying the complainant's inaccurate date of birth, and that there was no way to remove this information from their system (as this would affect the other data that ChatGPT would be able to display about them).
noyb asked the Austrian DPA to investigate how OpenAI processes personal data and how it ensures the accuracy of that data. The Austrian DPA has confirmed that it has received the complaint and will assess whether further action needs to be taken.
The Polish DPA is investigating a similar complaint raised in August last year, and the European Data Protection Board has set up a task force to coordinate the multiple investigations into ChatGPT for alleged GDPR breaches across Europe (see the article below).
EDPB's ChatGPT taskforce publishes interim report
On 23 May, the European Data Protection Board's ChatGPT taskforce published an interim report of its findings, focusing on how large language models ("LLMs") can comply with the requirements of the GDPR. The report contains several points of interest for organisations providing and deploying these systems and appears to leave room for certain types of large-scale data processing for use in AI models to be lawful under the GDPR.
Transparency
In its report, the EDPB stated that, due to the vast amounts of data collected via web-scraping, it would not be possible to inform each individual data subject that their data had been scraped. The EDPB left open the possibility that ChatGPT and other LLMs could rely on the exception in Article 14(5)(b) GDPR, which exempts organisations from having to provide this information where doing so would "involve a disproportionate effort". However, it underlined that, even if an organisation relies on this exception, it must still take appropriate measures to protect data subjects' rights.
Lawfulness
In order to process personal data, an organisation must identify an appropriate lawful basis under the GDPR. In its report, the EDPB found that legitimate interests could potentially be used as a lawful basis for ChatGPT's processing of personal data to train its algorithms. This would be more likely if adequate safeguards to protect the rights of data subjects were implemented. Such safeguards may include minimising the data used for training and removing special categories of data from training datasets.
This is contrary to the findings of the Italian DPA's investigations into OpenAI last year, which indicated that the organisation had no legal basis for the processing of personal data to train ChatGPT.
Accuracy
As the previous article and the Italian DPA's investigation into ChatGPT demonstrate, the inaccuracy of the output of LLMs continues to cause major concerns.
The EDPB stated in its report that ChatGPT would need to provide "proper information on the probabilistic output creation mechanisms and on their limited level of reliability" and would need to refer specifically to "the fact that the generated text, although syntactically correct, may be biased or made up." This suggests that, in the EDPB's view, a model need not necessarily produce accurate outputs in order to comply with the GDPR's accuracy principle, provided users are properly informed of its limitations.
Key takeaways
It is worth bearing in mind that these are the EDPB's preliminary findings, and the final report may differ from these conclusions. In the meantime, however, the report provides an initial indication of how the tensions between large language model data processing and data protection law could be resolved.
AI Seoul Summit
The UK government co-hosted the AI Seoul Summit with the South Korean government on 20 and 21 May. The conference gathered international government leaders, AI companies, academics and civil groups to discuss AI safety. The summit built on discussions from the AI Safety Summit held in Bletchley Park in November 2023.
The main developments from the conference include:
- Global leaders committed to creating an international network of AI Safety Institutes to "boost understanding of AI".
- Leading AI companies (such as Amazon and Meta) signed up to the voluntary Frontier AI Safety Commitments and, in accordance with these, must publish a safety framework demonstrating how they will manage severe AI risks by the next AI Summit.
- World leaders representing Australia, Canada, the EU, the UK and the US (among others) signed the Seoul Declaration, an agreement intended to demonstrate their commitment to global cooperation on safe, innovative and inclusive AI.
The next AI Summit will take place in France.
ICO AI updates
Response to the UK Government's White Paper on AI regulation
At the end of last month, the ICO published its response to the UK Government's White Paper on AI regulation.
The overall takeaway from the ICO's response is that the regulator will continue to regulate AI in line with existing data protection laws, and that the regulation of AI will remain one of its priorities.
The ICO also highlighted several areas of focus, including foundation models, facial recognition technology, children's privacy and high-risk AI applications.
Please see our article for an in-depth summary of the ICO's response.
ICO's fourth call for evidence on generative AI
On 15 May 2024, the ICO launched its fourth call for evidence on generative AI. This consultation focuses on how individuals' data protection rights can be protected in relation to the training and fine-tuning of generative AI.
Please see our article from earlier this month for more information on the consultation.
The call for evidence is open until 10 June 2024 and can be accessed via this link.
AI safety updates from DSIT and AISI
The UK Department for Science, Innovation and Technology ("DSIT") and the AI Safety Institute ("AISI") published various AI safety initiatives and reports in May 2024.
DSIT publishes report on cybersecurity risks associated with AI systems
On 15 May, DSIT published its research on cybersecurity risks to AI systems. It underlined that the rapid introduction of AI into different industries has created numerous cybersecurity risks, many of which organisations are either unaware of or are not adequately addressing.
The report highlighted the numerous cybersecurity risks arising across the different phases of an AI system's lifecycle, underlining that cybersecurity considerations should begin at the design phase, the very start of the process of creating an AI system.
DSIT also conducted interviews on this topic across various industries, ranging from healthcare companies to law firms and banks. They found that there was a general lack of awareness around cybersecurity regulations tailored to AI, and very little attention paid to the security of AI models used.
This report was commissioned as part of the UK Government's National Cyber Strategy, which aims to strengthen the UK's "position as a responsible and democratic cyber power" and complements the government's call for views on the cybersecurity of AI.
New AI safety evaluation platform and safety testing results
The AISI has launched a new safety evaluation platform called Inspect. This is a free open-source platform that can be used by AI developers around the world to evaluate AI models. It scores these models based on a variety of factors, such as their core knowledge, ability to reason and autonomous capabilities.
The aim is for Inspect to help improve the safety of AI models and increase collaboration within the global AI community.
A key component of the AISI's work is periodically evaluating advanced AI systems to assess the harm they could potentially cause.
In a recent report, the AISI also tested five leading LLMs and found vulnerabilities in all of them. The institute was easily able to bypass the models' safeguards and found that they were all likely to comply with harmful questions under relatively simple attacks.
The AISI continues to work with the developers of the models tested in order to help them enhance the safety features of the LLMs.
International Scientific Report on the Safety of Advanced AI
DSIT also published an interim report which sets out "an up-to-date, science-based understanding of the safety of advanced AI systems". Academics and government representatives from around the world were consulted on the report, which was released ahead of the Seoul Summit.
Key takeaways include:
- Properly governed general-purpose AI can be applied to advance the public interest, potentially leading to enhanced wellbeing and new scientific discoveries. However, such AI will need to be used safely and securely to ensure that it does not malfunction and cannot be used maliciously to harm individuals and organisations, for example through scams, fake news or data privacy violations.
- Our understanding of general-purpose AI systems is still quite limited and developing this understanding should be a priority.
- While general-purpose AI has developed very quickly in recent years, experts were uncertain over the rate of future progress.
- The current technical methods for evaluating and reducing the risks posed by general-purpose AI (e.g. red-teaming and benchmarking) can be helpful, but they all have limitations and will need to be improved.
The report concluded that the future of general-purpose AI is uncertain, and it will be up to governments and societies to help determine its future. The report is intended to help guide stakeholders in making informed choices over the future of general-purpose AI.
The final version of this report is due to be published before the next AI Action Summit.
Cybersecurity
ICO calls for cybersecurity boost and details common security pitfalls
In light of the growing threat of cyberattacks, on 10 May 2024, the ICO published a blog post urging organisations to boost their cybersecurity measures and protect the personal data they hold. Recent trend data reveals a significant increase in cybersecurity breaches, with over 3,000 incidents reported to the ICO in 2023. The finance, retail, and education sectors were the most affected.
On the same day, the ICO also published the Learning from the Mistakes of Others report, which provides an analysis of these breaches and offers valuable lessons on avoiding common security pitfalls.
The report identifies five main causes of cybersecurity breaches:
- Phishing: scam messages that trick users into sharing passwords or downloading malware.
- Brute force attacks: use of trial and error to guess login credentials or encryption keys.
- Denial of service: a website or network is overloaded to disrupt its normal functioning.
- Errors: security settings are misconfigured.
- Supply chain attacks: compromised products or services are used to infiltrate systems.
The report also details how each type of cyberattack occurs, offers important strategies to reduce the risk and discusses potential future developments. Strategies include training staff to recognise such attacks and employing security measures such as two-step or multi-factor authentication.
PSTIA enforcement guidance
The UK Office for Product Safety and Standards ("OPSS") has issued guidance clarifying its enforcement powers when addressing non-compliance with the UK Product Security and Telecommunications Infrastructure Act 2022 ("PSTIA").
Please see our article for a summary of the OPSS' guidance.
Enforcement and civil litigation
ECJ ruled that a national authority can access civil identity data linked to IP addresses
In a preliminary ruling, the European Court of Justice (the "ECJ") ruled that a national authority may access civil identity data linked to IP addresses in order to investigate online copyright infringement.
Retention of this data is permitted for the purpose of identifying a person suspected of having committed a criminal offence and can only take place where a country's legislation imposes retention arrangements that allow for categories of personal data to be separated from each other.
In other words, access is only permissible where safeguards (such as the above, together with measures preventing officials with access from disclosing the contents of the files consulted) are in place to prevent "precise conclusions [from being] drawn about the private life of the persons whose data" is being accessed. Where such safeguards are in place, the risk to individuals' rights is low and access to this personal data is permissible.
Upper Tribunal rules on Experian case
In April, the Upper Tribunal dismissed the ICO's appeal against the First-tier Tribunal's February 2023 decision in the Experian case.
The ICO had issued an enforcement notice to Experian in October 2020 on the basis that Experian had failed to notify its customers that it was processing their personal data for direct marketing purposes.
Experian appealed the ICO's decision on the basis that the relevant data subjects were aware of how their data was being used, as it was accessible via a hyperlink on Experian's consumer information portal.
The Upper Tribunal sided with Experian, finding that the company had sufficiently complied with data protection regulations for these data subjects.
Please see our article here for further information on this case.
EU Commission takes action on social media child protection
The EU Commission (the "Commission") has opened formal proceedings against Meta under the DSA. The Commission is concerned that the social media company may have breached the DSA's regulations on child protection.
The Commission's investigation is focused on:
- concerns over whether the design of Facebook and Instagram's interfaces may "exploit the weaknesses and inexperience of minors and cause addictive behaviour";
- the extent of Meta's compliance with the DSA's requirements in relation to the mitigation measures to prevent minors from accessing inappropriate content; and
- whether Meta has put in place appropriate and proportionate measures to "ensure a high level of privacy, safety and security" for children.
The Commission has also launched an investigation into TikTok over similar concerns.
ICO concludes its investigation into Snap
The ICO recently concluded its investigation into the "My AI" chatbot operated by Snap (formerly known as Snapchat). The ICO opened the investigation a year ago due to concerns that Snap had not adequately assessed the data protection risks associated with "My AI". It was particularly concerned by the data protection risks the chatbot posed to children.
This investigation highlights the importance of considering data protection from the outset when developing or using generative AI. The ICO's final decision in this case is expected to be published in the next few weeks.
Round-up of enforcement actions
Company | Authority | Fine | Comment |
Police Service of Northern Ireland (the "PSNI") | ICO | £750,000 | The PSNI was fined for failing to protect the personal information of its entire workforce. It had published a spreadsheet online containing a "hidden" tab with personal data, including the surnames and roles of all PSNI employees. |
Central YMCA | ICO | £7,500 | Fined for a data breach which exposed the personal data of 166 people. |
4Finance Spain Financial Services S.A.U. | Spanish DPA | €480,000 (reduced to €360,000) | 4Finance suffered a data breach in which customers' personal data was exposed. It was fined for failing to implement appropriate security measures that could have prevented the breach. |
Telefónica de España | Spanish DPA | €90,000 | The company was fined for failing to provide the Spanish DPA with the information they had requested in connection with a complaint. |
Sigma Srl | Italian DPA | €150,000 | Fined for using users' personal data without their knowledge. |
BdM Banca | Italian DPA | €10,000 | The bank was fined for failing to respond to a data subject access request on time. |
Private individual | ICO | £265 | The individual (a former Management Trainee at Enterprise Rent-a-Car UK Limited) illegally obtained customer personal data. Accessing this data fell outside the scope of his role, and there was no legitimate reason or business need for him to access it. |