Neural Network - February 2025

In this edition of the Neural Network, we look at key AI developments from January and February.

In regulatory and government updates, the European Commission has published guidance on prohibited AI practices under the EU AI Act and guidelines on what will constitute an "AI System" for the purposes of the Act, whilst also withdrawing the AI Liability Directive from further consideration; two new AI-related amendments have been included in the draft data bill currently making its way through the UK Parliament; and the UK government has indicated that it will legislate to criminalise AI tools designed to generate child sexual abuse material ("CSAM").

In AI enforcement and litigation news, DeepSeek is facing attention from numerous European regulators, with the Garante (the Italian Data Protection Authority) already having banned the processing of Italian users' personal data by DeepSeek; the ICO has indicated that it is considering whether to regulate AI models and services that generate summaries of content using personal data; and X and TikTok are each facing class actions in Berlin alleging violations of European AI and data-related legislation.

More details on each of these developments are set out below.

Regulatory and government updates

European Commission publishes guidance on prohibited AI practices

The European Commission has released a set of guidelines on AI practices and use cases specifically prohibited under the EU AI Act. The guidance, which aims at harmonising application of the Act across the EU, is non-binding and gives the Commission's interpretation of the relevant law, including legal explanations and a series of practical examples.

The guidance is focused on those AI systems falling within Article 5 of the AI Act, covering "AI systems posing unacceptable risks to fundamental rights and Union values". Placing on the market or putting into service AI systems that fall within this category is generally prohibited within the EU, with only narrow exceptions. Examples include AI systems used to carry out "social scoring", systems which use subliminal or deceptive techniques to harmfully distort individuals' behaviour, and systems which carry out facial recognition using databases generated through untargeted scraping of facial images from the web or CCTV footage.

The prohibitions under Article 5 are now in force, having become effective on 2 February 2025. In the case of each of the eight prohibited categories, the guidance sets out the Commission's view of the legal background which permits the AI Act to prohibit that category, the conditions that apply to determining whether a particular AI system is caught under that prohibition, the interactions with other EU law, and examples of AI systems which would and would not be prohibited within each category.

Commission publishes guidelines on the definition of an "AI System" under the AI Act

On 6 February, the European Commission published guidelines on the definition of an "AI system" under the EU AI Act. With the guidelines, which are not legally binding, the Commission aims to "assist providers and other relevant persons in determining whether a software system constitutes an AI system to facilitate the effective application of the rules".

The guidelines unpack the definition of "AI system" in the Act by providing guidance on the meaning of each element of that definition.

The guidelines clarify that AI systems do not need to display all of these elements consistently in order to be caught by the AI Act, and that the extent to which some of the elements are present can determine whether the system requires more human oversight.

Proposals for an "AI Liability Directive" abandoned by European Commission

The European Commission has announced that it will not seek to press ahead with the proposed AI Liability Directive, and will withdraw the proposed legislation from consideration by EU institutions.

The AI Liability Directive was originally proposed in 2022, in conjunction with the AI Act. If enacted, it would have imposed new civil liabilities on developers of AI systems for harms caused by those systems, and standardised the approach to civil liability for AI harms across EU member states. The proposals met a hostile reception from major AI developers, and a mixed reaction from member states themselves.

The decision to withdraw the Directive has, however, also met with contrasting responses. The rapporteur for the Directive, Axel Voss MEP, criticised the Commission's decision as one likely to lead to "legal uncertainty, corporate power imbalances and a Wild West approach that only benefits Big Tech". Conversely, tech industry association CCIA Europe, whose membership includes Google, Apple, Amazon, Meta and others, welcomed the withdrawal, saying it reflected a "growing recognition that the EU can only remain competitive by ensuring its digital and tech framework doesn’t become an unworkable patchwork".

Upcoming UK draft law will criminalise AI designed to create child sexual abuse material

The UK Home Office has announced new measures, to be included in the upcoming Crime and Policing Bill, that will aim to combat the threat of child sexual abuse images being generated using AI.

The UK will become the first country in the world to make it illegal to possess, create or distribute AI tools that have been designed to create child sexual abuse material ("CSAM"), with the offence punishable by up to five years in prison.

It will also be made illegal to possess a so-called "AI paedophile manual" (materials which teach people how to use AI to generate CSAM); a specific offence will be introduced for those who run websites which allow paedophiles to exchange illegal content and grooming advice; and the Border Force will be given powers to prevent the distribution of CSAM generated abroad, by allowing officers to compel individuals whom they believe to pose a risk to unlock their digital devices for inspection.

House of Lords votes to add AI copyright amendment to new UK data bill, against government's wishes

In a Parliamentary defeat for the Government, the Data (Use and Access) Bill ("DUA Bill") – currently being debated in Parliament – has been amended to include a new requirement that "web crawling" software, used for data scraping, must provide information as to the operator of the software and the purpose for which it is being operated. This is intended to ensure that content creators and IP rightsholders can assess whether or not their content has been scraped, for what purposes, and thereby whether their rights may have been infringed.
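Purely by way of illustration, one way a crawler operator might surface this information is through a descriptive User-Agent header sent with each request. The sketch below is our own assumption about how that could look; the amendment does not prescribe any particular bot name, header format or contact mechanism.

    import requests

    # Hypothetical self-identifying web crawler. The bot name, operator,
    # stated purpose and contact address below are illustrative assumptions,
    # not a format prescribed by the DUA Bill amendment.
    HEADERS = {
        "User-Agent": (
            "ExampleCrawler/1.0 "
            "(operator: Example Ltd; purpose: AI training data collection; "
            "contact: crawler@example.com)"
        )
    }

    response = requests.get("https://example.com/article", headers=HEADERS, timeout=10)
    print(response.status_code)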

The government is currently consulting on proposals for a new regime to govern data scraping and the use of copyrighted works for AI training purposes, with the consultation open for responses until 25 February; we covered the proposals in detail in a previous edition of Neural Network, which you can read here. The proposals under consultation would see the limited "data scraping" copyright exemption, currently only available in non-commercial contexts, extended to also cover commercial use cases, subject to a facility and associated infrastructure allowing rightsholders to "opt out" of having their content used for AI training.

The Government opposed the amendment in the House of Lords but suffered a defeat when it was put to a vote, meaning that the amendment has been included in the version of the Bill that has now been sent to the House of Commons for consideration. It is not yet known whether the amendment will survive the House of Commons' legislative stages and be included in the Bill when it eventually becomes law.

UK data bill amended to criminalise explicit AI "deepfaking"

In a further change instigated in the House of Lords before passing to the House of Commons, the DUA Bill has also been amended to create a new offence of creating "intimate" AI deepfakes of an adult without their consent.

The new offence, which is distinct from the planned legislation on AI-generated CSAM discussed above, would also encompass solicitation of such content.

As this was a government-backed amendment to the DUA Bill, it appears likely that it will survive through to the Bill's passage into law.

Enforcement and civil litigation

Several European data protection regulators investigate DeepSeek; Garante bans DeepSeek from processing Italian users' personal data

Hot on the heels of DeepSeek's explosive emergence onto the AI scene in January, the AI model's developers have experienced an equally rapid regulatory intervention. The Italian data protection authority, the Garante, has banned the model's operators from processing Italian users' personal data and has opened a formal investigation.

The Garante posed a series of questions to DeepSeek's developers on 28 January, shortly following its launch. The developers' responses to these questions, received by the Garante on 30 January, were – according to the regulator – "entirely unsatisfactory".

The developers had claimed that they do not operate in Italy and are not subject to European legislation – an understanding not shared by the Garante or, apparently, by other European data protection authorities. The Dutch data protection authority has also commenced a formal investigation into DeepSeek owing to "serious concerns" regarding its handling and use of EU users' personal data. The Belgian authority is reported to be considering opening an investigation following receipt of a complaint against DeepSeek, and the Irish and Croatian authorities have written to the developers to request information. The President of the Polish Personal Data Protection Office, meanwhile, has urged users to exercise "extreme caution" when using DeepSeek applications.

DeepSeek made global headlines in January when it launched its latest AI model, "DeepSeek R1", which (the developer claimed) was on par with models developed by companies such as OpenAI in terms of capability, but which had been developed at a fraction of the cost and with far less of a requirement for dedicated, specialised infrastructure, such as AI-specific computer chips. Stocks in chip manufacturing giant Nvidia, along with other US companies involved in AI development, fell precipitously following DeepSeek's launch of its new model and the subsequent extremely rapid rate at which new users were downloading the relevant app – albeit that these stocks have since recovered somewhat.

OpenAI has since claimed to have received "indications" that DeepSeek had "inappropriately distilled" its models – saying that it would take "aggressive, proactive countermeasures to protect [its] technology". Model "distillation" refers to a practice by which a "student" AI model is trained on responses generated by prompting (at scale) a larger, "teacher" model, rather than being trained directly on a text database.
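In outline, and purely as an illustrative sketch, the process can be expressed as follows; the teacher_respond and fine_tune_student functions are hypothetical placeholders standing in for a real model API and training loop, not any vendor's actual interface.

    from typing import Callable

    def build_distillation_dataset(
        teacher_respond: Callable[[str], str],  # placeholder: prompts the "teacher" model
        prompts: list[str],
    ) -> list[tuple[str, str]]:
        """Collect (prompt, teacher response) pairs by prompting the teacher at scale."""
        return [(prompt, teacher_respond(prompt)) for prompt in prompts]

    def distil(
        teacher_respond: Callable[[str], str],
        fine_tune_student: Callable[[str, str], None],  # placeholder: one student training step
        prompts: list[str],
    ) -> None:
        # The "student" is trained to imitate the teacher's outputs, rather
        # than being trained directly on the teacher's underlying text corpus.
        for prompt, target in build_distillation_dataset(teacher_respond, prompts):
            fine_tune_student(prompt, target)

    # Toy usage with stand-in functions:
    distil(lambda p: f"teacher answer to: {p}", lambda p, t: None, ["example prompt"])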

ICO indicates it may regulate, and enforce against, AI developers such as Apple and Google over inaccurate AI-generated summaries based on personal data

Following incidents in which AI text summary products and services developed by major tech and AI companies, including Apple and Google, have produced erroneous and "hallucinated" output, the ICO has now indicated that it is considering whether to put in place new rules where such text summaries are generated using personal data.

In January, Apple suspended a feature of its "Apple Intelligence" offering which used AI to generate summaries of news articles for iPhone users, after the feature was found to have generated entirely fictitious summaries of news content from providers such as the BBC and the New York Times; we reported on this in detail here.

An ICO spokesperson has now said that "where AI-generated summaries contain personal data, organisations should ensure accuracy of that information." Speaking at a conference in January, the Information Commissioner, John Edwards, also said that in respect of AI models and AI-enabled services such as this, "when the fuel of these models is personal data, data of your customers, you can't be taking those chances. And we will regulate."

X, TikTok face class actions in Berlin alleging violations of the AI Act and GDPR, among other legislation

Four class actions have been filed in the highest regional court of the state of Berlin against social media platforms X and TikTok.

These class actions have been brought by the Dutch Foundation for Market Information Research under three pieces of EU legislation – the GDPR, the Digital Services Act and the AI Act – in what its legal representatives have called a bid to "halt the illegal practices and to hold TikTok and X financially accountable".

TikTok has been accused of using a system which uses AI recommendations, based on sensitive personal data, to maximise engagement from young users. It is alleged that this system is exploitative and falls foul of the AI Act's ban on manipulative AI. Compensation of €500 to €2,000 per user is sought, across approximately 20 million TikTok users in Germany.

X, meanwhile, is alleged to have failed to report several data breaches (as well as failing to inform the affected users or to provide any sort of compensation). Additionally, the platform has been accused of processing sensitive user data to power its recommendation algorithms without any legal basis to do so. In this case, compensation of €750 to €1,000 per user is sought, across approximately 11 million X users in Germany.

Our recent AI publications

If you enjoyed these articles, you may also be interested in our podcast series covering various aspects of the evolving world of AI and its implications for businesses and broader society. New entries in the series this month include an AI basics explainer and a podcast considering AI and the environment, with particular focus on the future impact of the EU's AI Act. The full podcast series is available for listening here.