Neural Network - April 2025

In this edition of the Neural Network, we look at key AI developments from March and April.

In regulatory and government updates, the Government rejects pressure for additional copyright protections for copyright holders against AI companies; the UK's Artificial Intelligence (Regulation) Bill has cleared its initial stage; the European Commission has revealed a pioneering AI Action Plan; and the ICO has announced its intention to introduce new regulatory measures to support the Government's growth agenda.

In AI enforcement and litigation news, the US courts are debating whether copying books to train AI is fair; and Grok, X's AI chatbot, is under investigation by the Irish Data Protection Commission over its use of personal data from publicly accessible posts by EU-based users.

In technology developments and market news, we delve into the rise of autonomous AI agents, software programs that are able to execute tasks based on their perceived environment.

More details on each of these developments are set out below.

Regulatory and Government updates

Enforcement and Civil Litigation

Technology developments and market news

Regulatory and Government updates

Government rejects pressure for additional copyright protections for copyright holders against AI companies

The Data (Use and Access) Bill (the "DUA Bill") is a proposed piece of legislation that aims to reform the way data (including personal data) is controlled, processed and used in the UK. It includes, among other things, amendments to the UK's data protection regime and amendments to the scope of powers vested in the Information Commissioner's Office (the "ICO").

The Government has now confirmed that the following proposed amendments to the DUA Bill will be rejected:

  • the inclusion of a private right of action by a copyright holder to seek damages for misuse of their copyrighted works;
  • the addition of a clarification that copyright must be observed by AI developers and operators of software that scrapes information from the web to use to train AI models; and
  • the empowerment of the ICO to enforce regulations on behalf of copyright holders.

These amendments were proposed by the House of Lords earlier this year but voted down by the House of Commons. The Government's consultation on proposals for a new data scraping regime (which was open for responses until 25 February) was discussed in a previous Neural Network edition and can be found here. Data minister Chris Bryant noted that “we believe it is not the right time to act as part of this bill, which was a data bill”, adding that “stakeholders do not want us to rush to legislate” and that the sheer number of responses to the Government's call for feedback indicates that more detailed policy may need to be developed to address these issues.

We have recently published a series of articles that provide an in-depth analysis of the various provisions of the DUA Bill, the most recent of which can be found here. The DUA Bill has now moved to Report Stage in the House of Commons - you can track the progress of the DUA Bill through Parliament here.

UK's Artificial Intelligence (Regulation) Bill clears its initial stage

The UK's Artificial Intelligence (Regulation) Bill (the "AI Bill"), re-introduced following the latest change of government, has passed its first reading in the House of Lords. The AI Bill aims to advance AI in the UK through legislation. As an overview, it defines AI as technology that approximates cognitive abilities such as interpreting data and making recommendations, and expressly includes generative AI. Key aspects of the AI Bill are outlined and considered below.

Firstly, central to the AI Bill is the proposed creation of an AI authority (the "Authority"), which will be tasked with ensuring that AI considerations are integrated and aligned across all relevant regulatory bodies. The Authority will have various functions, including monitoring regulators, current frameworks and legislation; responding to emerging AI trends; assessing risks across the economy arising from AI; and accrediting independent AI auditors (meaning that any business which develops, deploys or uses AI "must allow independent third parties" accredited by the Authority to audit its processes and systems). Based on the wording of the AI Bill, there is no specific requirement for businesses to organise regular audits, but if a regulator requested an audit, the business would be required to comply. The powers intended for the Authority aim to: (i) create a cohesive AI framework for the UK; and (ii) allow the Authority sufficient flexibility to exercise discretion in its decision-making, which is crucial as AI continues to evolve rapidly.

Next, the AI Bill codifies certain regulatory principles to guide the use of AI, including "safety, security, robustness" and "appropriate transparency and explainability", which businesses using AI must observe through testing and compliance with the law. According to the AI Bill, AI and its applications should be unbiased and comply with equalities legislation, and any restrictions imposed on the use of AI must be proportionate to its benefits, which is arguably in line with the Government's AI Action Plan set out in January this year.

Section 3 of the AI Bill provides that regulators will construct sandboxes to allow businesses to test innovative AI solutions in a controlled environment, giving firms, among other things, support in identifying appropriate consumer protection safeguards and the chance to test on a small scale with real customers. In essence, these sandboxes serve as a laboratory in which companies can collaborate and experiment under regulatory supervision.

Importantly, the AI Bill provides that the Secretary of State, after consulting the Authority and other appropriate persons, must make provision requiring any business that develops, deploys or uses AI to designate an "AI responsible officer" to ensure its use of AI is safe, ethical, unbiased and non-discriminatory. While this additional governance requirement may seem burdensome, many businesses are already obliged to appoint a Data Protection Officer pursuant to Articles 37 to 39 of the UK GDPR to manage compliance with data protection requirements – it is possible that many small and medium-sized organisations could expand this role to also cover the responsibilities of the AI responsible officer.

The AI Bill represents a significant step by the UK towards establishing a comprehensive framework for AI governance. By defining AI broadly and incorporating generative models, the Bill addresses the evolving nature of technology and its impact on society. You can track the AI Bill's progress and read it in full here.

The European Commission reveals pioneering AI Action Plan

Following the entry into force of the AI Act in August 2024, the European Commission (the "Commission") has set course for Europe's AI leadership with an ambitious AI Continent Action Plan (the "Action Plan"), which aims to transform Europe's strong traditional industries and its talent pool through AI innovation. The vision for the Action Plan was outlined at the recent AI Summit in Paris, reported in the most recent edition of Neural Network, which can be found here. The Action Plan aims to turn the European Union into an "AI Continent" by accelerating and intensifying efforts in five key areas:

  • building large-scale AI data and computing infrastructure;
  • ensuring access to high-quality data for AI innovators;
  • stimulating the development and adoption of AI algorithms in the EU's strategic sectors;
  • cultivating a strong AI talent base; and
  • facilitating compliance with the AI Act to decrease market fragmentation (an enforcement overview of the AI Act can be found here, and you can read the AI Act in full here).

Building large-scale AI data and computing infrastructure with a network of AI providers (which will develop models and applications) will aid the Commission in ultimately establishing "AI Gigafactories": facilities to make AI chips and to integrate computing power and data centres so that AI models can be trained and developed at an unprecedented scale. To stimulate private sector investment in infrastructure, the Commission will also propose a Cloud and AI Development Act with the goal of at least tripling the EU's data centre capacity in the next five to seven years, prioritising highly sustainable data centres.

The Commission will work on increasing access to large and high-quality data by, amongst other things, launching their "Apply AI Strategy" which focusses on sectors where European know-how could contribute to increasing productivity and competitiveness. The Commission has also set its focus on strengthening AI skills by developing training programmes on AI and generative AI in key sectors, preparing the next generation of AI specialists and supporting the upskilling and reskilling of workers. This is crucial for sustaining long-term technological growth and preparing the European workforce for future challenges.

Lastly, the Commission aims to ease the regulatory burden: facilitating compliance with the AI Act will increase the public's trust in the technology and provide investors and entrepreneurs with the legal certainty they need to scale up and deploy AI throughout Europe. The Commission will also launch the AI Act Service Desk to assist businesses in navigating the AI Act's regulatory requirements and to create a streamlined AI service offering.

The full Action Plan can be found here.

Intention for new regulatory measures announced by the ICO

On 17 March 2025, the ICO announced its plan to introduce new initiatives aimed at supporting the UK government's growth agenda. One of the ICO's main initiatives is the launch of a free data essentials training programme tailored specifically for small businesses. The goal of this programme is to help these businesses harness the power of personal data while enhancing customer trust.

From an AI perspective, the ICO intends to introduce a statutory code of practice for both private and public sector entities involved in the development or deployment of AI. Alongside this, simpler guidance will be provided to assist businesses in navigating the complexities of AI development and deployment. According to information published on the ICO's website, this simpler guidance "will enable organisations to unleash the opportunities of this technology while still safeguarding people's personal data." Enshrining this guidance in a statutory code of practice is intended to provide lasting certainty for AI-driven growth.

Enforcement and Civil Litigation

Debate over the fairness of copying books to train AI

Artificial intelligence company Anthropic (the "Company") has filed a motion to dismiss the copyright infringement lawsuit brought against it by US authors (the "plaintiffs") in California. The lawsuit was initially filed in August 2024 and was reported in a previous Neural Network update, found here.

The plaintiffs allege that the Company violated their copyrights by using their work to train its language model, named Claude. Claude can be used to generate text and code and to analyse visual inputs. However, in its motion to dismiss, the Company argues that using the authors' books to train Claude serves "a fundamentally different purpose from the books themselves," being "transformative in the extreme". The Company further argues that "the immense benefits that Claude provides to the public have not caused a single person to forgo buying one of the Plaintiffs' books" and that, given Claude's "transformative purpose", the amount of copying is not excessive.

The Company also emphasised that publishers did not hold the rights needed to negotiate licences, which the Company discovered when attempting to license books from publishers in 2021, when it first started creating research models.

In response to the plaintiffs' claim that they have lost income because Anthropic should have paid to license their work, the Company counters that the plaintiffs have not produced any evidence that they have been denied a market in which to sell their books. The Company goes on to claim that "there is no evidence that such a market will or even could develop, given the breadth and size of the necessary training corpus, comprising trillions of data points reflecting billions of works".

The next stage in the lawsuit will be oral arguments on Anthropic's motion to dismiss, scheduled for 15 May 2025. We previously considered Amazon's $8 billion investment into Anthropic in a recent edition of Neural Network which can be found here.

Grok, X's AI chatbot, under investigation by Irish regulator

Grok is a generative AI chatbot, based on a large language model of the same name, developed and launched by Elon Musk's xAI. The Irish Data Protection Commission ("IDPC") has opened an inquiry into Grok's use of personal data from publicly accessible posts by EU-based users, a subset of the data on which Grok was trained. The inquiry will focus on the lawfulness and transparency of X's data processing activities in relation to the use of that personal data and will assess the AI chatbot's adherence to various essential GDPR provisions.

This investigation is occurring at a time when there are rising geopolitical tensions about the regulation of AI, most recently apparent at the AI Summit in Paris, reported in a previous edition of Neural Network here, during which American Vice President JD Vance critiqued European over-regulation of AI, deeming it an obstacle to the opportunities AI presents.

The IDPC's official announcement of its investigation can be found here.

Technology developments and market news

The rise of autonomous AI agents

The release of Manus, the "world's first fully autonomous AI software" developed by the Wuhan-based startup Butterfly Effect, created ripples through the AI community globally. Manus has shifted the focus to AI's capacity for self-directed action as opposed to merely passive assistance.

Autonomous AI agents such as Manus are designed to act independently and deliver end-to-end results without human input. Manus assesses new information and is equipped with decision-making capabilities, allowing it to generate research papers, design marketing campaigns or even build websites from scratch. These autonomous AI agents are known as "agentic artificial intelligence" ("agentic AI").

The lack of human input in such models is simultaneously agentic AI's biggest strength and its inherent risk. One major limitation of agentic AI is its opacity, illustrated by the 'black box' problem: as AI systems become more complex and more efficient at solving problems without any human input, the decisions they make become increasingly difficult to explain.

This potential lack of explanation and methodology for the decisions made by an AI system raises various regulatory issues. From an AI standpoint, the AI Bill specifically mentions generative AI but not agentic AI, indicating that while the former can be regulated by the AI Bill, more specific wording may need to be considered as this technology matures and becomes more widespread.

From a data protection standpoint, Article 22 of the GDPR governs automated decision-making: organisations must inform individuals when AI is used to make decisions that significantly affect them, and individuals have the right to understand such decisions and to request human intervention if they disagree with them. Examples include recruitment and credit scoring. The possibility that agentic AI could become widespread in the near future means that regulation of it needs to be considered now, or at the very least in tandem with the technology's development.

As the European Commission’s High-Level Expert Group on AI noted, trustworthiness of AI is a "prerequisite for people and societies to develop, deploy and use AI systems," and without the transparency required for individuals to put their trust in AI, "their uptake might be hindered, preventing the realisation of the potentially vast social and economic benefits that they can bring". This further emphasises that, without the ability to explain agentic AI's decisions, more comprehensive regulations need to be implemented to safeguard both the use of AI and personal data.

Our recent AI publications

If you enjoyed these articles, you may be interested in our podcast series covering various aspects of the evolving world of AI and its implications for businesses and broader society. The full podcast series is available for listening here.

You might also be interested in our Data Protection update, the most recent of which can be found here.