Neural Network - December 2024
In this edition of the Neural Network, we look at key AI developments from November and December.
In regulatory and government updates, the European Commission's AI Office has consulted ahead of producing guidance on provisions in the EU's AI Act, and has published the first draft of a General-Purpose AI Code of Practice prepared by a group of independent experts; the EU's Cyber Resilience Act has been published in the Official Journal; the UK's new data reform Bill has come in for criticism from Parliamentarians for being light on AI-specific provisions; and a formal UK Government consultation on the topic of using copyrighted works for AI training will reportedly begin "shortly".
In AI enforcement and litigation news, OpenAI is facing new claims in both Canada and India alleging that it made unauthorised use of materials on news publishers' websites in training its AI models, breached website terms of use, and gained unauthorised access to materials that were behind subscription paywalls or other protections; and the UK Supreme Court has agreed to hear an appeal concerning patentability of AI inventions.
In technology developments and market news, Amazon has invested a further $4 billion in AI research company Anthropic; and new concerns have been raised as to the sustainability of AI development in light of the power demands of AI-training data centres.
More details on each of these developments are set out below.
Regulatory and government updates
- EU AI Office consults on prohibitions and definitions under the AI Act
- Commission publishes General-Purpose AI Code of Practice first draft
- EU Cyber Resilience Act published in Official Journal
- Lords tentatively welcome new data bill but express concerns regarding lack of AI provisions
- European Data Protection Supervisor publishes report on notable AI development trends, examining potential opportunities and risks
- Science Minister confirms UK will "shortly" begin consulting on AI and copyrighted works
- ICO publishes consultation responses on application of data protection law to AI models
- Luxembourg data protection regulator handed role as EU AI Act regulator
- Bank of England and FCA release results of survey on AI in the financial sector
Enforcement and civil litigation
- UK Supreme Court agrees to hear appeal on the patentability of AI inventions
- OpenAI faces litigation in multiple jurisdictions on use of copyrighted content to train AI models
- UK competition regulator takes no action on Google-Anthropic linkup
Technology developments and market news
- AI development set to drive more than twofold increase in data centre power demand by the end of the decade
- Amazon doubles its investment into AI research company Anthropic
Regulatory and government updates
EU AI Office consults on prohibitions and definitions under the AI Act
The AI Office of the European Commission has carried out a consultation to inform future guidance it will publish on the scope of the definition of an "AI System" under the EU's AI Act, as well as on the extent and application of the Act's prohibitions on "AI practices that pose unacceptable risks", such as "harmful subliminal, manipulative and deceptive techniques".
Consultation respondents were asked various questions including:
- which elements of the Act's definition of an "AI System" require further clarification;
- which systems may fall outside of the "AI System" definition; and
- whether respondents are aware of any AI systems currently in existence which they consider would meet all the criteria of any of the prohibited AI practices.
The guidelines that the Commission will produce, drawing on the results of this consultation, will be published "in early 2025".
Commission publishes General-Purpose AI Code of Practice first draft
On 14 November, the European Commission announced that the first draft of the General-Purpose AI Code of Practice had been published by the European AI Office. This concluded the first of four rounds of drafting, with the final round expected to take place in April 2025. Once finalised, the document will be instrumental in guiding the future development and deployment of safe and reliable general-purpose AI models.
The draft was prepared by a group of independent experts, based on contributions from providers of general-purpose AI models and taking into account international approaches.
According to the Commission, the first draft of the Code of Practice is intended to act as a "foundation for further detailing and refinement", and feedback is encouraged to help shape the final version of the Code. The Chairs and Vice Chairs of the thematic groups have also set out guiding principles and objectives for the Code, to provide stakeholders with an understanding of the potential form and content that could make up the final version.
The draft has already been discussed in dedicated working group meetings by around 1,000 stakeholders, who were joined by EU Member State representatives and European and international observers. Their feedback may be used to adjust measures in the first draft, as well as to add more detail to the Code.
The European Commission has also published an FAQ page dedicated to general-purpose AI models in the AI Act.
EU Cyber Resilience Act published in Official Journal
The EU Cyber Resilience Act ("CRA") has been published in the Official Journal of the European Union, kicking off a three-year phased implementation period, with the first obligations under the CRA entering into force in September 2026. We covered the CRA in our recent Data Protection Update, which you can read here.
The CRA is relevant to AI developers as it applies to products or software "with a digital component", which naturally includes AI. It will introduce mandatory requirements for manufacturers, developers and sellers of products and services of this sort, imposing obligations that must be met at every stage of the product lifecycle to ensure that they adhere to a minimum set of cybersecurity standards and requirements.
There are specific provisions in the CRA governing its interaction with the EU's AI Act. Where a high-risk AI system (as under the AI Act classifications) meets the relevant cybersecurity requirements under the CRA, that AI system will also be deemed to have complied with the AI Act's own cybersecurity requirements.
Lords tentatively welcome new data bill but express concerns regarding lack of AI provisions
On 19 November, the newly proposed Data (Use and Access) Bill was debated in the House of Lords during its second reading.
On the whole, the Bill was received positively by the Lords, many of whom compared it favourably with the Data Protection and Digital Information Bill which had been proposed by the previous government but ultimately fell away due to the July 2024 general election.
However, concerns were raised over the relative lack of AI-specific provisions in the Bill. Amongst those criticising the Bill was Baroness Kidron, who felt that it failed to tackle current and anticipated uses of data by AI, and did not address concerns in areas such as data scraping. Speaking in the Lords, the Baroness questioned "why the Government did not wait a little longer to bring forward a bill that made the UK AI ready, understood data as critical infrastructure and valued the UK's sovereign data assets".
Also critical of the Bill was Lord Stevenson, who asked "why are the Government not doing much more to stop what seems clearly to be theft of intellectual property on a mass scale, and if not in this bill, what are their plans?". This is a particular concern for the UK's creative industries, due to the unlicensed use of works in AI training.
Responding to these concerns for the Government was Baroness Jones, minister at the Department for Science, Innovation and Technology, who first introduced the Bill to the House of Lords. She agreed that data scraping was not explicitly addressed in the Bill but stated that "any such activity involving personal data would require compliance with the data protection framework, especially that the use of the data must be fair, lawful and transparent".
Regarding AI-related concerns amongst the creative industries, Baroness Jones explained that the Government was working to develop an approach which would meet the needs of the UK and that more details would be announced in "due course".
The Bill has now moved on to committee stage, in which peers will consider proposed amendments in detail. This phase commenced on 3 December.
European Data Protection Supervisor publishes report on notable AI development trends, examining potential opportunities and risks
The European Data Protection Supervisor ("EDPS"), Wojciech Wiewiórowski, has published a report considering the potential risks posed by AI technologies to the rights and freedoms of individual data subjects.
The EDPS bears responsibility for supervising and enforcing data protection compliance within the EU's own institutions, agencies and other bodies. With the entry into force of the EU AI Act, it has expanded its remit and is also the competent authority for AI systems deployed by these bodies.
The report identifies six key trends in AI development and considers the extent to which they may produce positive outcomes for individuals and, conversely, how they might pose risks to data subjects' rights and freedoms. The six trends considered are:
- "Retrieval-augmented generation" or "RAG" – describing AI systems which derive their inputs from multiple knowledge sources and which synthesise these to provide more relevant output to the user;
- "On-device AI" – wherein the data processing associated with the AI model occurs at the "edge of the network", directly on user-facing devices, rather than centrally within the network;
- "Machine unlearning" – mechanisms permitting AI systems to "forget" specific data or cause them to not influence the model output, whether on user request or on the model's own initiative;
- "Multimodal AI" – AI models that rely upon, and produce outputs in the form of, multiple different data types, such as text, images, and audio;
- "Scalable oversight" – the use of AI systems to monitor other AI systems during their early scale-up; and
- "Neuro-symbolic AI" – the combination of neural networks with symbolic reasoning, meaning the capacity of the model to engage with, use, and express concepts expressed in human-readable language.
The full report is available to read here.
Science Minister confirms UK will "shortly" begin consulting on AI and copyrighted works
Science Minister Lord Vallance has stated in Parliament that the UK Government will "shortly" begin a formal consultation on the issue of use of copyrighted materials to train AI models.
The issue has been a long-standing one, with a previous working group convened by the UK's Intellectual Property Office unable to achieve its goal of developing a voluntary code of practice for both AI model developers and IP-rich industries.
Following the failure of the working group to resolve the problem on a voluntary basis, the UK Government indicated that it would intervene directly to resolve the impasse. We reported on this in a previous edition of the Neural Network, which you can read here.
ICO publishes consultation responses on application of data protection law to AI models
Following a series of consultations conducted between January and September 2024 on generative AI and data protection, the ICO has now published its responses to the full consultation series.
The consultations focused on areas such as the allocation of accountability for data protection and compliance along the supply chain for generative AI and web-scraping for the purpose of training AI models. We covered the consultations in more detail earlier this year, which you can read here.
The ICO has now modified some of its positions in response to the consultation. For example, it has updated its stance on the "legitimate interests" data processing basis in the context of web scraping to collect AI training data. The ICO now recognises that other data collection methods besides web scraping are available and viable for AI training, and has highlighted the need for organisations which do conduct web scraping to significantly improve their transparency measures, so as to avoid the potential hazards involved in "invisible processing".
In addition to these consultations, the ICO has also engaged with companies in the UK that process personal data for AI purposes, such as Microsoft and Meta. However, some data privacy advocates do not believe that this engagement went far enough. For example, in early November the data protection campaign group Open Rights Group criticised the ICO for taking an "overly cautious approach to enforcement".
Luxembourg data protection regulator handed role as EU AI Act regulator
The Luxembourg data protection authority, the Commission Nationale pour la Protection des Données or "CNPD", has announced that it has been designated as Luxembourg's national authority for the purposes of the EU AI Act.
It will also take on other roles in connection with the Act, including acting as the "single point of contact", the market surveillance authority for AI (other than for systems falling within the scope of specific sectoral authorities under the Act), and the national coordinator for competent authorities.
The full announcement (in French) can be read here.
Bank of England and FCA release results of survey on AI in the financial sector
On 21 November, the Bank of England and the Financial Conduct Authority ("FCA") published the results of their third survey of AI and machine learning in UK financial services. The aim of the survey is to help the Bank of England and the FCA build on their existing work on understanding AI in financial services.
The survey had 118 respondents across six different financial sectors. From the survey, the Bank of England has shared results pertaining to the current and future uptake of AI in the financial sphere, the perceived benefits and risks of using AI, and the current level of understanding of the AI technologies in use.
We will be reporting in more detail on the findings from this survey in a separate article which will be published soon on our data protection hub.
Enforcement and civil litigation
UK Supreme Court agrees to hear appeal on the patentability of AI inventions
The UK Supreme Court has granted Emotional Perception AI permission to appeal the decision of the UK Court of Appeal, handed down in July this year. Our report on the Court of Appeal decision in the Emotional Perception AI case is available here. In summary, the Court of Appeal ruled that Emotional Perception AI's patent application for a neural network-driven music recommendation tool related to a computer program "as such" and was therefore excluded from patent protection.
The decision reversed the High Court's earlier ruling which, in November 2023, found that the invention described in Emotional Perception AI's patent application did not engage the statutory exclusion from patentability relating to a program for a computer, prompting the UK IPO to issue revised guidelines for the examination of patent applications concerning neural networks.
The UK Supreme Court has now agreed to take the case up for review and is expected to hear the appeal in the middle of next year. It will be an important opportunity for the UK Supreme Court to consider the law on the patentability of computer-implemented inventions in the age of AI, and the outcome of the case will be much anticipated by those looking to obtain patent protection in that space.
OpenAI faces litigation in multiple jurisdictions on use of copyrighted content to train AI models
ChatGPT developer OpenAI is facing claims in multiple jurisdictions alleging that it "scraped" and used copyrighted content without permission to train its AI models.
In Canada, various news and media companies, including the Toronto Star, the Canadian Press and Postmedia, have filed a claim seeking damages and an injunction. The claimants allege that OpenAI scraped large volumes of news content from their websites, in violation of the websites' terms of use and in breach of copyright, and subsequently used the scraped content for AI model training. The claimants further allege that, in scraping the content, OpenAI circumvented various content protections such as subscription paywalls, and that it was fully aware of the ways in which its data-scraping exercises breached these protections and violated terms of service.
Indian news agency ANI is alleging similar facts in a suit it has brought against OpenAI in the Delhi High Court – ANI similarly alleges that OpenAI knowingly breached website terms of use and circumvented other protections to scrape and use its content in training AI models. The claim also alleges that ChatGPT has erroneously attributed to ANI news stories that the tool itself generated.
The Canadian and Indian cases are the latest in an array of similar litigation that OpenAI is facing in many different jurisdictions, including the US, where the New York Times and a group of eight other newspapers have separately filed claims alleging similar copyright and terms of use infringements.
UK competition regulator takes no action on Google-Anthropic linkup
The UK's Competition and Markets Authority ("CMA") has decided against taking action or carrying out any further investigation into Google parent company Alphabet's partnership with AI research company Anthropic, finding that there is no "relevant merger situation".
The partnership between Google and Anthropic involved Google providing computing capacity to Anthropic, as well as distributing Anthropic's "Claude" family of AI foundation models on "Vertex AI", Google's platform for foundation model distribution.
As part of the tie-up, Google acquired several tranches of non-voting shares in Anthropic and subscribed for convertible loan notes issued by Anthropic, convertible in some circumstances into further non-voting Anthropic shares. The CMA found that these arrangements did not pass the threshold of creating a "relevant merger situation", as Google had not acquired "material influence" over Anthropic. Additionally, the turnover which Anthropic generates in the UK did not exceed the "UK turnover test" threshold governing when the CMA may investigate.
Technology developments and market news
AI development set to drive more than twofold increase in data centre power demand by the end of the decade
Concerns have been raised over the sustainability of AI applications, given the extent to which AI development has contributed to a steep increase in demand for data centres. Goldman Sachs has published research estimating that the demand for power from data centres is set to increase by 160% from present levels by 2030. The strain of powering AI has been recognised as a cause of this projected uptick in demand.
This has led to key players, such as Meta, Google and Microsoft, exploring options such as nuclear power as a solution to provide carbon-free power – with varying degrees of success to date, as we reported on in last month's Neural Network.
For example, Microsoft has signed a deal to purchase power from the Three Mile Island nuclear plant in the US, which will be recommissioned in order to power its data centres. Meanwhile, Google has partnered with a company developing, and seeking to bring to market, small modular nuclear reactors to feed the data centres that underpin its AI technologies.
Questions have also been raised over the energy efficiency of the hardware on which AI will run in the future. For example, it has been noted that the graphics processing unit (GPU) chips which form part of AI infrastructure were not designed to be energy efficient, as their use was not expected to become so widespread.
These concerns were reflected in the report published by the International Energy Agency in July, which found that future energy consumption for data centres and AI is "highly uncertain", given key uncertainties over areas such as demand and improvements in efficiency.
Amazon doubles its investment into AI research company Anthropic
Amazon has invested a further $4 billion into Anthropic, a US-based artificial intelligence startup structured as a public benefit corporation. This brings Amazon's total investment in the company to $8 billion.
Alongside announcing the additional investment, Anthropic gave further details on how it is expanding its collaboration with Amazon Web Services ("AWS"). Anthropic is now working with Annapurna Labs at AWS to develop and optimise the next generations of "Trainium" chips – AI-optimised microchips developed by AWS specifically for AI training.
Claude, a family of large language models developed by Anthropic, has become the "core infrastructure for tens of thousands of companies seeking AI solutions at scale" through Amazon's "Bedrock" service, which makes multiple AI foundation models available to AI developers. Companies using Claude models through Amazon Bedrock include pharmaceutical giant Pfizer and software company Intuit.
Additionally, Anthropic is working with AWS to create the "technological foundation" for the next generation of AI research and development. They are in the process of building a platform that "gives organizations of all sizes access to the forefront of AI technology".
Our recent AI publications
If you enjoyed these articles, you may also be interested in our article looking in overview at enforcement of the EU AI Act, which you can read here.
You can also read our summary of the Emotional Perception AI proceedings in the Court of Appeal (now heading to the Supreme Court) here.
We have also produced a podcast series covering various aspects of the evolving world of AI and its implications for businesses and broader society. New entries in the series this month cover topics including "AI value and supply chain", "Authorised representatives under the EU AI Act", and "Contracting for AI systems and for services embedded with AI". The full podcast series is available for listening here.