The Neural Network – November 2024

In this edition of the Neural Network, we look at key AI developments from October and November.

In regulatory and government updates, the UK Government has announced the creation of a new, red-tape-busting Regulatory Innovation Office, opened a consultation on a new AI governance self-assessment tool for organisations, signalled plans to consult imminently on new AI safety legislation to be introduced to Parliament in the next year, and opened the UK's new National Quantum Computing Centre.

In technology developments and market news, Meta has suffered a setback in its efforts to build an AI data centre powered by nuclear energy; and Microsoft and Nvidia have announced the launch of an "Accelerator" programme for UK AI startups.

More details on these developments are set out below.

Regulatory and government updates

UK plans to legislate on AI risk in the coming year, will launch consultation "shortly", say ministers

Ministers have indicated that the UK Government intends to pass legislation to guard against AI risks within the next 12 months, and that a consultation will be launched soon.

Responding to a written Parliamentary question on 14 October, Feryal Clark MP, the Parliamentary Under-Secretary of State for AI and Digital Government, stated that a "full public consultation" will begin "shortly", taking in a broad range of views from industry and academic experts, civil society and the public at large. This consultation will inform the proposals that the Government will then put to Parliament.

The Secretary of State for Science, Innovation and Technology, Peter Kyle MP, speaking at the Financial Times' Future of AI summit on 6 November, stated that the relevant legislation will be forthcoming in the next year. Discussing the voluntary agreements on AI testing currently in place in the UK with major developers of AI, the Secretary of State defended their impact to date, arguing that they have largely been "working", but said that the new legislation would put them on a statutory and legally binding footing. He also signalled increased investment in the infrastructure required for the UK AI development sector's future growth.

UK creates new Regulatory Innovation Office, with mandate to tackle regulatory barriers to innovative technologies

The UK Government has announced the creation of a new Regulatory Innovation Office ("RIO"), tasked with tackling regulatory "red tape" that stands in the way of the speedy deployment of innovative technologies – including AI – in ways that benefit the public.

The new office has been given the aim of reducing the regulatory burden on businesses seeking to bring new, technologically innovative products and services to the market. The Government's announcement particularly highlights AI deployment in healthcare – in the NHS as well as in private-sector life sciences and healthcare companies – as being a key area on which the new RIO will focus at the outset.

Other initial priority areas for the new office will be biotechnology and synthetic biology, the UK's space industry and connected and autonomous technology such as drones.

The RIO will support existing regulators, assisting them in updating regulation as well as streamlining cross-regulator work. It will report to the Government on what it identifies as key barriers to innovation created by regulation and will set overall strategic priorities for existing regulators which correspond to the Government's overall ambition to improve the breadth, depth and pace of technology innovation in the UK.

DSIT consults on AI Management Essentials tool for organisational self-assessment

The UK Department for Science, Innovation and Technology ("DSIT") has launched a consultation on a new AI Management Essentials ("AIME") tool.

The AIME tool is intended as a self-assessment tool for businesses and organisations to employ in connection with their development and use of AI. It is designed to help organisations assess the robustness of their internal processes and safeguards for responsible AI development, rather than to evaluate AI products or services directly.

The new tool is "sector agnostic" and available for use by any organisation that develops, provides, or otherwise uses AI systems or services that incorporate AI systems. It appears to be most suitable for SMEs. DSIT has developed the AIME tool in an effort to provide a standardised and accessible self-assessment tool, responding to the proliferation of AI standards and frameworks which, DSIT's research and industry engagement have found, organisations have generally struggled to navigate.

The tool is ultimately intended to comprise three components – a self-assessment questionnaire; "rating" scores generated in response to organisations' answers to this questionnaire; and tailored guidance for improvement, "action points", and recommendations based on the self-assessment questionnaire responses. Only the questionnaire is presently being consulted on, with DSIT intending to develop the scores and recommendations aspects in light of consultation feedback.

The consultation is open for responses until 29 January 2025.

UK Government will consult widely in effort to resolve AI and copyright impasse

In response to a parliamentary question, the Minister of State for Data Protection and Telecoms, Chris Bryant MP, has confirmed that publishers, rights holders and AI developers will all be consulted on the UK Government's proposals to resolve the ongoing impasse over AI and copyright.

Creative industries are concerned that the Government's proposals to resolve this dispute could lead to the return of a text and data mining ("TDM") exception. This idea has been heavily criticised, with the chair of Parliament's Culture, Media and Sport Committee warning the Government against "resurrecting this flawed notion of a TDM exception".

Bryant has said that although the issue is "complex and challenging", resolving it is a priority for the UK Government. During a roundtable held in September with the Department for Culture, Media and Sport and the Department for Science, Innovation and Technology, Bryant confirmed that the two departments "will continue to work closely with a range of stakeholders" and that next steps would be laid out "soon".

The Prime Minister, Sir Keir Starmer, has recently stated that it is a "basic principle" that UK publishers "should have control over and seek payment for their work, including when thinking about the role of AI".

UK National Quantum Computing Centre opens

A new National Quantum Computing Centre has opened in the UK.

Based at the Harwell Science and Innovation Campus in Oxfordshire, the new Centre houses several quantum computers and is intended to be at the cutting edge of the technology's development. Quantum computing, and the computing power it could ultimately deliver, is considered a key future enabler of AI development, among many other prospective use cases.

In contrast to many other state-sponsored quantum computing initiatives internationally, the quantum computing platforms at the new UK centre will not be confined solely to governmental use cases. The opportunity will be open to "anyone with a valid use case" to make use of the centre's capabilities.

The new centre forms a key plank of the Government's overall ten-year quantum computing strategy. Industry outreach is central to its mandate, with "crash courses" offered to those in industry and a user and industry engagement and outreach programme known as "SparQ" intended to explore potential practical applications for quantum computing in sectors such as energy and healthcare.

UK Information Commissioner reminds financial services organisations of data protection obligations when rolling out AI

The Information Commissioner, John Edwards, has signalled that the ICO is ready to strengthen its enforcement regarding AI.

Speaking at the Data, AI and the Future of Financial Services Summit late last month, Edwards highlighted that "we have moved from a world…in which artificial intelligence was a back office administrative tool to one that is going to transform every industry".

In light of this, Edwards has said that UK banks need to "pause" and consider data protection issues when rolling out AI-enabled services such as chatbots or transaction monitoring tools. He added that "data protection law is not an obstacle, but it does introduce a necessary bit of friction into that rush to market, and we will enforce the rules to ensure that you have undertaken an adequate … impact assessment, risk assessment, and mitigated those risks before using technologies in a way that ingest or process personal data".

The Commissioner also dismissed claims that there is "a regulatory Wild West" and stated that the General Data Protection Regulation "is technology neutral [and] … does stretch and apply to new technologies".

Speaking prior to the introduction of the Government's new Data (Use and Access) Bill into Parliament, Edwards also discussed the potential contents of any new Bill. We covered the new Bill in more detail in our most recent Data Protection Update, which you can read here.

ICO makes recommendations for use of AI in recruitment and the job market, signals increased scrutiny

On 6 November, the ICO published a series of recommendations to AI developers and providers to help ensure improved protection of job seekers' information rights.

The ICO has found that AI is being increasingly used in the recruitment space to help source potential candidates, summarise CVs and score candidates. To ensure that these tools do not negatively impact job seekers, the ICO conducted an audit of AI providers and developers, following which it produced nearly 300 recommendations to better protect job seekers' data.

The audit found that some AI tools were not processing personal information fairly, and that others were inferring protected characteristics based on a candidate's name. Additionally, the audit found that some AI tools currently used in the recruitment space collect more personal data than is necessary and retain it for indefinite periods without the candidate's knowledge.

Speaking about the audit, the ICO Director of Assurance, Ian Hulme, said:

"Our report signals our expectations for the use of AI in recruitment, and we're calling on other developers and providers to also action our recommendations as a priority. That’s so they can innovate responsibly while building trust in their tools from both recruiters and jobseekers".

Following the audit, the ICO stated that it intends to work with organisations using these AI tools to "build on its understanding of the privacy risks and potential harms of using AI to aid recruitment". The ICO will also deliver a webinar on Wednesday 22 January 2025 for AI developers and recruiters, aimed at helping them learn more about the audit's findings and how the ICO's recommendations can be applied.

US Consumer Financial Protection Bureau urges a "human in the loop" approach when deploying AI in the workplace

The Consumer Financial Protection Bureau ("CFPB") has become the latest US federal agency to remind companies that humans need to be kept "in the loop" when they deploy new AI tools to automate tasks such as hiring or monitoring worker productivity.

This comes as state and local lawmakers continue to pass laws governing the use of AI in the workplace. A common theme across the new regulations so far is a requirement for human involvement when such tools are deployed.

Studies have shown that HR departments have been quick to take up AI, with one in four US organisations currently using AI to support HR-related activities. However, this has led to various risks, which it is hoped incoming and future regulation will address.

One such risk is that these AI tools can enable discrimination in the workplace, with experts finding that bias can percolate into AI tools used to select job candidates where hiring decisions are based on problematic data.

Steps have been taken to address this issue. For example, the Equal Employment Opportunity Commission is boosting its enforcement efforts surrounding AI and machine-learning hiring tools, with the use of AI in employment now a top "subject matter priority" for the Commission.

The US Department of Labor has also issued guidelines that aim to remind employers of principles that should be considered when deploying and using AI in the workforce (which we have covered in more detail in a separate article in this newsletter). These principles include "meaningful human oversight for significant employment decisions".

Other risks requiring human mitigation include employees leaking sensitive information when using AI tools, and the gathering of excessive information on employees in ways that could violate their privacy.

The full circular issued by the CFPB can be read here.

US Labor Department AI guidelines welcomed by tech companies and civil rights groups

Amidst fears that AI will lead to job cuts across various industries, the US Department of Labor has introduced new guidelines that are aimed at ensuring that AI improves job quality and is of benefit to workers – particularly those in underserved communities.

Speaking about these guidelines at an online event, the Acting Secretary of Labor, Julie Su, said that "we have a shared responsibility to ensure that AI is used to expand equality, advance equity, develop opportunity and improve job quality" – goals that the guidelines aim to help achieve.

Additionally, the guidelines encourage developers and employers to identify and mitigate potential risks from new AI systems before releasing and marketing them to the public, and if threats cannot be mitigated, to use different systems.

The new guidelines are available in full here.

Technology developments and market news

Meta forced to abandon plans for nuclear-powered AI data centre due to ecological sensitivity of chosen site

Plans by Facebook parent company Meta to build a new nuclear-powered data centre, intended to support its AI development efforts, are undergoing a rethink after the intended site for the new centre was ruled out due to the potential ecological impact of the construction.

A rare species of bee discovered on the proposed site has meant that the land is now considered too ecologically significant for construction to go ahead. This is a particular setback in this case, as the site is adjacent to a nuclear power plant that would have serviced the data centre's energy demands – a key reason for the site's initial selection.

Several of the "big tech" giants have been exploring the potential of nuclear power as a way to service the astronomical energy demands of AI development and the associated data centres and data processing required for this development, whilst also maintaining carbon reduction commitments.

Google and Amazon are both exploring the potential of new small-scale modular nuclear reactors and have recently signed deals with companies developing such reactors. Microsoft, meanwhile, has signed a 20-year power supply deal with the largest operator of conventional nuclear reactors in the US, Constellation Energy, notable as it will entail the recommissioning and reopening of the Three Mile Island nuclear plant in Pennsylvania.

Microsoft and Nvidia launch new "GenAI Accelerator" for UK startups

Microsoft, alongside Nvidia and Microsoft-owned development platform GitHub, has announced an upcoming "Accelerator" programme for UK startups working in generative AI.

The programme, which will run between 22 January and 5 March 2025, is aimed at UK-based startups looking to develop, bring to market, and scale generative AI products which "have the potential to change people's lives for the better, create jobs, and have a significant economic impact."

The programme is open to startups UK-wide, with a hybrid part-online, part-in-person model aimed at ensuring that geographical location is no barrier to participation. Participants in the programme will be offered access to resources, advice and materials on various fronts, including technical, organisational, fundraising and marketing. The programme will conclude with a 'demo day' in which the participant startups will present to a group of venture capital and private equity investors.

The application period for the Accelerator closes on 22 November. Applications are open to "UK-based AI start-ups and scale-ups which already have a product in the market" across various sectors including financial, healthcare, medical and life sciences, and energy and "green" technology.

Our recent AI publications

If you enjoyed these articles, you may also be interested in our article on the obligations of GPAI model providers under the EU AI Act, which you can read here, as well as our article on using personal data in AI projects, which is available here.

We have also produced a podcast series covering various aspects of the evolving world of AI and its implications for businesses and broader society. The full podcast series is available for listening here.