The Neural Network – August 2024

In this edition of the Neural Network, we look at key AI developments from July and August 2024.

In regulatory updates, the EU AI Office launched a consultation on trustworthy general-purpose AI models, the European Commission published updates to its AI Act Q&A page, and the European Data Protection Board adopted a statement on data protection authorities' role in the AI Act framework.

In AI enforcement and litigation news, the US Federal Trade Commission, US Department of Justice, the UK Competition and Markets Authority and the European Commission issued a joint statement on AI competition issues, and the Information Commissioner's Office and the Irish Data Protection Commission began investigations into X's collection of user data to train its AI chatbot, Grok.

In technology developments, OpenAI announced that it is testing an AI-powered search engine, SearchGPT.

More detail on each of these developments is set out below.

Regulatory

EU AI Office launches consultation on trustworthy general-purpose AI models

On 30 July, the EU AI Office launched a consultation on trustworthy general-purpose AI models. It also invited relevant stakeholders (such as providers of eligible general-purpose AI models, industry organisations and academics) to participate in the drawing-up of a general-purpose AI Code of Practice.

Under the AI Act, a general-purpose AI model is an AI model that is trained on vast amounts of data using self-supervision at scale, displays "significant generality", and can perform a "wide range of distinct tasks" in a variety of sectors without substantial modification. Large language models, such as GPT-4, are one example of a general-purpose AI model.

The Code of Practice will provide guidance on how organisations involved with general-purpose AI models can comply with the AI Act. The consultation will cover this topic and will also inform some of the work undertaken by the AI Office, such as the template for the summary of the content used for training general-purpose AI models and the accompanying guidance.

As a recap, the AI Act sets out several obligations for providers of general-purpose AI models, including:

  • maintaining technical documentation (including the training and testing process and the results of its evaluation);
  • providing information to downstream providers to enable them to understand the capabilities and limitations of the model;
  • ensuring compliance with copyright laws;
  • making publicly available a sufficiently detailed summary of the content used for training; and
  • cooperating with the European Commission (the "Commission") and competent authorities.

Providers of general-purpose AI models that pose "systemic risk" will be subject to further obligations, such as conducting a higher level of testing and ensuring that adequate cybersecurity measures are in place.

For an overview of the AI Act, please see our AI Act Quick Guide.

Commission updates its AI Act Q&As

On 1 August, the Commission updated its AI Act Q&A page.

Key takeaways:

  • Minimal risk and transparency obligations (also known as specific transparency risk) are two separate categories. Some commentary on the AI Act had previously assumed that they were one and the same.
     
    Specific transparency obligations apply to certain AI applications where there is "a clear risk of manipulation", for instance through the use of chatbots or deep fakes.
     
    In contrast, the majority of AI systems are minimal risk, and can therefore be developed and used subject to the existing legislation without additional legal obligations. Providers of these AI systems can choose to comply with voluntary codes of conduct and apply the requirements for trustworthy AI.
  • Environmental protection is one of the fundamental rights that providers of AI need to take into consideration.

These updates should give AI providers and deployers greater clarity as to which aspects of the AI Act are relevant to them, and which regulations they should be complying with.

EDPB adopts statement on DPAs' role in AI Act framework

On 16 July, the European Data Protection Board ("EDPB") adopted a statement on data protection authorities' (each a "DPA") role in the AI Act framework.

Under the AI Act, EU Member States must appoint market surveillance authorities ("MSAs") to help implement the AI Act. The EDPB recommends that DPAs be appointed as MSAs for high-risk AI systems, particularly those that are likely to impact "natural persons' rights and freedoms with regard to the processing of personal data".

The EDPB stated that DPAs would be particularly suitable for the role of MSAs as they already have expertise in AI technologies and in "assessing the risks to fundamental rights posed by new technologies".

The AI Act also calls for the designation of a single point of contact for the public and AI stakeholders. The EDPB suggested that, where DPAs are appointed as MSAs, they should also be designated as this single point of contact for the public and for their counterparts at Member State and EU levels.

The EDPB's recommendations are not binding on Member States; however, Member States may take them into account when making their designation decisions.

Enforcement and civil litigation

FTC, DOJ, CMA and Commission issue joint statement on AI competition issues

On 23 July, the US Federal Trade Commission ("FTC"), US Department of Justice ("DOJ"), the UK Competition and Markets Authority ("CMA") and the Commission issued a joint statement on competition issues in generative AI foundation models and AI products.

The statement highlighted the key risks to competition in the AI space, in particular how:

  • concentrated control of key inputs, such as specialised chips, could allow a small number of companies to have a disproportionate influence over the future development of AI tools;
  • "large incumbent digital firms" already have many advantages in the AI industry, for instance through controlling the distribution of AI-enabled services. This could enable these firms to extend their positions in the market to the disadvantage of future competition; and
  • some arrangements involving key players in the AI industry could "steer market outcomes in their favour at the expense of the public".

The joint statement also emphasised three key principles for the protection of competition in the AI ecosystem: fair dealing, interoperability, and choice.

  • Fair dealing – the statement encourages fair dealing in the AI ecosystem in order to promote investment, innovation and competition in the sector.
  • Interoperability – the competition authorities warn that any failure to ensure interoperability between AI products and services, particularly where developers claim that interoperability is not possible for privacy or security reasons, will be "closely scrutinised".
  • Choice – the statement advises that the competition authorities will be "scrutinising ways that companies may employ mechanisms of lock-in" to prevent businesses or consumers from switching to alternative AI products or providers.

This joint statement is another example of how competition authorities across the globe are taking a keen interest in competition issues in the AI space. In the UK, for example, the CMA has recently launched several AI-related inquiries, including into Microsoft's deal with AI startup Inflection and the deal between Amazon and the AI company Anthropic.

ICO and DPC investigate X over its collection of user data to train AI chatbot

The Information Commissioner's Office ("ICO") and the Irish Data Protection Commission ("DPC") are currently investigating X over the platform's collection of user data, without users' knowledge or consent, to train its AI chatbot, Grok.

The social media platform had activated a setting by default which allowed users' "posts…interactions, inputs and results" to be used as training data for Grok. Additionally, X did not make it easy for users to opt out of this default setting: opting out was only possible via the desktop site, not the more commonly used app.

The ICO is concerned about X's lack of transparency, as the platform had not taken any steps to notify users that their data was going to be used for this purpose and had not given them a chance to object.

On 6 August, the DPC made an urgent High Court application to prevent X from continuing to train its AI chatbot on users' data without their consent.

Despite an initial statement from X that it would be "pursuing all available avenues to challenge" the DPC's actions, on 8 August X agreed to suspend its processing of users' data to train Grok.

Under the GDPR, consent must be "given by a clear affirmative act" – so pre-ticked boxes do not constitute valid GDPR consent. Therefore, organisations relying on consent as their legal basis for processing personal data must ensure that individuals take a positive action to indicate their consent, such as ticking an opt-in box.
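
By way of illustration only, the short sketch below shows one way an affirmative opt-in might be captured in practice. The TypeScript names used (ConsentRecord, recordConsent, checkboxTicked) are hypothetical and are not drawn from any particular platform's implementation.

    // Hypothetical sketch: capturing a "clear affirmative act" for GDPR consent.
    // The consent checkbox must default to unticked; a pre-ticked box is not valid consent.

    interface ConsentRecord {
      userId: string;
      purpose: string; // e.g. "use of posts to train an AI model"
      givenAt: Date;   // when the affirmative act occurred
    }

    function recordConsent(
      userId: string,
      purpose: string,
      checkboxTicked: boolean // rendered unticked by default, never pre-ticked
    ): ConsentRecord | null {
      // Only a positive action by the user produces a consent record;
      // silence or an unchanged default setting does not.
      if (!checkboxTicked) {
        return null; // no valid consent: do not process data for this purpose
      }
      return { userId, purpose, givenAt: new Date() };
    }

The key design point mirrors the regulatory position set out above: absent a positive action from the user, no consent is recorded and the data must not be processed for that purpose.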

Technology developments

OpenAI tests new search engine

On 25 July, OpenAI announced that it is testing a search engine powered by its AI models, called SearchGPT.

SearchGPT aims to use AI to increase the speed at which users can find the results they are searching for on the web, and to allow users to ask follow-up questions if they require more information after their initial search. OpenAI also announced that it would be partnering with publishers and news agencies, such as The Atlantic and News Corp, to help them manage how their content appears to users of SearchGPT.

AI-powered search tools are a growing trend in the technology industry, with the search engine Perplexity AI increasing in popularity and Google integrating AI into its search engine through AI Overviews. However, both of these rollouts have encountered problems. Perplexity has faced plagiarism allegations from media outlets such as Forbes and Wired. Google's AI Overviews, which summarises search results so that users do not have to click through to websites, received mixed responses from users, who claimed that many of its summaries were inaccurate.

SearchGPT is currently only available to 10,000 test users, and there is no confirmation of when it will be made available to the wider public.

Our recent publications

If you enjoyed these articles, you may also be interested in our recent publication on patents for computer-implemented inventions, or our briefing note on the Hong Kong Privacy Commissioner's new Model AI Framework.