AI technology projects – the regulatory landscape

24 February 2023. Published by Helen Armstrong, Partner, Ricky Cella, Senior Associate, and Joshy Thomas, Knowledge Lawyer

Parties engaged in AI technology projects should be mindful of the regulatory landscape, and the changes taking place within it. A failure to do so could result in an AI solution that is not compliant from a regulatory perspective, the use of which potentially creates risk for the technology provider and user.

The EU's Artificial Intelligence Act

The European Commission adopted its proposal for a Regulation laying down harmonised rules on artificial intelligence (the AI Act) in April 2021. The proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI, robotics and related technologies.

In December 2022, the EU reached agreement on a draft version of the AI Act, which will now be debated by EU governments, the Commission and the European Parliament once the Parliament has agreed its own common position. However, there have been disagreements between key political groups, in particular over how the law classifies AI systems as 'high risk'. Many groups are keen to ensure that only truly high-risk use cases are included in the list of high-risk scenarios (contained in Annex III of the draft text). They are also seeking contractual freedom to allocate responsibility among the various operators along the value chain, and want to avoid overlapping or competing obligations with existing legislation. As a result of these disagreements, the full parliamentary vote is now likely to be delayed until April 2023 at the earliest.

The current draft text seeks to distinguish AI from simpler software systems by defining AI as systems developed through machine learning approaches and logic- and knowledge-based approaches. It looks to prohibit certain AI practices (such as the use of AI for social scoring) and will create obligations and duties for those operating 'high risk' applications.

The proposed rules will also deal with enforcement after AI systems are placed on the market and provide a governance structure at European and national level. Once an AI system is on the market, designated authorities will provide market surveillance, while providers will be subject to a post-market monitoring system and will have to report serious incidents and malfunctions.

Notably, the EU has also proposed a new AI Liability Directive that will potentially make it easier for those who suffer harm caused by an output or failure of an AI system to claim damages by introducing in certain circumstances (1) a rebuttable presumption that fault on the part of the AI provider or user led to the harm; and (2) a right to disclosure of evidence relating to the AI system. 

The US – a voluntary set of AI standards

There is currently no comprehensive federal legislation regulating AI systems in the US. If passed, the proposed US Algorithmic Accountability Act would oblige large companies to undertake impact assessments and to demonstrate responsible development and deployment of AI. Until then, the regulatory framework is wholly voluntary.

On 26 January 2023, the US National Institute of Standards and Technology (NIST), although not a regulator, released version 1.0 of its AI Risk Management Framework, a voluntary set of standards intended to address risks in the design and use of AI products, services and systems.

The EU–US Trade and Technology Council (TTC) Joint Roadmap for Trustworthy AI and Risk Management was published in December 2022 'to guide the development of tools, methodologies, and approaches to AI risk management and trustworthy AI by the EU and the United States in order to advance a shared interest in supporting international standardisation efforts and promoting trustworthy AI on the basis of a shared dedication to democratic values and human rights. The roadmap aims to take practical steps to advance trustworthy AI and uphold a shared commitment to the Organisation for Economic Co-operation and Development Recommendation on AI'.

Regulating AI in the UK 

The UK is currently far from adopting a single regulatory framework for AI. In a White Paper published on 29 March 2023, the government confirmed that there will not be a single piece of legislation governing the use of AI, nor a single specialist regulator. Instead, each sector regulator will be responsible for ensuring that potential harm from AI is properly addressed.

The White Paper established a "new national blueprint" for regulators to take into account in order to drive "responsible innovation". This blueprint sets out five principles that will guide the use of AI in the UK (safety/security, transparency, fairness, accountability and contestability). By applying these broad principles to each sector, the government aims to create an adaptable and context-driven framework of regulation. A new consultation on the proposals is open until 21 June 2023, which will inform how the framework is developed in the months ahead.

One challenge identified is the lack of a standard international definition of AI, with doubt expressed as to whether a unifying definition will emerge. The White Paper therefore indicates that AI will be defined by reference to its functional characteristics of adaptivity and autonomy, rather than by any rigid legal definition.

The reality for UK businesses using AI is that the UK's less centralised approach means they will need to deal with multiple regulators, including Ofcom, the Competition and Markets Authority, the Information Commissioner's Office, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency. The Data Protection and Digital Information Bill also includes measures on AI. The reasoning behind this approach is that the sector-specific regulators understand the context in which AI is being deployed within their own sectors and the kinds of harms that can occur. They also have the best understanding of the existing rules and requirements that are in place, and therefore of what may need to be built on or where future regulation may be needed.

However, the White Paper recognises that, while there is a tremendous amount of guidance, regulation and standards (some of which overlap), there are also gaps. These overlaps and gaps suggest a need for a mapping exercise and for a central body to oversee it, such as the Office for AI, which is able to convene the right regulators to look at how to plug those gaps in a coherent and co-ordinated way.

In the meantime, the AI Standards Hub, an interactive online platform launched in October 2022, aims to help UK organisations navigate the evolving landscape of AI standardisation and related policy developments, as well as to channel the UK's contribution to the development of international standards for AI.

The White Paper also recognises the need for the UK to work closely with international partners when developing its AI regulatory framework. The UK may well look to other international initiatives, such as Singapore's 'AI Verify' (an AI governance testing framework and toolkit that allows industry to demonstrate its deployment of responsible AI directly to government through a series of technical tests and process checks) or Canada's Algorithmic Impact Assessment (a mandatory AI impact assessment for public bodies deploying AI), when finalising its approach to AI regulation.

Conclusion

A significant number of tech companies and other businesses will be looking to use AI technologies, and many of them will be contracting with overseas businesses. Managing regulatory risk will be challenging given the lack of alignment between regimes. It will therefore fall to the individual parties to a project to develop practices that enable them to comply with the relevant national frameworks.
