Our webinar explored liability that can be incurred from the creation, use or deployment of AI systems and raised some important questions about the current and future regulatory landscape applicable to AI and liability. We also discussed the regulatory and tortious liability regimes in the UK and across the EU and considered the potential for insuring against the risks posed by AI.

This article summarises the key questions we considered – for more detail, see the recording of our webinar.

AI does not exist in a legal vacuum

Q: What is the wider legal context to bear in mind when considering AI liability risks?

A: EU product laws take a holistic, whole-life-cycle approach to the regulation of products. The same approach applies in the UK, where product liability law derives from EU law, although the UK is starting to diverge from this position following Brexit.

Consumer protection laws include sector-specific regulations and cross-sectoral measures like the EU General Product Safety Directive and the Product Liability Directive, which impose strict liability on manufacturers, importers and distributors for unsafe products.

The EU AI Act governs the construction and market placement of AI systems, whilst the proposed EU AI Liability Directive aims to address consumer-facing liability for AI-related issues.

In addition to relying on regulatory frameworks, contractual claims may be initiated between contracting parties, and tortious claims may be brought where a duty of care can be established.

When looking more specifically at the evolution of extra-contractual liability in relation to AI systems, there are two key pieces of legislation: the proposal for a revised EU Product Liability Directive and the proposal for an EU AI Liability Directive.

Product liability and AI

Q: What is the current product liability regime in the EU?

A: The current EU Product Liability Directive (85/374/EEC) has been in place for nearly forty years. It is outdated, having been designed for analogue physical products, and does not address modern digital products, non-embedded software, 3D printing or the Internet of Things. It is akin to (albeit not completely aligned with) a strict liability regime under which the consumer may be compensated, but only in relation to personal injury (physical harm) and property damage. It does not extend to psychological harm or to modern forms of loss such as data loss or corruption.

Consumers bear the full burden of proof under the current Directive, which is particularly challenging with complex digital products, making it difficult to prove a product defect and causation in court. As the existing Directive was implemented at Member State level (as a Directive), variations in procedural laws across the EU have also emerged.

Reform of product liability in relation to AI

Q: What is on the horizon in terms of changes to EU law relating to product liability and AI?

A: A new EU Product Liability Directive is being introduced, designed to maintain the fundamental principles of the outgoing law whilst adapting to modern technologies including AI. The new Directive will allow for compensation of moral damages in addition to physical damages, expanding the scope of compensable harm.

It also clarifies that software, including AI systems, will be treated as a product going forward. However, as AI systems can be standalone products or components of other products, this raises questions about how liability is shared between manufacturers where damage results from unexpected behaviours.

The new Directive will introduce factors for assessing product defectiveness that are intended to adapt the analysis to AI systems. The list of factors in the Directive includes, for example, the impact of AI learning capabilities on product safety and compliance with cybersecurity obligations.

From a procedural standpoint, the new EU Product Liability Directive introduces new disclosure obligations and a set of presumptions to alleviate the burden of proof on claimants, especially where a claimant faces excessive evidentiary difficulties due to technical complexity. The need to explain the internal functioning of an AI system is one of the examples given to illustrate where this could apply.

Q: If Company A integrates an AI system into its product that has been provided by Company B and the product goes on to cause damage, who will be liable under the product liability regime?

A: The short answer is: either party, or potentially both parties. That is the case under the current EU Product Liability Directive and will remain so under the incoming Directive. The principle behind this is to make life as easy as possible for the consumer and allow the consumer to choose which entity to pursue. If the consumer's claim succeeds, whichever entity it has chosen to pursue must pay the claim in full. The law then leaves the manufacturer of the product and the manufacturer of the allegedly defective component to settle their respective responsibility between themselves in a contribution claim.

Q: What about contractual exclusions of liability? Can the AI system provider exclude any product liability that it might be open to?

A: No, not in a contract with a consumer. But a contract between an AI system provider and (for example) a component supplier could include a liability exclusion, subject to the specificities of Member State law and case law.

Timeline of the two EU directive proposals

Q: When will the new EU legislation come into effect?

A: It is difficult to predict, but the new EU Product Liability Directive may be implemented by Member States as soon as 2026-2027 following adoption at EU level. The progression of the new AI Liability Directive is slower, but a similar timeline for introduction of that legislation is not impossible.

There is the potential for both Directives to have extra-territorial effect – by applying to products and AI systems which are made available within the EU, even if the manufacturer or provider is based outside the EU.

Q: What is the difference between the draft EU AI Liability Directive and the draft Product Liability Directive? Why do we need both?

A: There are overlaps in places, but a key distinction is the need to prove fault under the AI Liability Directive, which is not required under the Product Liability Directive. However, the AI Liability Directive is in some ways wider in scope, applying to both legal and natural persons and covering all types of damages recognised by EU Member States (i.e. not only death, personal injury, and damage to property and to data).

UK approach

Q: Are there any corresponding changes to the law in the UK?

A: The UK is taking a different approach to AI regulation from the EU, focusing on sector-specific guidelines and voluntary measures rather than comprehensive legislation. English law also contains equivalents of the General Product Safety Directive and the Product Liability Directive, which were retained in English law following Brexit.

The English equivalent legislation suffers from the same defects that are leading the EU to revise both of its Directives. There is, however, no current plan to revise the Consumer Protection Act, and the UK is taking a different path from the EU on AI regulation, with no AI-specific law currently in place.

From a practical perspective, a UK manufacturer looking to sell a product in both the EU and UK may well try to meet the standards laid down in the EU legislation to avoid having to meet different sets of requirements in the different regimes.

Tortious liability and AI

Q: What issues relating to tortious liability are particular to AI?

A: Arguably, the biggest issues are technical rather than legal. AI introduces challenges in proving causation due to the ‘black box’ or explainability problem: as it may be impossible to determine what has happened inside the ‘black box’ of an AI system to produce an unexpected result, it can be very difficult to establish what caused an AI defect. To address this and other burden-of-proof issues, the draft AI Liability Directive proposes shifting the burden of proof on causation from the claimant to the defendant, particularly for complex AI systems where claimants may lack technical expertise.

In addition, the draft Directive may require AI providers to disclose technical information in claims related to AI, similar to common law disclosure principles in the UK and elsewhere. Beyond these changes, liability is likely to be assessed under existing Member State regimes, with the draft Directive aiming to support those systems in handling AI-related claims.

Insuring against AI risks

Q: Is it possible, or will it become possible, to insure against AI risks?

A: From a legal standpoint, AI risks are insurable (except for criminal sanctions and, potentially, fines and penalties imposed by regulators). The big issue is whether insurers will be willing to cover AI risks, which are new, quickly evolving and still difficult to assess. The insurance market is currently in an observation phase: most risk carriers have not modified their policies either to expressly cover or to exclude AI-related risks, and only a few insurers have developed AI-specific insurance solutions.

It is difficult to foresee how the insurance market will evolve to react to the emergence of AI-related risks. It will depend on how the technology and associated risks evolve. If AI risks can be assessed and monitored, more insurers may offer coverage in future and develop specific products. 

By Jeanne Dauzier, Nick Rock, Phillip Kelly, Aurelia Pons, Alexis Andre and Lisa Urwin