Who is liable for AI errors?

Already today, AI errors can lead to significant liability risks. The damage caused by AI can be physical, for example when AI causes a car accident or misinterprets an X-ray image. It can equally be intangible, for example when a chatbot divulges trade secrets or an AI-based selection decision discriminates. Current law already provides a dense network of liability rules, especially for the production and use of artificial intelligence: in addition to contractual liability under Section 280 (1) of the German Civil Code (Bürgerliches Gesetzbuch), there is extensive product and producer liability in tort, and in certain areas, such as road traffic, strict liability. AI decisions must also be measured against prohibitions on discrimination, and the use of AI can have consequences under criminal law.

In a world where decisions are increasingly made by AI, the damage caused by its use is also increasing.

Tightening of liability law planned

The liability framework is likely to become considerably stricter in the near future. The AI Act will create a far-reaching protective statute within the meaning of Section 823 (2) of the German Civil Code (Bürgerliches Gesetzbuch). This means that anyone who violates the requirements of the AI Act faces not only heavy fines but also considerable civil liability claims from anyone harmed by AI. With regard to liability for the use of AI, the issue of digital literacy is also likely to gain considerable importance.

In addition, on 28 September 2022 the European Commission unveiled two proposed directives whose impact can hardly be overstated. The revised Product Liability Directive explicitly extends strict liability to software, and thus also to AI systems; the loss of data is likewise to constitute a compensable pecuniary loss. Persons who offer open source software or data also face potential liability, and the burden of proof is to be eased considerably in favor of claimants. The new AI Liability Directive, in turn, is intended to supplement existing tort law with evidentiary relief in favor of parties harmed by artificial intelligence. Far-reaching presumption rules threaten to leave AI providers at an evidentiary disadvantage when defending against AI liability claims, and thus to undermine their equality of arms in court.

We steer you clear of liability traps

Aitava provides you with a clear overall view of current and future liability issues. We help you identify impending liability risks at an early stage and implement tailored safeguards against them.

It is to be expected that, in the future, many participants in the AI value chain will share liability for AI damage, possibly even IT suppliers and end users. We show you how to allocate such risks optimally today through suitable liability or recourse agreements, and how to protect yourself with appropriate AI insurance policies.

AI & Data Strategy

If you want to be relevant tomorrow, you need to make strategic decisions today. Aitava removes obstacles so that companies can seize the opportunities of AI.


AI & Data Compliance

Even today, numerous regulations must be observed when using AI and data. The AI Act and the Data Act herald a new era in technology regulation.


Data Sharing

Many companies are sitting on a treasure trove of data that other companies could put to good use. We help with data sharing.


Intellectual Property and Protection of Secrets

Is it okay to train AI with other people's data? Who owns the training data? The trained system? And the prompts? We have answers and more questions.
