Compensation for damages in the case of AI use – enforcing claims against providers and operators
Hire a lawyer now
When artificial intelligence causes damage: Who is liable – and how to assert your claims
An AI system makes a wrong decision, an AI agent acts autonomously and causes damage, or a company uses AI without taking the necessary safety precautions. Such situations are no longer future scenarios – they happen every day. The question of who bears responsibility, and how those affected can obtain redress, is one of the most pressing legal issues of our time. Rogert & Ulbrich represents clients on both sides: in enforcing claims for damages against AI providers and operators, and in defending companies that use AI against such claims.
Enforce legal claims in just 3 steps
Simple, convenient & fast – we enforce your rights.
1. Commissioning
Give us your mandate easily and conveniently from home via our online form.
2. Lawsuit & trial
We take care of all the remaining steps for you. Sit back and relax.
3. Success
We successfully enforce your claim – and you benefit from the result immediately.
When AI systems cause damage – a two-sided problem
Imagine the following scenarios: A bank's AI system wrongly rates your creditworthiness as poor and denies you a loan, even though your financial situation is sound. An AI agent negotiating contracts on behalf of a company agrees to terms that severely harm the other party. An automated AI diagnostic system in a hospital delivers a false assessment, leading to delayed or incorrect treatment. Or an AI chatbot gives a customer incorrect legal or financial information on which the customer relies, resulting in financial losses.
In all these cases, the question immediately arises: Who is liable? The manufacturer of the AI system who developed and sold the technology? The company that deployed and configured the AI? Or is it due to a system error for which no one ultimately feels responsible? This very question is currently one of the most difficult in German and European liability law – and it has not yet been definitively settled by the courts.
This doesn't mean that victims are unprotected. It means that you need to know the right legal approaches – and have the right lawyer on your side. If you have been harmed by the use of AI, you should act now.
We will take care of your case – quickly & with commitment.
Claims for damages against the AI provider – when the manufacturer is liable
The AI provider – that is, the company that developed and marketed the AI system – is liable if the system itself is faulty. This sounds simple, but it isn't. AI systems are complex, learning technologies whose behavior can change over time. A fault is not always obvious, and the causal link between the system's behavior and the resulting damage must be proven.
The most important legal basis for recourse against the manufacturer is product liability. Product liability law stipulates that manufacturers are liable for damages caused by defective products – and AI systems can be classified as products within the meaning of this law. A product defect exists when the system does not provide the level of safety that one can reasonably expect. This could be a design flaw, a problem with the training data, or an error in the documentation and user manual.
The proposed EU AI Liability Directive, developed in parallel with the AI Regulation, aims to make enforcing such claims easier. It is intended to grant injured parties, under certain conditions, a right of access to evidence – that is, the information that normally resides with the provider and is needed to prove the error. This would be a crucial step forward, as many claims currently fail on the burden of proof: who is to prove that an opaque AI system operated incorrectly?
Were you harmed by an AI system and suspect that the system itself was faulty? Talk to us – we will examine whether claims against the provider are enforceable.
Claims for damages against the company that uses AI – when the operator is liable
Often, the AI provider is not the right party to claim against – or at least not the only one. In most cases, between the manufacturer of an AI system and the injured party stands a company that uses the AI in its own operations: a bank, an insurance company, an online retailer, a doctor, or a government agency. This company is the so-called operator – and it, too, can be held liable.
The operator's liability can arise from various legal grounds. First, from general tort liability if the company acted negligently – for example, because it used an AI system that was unsuitable for the specific purpose, because its controls were insufficient, or because its employees were not properly trained. Second, contractual liability may exist if there was a contract between the injured party and the operator and the AI-supported service was defective. Third, the AI Regulation significantly tightens the operator's obligations: anyone operating a high-risk AI system without the required documentation, training, and oversight mechanisms has breached their duty of care.
The question of liability is particularly complex when it comes to AI agents – systems that can act independently, make decisions, and even conclude contracts. If an AI agent acts on behalf of a company and causes damage, the question arises: Is the company liable, as it would be for an employee? There is no established case law on this yet, but the prevailing legal opinion is that companies are generally responsible for the actions of their AI agents.
Have you suffered damages due to an automated decision by a company? Or is your company facing claims for damages? We clarify who is liable and to what extent.
AI agents – when autonomous systems act independently and make mistakes
AI agents are a new category of AI systems that go far beyond simply answering questions. They plan, decide, and act autonomously – booking appointments, concluding contracts, conducting transactions, communicating with customers, and managing business processes. This makes them a powerful tool – and a new source of legal risks.
The fundamental problem is this: the more autonomously an AI agent acts, the more difficult it becomes to assign clear responsibilities. If a human employee makes a mistake, the chain of liability is clear. If an AI agent makes a mistake, the first questions that arise are whether a human was involved in the decision-making process, who configured the agent, what instructions it received, and whether the system acted within its intended purpose. The answers to these questions determine who is liable.
Concrete risk scenarios arise, for example, when an AI agent in sales makes false promises or agrees to unauthorized prices. Or when a booking agent books a trip under incorrect conditions. Or when an AI agent in the supply chain triggers orders that violate contractual quantity restrictions. In each of these cases, the company operating the agent is primarily held responsible – with the possibility of seeking recourse from the AI provider if the system itself was faulty.
Does your company use AI agents – or plan to? Clarify now what liability risks are associated with this and how you can protect yourself.

The biggest obstacle: How to enforce claims for damages despite problems of proof
The central problem with claims for damages in the field of AI is the burden of proof. Under German law, the party claiming damages must prove the error, the damage, and the connection between the two. This is difficult enough with traditional damages – with AI systems, it's even more challenging because the decision-making processes are often opaque. Who is supposed to explain why an algorithm arrived at a particular result when even the provider cannot give a complete answer?
Several legal approaches can help here. First, the AI Regulation requires operators of high-risk systems to maintain extensive documentation and logs. These documents can be requested in the event of a dispute. Second, the proposed EU AI Liability Directive provides for a reversal of the burden of proof under certain conditions – if a company has violated its compliance obligations, it is presumed that this violation caused the damage. Third, anyone whose personal data has been processed by an AI system can, under the GDPR, request access to the data stored about them and information about the automated decisions made – which often provides valuable evidence.
In practice, this means acting quickly, requesting the right information, and intelligently combining the available legal bases. This is not a matter for legal laypersons. We are familiar with these approaches and apply them for our clients.
Don't wait until evidence disappears or deadlines expire. Talk to us now.
If your company is facing claims for damages – how we defend you
Not every claim for damages in the field of AI is justified. Companies using AI are increasingly confronted with claims based on incomplete knowledge or incorrect assumptions about how AI systems function. At the same time, there are genuine risks that operators must take seriously. The difference between these two situations lies in a thorough legal analysis.
If your company receives a claim for damages due to the use of an AI system, several questions need to be clarified immediately: Did the system actually make a mistake, or was it used as intended and the result is simply unfavorable to the claimant? Have you fulfilled all obligations under the AI Regulation – i.e., training, documentation, human oversight? Has the AI provider breached any obligations, allowing you to seek recourse? And: Are your contracts with the provider structured in such a way that the allocation of liability is clearly defined?
The situation is particularly critical if the company has failed to comply with the AI Regulation. Missing documentation, insufficient training, or the absence of human oversight mechanisms are serious disadvantages in a liability dispute. They signal to the court that the company has neglected its due diligence obligations. Compliance is therefore not merely a bureaucratic requirement, but genuine risk prevention.
Is your company facing a claim for damages due to AI? We will review the claim, develop your defense strategy, and represent you.
Rogert & Ulbrich – Your lawyers for AI liability and damages
Rogert & Ulbrich represents clients in all matters of AI liability – on both sides: in enforcing claims for damages against AI providers and operators, as well as in defending companies facing such claims. Dr. Marco Rogert and Tobias Ulbrich combine expertise in liability law, technology law, and data protection law with a deep understanding of the AI Regulation and the evolving European AI liability rules.
We provide out-of-court support in enforcing and defending against claims for damages, requesting documentation and logs under the AI Regulation and GDPR, and drafting and reviewing liability clauses in AI contracts. If litigation becomes necessary, we will stand by your side. Contact us and protect your rights.

Professional advice & support
We offer you a professional and comprehensive initial consultation on AI liability and the AI Regulation. Take advantage of this opportunity and avoid mistakes.