EU AI Law: New rules for ChatGPT, Gemini and similar systems come into force

As of today, stricter regulations for high-performance AI systems like ChatGPT and Gemini have come into force in Europe. But what exactly are these regulations – and what impact can be expected?

Criticism has come from authors, artists, and producers in the music and video industries, among others. They complain that AI systems were trained on their copyrighted works – and now compete with them using automatically generated text, images, music, and video.

The new EU law aims to address these concerns: providers of large AI models will now be required to disclose which internet sources they use to train their systems – especially where those sources contain copyrighted content.

Legal scholar Philipp Hacker of the European University Viadrina explains: "This regulation would be particularly relevant if providers actually admitted that they had used so-called shadow libraries – that is, platforms that illegally provide copyrighted material." Such voluntary disclosure, however, is hardly to be expected.

EU regulations aim to prevent legal disputes

A recent case from the US demonstrates the risks facing AI companies: three authors sued the company Anthropic for allegedly using their books to train the AI model Claude without permission. The court upheld the claim. The exact amount of damages has yet to be determined – but according to legal scholar Philipp Hacker, it could run into the hundreds of billions of dollars.

Hacker believes such lawsuits could also increase in Europe. The new EU regulations, however, are intended to prevent these disputes: providers of powerful AI systems will now be required to provide standardized proof that their training methods comply with European copyright law.

EU Digital Commissioner Henna Virkkunen emphasizes that this not only protects rights holders but also benefits companies themselves: "In doing so, we are creating legal certainty for innovation and investment." While many countries do not yet have comparable rules, the EU is deliberately sending a signal: "We are clearly showing developers what is expected of them – and thereby simplifying the process."

Extended safety requirements for AI systems

Since February, the EU AI Act has in principle prohibited the use of facial recognition in public spaces. The use of AI systems for so-called "social scoring" – the evaluation of people based on their social behavior – is also prohibited.

The extensions now taking effect add safety requirements for so-called general-purpose AI (GPAI) – large, versatile AI models such as GPT-4 (OpenAI), Llama (Meta), or Claude 4 (Anthropic) that form the technological basis for many AI applications.

"In the future, these models will have to undergo a security check similar to a stress test," explains legal scholar Philipp Hacker. Experts will analyze the potential damage posed by the systems. Companies are obligated to take concrete measures to prevent misuse—for example, through technical protection mechanisms and transparent security concepts.

USA pursues opposite course – focus on deregulation

Implementing the new EU regulations on artificial intelligence is challenging. Drawing on scientific expertise, the EU Commission has therefore developed a "Code of Practice" – a practical guide designed to help companies implement the legal requirements sensibly in their day-to-day operations.

Companies that adhere to this voluntary code of conduct benefit from reduced reporting requirements. Broad participation by the large US technology companies is questionable, however: Meta has already signaled that it will not take part.

The reason: the political approaches on the two sides of the Atlantic are increasingly diverging. While the EU is committed to regulation and transparency, the US – especially under Donald Trump – is pursuing the opposite strategy. Immediately after taking office, Trump rescinded the AI executive order of his predecessor, Joe Biden.

With his new AI plan, Trump is now explicitly committing to deregulation: states that enact strict AI laws of their own are to be excluded from federal funding. The goal, Trump says, is to create "the largest and fastest AI ecosystem in the world."

What happens if the new AI rules are violated?

One thing is certain: anyone wishing to operate in the European market, with its roughly 450 million consumers, must now comply with the new EU AI law. Companies that choose not to follow the voluntary code of practice must still demonstrate compliance with the legal requirements by other means.

A one-year transition period is intended to give companies time to adapt their processes. From August 2026, the EU Commission will have formal enforcement powers and will be able to impose sanctions such as fines for violations, explains legal scholar Philipp Hacker.

But legal consequences loom even before then: as early as next year, affected citizens could sue over violations of the rules – as could competitors who see themselves disadvantaged when other providers gain an unfair edge by circumventing the EU regulations.

Overall, the new rules mark another important step on the European Union's path toward clear regulation of artificial intelligence, with legal certainty and consumer protection as priorities.