From August 2, 2026, new labeling requirements for AI-generated content will apply across Europe. Companies, agencies, and other providers that use artificial intelligence to create or manipulate text, images, videos, or audio content and distribute it publicly will be required to clearly and unambiguously indicate this.
The aim of the regulation is to protect consumers from confusing AI content with real, human-generated information.
In the following article, we will explain when you are affected by the AI regulation and how you can protect yourself from sanctions.
Which companies are affected by the AI regulation?
The EU AI Regulation (AI Regulation) generally applies to all companies that develop, provide, or use artificial intelligence, provided that the AI systems are used in the EU or their results affect individuals within the EU. This includes not only IT companies but also businesses that use AI for purposes such as marketing, workforce planning, customer communication, or content creation.
The AI Regulation distinguishes between different legal roles, each with its own specific obligations. These roles can change depending on the type of use. For example, if an imported AI component is integrated into the company's own software and distributed under its own name, the company is legally considered a provider.
Providers develop AI systems themselves or have them developed and market them under their own name. Among other things, they must ensure compliance with the AI Regulation, conduct risk assessments, and provide technical documentation.
Operators use AI systems as part of their business activities. They are subject to labeling and transparency obligations, particularly regarding AI-generated content or deepfakes.
Importers and distributors are also subject to inspection, information, and due-diligence obligations.
Important: There is no general exemption for SMEs. The only exceptions are purely private, non-professional use and specific areas as defined in Article 2 of the AI Regulation.
Risk groups under the AI Regulation: Which AI applications are particularly regulated?
The EU AI Regulation (AI Regulation) assigns AI systems to so-called risk groups. The decisive factor is the potential risk to health, safety, and fundamental rights posed by an AI application. The higher the risk, the stricter the legal requirements for providers and operators.
AI systems with unacceptable risk
Among the prohibited AI systems are, in particular, applications that are incompatible with the fundamental rights of the European Union. These include, for example, social scoring models, systems for the targeted cognitive or behavioral manipulation of individuals, and methods for emotion recognition in the workplace. The use of such AI applications has been completely prohibited under EU law since February 2, 2025.
High-risk AI systems
High-risk AI systems are those that have a significant impact on the health, safety, or fundamental rights of individuals. This applies particularly to AI applications in sensitive areas such as medicine, transportation, human resources management, education, and the financial sector. Examples include AI systems for evaluating MRI images or for analyzing and assessing loan applications. The use of such high-risk AI is subject to strict legal requirements, including comprehensive risk assessments, ensuring transparency, high technical robustness, and effective human oversight.
AI systems with limited risk
These AI systems interact directly with people and therefore pose a manageable risk. Typical use cases include chatbots in customer service. Transparency obligations apply to such systems: users must be clearly and unambiguously informed that they are communicating with an AI and not a human.
AI systems with minimal risk
This category includes everyday AI applications without any significant risk potential. Examples include spelling and grammar checkers, spam filters, and AI-powered games. No additional legal obligations exist for such systems; only voluntary adherence to codes of conduct is recommended.
Use and integration of foundation models (GPAI systems) under the AI Regulation
The EU AI Regulation (AI Regulation) contains specific rules for so-called GPAI systems (General Purpose Artificial Intelligence), also known as foundation models. These are AI systems with a general purpose, trained on large datasets, and flexibly deployable for various tasks. GPAI systems can also be integrated into other AI applications and, depending on their area of application, may fall under different risk categories. A well-known example of a GPAI system is ChatGPT.
The AI Regulation distinguishes between GPAI without systemic risk and GPAI with systemic risk. The latter applies when an AI system is particularly powerful, widely deployed, and can have significant impacts on the economy, society, or security. In these cases, more stringent regulatory requirements apply.
This distinction primarily concerns providers of GPAI systems, who may be subject to additional documentation, transparency, and risk mitigation obligations. For operators or users of GPAI systems—that is, companies that use or integrate such AI models—the AI Regulation does not stipulate any separate special obligations. The general role and usage obligations under the AI Regulation remain applicable.
Supervision and sanctions under the AI Regulation
The EU AI Regulation (AI Regulation) provides for state oversight of the use of AI systems. Each EU member state is obliged to designate competent supervisory authorities to which potential violations of the AI Regulation can be reported and which prosecute such violations.
Companies must therefore expect their AI applications to be subject to regulatory oversight in the future. Violations will result in severe penalties: fines can reach up to €35 million or up to 7 % of a company's global annual revenue, whichever is higher.
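The "whichever is higher" rule for the maximum fine can be illustrated with a short sketch. The €35 million and 7 % figures come from the AI Regulation's penalty provisions; the revenue values in the examples are purely hypothetical:

```python
# Illustrative sketch of the AI Regulation's fine cap:
# up to EUR 35 million or 7 % of global annual revenue,
# whichever is higher. Not legal advice.
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    fixed_cap = 35_000_000                          # EUR 35 million
    revenue_cap = 0.07 * global_annual_revenue_eur  # 7 % of revenue
    return max(fixed_cap, revenue_cap)

# Hypothetical examples:
print(max_fine_eur(100_000_000))    # smaller company: fixed cap -> 35000000.0
print(max_fine_eur(1_000_000_000))  # large company: 7 % -> 70000000.0
```

For companies whose 7 % share falls below €35 million, the fixed cap is decisive; above roughly €500 million in global annual revenue, the revenue-based cap takes over.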
Therefore, an early legal review of the use of AI is strongly recommended.
Entry into force of the AI Regulation and relevant transitional periods
The EU Regulation on Artificial Intelligence (AI Regulation) has been formally in force since August 1, 2024. However, its practical application will be phased in gradually over several years to give companies the necessary time to adapt internal processes, systems, and compliance structures. The following key dates are important in this regard:
From February 2, 2025: The use of AI systems that pose an unacceptable risk is prohibited. This applies in particular to applications that are incompatible with the fundamental rights of the European Union. At the same time, companies are obligated to ensure that the individuals involved in the use of AI systems possess sufficient expertise and application skills.
From August 2, 2025: The specific rules for general-purpose AI (GPAI) come into force for the first time. These cover, in particular, so-called foundation models, such as those underlying large language models. From August 2, 2026, the AI Regulation then becomes, in principle, fully applicable.
However, for certain high-risk AI systems, the legislator provides for an extended transition period. In these cases, the corresponding obligations only apply from August 2, 2027.
Early legal assessment is recommended to minimize implementation and liability risks.
What else needs to be considered when using AI?
In addition to the EU AI Regulation (AI Regulation), companies must comply with other legal requirements when using artificial intelligence. Particularly relevant are labor law, data protection, and intellectual property law (copyright and trademark law).
In employment law, AI offers numerous applications, particularly in recruitment and human resources management, such as creating job postings, pre-selecting applicants, and internal communication. At the same time, risks exist under the General Equal Treatment Act (AGG), as AI systems can adopt discriminatory patterns from their training data. Human oversight of the results is therefore essential.
Furthermore, Article 22 of the GDPR limits the use of fully automated decisions: personnel decisions may not be made exclusively by AI.
The co-determination rights of the works council may also be affected, for example according to Section 87 Paragraph 1 No. 6 of the Works Constitution Act, especially if AI is used for performance or behavior monitoring.
Under copyright law, purely AI-generated content is generally not protected. However, protection may arise if AI is used merely as a supporting tool and the decisive creative contribution remains human. Additionally, the terms of use of the AI tools and potential trademark infringements must be examined.
Finally, there are labeling requirements, primarily for deepfakes or AI-generated content with the potential to mislead. A legal review is therefore recommended.
Recommendations for companies on the use of AI
Companies should take organizational and legal measures early on to comply with the requirements of the EU AI Regulation (AI Regulation). A key requirement is that employees who use or monitor AI systems have an adequate understanding of AI. In addition, the introduction of an internal AI policy is recommended, which clearly defines usage limits, responsibilities, and legal requirements. Existing co-determination rights of the works council must be taken into account. Here are the most important recommendations:
Companies should first conduct a comprehensive inventory of all deployed AI systems and assign them to the respective risk classes and their own roles according to the AI Regulation. Based on this, compliance with legal requirements must be ensured. Extensive obligations exist, particularly when using high-risk AI systems, for example, regarding risk assessments, effective control mechanisms, and detailed documentation requirements.
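The recommended inventory step can be organized as a simple internal register that assigns each deployed AI system a risk class and the company's own role under the AI Regulation. The sketch below is a minimal, illustrative structure; the example entries and their classifications are hypothetical assumptions, not legal assessments:

```python
# Minimal sketch of an internal AI-system register.
# Risk classes and roles follow the AI Regulation's terminology;
# the entries themselves are illustrative, not legal advice.
from dataclasses import dataclass

RISK_CLASSES = ("unacceptable", "high", "limited", "minimal")
ROLES = ("provider", "operator", "importer", "distributor")

@dataclass
class AISystemEntry:
    name: str
    purpose: str
    risk_class: str  # one of RISK_CLASSES
    role: str        # one of ROLES

    def __post_init__(self) -> None:
        # Reject entries outside the regulation's categories.
        assert self.risk_class in RISK_CLASSES, self.risk_class
        assert self.role in ROLES, self.role

# Hypothetical inventory entries:
register = [
    AISystemEntry("Support chatbot", "customer service", "limited", "operator"),
    AISystemEntry("CV pre-screening", "recruiting", "high", "operator"),
    AISystemEntry("Spam filter", "email", "minimal", "operator"),
]

# Systems triggering extensive obligations (risk assessment,
# human oversight, documentation):
high_risk = [e.name for e in register if e.risk_class == "high"]
print(high_risk)  # ['CV pre-screening']
```

Such a register makes it straightforward to see at a glance which systems carry the extensive high-risk obligations and where transparency duties (e.g. for the chatbot) apply.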
Furthermore, a high degree of transparency and robust documentation are essential. Comprehensible documentation is required to demonstrate compliance with the AI Regulation during official audits. In addition, companies should conduct regular training and awareness campaigns to ensure that employees are familiar with the legal and practical requirements and can implement them correctly in their daily work.
The AI Regulation aims to ensure the responsible and safe use of AI. Companies that prepare early not only reduce liability and fine risks, but also strengthen the trust of customers and business partners.
Have your AI implementation legally reviewed. We would be happy to assist you with classifying your AI systems, creating an AI policy, and implementing the AI regulation in practice. Schedule a free consultation now.