“The AI Act complements sectoral regulations and other digital laws as a horizontal legal framework, but there is insufficient coordination between them.” This is the central finding of a newly published study prepared by Philipp Hacker, Professor of Law at the European University Viadrina in Frankfurt (Oder), for the Bertelsmann Foundation. Many AI applications that fall under the regulation’s overarching requirements for systems containing artificial intelligence (AI) are therefore already subject to other rules.
As examples, Hacker cites the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA). At the same time, the AI Act is “in tension” with sector-specific requirements in finance, medicine and the automotive industry, for example when it comes to AI-based credit assessment, diagnostic systems or autonomous driving functions.
According to the legal scholar, the AI Act’s far-reaching, risk-based approach aims to classify AI applications according to their risk potential and to impose strict requirements on potentially particularly dangerous systems. This is not limited to EU companies: it also covers companies based outside the EU whose AI systems are placed on the EU market or whose services are used there. By August 2026, businesses and Member States are tasked with gradually implementing the AI regulation in practice. But the various parts do not yet fit together, the paper concludes: discrepancies, overlaps and ambiguities may hinder smooth implementation and create legal uncertainty.
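To illustrate the risk-based approach, here is a minimal, hypothetical sketch in Python. The four tier names reflect the AI Act’s risk categories; the example use cases and the lookup logic are simplified assumptions for illustration only, not a reproduction of the regulation’s actual classification criteria.

```python
# Illustrative sketch of the AI Act's four risk tiers. The tier names follow
# the regulation; the example use cases and this mapping are hypothetical
# simplifications, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"            # e.g. social scoring by authorities
    HIGH = "strict requirements"           # e.g. credit scoring, diagnostics
    LIMITED = "transparency obligations"   # e.g. chatbots disclosing AI use
    MINIMAL = "no specific obligations"    # e.g. spam filters

# Hypothetical lookup table for a handful of example use cases.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_assessment": RiskTier.HIGH,
    "diagnostic_system": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the (simplified) AI Act risk tier for a known example use case."""
    return EXAMPLE_CLASSIFICATION[use_case]

if __name__ == "__main__":
    for case in ("credit_assessment", "customer_chatbot"):
        print(case, "->", classify(case).name, "/", classify(case).value)
```

The point of the tiered design is that obligations scale with risk: the same provider faces prohibition, strict conformity requirements, mere transparency duties, or nothing at all, depending solely on the tier its application falls into.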
Implementing rules and guidelines as a solution
In particular, Hacker sees a conflict between the risk-assessment obligations of the DSA and those of the AI Act. These primarily affect platforms that integrate general-purpose AI technologies such as large language models. The challenge is to reconcile platform-specific and AI-specific risks. The legal scholar also points out that there are as yet no clear rules for reusing personal data for AI training, which makes it harder to comply with both the GDPR and the AI Act. The civil rights organization Privacy International recently came to the assessment that models such as GPT, Gemini or Claude were “trained without a sufficient legal basis” using personal information and cannot safeguard the rights of data subjects under the GDPR.
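The legal-basis problem can be made concrete with a small sketch. The six legal bases listed below come from Art. 6(1) GDPR; the gating function and its policy are illustrative assumptions about the kind of check a developer might build, not a statement of what the GDPR or the AI Act actually prescribe for training data. The study’s point is precisely that clear rules for this step are missing.

```python
# The six legal bases for processing personal data under Art. 6(1) GDPR.
LEGAL_BASES = {
    "consent",              # Art. 6(1)(a)
    "contract",             # Art. 6(1)(b)
    "legal_obligation",     # Art. 6(1)(c)
    "vital_interests",      # Art. 6(1)(d)
    "public_task",          # Art. 6(1)(e)
    "legitimate_interests", # Art. 6(1)(f)
}

def may_reuse_for_training(record: dict) -> bool:
    """Hypothetical gate: keep only records with a documented legal basis."""
    return record.get("legal_basis") in LEGAL_BASES

dataset = [
    {"id": 1, "legal_basis": "consent"},
    {"id": 2, "legal_basis": None},  # scraped without a documented basis
]
training_set = [r for r in dataset if may_reuse_for_training(r)]
print([r["id"] for r in training_set])  # -> [1]
```

Even such a simple filter begs the open legal question: which of the six bases, if any, validly covers large-scale reuse of personal data for model training remains unsettled, which is exactly the gap the study criticizes.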
Hacker points out that in the financial industry, diverging data security requirements can complicate AI-supported risk analysis. In the automotive industry, integrating driving assistance systems into existing product safety and liability rules poses a dual regulatory challenge. In the healthcare sector, conflicting regulations could slow the spread of AI-based medical applications given already scarce approval capacities. This includes, for example, tools for detecting cancer or drafting doctors’ letters.
In the short term, the author recommends better linking existing rules to avoid duplication and increase efficiency. The AI Act already does this in places, for example with regard to quality management systems in financial institutions. The EU Commission could encourage similar interactions through implementing rules. National supervisory authorities should also issue guidelines on applying the AI regulation in specific sectoral contexts. In the long term, national and European approaches are needed to harmonize the AI regulation with other legal acts and permanently eliminate contradictions. In addition, the frameworks should be reviewed regularly to adequately reflect technological and social developments.
(MKI)