Diritto al Digitale
Diritto al Digitale is the must-listen podcast on innovation law, brought to you by Giulio Coraggio, data and technology lawyer at the global law firm DLA Piper. Each episode explores the cutting-edge legal challenges shaping our digital world—from data privacy and artificial intelligence to the Internet of Things, outsourcing, e-commerce, and intellectual property.
Join us as we illuminate the legal frameworks behind today’s breakthroughs and provide insider insights on how innovation is transforming the future of business and society.
You can contact us using the details available on dlapiper.com
AI Act Changes: Is Europe Rewriting Its AI Regulation Strategy?
The European Union has agreed on major changes to the AI Act to reduce overlap with machinery, product safety, and sector-specific regulation. But is this just simplification — or the beginning of a broader rethink of AI regulation in Europe? In this episode of Diritto al Digitale, Giulio Coraggio, technology and data lawyer at DLA Piper, analyzes the latest AI Act amendments, the delay of high-risk AI obligations, the impact on industrial AI, medical devices, robotics, and what companies must do now to prepare for AI compliance, cybersecurity, and governance challenges.
📌 You can find our contacts 👉 www.dlapiper.com
There's a question that today every company investing in artificial intelligence should start asking itself. What happens when regulation designed to create trust in AI starts becoming a barrier to deploying it? Because that is exactly the debate currently exploding inside Europe. For years, the European Union presented the AI Act as the global benchmark for AI regulation. The first comprehensive AI law in the world, the European model, the idea that Europe could lead not through scale or technological dominance, but through regulation. But now, only months before some of the most important obligations become applicable, Brussels is already revisiting the framework. And this is not a minor technical adjustment. This is a political signal, because what started as an AI regulation discussion is rapidly becoming something much bigger. A debate about industrial competitiveness, innovation, cybersecurity, product safety, and Europe's ability to remain relevant in the global AI race. And the reality is that many companies, especially industrial groups, manufacturers, and operators deploying AI into physical products, started telling European institutions something very simple. This framework is becoming impossible to operationalize. And when that message comes from major European industries, particularly German manufacturing groups, policymakers normally listen. And today we discuss one of the most important developments in European AI regulation since the approval of the EU AI Act itself. The decision by the European Union to amend parts of the AI Act in order to reduce overlaps with sector-specific legislation, especially machinery regulation and product compliance frameworks. Because this story is not only about AI compliance, it's about the future of European regulation itself. Let's start from the beginning. The AI Act was designed around a risk-based approach. 
The higher the risk posed by an AI system to individuals, safety, or fundamental rights, the higher the compliance obligations. On paper, the logic was elegant: minimal-risk AI systems face almost no obligations, limited-risk systems require transparency, high-risk systems face extensive compliance obligations, and prohibited AI practices are banned altogether. The issue is that reality is much messier than theory, because many AI systems do not exist in isolation. They are embedded inside products already heavily regulated under European law. Think about industrial machinery, medical devices, connected vehicles, robotics, manufacturing systems, critical infrastructure, smart products, financial systems using embedded AI. These sectors were already subject to extremely sophisticated regulatory frameworks: product safety legislation, cybersecurity obligations, conformity assessments, technical certifications, sector-specific standards, post-market monitoring, incident reporting requirements. And suddenly the AI Act added another compliance layer on top of all of this. The result? Many companies started realizing they would potentially need to comply simultaneously with the AI Act, the Machinery Regulation, the Cyber Resilience Act, NIS 2, DORA, the Data Act, product safety legislation, the revised Product Liability Directive, and, on top of them, sector-specific certification obligations. And at a certain point, the question stopped being how do we innovate using AI? And it became how do we survive compliance? This is where the political pressure became enormous. Over the last months, negotiations between the European Parliament, the Council, and the Commission became increasingly tense, particularly because Germany strongly pushed to reduce overlaps between the AI Act and industrial machinery legislation. And eventually, Europe reached a compromise. 
Under the latest agreement, many AI-enabled machinery products regulated under sector legislation will avoid direct duplication with some AI obligations. This is extremely important politically, because for the first time, the European Union is implicitly acknowledging something many companies have been saying for years: you cannot indefinitely stack overlapping digital regulations without creating a competitiveness problem. And this matters enormously because the AI Act was never operating in isolation. Over the last years, Europe has created an unprecedented wave of digital regulation: the GDPR, the DSA, the DMA, NIS 2, DORA, the Data Act, the Cyber Resilience Act, the AI Act. Individually, each framework makes sense. But inside companies, these frameworks collide continuously. And this is exactly what Brussels is now starting to confront. Another critical aspect is timing. Originally, many obligations for high-risk AI systems were expected to become applicable from August 2026. But companies increasingly warned that implementation timelines were unrealistic, particularly for industrial systems integrated into products requiring conformity assessments and technical certifications. And the compromise now substantially delays several obligations. Based on the latest agreement, certain high-risk AI obligations may apply only from December 2027, and AI systems embedded into regulated products may be postponed until August 2028. And this delay is not simply administrative. It reflects something deeper. European institutions have realized that many organizations are still struggling with foundational AI governance questions. For example: what actually qualifies as an AI system under the AI Act? How should organizations classify AI tools internally? How do you distinguish between traditional software and regulated AI systems? Who owns AI governance inside the company? Legal, IT, compliance, cybersecurity, risk, procurement, business teams? 
And this is something I see daily when working with clients. Many organizations are still trying to build a basic inventory of AI systems. Some companies do not even know how many AI-enabled tools are already being used internally. And this becomes even more complicated when employees independently adopt generative AI solutions. Because AI governance is no longer only about compliance; it's also about cybersecurity, confidentiality, data protection, procurement risk, intellectual property, and liability exposure. And this leads to another major shift happening in Europe. The discussion around AI regulation is becoming far more pragmatic. For months, the European AI debate focused heavily on principles: ethics, transparency, explainability, accountability, fundamental rights. All extremely important concepts. But industrial reality introduces very different questions. How do you certify adaptive AI systems embedded into machinery? How do you manage continuous software updates? How do you conduct conformity assessments on systems that evolve over time? How do you coordinate AI governance with cybersecurity obligations? How do you ensure compliance when AI functionality depends on third-party foundation models? And perhaps most importantly, who becomes legally responsible when AI systems integrated into products malfunction? The manufacturer, the software developer, the deployer, the importer, the distributor, the provider of the underlying AI model? Because one of the biggest transformations happening in Europe right now is the convergence between AI regulation and liability frameworks. The revised Product Liability Directive now explicitly recognizes software and AI systems as products, meaning AI failures may increasingly generate product liability exposure. At the same time, cybersecurity incidents involving AI systems may trigger obligations under NIS 2 or DORA, for instance. And suddenly, AI governance is no longer only a legal issue. 
It becomes an enterprise risk management issue. And there is another interesting geopolitical element behind all of this. Europe is starting to realize that regulation itself has economic consequences. For years, the European approach to digital policy was largely built around the idea that trust creates competitive advantage. But the global AI race is accelerating incredibly fast. The United States remains driven by private investment and innovation speed. China continues scaling AI development aggressively. Meanwhile, Europe risks becoming trapped in regulatory complexity. And the irony is that large multinational companies often manage compliance better than smaller players. The real pressure falls on startups, medium-sized companies, industrial companies adopting AI, and mid-sized European innovators, because large technology companies can absorb compliance costs. Smaller organizations often cannot. And this is why the AI Act debate is increasingly becoming part of the broader European competitiveness discussion, initiated also after the so-called Draghi report on European competitiveness. The fear is simple. If deploying AI in Europe becomes excessively burdensome, companies may simply innovate elsewhere. But companies should not misunderstand what's happening now. This is not the end of the AI Act. Far from it. The AI Act remains the most ambitious AI regulatory framework globally. And organizations making the mistake of pausing compliance initiatives may face serious problems later. Because what is happening is not deregulation, it's regulatory redistribution. The complexity is shifting. Companies now need to understand when the AI Act applies directly, when sectoral legislation prevails, how overlaps are managed, how AI governance integrates with cybersecurity and product compliance, and how contractual allocation of liability should work across supply chains. And this requires much more mature governance structures. 
AI governance can no longer be handled exclusively by IT teams. Organizations increasingly need multidisciplinary AI governance committees, AI inventories, internal risk classification methodologies, vendor due diligence frameworks, AI procurement controls, monitoring mechanisms, escalation procedures, incident response frameworks, and coordination between legal, cybersecurity, compliance, HR, procurement, and business teams. And this is particularly important because regulators themselves are still interpreting many provisions, meaning that guidance, delegated acts, standards, and enforcement approaches will continue evolving over the coming years. And perhaps this entire story tells us something even bigger about the future of technology regulation in Europe. For years, regulators focused on regulating technologies individually: privacy, platforms, cybersecurity, data sharing, artificial intelligence. But companies do not experience regulation in silos. Inside organizations, these frameworks merge operationally. And Europe is now facing the real challenge: how do you create a coherent digital regulatory ecosystem without paralyzing innovation? Because the future of AI regulation will not depend only on legal principles. It will depend on whether regulation remains operationally sustainable for businesses. And that may become one of the defining legal and economic questions of the next decade. Will Europe succeed in creating an AI governance model that balances trust and competitiveness? Or will economic pressure gradually force a broader simplification of the AI Act? And could strong AI governance eventually become a competitive advantage for European companies rather than simply another compliance burden? Send me an email with your thoughts at giulio.coraggio@dlapiper.com. 
And if you found this episode interesting, subscribe to Diritto al Digitale, activate the notification bell so you don't miss future episodes and leave us a 5-star review on Apple Podcasts or Spotify. I'm Giulio Coraggio, this is the podcast Diritto al Digitale.