When Does Artificial Intelligence Fall Under Medical Device Regulation?
A Practical Comparison of EU, UK, and US Frameworks
Artificial intelligence is now firmly embedded in modern healthcare. From image analysis software supporting radiologists to algorithms that flag patient deterioration in real time, AI systems are increasingly influencing clinical decisions and operational workflows.
As these technologies move from experimental use into regulated clinical environments, a recurring question emerges across development, regulatory, and quality teams:
When does AI become subject to medical device regulation?
The answer depends largely on geography. The European Union, United Kingdom, and United States all regulate AI-enabled medical devices, but they do so through different legal mechanisms, approval pathways, and oversight philosophies. For organisations developing or maintaining AI-based medical products, understanding these differences is critical to avoiding delays, rework, or compliance gaps.
The Expanding Role of AI in Regulated Medical Technology
The number of AI-enabled medical devices on the global market has grown steadily over the past decade. By 2025, regulatory authorities - particularly in the United States - had authorised well over a thousand AI-based medical products, with the majority entering the market in recent years.
Medical imaging remains the most common application area, followed by cardiovascular diagnostics, pathology support tools, neurology, and clinical workflow optimisation software. These areas benefit from structured datasets and repeatable clinical patterns, making them well suited to machine learning approaches.
Regulation, however, has evolved more cautiously. Rather than introducing entirely new approval systems, regulators have generally adapted existing medical device frameworks and supplemented them with guidance, pilots, and - in some regions - additional AI-specific legislation.
What Triggers Medical Device Regulation for AI?
Across jurisdictions, regulation is not triggered by the use of artificial intelligence itself, but by intended medical purpose.
AI software is regulated as a medical device when the manufacturer intends it to be used for diagnosing, preventing, monitoring, predicting, treating, or alleviating disease or injury. This applies whether the software operates independently as Software as a Medical Device (SaMD) or functions as part of a broader system (Software in a Medical Device, SiMD).
The grey area often lies in clinical decision support. Software that organises, visualises, or retrieves medical information without directing clinical action may fall outside device regulation. By contrast, software that prioritises findings, generates diagnostic outputs, or influences treatment decisions is typically regulated.
Regulators assess not only what the software does, but how its outputs are framed, how much autonomy it appears to have, and how easily clinicians can challenge or verify its recommendations.
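The intended-purpose test described above can be illustrated as a simplified triage sketch. All names and categories below are hypothetical illustrations of the qualification logic, not an official algorithm used by any regulator:

```python
from dataclasses import dataclass

@dataclass
class SoftwareProfile:
    """Hypothetical description of a health software product."""
    medical_purpose: bool          # intended to diagnose/prevent/monitor/predict/treat/alleviate
    directs_clinical_action: bool  # prioritises findings or influences treatment decisions
    part_of_device: bool           # embedded in a broader device (SiMD) vs standalone

def regulatory_status(p: SoftwareProfile) -> str:
    """Very rough triage mirroring the qualification logic in the text."""
    if not p.medical_purpose:
        return "likely outside medical device regulation"
    if not p.directs_clinical_action:
        # e.g. software that only organises, visualises, or retrieves information
        return "grey area: may be non-device clinical information software"
    return ("SiMD (software in a medical device)" if p.part_of_device
            else "SaMD (software as a medical device)")

# e.g. a standalone diagnostic-imaging algorithm
print(regulatory_status(SoftwareProfile(True, True, False)))
# prints: SaMD (software as a medical device)
```

In practice the assessment is far more nuanced, but the sketch captures the core point: the trigger is intended medical purpose and influence on clinical action, not the presence of AI itself.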
European Union: MDR Compliance Plus AI-Specific Obligations
Medical Device Regulation (MDR)
Under the EU Medical Device Regulation, applicable since 2021, AI-enabled software is regulated within the same framework as other medical devices. Risk classification is determined by intended purpose and potential clinical impact, with many diagnostic AI systems classified as Class IIa or IIb, requiring assessment by a Notified Body.
Manufacturers must demonstrate clinical performance, safety, and risk control through structured technical documentation, clinical evaluation, quality management systems, and post-market surveillance. While MDR does not explicitly address AI, it places strong emphasis on traceability, validation, and lifecycle control - areas where AI systems introduce additional complexity.
The EU AI Act
The EU AI Act, adopted in 2024, introduces a horizontal regulatory layer for AI systems across industries. Most AI-enabled medical devices fall into the high-risk AI category under this regulation, as they are either medical devices themselves or safety components regulated under MDR or IVDR.
High-risk classification introduces obligations related to:
- Data governance and dataset quality
- Transparency and explainability
- Human oversight
- System robustness and cybersecurity
- Ongoing monitoring of AI performance
For medical device manufacturers, this means that AI governance now extends beyond traditional device compliance into broader algorithmic accountability.
Timing Considerations
Although the AI Act entered into force in 2024, its requirements are phased. For AI systems already regulated as medical devices, full compliance is expected from August 2027, allowing time for standards and guidance to mature.
United States: Regulating AI Through Existing Device Pathways
The U.S. Food and Drug Administration has taken a pragmatic approach, regulating AI-enabled medical devices through established medical device pathways rather than creating a standalone AI regime.
Classification and Market Entry
AI-enabled devices are classified as Class I, II, or III based on risk and are reviewed through standard pathways such as 510(k), De Novo, or Premarket Approval (PMA).
Most AI devices authorised to date have entered the market via the 510(k) route, relying on substantial equivalence to predicate devices. Novel AI applications without suitable predicates typically follow the De Novo pathway, while only a limited number require PMA.
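The pathway logic summarised above can be sketched as a minimal decision helper. This is an illustrative simplification of the summary in the text, not FDA decision criteria, which involve many additional factors:

```python
def us_market_pathway(risk_class: str, has_predicate: bool) -> str:
    """Simplified sketch of FDA market-entry pathway selection
    for an AI-enabled device, mirroring only the text above."""
    if risk_class == "III":
        return "PMA"            # highest-risk devices require Premarket Approval
    # Class I/II: substantial equivalence to a predicate enables 510(k);
    # novel devices without a suitable predicate follow De Novo
    return "510(k)" if has_predicate else "De Novo"
```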
Managing AI That Evolves Over Time
A key challenge with AI is that performance may change as models are retrained or updated. To address this, the FDA introduced the Predetermined Change Control Plan (PCCP).
A PCCP allows manufacturers to define anticipated post-market changes, the methods used to manage those changes, and the criteria for ensuring continued safety and effectiveness. When authorised, changes within the approved scope can be implemented without new submissions.
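The idea of a pre-authorised change envelope can be illustrated with a small sketch. Field names and thresholds here are illustrative assumptions, not FDA terminology or actual PCCP content:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeEnvelope:
    """Hypothetical scope of changes pre-authorised in a PCCP."""
    allowed_change_types: frozenset   # e.g. {"retrain_same_architecture"}
    min_sensitivity: float            # performance floor agreed in the plan
    min_specificity: float

@dataclass(frozen=True)
class ProposedUpdate:
    change_type: str
    sensitivity: float   # measured on the pre-specified validation protocol
    specificity: float

def within_pccp(update: ProposedUpdate, env: ChangeEnvelope) -> bool:
    """True if the update stays inside the approved scope and so
    (under this simplified model) needs no new submission."""
    return (update.change_type in env.allowed_change_types
            and update.sensitivity >= env.min_sensitivity
            and update.specificity >= env.min_specificity)
```

An update that changes the model architecture, or that fails the agreed performance criteria, would fall outside the envelope and require a new regulatory submission.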
FDA Guidance and Lifecycle Oversight
Recent FDA guidance reinforces a Total Product Life Cycle approach to AI-enabled medical devices, emphasising development controls, validation, real-world performance monitoring, and post-market surveillance. Transparency, bias assessment, and performance tracking are recurring themes.
United Kingdom: A Flexible, Globally Aligned Model
The UK regulates AI-enabled medical devices under the UK Medical Devices Regulations 2002, with oversight by the Medicines and Healthcare products Regulatory Agency (MHRA).
Software and AI as a Medical Device Change Programme
The MHRA’s Software and AI as a Medical Device Change Programme addresses regulatory gaps related to software qualification, classification, and post-market change management. The programme focuses on explainability, adaptivity, and lifecycle oversight rather than introducing separate AI legislation.
International Reliance
In 2025, the MHRA announced expanded use of international reliance pathways, allowing devices already authorised by trusted regulators - such as the FDA or Health Canada - to access the UK market more efficiently. This approach applies to Software as a Medical Device, including AI-based systems.
The AI Airlock
The MHRA’s AI Airlock pilot created a regulatory sandbox where developers and regulators could explore how AI medical devices behave in real-world scenarios. Lessons from the pilot continue to inform UK regulatory policy and guidance.
What These Differences Mean in Practice
While all three regions regulate AI-enabled medical devices based on risk, their approaches differ:
- The EU combines medical device law with explicit AI governance obligations
- The US adapts existing frameworks through guidance and lifecycle controls
- The UK prioritises flexibility, international alignment, and regulatory experimentation
For manufacturers, these differences influence documentation strategy, development timelines, and decisions about where to launch first.
Key Takeaways for Manufacturers and Regulatory Teams
Several themes apply across jurisdictions:
- Clear documentation of data sources, model design, and validation is essential
- AI systems require structured plans for change and ongoing monitoring
- Regulatory requirements are evolving and must be actively tracked
Teams that integrate regulatory considerations early in AI development are better positioned to scale globally without repeated rework.
Looking Ahead
AI-enabled medical devices are no longer on the regulatory horizon; they are already here. Oversight frameworks in the EU, UK, and US continue to evolve as regulators gain experience with real-world AI deployment.
Success in this environment depends not only on technical performance, but on governance, transparency, and lifecycle discipline. For organisations developing AI-based medical technology, regulation is increasingly part of responsible product design rather than an afterthought.
Learn More with Keynotive
For professionals seeking practical, applied understanding of how these regulatory frameworks work in real projects, Keynotive's Navigating AI-Enabled Medical Devices Masterclass provides focused, practitioner-led training.
Led by Leon Doorn, CEO and Co-Founder of MedQAIR, the programme examines regulatory strategy, technical documentation, AI-specific risk management, and post-market governance across the EU, UK, and US.
Dates: April 16th - 17th, 2026
Format: Virtual masterclass, two half-day sessions
