European product liability law is being quietly but fundamentally rewritten. Software, AI systems and open source components move from the periphery into the legal core of what counts as a “product”, while cyber security and lifecycle management become part of the defect analysis. For management and engineering teams this means that software composition, open source usage and SBOM can no longer be treated as purely technical housekeeping; they are now part of the liability model.
This article outlines the key elements of the new regime, explains how software, AI and open source are treated, and shows why SBOM and the Cyber Resilience Act (CRA) will be central in practice.
1. From analogue products to software and AI
The new Product Liability Directive (EU) 2024/2853 replaces the 1985 framework and must be transposed into national law by 9 December 2026. It retains the core of strict (fault‑independent) liability for defective products, but substantially widens the notions of “product” and “defect” in order to accommodate networked, updatable and learning digital technologies.
Software is now explicitly included in the product definition, irrespective of form or delivery: local executables, downloaded apps, firmware, cloud‑based services and SaaS models are all treated as products. The Directive also brings “digital construction data”, such as CAD files that control 3D printers, and functionally essential “connected digital services” into the liability net by classifying them as components. In addition, it instructs courts to consider whether a product meets the applicable safety requirements of Union and national law, including cyber security requirements, when assessing whether the safety expectations of the public were met.
The German draft Product Liability Act mirrors this structure. It extends the scope of the existing ProdHaftG to software, data and digital components, removes the previous overall liability cap and the deductible for property damage, and broadens the circle of liable actors to include importers, authorised representatives, fulfilment service providers, certain online platforms and entities that substantially modify products. From an engineering perspective this means that “pure” software products, complex digital stacks and mixed hardware–software systems are all treated under a single, technologically open liability framework.
2. AI systems, connected services and the role of control
AI systems are treated as a specific subset of software products; the recitals explicitly refer to AI systems when explaining why software must be covered by the new rules. The Directive does not attempt to redefine AI – that is the function of the AI Act – but it ties legal expectations to characteristics that software engineers recognise immediately.
First, the learning capability of a product becomes legally relevant. When assessing whether a product is defective, courts must take into account the effects of its ability to learn or acquire new functions after being placed on the market. This is not a liability escape clause for “unpredictable” machine learning; in fact it works in the opposite direction by making it clear that learning behaviour has to be engineered and monitored in a way that prevents dangerous emergent behaviour.
Second, the interplay with other products and components is expressly made part of the safety assessment. Given the prevalence of microservice architectures, APIs and layered stacks, the Directive focuses on “reasonably foreseeable” combinations and interactions rather than on a static view of isolated devices. This mirrors the technical reality: failures often arise at integration points rather than inside a single, neatly bounded component.
Third, cyber security is elevated from a background concern to an explicit element of product safety. If mandatory cyber security requirements, for example under the CRA or sectoral regulation, are not met, the product will typically be regarded as defective in the sense of product liability law. Conversely, compliance with those requirements does not automatically exempt manufacturers from liability, but it is an important reference point for what counts as “state of the art”.
Connected digital services illustrate these ideas particularly well. The Directive treats services that are so integrated or connected that the product cannot perform one or more of its functions without them as components, provided that the manufacturer has them under its “control” by integrating or approving them. Typical examples include cloud‑provided navigation data, health monitoring services tied to wearables, or voice assistants that control household devices. If such a service supplies erroneous data or flawed logic that leads to damage, both the provider of the service and the end‑product manufacturer can be strictly liable, even if the defect sits in the service alone. For system architects, this simply formalises the intuitive notion that responsibility follows control over the architecture and the choice of dependencies.
3. Open source: protected communities, responsible integrators
The new regime takes a nuanced approach to open source software. On the one hand, the Directive excludes “free and open‑source software that is developed or supplied outside the course of a commercial activity” from its scope. The German draft adopts this approach and clarifies in the explanatory memorandum that non‑profit foundations, academic institutions and community projects that make code available without pursuing commercial purposes are not considered to place products on the market in the sense of product liability. This carve‑out is intended to avoid chilling effects on volunteer‑driven open source ecosystems.
On the other hand, the protection is strictly limited to that non‑commercial layer. Once a business integrates open source into its own product as part of a commercial activity – whether by selling licences, bundling software with hardware, offering a SaaS solution or monetising user data – it becomes the manufacturer of the overall product and bears strict liability for defects, including those that originate in OSS components. There is no legal route to shift strict liability back onto non‑commercial OSS authors, who by definition fall outside the product liability system. Commercial distributors or maintainers of OSS packages may, of course, be liable under contract or national tort law, but that is a separate question.

For engineering management, this means that open source components must be treated as fully fledged, liability‑relevant parts of the product stack. Selection, security evaluation, licence compliance, integration testing and timely patching are not merely good practice; they form part of the due care expected of a manufacturer. From a developer’s perspective, each “import” or container pull in a build pipeline is also an act of risk assumption on behalf of the organisation.
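The idea that every dependency is an act of risk assumption can be made operational with a simple gate in the build pipeline. The sketch below is purely illustrative (the allow-list, names and function are hypothetical, not anything mandated by the Directive): before a third-party component enters the build, it is compared against an allow-list curated by the security or compliance function.

```python
# Illustrative dependency gate (names and allow-list hypothetical):
# reject build inputs that have not been vetted and approved.
approved = {"requests", "numpy", "cryptography"}

def unvetted(declared: list[str]) -> list[str]:
    """Return declared dependencies that are not on the approved list."""
    return [dep for dep in declared if dep not in approved]

# A CI job could fail the build if this list is non-empty.
print(unvetted(["requests", "leftpad-ng"]))
# -> ['leftpad-ng']
```

In practice such a check would sit alongside licence scanning and vulnerability scanning; the point is that the decision to take on a dependency becomes an explicit, logged step rather than an implicit side effect of an import statement.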
4. SBOM and the Cyber Resilience Act: evidence of diligence rather than explicit requirement
Neither the Directive nor the German draft mentions “SBOM” explicitly; there is no direct statutory obligation in product liability law to generate a Software Bill of Materials. The SBOM enters the legal picture through cyber security and product safety law, above all via the Cyber Resilience Act and accompanying technical standards such as the BSI’s TR‑03183.
The CRA requires manufacturers of products with digital elements to implement a vulnerability management process, to track components and known vulnerabilities throughout the lifecycle and to provide appropriate documentation. In technical terms this maps directly onto what SBOM formats such as SPDX or CycloneDX deliver: a machine‑readable inventory of first‑party and third‑party software components, including versions and relationships, which can be used to assess exposure to CVEs and to coordinate patches.
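To make this concrete, the following minimal sketch shows how a machine-readable inventory of that kind can be queried. The SBOM fragment uses CycloneDX-style JSON field names (`components`, `name`, `version`, `purl`); the advisory data and component names are invented for illustration, and a real pipeline would match against CVE feeds rather than a hard-coded dictionary.

```python
import json

# Minimal CycloneDX-style SBOM fragment (illustrative, not a complete
# document): a list of components with name, version and package URL.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"name": "libexample", "version": "2.1.0",
     "purl": "pkg:generic/libexample@2.1.0"},
    {"name": "netparser", "version": "0.9.4",
     "purl": "pkg:generic/netparser@0.9.4"}
  ]
}
"""

# Hypothetical advisory data: component name -> affected versions.
advisory = {"netparser": {"0.9.3", "0.9.4"}}

def affected_components(sbom: dict, advisory: dict) -> list[str]:
    """Return package URLs of SBOM components matching a known advisory."""
    hits = []
    for comp in sbom.get("components", []):
        if comp["version"] in advisory.get(comp["name"], set()):
            hits.append(comp["purl"])
    return hits

print(affected_components(json.loads(sbom_json), advisory))
# -> ['pkg:generic/netparser@0.9.4']
```

The same lookup, run across the SBOMs of all shipped product versions, is what turns a newly disclosed vulnerability from an open research question into a bounded query.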
The bridge to product liability is built through the defect concept. The Directive explicitly instructs courts to consider whether a product complies with the applicable safety and cyber security requirements of Union law when assessing whether it provides the safety that the public is entitled to expect. If CRA‑driven obligations to maintain an SBOM and to manage vulnerabilities are ignored, this will weigh heavily in favour of finding a defect. Conversely, a documented SBOM‑based process will not guarantee an absence of liability, but it will be a central element in demonstrating that the manufacturer acted in line with the state of the art.
In practice, an SBOM becomes both an operational tool and a piece of legal evidence. For development teams it is the only scalable way to answer, with reasonable speed and confidence, whether a newly disclosed vulnerability affects specific products or deployments. For lawyers it is part of the dossier needed to show that the company had adequate visibility into its software supply chain and reacted appropriately to risks.
5. Procedure, proof and the importance of documentation
The new regime also modifies the procedural environment in which product liability claims are litigated. To address the information asymmetry between injured parties and complex manufacturers, the Directive and the German draft introduce powers for courts to order disclosure of relevant evidence and a set of rebuttable presumptions.
Courts will be able to order manufacturers to disclose documentation relating to the product’s design, production and safety features, subject to proportionality and protection of trade secrets. If a manufacturer refuses without justification, courts may presume that the product is defective. In addition, the law provides for presumptions that a product is defective if it clearly fails to comply with mandatory safety requirements or exhibits obvious malfunctions in the course of normal or reasonably foreseeable use, and for presumptions on causation where a defect and a typical type of damage are established.
A particularly relevant innovation for digital and AI‑heavy products is the treatment of technical and scientific complexity. Where the complexity of a product or service makes it excessively difficult for a claimant to prove defect or causation, and where the claimant can nonetheless show a sufficiently high probability that the product was defective or contributed to the damage, courts may ease the burden of proof and place more of it on the manufacturer. This is clearly aimed at black‑box systems and highly layered software stacks, where external parties cannot realistically reconstruct internal behaviour without access to documentation and logs.

All of this reinforces a simple point that both engineers and lawyers understand, albeit in different vocabularies: documentation is not bureaucracy; it is part of the safety case. Architecture diagrams, component inventories, test strategies, security concepts, update and patch policies and incident response playbooks are not merely internal housekeeping. Under the new law they form the factual basis for arguing that a product met the legitimate safety expectations at the relevant point in time.
6. Aligning legal risk with engineering reality
The emerging European framework does not try to turn software engineers into jurists or to legislate specific technologies such as particular SBOM formats or CI/CD tools. Instead, it formulates a set of expectations that are remarkably close to good engineering practice: visibility into the stack, control over dependencies, security and resilience throughout the lifecycle, and a clear allocation of responsibility along the supply chain.
For management, the strategic task is to ensure that governance, contracts and insurance arrangements reflect this reality and that product development, security and legal functions work against a shared model of risk. For developers and architects, the practical challenge is to embed these expectations into everyday workflows: to treat SBOM generation as part of the build, to design for maintainable update paths, to choose and monitor open source components with the same care as internal modules, and to document safety‑relevant decisions in a way that remains intelligible years later.
The law is not asking for perfection. It is asking for systems that do not become dangerous in foreseeable ways, for processes that can be explained and defended, and for a level of transparency that matches the societal reliance on software and AI. For someone who writes code and reads directives, the bottom line is straightforward: if you design and operate your systems so that you would be comfortable explaining them line by line to a critical peer review, you are also much closer to where you need to be under the new product liability regime.
The new EU product liability landscape for software, AI and open source - 30 December 2025
