The integration of Artificial Intelligence (AI) is one of the driving forces of digital transformation, opening new horizons for businesses. However, alongside technological and economic considerations, legal aspects play a central role. This article highlights the most important contractual challenges and provides practical insights for software developers and management.
The Diversity of Contract Types
AI services can be provided as a service (AIaaS), as custom software, or in hybrid models. The contractual classification significantly affects the rights and obligations of the parties involved. AI contracts are often classified as either lease agreements or contracts for work, with SaaS solutions such as AIaaS typically structured as continuing obligations.
A special case is the training of AI models, which often takes place on external platforms such as AWS or Azure. This raises the question of whether these services should be regarded as purely technical services or as contracts for work with copyright relevance. In particular, responsibility for the training data and its quality is a key issue. Ultimately, I see a service contract here, in which (only) the proper execution of the (ongoing) training is owed.
Data as a Contractual Subject
Data forms the foundation of any AI system. The EU Data Act establishes a new framework for data access and usage, particularly through the introduction of B2B access rights. Businesses must establish clear agreements on data origin, usage, and ownership.
Specific challenges arise in the B2B context:
- Quality and Representativeness of Data: Insufficient or biased data can lead to faulty models.
- Liability and Security: Contracts should define who is liable if data is misused or compromised.
Warranty and Liability
AI systems are inherently dynamic and non-deterministic. This raises unique questions in the areas of defect rights and product liability. While traditional software is often assessed based on functionality, AI services must be evaluated based on their results. Contracts should therefore:
- Clearly define the condition of the system at the time of risk transfer.
- Include provisions for rectification and improvement tailored to the dynamic nature of AI.
Determining defects in AI systems is a particular challenge due to their dynamic and learning nature. A central aspect is the agreement on the target condition. This should explicitly define what performance the AI system must deliver from the outset and within what framework it may be optimized through training.
A deviation from the agreed condition, such as unforeseen malfunctions or inadequate learning behavior, may constitute a defect. Contracts should also define the expected learning capabilities, such as how quickly and under what circumstances the AI should improve its performance.
Another issue concerns the “initial functionality.” This relates to expectations for the system immediately after delivery. For instance, if a voice control system is sold, it should be functional without further training unless otherwise agreed.
Rights concerning defects, such as rectification, price reduction, or withdrawal, remain essential tools for addressing deviations. However, it is crucial to recognize that with AI systems, the line between a defect and the natural limitations of the technology is often blurred. Clear contractual provisions are thus necessary to manage expectations on both sides.
Copyright Challenges
The issue of protecting AI-generated content remains contentious. Under current law, only works bearing a human creative imprint enjoy copyright protection. Businesses should therefore contractually establish who holds the rights to AI outputs and how they may be used.
AIaaS: Specific Challenges
AI-as-a-Service (AIaaS) offers flexible usage opportunities but also poses unique demands on contract design. Key aspects include:
- Service Descriptions: Clear definitions of AI functions and performance expectations.
- Training Data: Clarification of responsibility for its provision and quality.
- Compliance: Ensuring that providers meet regulatory requirements such as the AI Act.
The Impact of the AI Act
The EU AI Act significantly impacts contract design for AI systems, especially concerning high-risk applications. The regulation imposes strict requirements on providers and deployers to ensure the safety and transparency of AI systems. Key obligations for training and deployment include:
- Risk Management: AI providers must implement a comprehensive risk management system covering the entire lifecycle of an AI system. This system should not only minimize potential risks but also be continuously monitored for necessary adjustments.
- Data and Data Governance: Training data must meet strict standards. It must be representative, valid, and free from bias. Providers are also required to meticulously document their data processing procedures to ensure that training data serves its intended purpose.
- Transparency and Documentation: The regulation demands detailed technical documentation. Providers must explain how the AI functions and how decisions are made. This information should also be accessible to deployers to facilitate compliance.
- High-Risk Systems: High-risk AI systems must undergo a conformity assessment, in certain cases involving an independent third party, before they may be placed on the market. Providers are also obliged to establish continuous monitoring systems and to report incidents or malfunctions.
- Contract Design: Companies developing or using AI systems should structure contracts to clearly define responsibilities. This includes explicit provisions regarding data sources, AI Act compliance, and processes for collaboration between providers and deployers.
The AI Act adds a new dimension to contract design by directly integrating specific regulatory requirements into the legal relationships between the parties. Companies should ensure that their contracts address both technical and regulatory demands.
Outlook
The legal challenges at the intersection of AI and contract law are as diverse as the technology itself. Companies should involve strategic and legal expertise at an early stage in order to minimize legal risks and make the best possible use of the opportunities offered by AI. Clear contracts tailored to the specific features of AI are a central key to success – so far, so obvious.
In my view, however, there is still too much confusion in this area. Legal essays on the subject treat AI as if it were classic software development. In my practice, at least, the clearly predominant scenario is one in which cloud-based solutions are trained with the client's own data – only to require continuous recalibration afterwards. Such contracts are better understood as services within continuing obligations, often coupled with fairly simple custom developments to connect the AI to existing systems. In addition, in my opinion, the developers of AI systems have a duty of their own to inform their clients about the requirements of the AI Act with regard to transparency and high-risk systems. Under no circumstances can they hide behind the argument that this is solely the client's responsibility!