From SaaS to AIaaS: Updating Contract Terms in the Age of AI
The Rise of AI-as-a-Service
Artificial intelligence has moved beyond the hype cycle. AI is already embedded in everyday tools, business processes, and enterprise platforms.
Businesses across all sectors are integrating AI capabilities into their offerings, transforming traditional Software-as-a-Service (SaaS) platforms into AI-as-a-Service (AIaaS) solutions. From automated contract review platforms to customer support bots and predictive analytics tools, software providers are deploying machine learning models and large language models (LLMs) to enhance or power their services.
For clients and providers alike, this evolution brings new legal considerations. AI-infused services raise distinct questions around intellectual property ownership, liability, model updates, data usage, and transparency; these issues are rarely addressed in standard SaaS templates.
Why Traditional SaaS Agreements Fall Short
Conventional SaaS agreements were designed for static or incrementally updated software delivered over the cloud. These agreements typically cover user access, uptime guarantees, data protection, and intellectual property terms related to the underlying software platform.
However, AI-driven services involve dynamic elements such as ongoing model training, variable or unpredictable outputs, and reliance on customer data for performance improvements. These elements introduce legal grey areas unless specifically addressed in the agreement.
Revisiting Key Clauses in the AI Context
We highlight below the key clauses that merit careful attention in AI-as-a-Service agreements:
Intellectual Property and Output Ownership
AIaaS arrangements must clarify who owns the outputs generated by the AI system. While the provider typically owns the platform and models, the ownership or licensing of AI-generated content, such as data, reports, recommendations, or designs, is often left unclear. The agreement should specify whether the client obtains full ownership of the output or a licence to use it (and on what terms), and whether any limitations apply to commercial use or redistribution.
Use of Customer Data in Model Training
Where customer data is used to train or fine-tune AI models, the agreement should address transparency and consent. Providers must state whether they use input data or usage patterns to improve the model, whether such data is anonymised or aggregated, and whether the customer may opt out. Failing to clarify this could result in data protection breaches or unexpected claims over training data.
Warranties and Disclaimers for AI Behaviour
AI systems may produce unpredictable and, in some cases, incorrect results. Agreements should reflect this by adjusting the scope of warranties. Providers may limit warranties on output accuracy, reliability, or fitness for purpose. At the same time, clients should be advised of any known limitations or risks in relying on AI-generated content for critical decisions.
Model Updates
Routine software updates are usually covered in SaaS agreements. However, where AI model updates significantly affect how the service operates (for example, changes to logic, accuracy, or regulatory outcomes), clients may require prior notice, a right to continue using a prior version, or a re-validation process. These measures help mitigate disruption and manage risk in regulated environments.
Audit and Transparency Rights
In regulated sectors such as financial services, healthcare, and digital infrastructure, clients may be subject to obligations under EU legislation such as the NIS2 Directive, the Digital Operational Resilience Act (DORA), and the EU Artificial Intelligence Act. These frameworks impose varying degrees of responsibility for oversight, operational resilience, and risk management, including when third-party technologies such as AIaaS are used.
AIaaS agreements should, therefore, include provisions that allow clients to request appropriate technical and operational documentation to fulfil their regulatory obligations. This may involve access to model documentation, descriptions of training data categories, security controls, performance metrics, or summaries of monitoring and testing activities. While these rights must be balanced against the provider’s need to protect confidential information and IP, contractual mechanisms should be in place to facilitate regulatory cooperation without exposing proprietary algorithms or source code.
Compliance with the EU AI Act
Where the AI system qualifies as high-risk under the EU AI Act (Regulation (EU) 2024/1689), AIaaS licensing terms should be aligned with the Act's transparency, documentation, and risk classification requirements. In addition, providers of general-purpose AI models may be subject to obligations concerning technical documentation, summaries of training content, and systemic risk controls where those models are deployed at scale. Moreover, certain AI use cases involving user interaction or synthetic content generation may trigger specific transparency obligations even if the system is not classified as high-risk. Awareness of these varying regulatory touchpoints is critical when drafting or negotiating AIaaS agreements.
Liability and Indemnity in AIaaS Agreements
The legal consequences of relying on AI outputs should be clearly allocated. Providers may seek to limit liability for harm caused by inaccurate or biased outputs, while clients may expect assurances against infringement or misuse of third-party data. Where third-party models or data sources are integrated into the service, liability frameworks should be carefully structured to allocate risk appropriately between the parties.
Best Practices for Businesses Using AIaaS
Businesses should review their existing agreements to assess whether the provisions adequately address AI-specific risks. Identifying the type of AI functionality involved, understanding how outputs are generated and used, and defining ownership, risk, and liability are critical steps. Implementing updated licensing terms tailored to AI services is not only a compliance measure but a strategic step in mitigating future legal uncertainty.
More information
At Aptus Legal, we assist technology companies, platform providers, and enterprise clients in reviewing and updating their software licensing arrangements to reflect the realities of AI deployment. We draft and negotiate software agreements that address ownership of AI-generated content, use of training data, model transparency, and liability structures.
We also support clients navigating regulatory frameworks such as DORA, NIS2, and the EU AI Act, ensuring their AI-related services and supplier agreements are aligned with legal and commercial expectations. Whether you are delivering or procuring an AI-driven product, we provide practical, forward-looking advice that protects your business and enables innovation.
Please contact Aptus Legal at info@aptuslegal.com to learn more.