
AI and outsourcing: What’s the future for relationships and contracts? (Part two)

In the second part of this two-part series, two technology lawyers offer guidance on what the integration of artificial intelligence in IT outsourcing and business process outsourcing will mean for customers and providers in outsourcing relationships and contracts

What follows is based on our combined experience of around 52 years of legal advice to the IT and outsourcing industries, and how we’ve seen outsourcing relationships and agreements develop to accommodate waves of new IT infrastructure, technologies, systems, applications and business processes.

We look broadly at the key areas of focus for customers within the outsourcing relationship and how contracts need to evolve. Providers will, of course, have to shape their own contracts accordingly (where they are not working on customer terms) or, where they are, their responses to customer requests for proposals (RFPs), their negotiating positions and the contractual terms they agree.

Pre-contract legal preparation

Ensure you understand the actual and potential impacts of artificial intelligence (AI) integration on your current licensed estate of applications and platforms. For example, and just from a legal perspective, will your provider’s bots or AI virtual users need additional licences, assuming your software and platform licences allow for such use at all?

Having surveyed the provider market in the diligence and RFP phases and identified the providers that seem best able to meet your solution needs, make sure the chosen provider’s responses are incorporated into the contract.

Relationship-based, and relational, contracts

To realise the current and future benefits of AI integration, focus on the through-life strategic, contractual and commercial relationship with the provider. That includes high-level strategic and tactical goals, translated into contractual obligations on both parties and based on experience-driven measurement and outcomes.

New governance approaches and mechanisms

Encourage AI innovation and benefits through formal and relational governance, including specific AI-related reporting, assessment and joint action. Choose and provide for a robust, but flexible, governance model ahead of trying to draft everything you need in formal legal language.

Contractual success factors and measurement

Devise and apply AI-related and experience-driven service levels, combined with metrics-based service levels for the more “traditional” elements of the outsourced services.

Create and apply output- and outcomes-based specifications.

Decide where to position and formalise the benefits of AI integration – higher up or lower down the service value chain.

Pricing

Apply output- and business outcomes-based pricing.

Consider gain share and pain share mechanisms, with balanced levels of risk for both parties.
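
As a purely illustrative sketch (the baseline, share percentage and cap below are hypothetical, not drawn from any particular contract), a simple gain share and pain share mechanism might split cost variances against an agreed baseline, with the provider’s exposure capped in both directions so that risk stays balanced:

# Hypothetical gain share / pain share calculation; all figures are illustrative
# and would be defined in the contract, not taken from this article.
def gain_pain_share(baseline_cost, actual_cost, provider_share=0.5, cap=100_000):
    # Positive variance = saving against the baseline, negative = overrun
    variance = baseline_cost - actual_cost
    # The provider receives its share of any saving (gain share) and bears its
    # share of any overrun (pain share), capped in both directions.
    provider_amount = variance * provider_share
    return max(-cap, min(cap, provider_amount))

print(gain_pain_share(1_000_000, 900_000))    # 50000.0 payable to the provider
print(gain_pain_share(1_000_000, 1_150_000))  # -75000.0 borne by the provider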

Agree project-based pricing for development, integration and implementation. Include cost reduction commitments from the provider.

Transition away from full-time equivalent (FTE) pricing and other input-based pricing mechanisms as and when they are no longer needed.

New contractual services and obligations

To empower the provider to take responsibility for, and to transform, whole processes, specify end-to-end processes to be handed to the provider where process elements might otherwise have been handed off to customer personnel.

Craft new outsourced duties for provider AI oversight, reporting and remediation, with cost allocated as between you.

Consider independent third-party audits of AI development, integration and outputs, with the provider required contractually to remediate audit recommendations.

Cyber, information security and confidentiality

Try to quantify the added impact of AI-related cyber and information security risks, as well as the potential leakage of your sensitive corporate information through access to training datasets and data outputs. Upgrade your information security terms accordingly.

If appropriate, prevent or restrict the provider’s use of your corporate data as inputs and outputs, including derivations of that data or inferences that include it.

Innovative approaches to benchmarking

In line with a relational approach, revamp your benchmarking terms to measure progress and deployment of AI against the market, and agree the consequences where deliverables don’t match specified (and agreed) market comparators. Avoid “traditional” contractual remedies for adverse benchmarking findings – though there should be some level of enforceable inducement on the provider to match agreed market comparators.

More dynamic change management

For new outsourcing arrangements, consider flexible and priced change management provisions to facilitate AI deployment and to bring in future changes in AI technologies and processes, without the need to go through a formal change in every case.

Compliance with law and regulation

Existing “shadow AI laws” will be found by courts and regulators to apply to various AI systems and processes, and current laws like the GDPR and UK GDPR will be applied to regulate them. New, AI-specific rules and binding guidance will be introduced by sector regulators (for example, in the financial services and critical national infrastructure markets), and new regulations like the EU’s AI Act will be adopted and developed. So, consider how, and to what extent, you should allocate as between you and the provider the responsibility for legal compliance and its cost.

Require your provider to be transparent about deploying AI that has not already been contracted for, to be clear with you about the impact of that deployment, and to obtain your formal approval before doing so.

Supply chain management

Ensure you have visibility of, and operational and contractual management over, your provider’s third-party supply chain – who in the supply chain is using AI in support of material or potentially material outsourced services, and what AI are they using?

Intellectual property ownership and use

You will start by allocating and agreeing between you and the provider your respective ownership of intellectual property rights (IPR) in background technologies, and particularly data provided by each of you – for example, datasets used to train generative AI.

Beyond that, you will either want to own (if workable given the structure of the AI systems concerned, or as a matter of business sense), or at least have full use of, IPR in the AI systems and processes developed, integrated and deployed.

You may, if feasible, legitimately claim ownership of elements of the AI and datasets used, both as inputs and outputs, if there are regulatory requirements to do so, or if there is a demonstrable strategic need to do so, for example, because of the sensitivity, commercial value or market-differentiating nature of the AI and data.

Because of the risk of third-party IPR infringement claims, for example as in the current litigation that Getty Images has brought against Stability AI in the US and UK, and the New York Times lawsuit against OpenAI, you should allocate liability for such claims, backed in the usual way by indemnities. We expect that, in many cases, liability will be mutual.

Planning for exit

Plan how to avoid the supplier trap. On termination of the outsourcing, you may want or need to continue using the AI-integrated services, at least for a period. Alternatively, you will want to know that you can have continuity through alternative service provision, either using the AI as part of outsourced services, or receiving similar services from the alternative provider.

Much will depend, of course, on the kind of AI systems, processes and datasets involved in the original outsourcing and the technical, operational, and commercial feasibility and complexities of securing that continuity. Key will be continuity of data, including that generated during the original outsourcing.

Liability allocation

Integrating AI in outsourced services will obviously expose both the customer and the provider to new risks and liabilities. These will include greater scope for cyber and information security breaches, third-party IPR infringements through software and data breaches, third-party and regulatory claims where AI has infringed rights or has been found non-compliant with regulation, and the costs of remediating defective AI systems and processes.

You should consider allowing the provider relief from liability, including reimbursement where you are responsible for defects inherent in existing processes and data or for causing or contributing to the situation (as seen currently in “relief events” or similar in outsourcing contracts).

Ultimately, though, the allocation and extent of liability should be proportionate, and not act as a deterrent to the integration and deployment of AI in the outsourcing.

These are some of the key areas that customers and providers will need to consider, negotiate and document. There will undoubtedly be more, or they may be handled differently, as AI, and specifically AI in outsourcing, develops.


Also read part one: AI and outsourcing – what’s the future?


Simon Bollans is partner and head of Stephenson Harwood’s technology and outsourcing practices. He focuses on IT transactions, outsourcing and emerging technologies. He has significant experience advising on complex IT and managed services arrangements, BPO/IT outsourcing, and systems integration and digital transformation projects. Simon also advises on new disruptive technologies, including AI and machine learning.

Mark Lewis is a senior consultant in Stephenson Harwood’s technology and outsourcing practices. For over 35 years, he has specialised in transactions for, and advice on, the acquisition and use of IT and business process products, systems and services, including all forms of outsourcing and cloud computing transactions. He is a visiting professor in practice at the London School of Economics Law School, where he lectures on AI and machine learning, cloud computing and cyber security.
