AI and Technology Contracts: At A Glance

Understanding the evolving legal landscape surrounding AI is crucial. In this article, we explore how AI is shaping technology contracts, the risks involved, and strategies to ensure compliance and mitigate liability.

Key Considerations

1. Data Rights and Ownership

When it comes to technology contracts, customers will typically own their data, and will typically wish to protect their data rights. However, AI providers will often need to use this data to provide the products and services their customers need. Companies should, therefore, be prepared to negotiate data use provisions that allow them to use customer data to provide the services and enhance their AI product, while taking into consideration: 

  • Their ability to aggregate a specific customer’s data with, or segregate it from (where required), other customer data; 

  • Data anonymization provisions (and practices) to prevent customer data from being linked to a specific customer; and 

  • Data protection and security standards to ensure the confidentiality of customer data, including training data and personal information.

2. Data Privacy 

Companies that use AI tools to process personal information will need to comply with all applicable data privacy laws. Generally, companies will outline their data privacy/data protection practices in a data protection agreement (“DPA”) or similar document. A well-structured DPA is critical for managing liability, maintaining regulatory compliance, and setting expectations between parties. When preparing or negotiating a data protection agreement, companies should consider the following:

  • Scope and Applicability of the DPA. Companies should:

    • Ensure that the DPA accurately reflects the company’s current data handling practices.

    • Consider the operational burden of negotiating custom DPAs for individual clients; standardization may be more practical.

    • Where appropriate, limit the scope of the DPA to personal information only. Some customers may seek to extend privacy protections to all confidential information, which should be evaluated case by case.

    • Avoid accepting responsibility for the accuracy or quality of customer-provided data. That responsibility generally lies with the customer.

  • Consent and Breach Preparedness. Companies should:

    • Confirm that customers have fulfilled their obligations under applicable privacy laws, including obtaining valid consents for the use of personal data.

    • Review obligations related to data security incidents and ensure the company has the internal processes and controls to meet any required response timelines.

  • Audit and Vendor Oversight. Companies should:

    • Carefully consider the scope of customer audit rights, particularly in the context of cloud-based or distributed services where on-site audits may not be appropriate.

    • Routinely review and update DPA templates to reflect regulatory changes in all relevant jurisdictions.

    • Evaluate service provider DPAs to ensure you fully understand how third-party vendors handle customer data and that their data protection practices align with your compliance expectations.

3. Regulatory Compliance

The regulatory environment governing AI is evolving quickly. The EU has adopted a comprehensive AI regulation, the AI Act, which classifies AI applications by risk level and assigns obligations to AI providers and their customers. The US does not have a single comprehensive AI law, but states including Colorado, Virginia, and Texas have adopted laws regulating AI. 

Similarly, many data privacy laws regulate AI when it is used to process personal information. Regulated activities include: (i) training AI models with personal data; (ii) using AI for automated decision-making; and (iii) deploying AI systems that interact with individuals. When using AI tools to process the personal information of their customers, companies should consider the following:

  • Companies’ privacy policies and similar documents should be transparent about how the company uses AI to process personal information. This includes informing customers about automated decision-making and providing explanations for the outcomes.

  • Data privacy laws will typically apply if AI models are trained on personal information, even if that information is anonymized after the initial training. 

  • Automated decision-making that has a significant impact on the rights or outcomes for an individual may be prohibited under certain circumstances or subject to additional requirements such as consent. Where consents are required, companies should ensure that they are collected and preserved in a compliant manner. 

Companies should monitor regulatory developments closely to avoid relying on outdated compliance frameworks.

4. Product Liability

Establishing the appropriate risk allocation in contracts covering AI technology is a complex issue. Companies offering AI-enabled devices should be aware that legal risk extends beyond the software, especially where physical goods are involved.

Many companies offer "smart" products that incorporate AI and internet connectivity into a physical device. Whether the device is a product, a service, or a hybrid of both will affect a company’s exposure to product liability claims. Companies should consider the relationship between a device’s software and the physical product, and how warranty liability should be allocated. 

It is also critical that companies pay close attention to the other risk allocation provisions in their customer and vendor agreements, such as representations and warranties, indemnities, limitations of liability and insurance sections. 

How Apex Legal Can Help

AI presents incredible commercial opportunities for businesses, but these opportunities come with legal challenges and potential risks. At Apex Legal, we specialize in helping businesses navigate the complexities of contracting for their AI products and services. Our team provides:

  • AI policy development and compliance strategies.

  • Contract review, template creation, and playbook development. 

  • Deal negotiation support.

  • Guidance regarding data use and data rights.

Ready to bring clarity to your AI contracts?

We work with companies across industries to draft, review, and negotiate technology agreements that are both commercially sound and legally compliant. Whether you're building AI-powered products or partnering with vendors who are, we help you stay ahead of regulatory risk.

Contact me today to schedule a conversation about your technology agreements. I’ll walk you through how Apex Legal can help ensure they are compliant, strategically sound, and aligned with your business priorities.

