Artificial intelligence (AI) is rapidly transforming how businesses operate. It’s also giving rise to a new category of legal disputes: AI-related contract litigation. Such disputes are common whenever a new technology emerges, largely because the seller, the buyer, or both do not fully understand what the technology can actually do. From failed AI integrations to misrepresented capabilities and unexpected data breaches, companies are discovering that AI doesn’t just carry technological risks; it carries legal ones, too.
Why Is AI Creating New Contract Disputes?
AI solutions are being marketed to businesses across every industry, including healthcare, logistics, finance, and real estate, as tools to streamline operations and increase efficiency. But here’s the problem: many of these solutions don’t perform as advertised. These failures tend to end in the same place: the AI underperforms or causes unforeseen problems such as financial losses, operational disruptions, and costly cleanup efforts.
Common issues include:
- Overstated capabilities of AI platforms in contracts or sales materials.
- Overstated or misunderstood purchaser expectations about the AI’s capabilities.
- Integration failures due to poor support or miscommunication.
- Undisclosed third-party data usage or security flaws.
- Noncompliance with privacy laws or industry regulations.
- Delayed or unusable deliverables from AI vendors.
Key Legal Claims in AI Business Disputes
If your business has been harmed by a failed AI project, you may have legal claims under a few different theories.
First, and most obvious, is a breach of contract claim for failure to meet performance benchmarks or delivery dates. Second, you may have a claim for fraud or negligent misrepresentation if the salesperson made false statements about the AI’s capabilities.
Additionally, if you had the foresight to include warranties in your AI contract covering the AI’s capabilities, you may have a claim for breach of warranty. Depending on the AI you choose and how it executes its directives, it may also compromise protected consumer data, which can lead to violations of data privacy laws or even class action lawsuits.
Lastly, if you use AI in hiring, for example, the system’s decisions may reflect bias and expose you to discrimination claims. Similarly, if an insurance company uses AI to process insurance claims, the model may carry an inherent bias. Unlike traditional software contracts, many AI-related agreements are vague or full of industry jargon, making litigation more complex. At King & Jones, we know how to cut through the noise and get to the heart of the issues.
Real-World Scenarios
Here are some real-world examples of how companies can breach AI-related contracts:
- AI provider failing to deliver promised cost savings.
- Copyright infringement from the use of unauthorized music lyrics.
- Unauthorized use of trade dress.
- Unauthorized use of copyrighted content.
AI models are only as good as the data they are trained on. Inaccurate, incomplete, or outdated data can lead to flawed output.
Another scenario: a vendor misuses patient data without appropriate consent, violating HIPAA and breaching its vendor agreement.

If you have questions about a potential or actual breach of contract or business dispute related to AI, it is advisable to consult a trusted legal advisor to help you navigate the situation.