Confidentiality and AI Tools: Can You Upload Client Contracts to AI?

Forty-four percent of legal tasks could be automated by AI, according to a Goldman Sachs analysis. But before you upload your first client contract to an AI tool, there is a question you need to answer: does doing so violate your duty of confidentiality under Model Rule 1.6?
This is not a hypothetical concern. It is the single biggest practical barrier preventing solo and small firm lawyers from adopting AI contract review. The answer depends entirely on which tool you use, how that tool handles your data, and whether you have done your homework before hitting “upload.”
This article gives you the exact framework to evaluate any AI tool’s data handling practices, a comparison of how the major platforms stack up, and actionable steps to protect client confidentiality while still benefiting from AI-assisted review. Try Clause Labs Free to see how a purpose-built legal AI handles data security.
What Model Rule 1.6 Actually Requires
ABA Model Rule 1.6(a) states that “a lawyer shall not reveal information relating to the representation of a client” unless the client gives informed consent. Rule 1.6(c) adds a second obligation: “a lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”
When you upload a client’s contract to an AI tool, you are sharing client information with a third-party service. That is an act of disclosure. Whether that disclosure is permissible depends on whether your “efforts to prevent” unauthorized access are “reasonable.”
The critical word is “reasonable.” You are not required to guarantee absolute security. You are required to exercise the same diligence you would when choosing any technology vendor that handles client data, such as cloud storage, email, or practice management software.
What ABA Formal Opinion 512 Says
In July 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, the first comprehensive ethics guidance on lawyers using generative AI tools. The opinion addresses confidentiality directly and is worth reading in full.
Key requirements from Opinion 512:
- Know how the tool uses data. You must understand whether the AI tool retains your inputs, uses them for model training, or shares them with third parties. Ignorance is not a defense.
- Implement adequate safeguards. You must ensure data processed by the AI tool is secure and not susceptible to unwitting or unauthorized disclosure.
- Get informed consent for self-learning tools. If the AI tool trains on your inputs (meaning your client’s data improves the vendor’s AI), you need the client’s informed consent before using it. Boilerplate consent in engagement letters is not sufficient.
- Evaluate the vendor. Your obligation to vet third-party contractors extends to AI tool vendors. Investigate reliability, security measures, and policies.
The practical takeaway: you can use AI tools for client contracts, but you must do your due diligence first. The standard is similar to what you would apply when evaluating a cloud-based practice management system or document storage provider.
The Data Handling Spectrum: Not All AI Tools Are Equal
AI tools handle client data on a spectrum from dangerous to acceptable. Before you upload anything, place the tool on this scale.
Dangerous: Do Not Use for Client Data
Tools at this end of the spectrum share some or all of these characteristics:
- The tool trains on user-uploaded data, meaning your client’s contract improves their AI model
- No clear data retention policy, or data is retained indefinitely
- No encryption at rest
- Terms of service allow sharing data with third parties
- No data processing agreement available
The most common example: free-tier consumer AI chatbots with default training-on-inputs enabled. According to OpenAI’s own policies, free-tier ChatGPT conversations may be used to improve their models unless the user explicitly opts out. That means your client’s confidential contract language could end up influencing outputs for other users.
Caution: Review Carefully Before Use
These tools have better security but require careful configuration:
- Data retained for a limited period (30-90 days)
- Encryption in transit but unclear at-rest encryption
- Training on inputs is enabled by default; an opt-out exists but you must find and enable it
- Privacy policy exists but is vague or difficult to interpret
- No SOC 2 certification
Many general-purpose AI platforms with “business” tiers fall here. They may be acceptable with proper configuration, but you need to verify the settings and understand the defaults.
Acceptable: Meets Legal Industry Standards
Tools built for regulated industries typically offer:
- Zero data retention or user-configurable retention periods
- Explicit commitment to never train on user-uploaded documents
- Encryption in transit (TLS 1.2+) and at rest (AES-256)
- SOC 2 Type II certification or equivalent
- Clear, detailed privacy policy written for professional users
- Data processing agreement available on request
- Breach notification commitments
Purpose-built legal AI tools like Clause Labs and enterprise-tier offerings from major AI providers typically meet these standards.
How Specific AI Tools Handle Client Data
Here is how the most commonly used AI tools compare on the factors that matter for confidentiality compliance.
| Factor | Free-Tier ChatGPT | ChatGPT Enterprise/API | Claude (Anthropic) API | Purpose-Built Legal AI (e.g., Clause Labs) |
|---|---|---|---|---|
| Trains on inputs? | Default: Yes (opt-out available) | No | No (API) | No |
| Data retention | Conversations stored | Configurable (min 90 days) | Configurable | Minimal / configurable |
| Encryption at rest | AES-256 | AES-256 | Yes | AES-256 |
| Encryption in transit | TLS 1.2+ | TLS 1.2+ | TLS 1.2+ | TLS 1.3 |
| SOC 2 certified | No (consumer tier) | Yes | Yes (API) | On roadmap |
| DPA available | No (consumer tier) | Yes | Yes | Yes |
| Zero data retention option | No | Yes (ZDR API) | Yes | Yes |
| Suitable for client data? | No | Yes, with configuration | Yes, with configuration | Yes |
The ChatGPT Problem
Many lawyers use free-tier ChatGPT for contract-related tasks without understanding the implications. By default, OpenAI may use conversations to improve their models. You can opt out through settings, but the consumer product was not designed for handling confidential client data.
ChatGPT Enterprise and the API are different. OpenAI explicitly states that they do not train on Enterprise or API inputs. But the Enterprise tier costs significantly more, and you still need to configure data retention settings appropriately.
What to Look for in Any Tool
The tool’s marketing page is not sufficient. Read the actual terms of service, privacy policy, and data processing agreement. If the vendor cannot clearly answer how they handle your data, that is your answer.
The 8-Point Data Security Checklist
Before uploading any client document to an AI tool, verify these eight factors. If the tool cannot answer all eight clearly, do not use it for client data.
1. Data Retention: Does the tool store your documents? For how long? Can you delete them on demand?
2. Training Data Policy: Does the tool use your uploads to train or improve its AI models? Is the default opt-in or opt-out?
3. Encryption: Is data encrypted in transit (minimum TLS 1.2) and at rest (minimum AES-256)?
4. Access Controls: Who at the AI company can access your uploaded data? Under what circumstances?
5. Security Certification: Has the tool been independently audited? SOC 2 Type II is the standard for SaaS products handling sensitive data.
6. Data Processing Agreement: Will the vendor sign a DPA? This is standard for any tool handling regulated data.
7. Sub-Processors: Does the vendor route your data through third-party processors? If so, which ones, and what are their security standards?
8. Breach Notification: Will the vendor notify you of a data breach? Within what timeframe? (72 hours is the standard under most regulatory frameworks.)
Print this checklist. Run every AI tool through it before use. Document your findings. That documentation is your evidence of “reasonable efforts” under Rule 1.6(c) if questions ever arise.
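If you prefer a structured record over a printout, the checklist can be captured as a small script that produces the dated documentation Rule 1.6(c) diligence calls for. This is a minimal sketch with hypothetical vendor names; the factor wording paraphrases the eight points above.

```python
from datetime import date

# The eight factors from the checklist, phrased so "yes" means the
# vendor meets the standard.
CHECKLIST = [
    "Retention: period disclosed, documents deletable on demand",
    "Training: vendor commits to never train on uploaded client data",
    "Encryption: TLS 1.2+ in transit and AES-256 at rest",
    "Access controls: vendor-side access documented and limited",
    "Certification: SOC 2 Type II (or equivalent) audit completed",
    "DPA: vendor will sign a data processing agreement",
    "Sub-processors: all third-party processors disclosed",
    "Breach notification: commitment of 72 hours or less",
]

def evaluate_vendor(name: str, answers: list[bool]) -> dict:
    """Record a dated evaluation; the tool passes only if all eight are yes."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("Answer all eight factors before deciding.")
    return {
        "vendor": name,
        "date": date.today().isoformat(),
        "results": dict(zip(CHECKLIST, answers)),
        "approved_for_client_data": all(answers),
    }

# Example: a hypothetical vendor meeting 7 of 8 factors still fails.
record = evaluate_vendor("ExampleAI", [True] * 7 + [False])
print(record["approved_for_client_data"])  # False — one "no" disqualifies
```

Saving each record (alongside the vendor's privacy policy and terms as reviewed on that date) gives you a timestamped trail of your evaluation process.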
Practical Steps to Protect Confidentiality
Choosing a secure tool is necessary but not sufficient. These additional practices reduce risk further.
Anonymize When Possible
Before uploading, consider whether you can remove or replace identifying information. Replace party names with “Party A” and “Party B.” Remove specific addresses, dollar amounts, or other details that are not relevant to the clause-level analysis you need. Most AI contract review tools analyze clause structure and risk patterns. They do not need to know who the parties are to identify a one-sided indemnification clause.
This is not always practical, especially for full-contract risk analysis. But for targeted clause review, anonymization adds a layer of protection at minimal cost.
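For targeted clause review, even a simple find-and-replace pass can strip the most identifying details before upload. The sketch below is illustrative only, with made-up party names; pattern-based redaction misses variations (abbreviations, signature blocks, addresses), so a human should always review the output before it leaves the firm.

```python
import re

def anonymize(text: str, parties: dict[str, str]) -> str:
    """Replace party names with neutral labels and redact dollar amounts.

    `parties` maps each real name to a placeholder, e.g.
    {"Acme Corp": "Party A", "Beta LLC": "Party B"}.
    """
    for name, placeholder in parties.items():
        # Case-insensitive, literal replacement of each party name.
        text = re.sub(re.escape(name), placeholder, text, flags=re.IGNORECASE)
    # Redact dollar amounts such as $1,500,000 or $250.00.
    text = re.sub(r"\$[\d,]+(?:\.\d{2})?", "[AMOUNT]", text)
    return text

clause = "Acme Corp shall indemnify Beta LLC for losses up to $1,500,000."
print(anonymize(clause, {"Acme Corp": "Party A", "Beta LLC": "Party B"}))
# Party A shall indemnify Party B for losses up to [AMOUNT].
```

The clause-level risk (here, a one-sided indemnification cap) survives anonymization intact, which is the point: the AI can analyze structure without knowing the parties.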
Check Your Engagement Letter
Does your standard engagement letter address AI tool usage? If not, it should. ABA Formal Opinion 512 recommends that lawyers obtain informed consent before using client data in AI tools, and notes that boilerplate provisions are inadequate for self-learning tools.
For tools that do not train on inputs, a clear disclosure in your engagement letter is sufficient in most jurisdictions. For tools that do train on inputs, you need explicit, informed consent that explains the risk. For a deeper look at how disclosure requirements vary by state, see our state-by-state guide to AI disclosure requirements.
Review Your Malpractice Insurance
Does your professional liability policy cover AI-related data incidents? Most standard policies cover technology-related errors, but the intersection of AI tools and confidentiality is new enough that coverage may be uncertain. Contact your carrier and get clarity in writing.
Document Your Due Diligence
Keep a record of your AI tool evaluation process. Save the vendor’s privacy policy, terms of service, and DPA. Note the date you reviewed them. This documentation demonstrates your compliance with Rule 1.6(c)’s “reasonable efforts” standard.
What to Tell Clients About Data Security
Proactive communication about your AI data handling builds trust and reduces the risk of complaints.
Sample Engagement Letter Language
Standard disclosure (for tools that do not train on inputs):
“Our firm uses AI-powered contract review tools as part of our quality assurance process. These tools assist with clause identification, risk analysis, and missing provision detection. All AI-generated analysis is reviewed, verified, and supplemented by attorney judgment before inclusion in any client deliverable. Our AI tools use encryption at rest and in transit, do not train on client data, and comply with industry security standards.”
Detailed disclosure (for firms wanting maximum protection):
“Our firm uses [Tool Name], an AI-assisted contract review platform, to enhance the quality and efficiency of our contract review services. This tool analyzes contract language to identify clauses, assess risk levels, and detect missing provisions. Your documents are encrypted during transmission and storage. The tool does not retain your documents after analysis is complete and does not use your data to train its AI models. A licensed attorney reviews all AI-generated analysis before it is included in any work product delivered to you. You may request that we not use AI tools in your matter at any time.”
For guidance on how to ethically integrate AI into your practice more broadly, see our guide on how to use AI without risking your license.
The Informed Consent Approach
Some practitioners go beyond disclosure and seek explicit client consent for AI use. This approach offers maximum protection but adds friction.
When explicit consent is appropriate:
- Matters involving trade secrets or highly sensitive IP
- Clients in regulated industries (healthcare, financial services)
- Jurisdictions that mandate AI disclosure (check your state’s requirements)
- Engagement letters that specifically restrict technology use
When standard disclosure is sufficient:
- Routine contract review using tools that do not train on inputs
- Tools with zero data retention policies
- Jurisdictions with no specific AI disclosure requirements
- Standard commercial agreements without heightened sensitivity
The trend is toward more disclosure, not less. As of 2026, more state bars are issuing guidance that favors transparency about AI use. According to a Justia survey of all 50 states, the number of states with specific AI ethics guidance has increased significantly since 2023.
Attorney-Client Privilege and AI
A separate but related question: does uploading a client document to an AI tool waive attorney-client privilege?
The short answer: probably not, if the tool is properly secured. Courts have generally held that sharing privileged information with a service provider does not waive the privilege, provided the disclosure is necessary for the service and the provider maintains confidentiality. This is sometimes called the “Kovel doctrine” (after United States v. Kovel, 296 F.2d 918 (2d Cir. 1961)), which protects communications shared with agents necessary to facilitate legal representation.
AI tools are analogous to other technology vendors, such as e-discovery platforms, cloud storage, and document management systems, that routinely handle privileged materials without waiving privilege. The key is ensuring the vendor has appropriate confidentiality protections in place.
However, this area of law is evolving. If you are working with exceptionally sensitive privileged materials, consult with a legal ethics specialist in your jurisdiction before proceeding. Reviewing your approach against established contract red flag frameworks can also help you develop a consistent, defensible process.
Frequently Asked Questions
Can I use ChatGPT for client contracts?
Not the free consumer version, at least not without significant caveats. Free-tier ChatGPT may train on your inputs by default, lacks SOC 2 certification for the consumer product, and does not offer a data processing agreement. ChatGPT Enterprise and API tiers are different: they do not train on inputs and offer configurable data retention. If you use the Enterprise or API tier with appropriate settings, it can be acceptable. But a purpose-built legal AI tool like Clause Labs is designed from the ground up for handling confidential legal documents.
Is uploading to AI the same as uploading to cloud storage?
The analysis under Rule 1.6 is similar. Both involve sharing client data with a third-party service provider. The key differences: cloud storage typically stores data without processing it, while AI tools process the content and may use it for training. The “reasonable efforts” standard applies to both, but AI tools require additional diligence around training data policies and model improvement practices.
What if my client’s contract contains trade secrets?
Apply heightened scrutiny. Consider whether the AI tool’s data handling is sufficient for the sensitivity level. Anonymize where possible. Use tools with zero data retention. Get explicit informed consent from the client. Document everything. For trade secrets specifically, any inadvertent disclosure could destroy the trade secret status entirely, so the stakes are higher than for ordinary confidential information.
Does attorney-client privilege protect AI-processed documents?
Most likely, yes, provided the AI tool vendor maintains appropriate confidentiality protections. The principle is the same as with other technology service providers. But this area of law is still developing, and no court has issued a definitive ruling on AI tools specifically. Maintain strong vendor confidentiality agreements as a safeguard.
What if the AI tool has a data breach?
Your obligation under Rule 1.6(c) is to take “reasonable efforts” to prevent unauthorized disclosure, not to guarantee it never happens. If you chose a reputable tool with appropriate security measures and documented your evaluation process, you have met the standard even if a breach occurs. However, you should have a response plan: notify affected clients promptly, assess the scope of exposure, and consult your malpractice carrier. A vendor’s breach notification timeline (ideally 72 hours or less) gives you the information you need to respond.
This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation.