Is AI Contract Review Ethical? What Every Bar Association Says in 2026

Yes — and the more interesting question is whether not using AI is becoming the bigger ethical risk.
ABA Model Rule 1.1, Comment 8 requires lawyers to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” As of 2026, more than 40 states have adopted this technology competence language or its equivalent. When AI tools can catch contract risks faster and more consistently than manual review — and when they cost less than one billable hour per month — the duty of competence starts to cut both ways.
This article breaks down exactly what the ABA, state bars, and Model Rules say about using AI for contract review, gives you a practical ethics framework you can implement today, and addresses the specific concerns that keep lawyers from adopting tools that could meaningfully improve their practice. Try Clause Labs Free to see an ethically designed AI contract review workflow in action — purpose-built for lawyers who take their ethical obligations seriously.
What the ABA Says: Formal Opinion 512
On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 — the first comprehensive ABA guidance on lawyers’ use of generative AI tools. The opinion confirms that AI tools can be used in legal practice, provided lawyers fulfill their existing ethical obligations.
The key takeaways:
AI is a tool, not a shortcut. The opinion states that generative AI “can be a useful tool to increase efficiency in the practice of law” but that “attorneys utilizing GAI need to fully consider their applicable ethical obligations.” Translation: you can use AI, but you cannot outsource your professional judgment.
Six ethical areas are implicated. Formal Opinion 512 analyzes AI use under competence (Rule 1.1), confidentiality (Rule 1.6), communication (Rule 1.4), candor toward the tribunal (Rule 3.3), supervisory responsibilities (Rules 5.1 and 5.3), and fees (Rule 1.5).
Verification is mandatory. The opinion is unambiguous: “Attorneys should not rely on GAI outputs without independent verification or review.” This applies to all AI-assisted legal work, including contract review. You must check the AI’s work product before relying on it.
Informed consent may be required. For confidentiality purposes, the opinion recommends that lawyers “secure clients’ informed consent before using client confidences in GAI tools” and warns that “boilerplate consent included in engagement letters will not be adequate.” The specificity of the consent must match the tool being used.
Formal Opinion 512 is not a prohibition. It’s a permission structure with guardrails. Lawyers who follow its framework can use AI confidently and ethically.
The 5 Model Rules That Matter for AI Contract Review
Not every Model Rule applies equally to contract review. Here are the five that matter most, with specific guidance on compliance.
Rule 1.1 — Competence
What it says: “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”
How it applies to AI contract review: You must understand how the AI tool works before using it on client matters. You must be able to evaluate its output critically. And you must stay current on developments in legal AI technology.
How to comply:
- Learn what the AI tool actually does. Contract review AI identifies clauses, scores risks, and flags missing provisions. It does not provide legal advice, generate case citations, or make strategic judgments.
- Test the tool on contracts you’ve already reviewed manually. Compare the AI’s findings to your own. Understand where it’s strong (clause identification, missing provisions, pattern-based risks) and where it’s limited (business context, enforceability in specific courts, novel provisions).
- Review every AI output before relying on it. The AI’s risk score and clause flagging are a starting point — not a conclusion.
The flip side of Rule 1.1 is increasingly relevant: if AI tools can catch risks more consistently and faster than manual review, and if a lawyer’s failure to use available technology results in a missed issue, technology competence may require awareness of AI tools — even if it doesn’t yet require their adoption.
Rule 1.4 — Communication
What it says: A lawyer shall reasonably consult with the client about the means by which the client’s objectives are to be accomplished.
How it applies to AI contract review: In jurisdictions that require AI disclosure, you must inform your client that you’re using AI tools to assist with their contract review. Even in jurisdictions without explicit disclosure requirements, transparency about your workflow builds trust.
How to comply:
Add AI disclosure language to your engagement letter. Here’s sample language that satisfies most state requirements:
“Our firm uses AI-assisted contract review tools to enhance the accuracy and efficiency of our analysis. These tools identify contract clauses, flag potential risks, and detect missing provisions. All AI-generated insights are reviewed and verified by a licensed attorney before being included in any client deliverable. Your confidential information is processed using enterprise-grade AI tools with encryption in transit and at rest, no data retention after analysis, and no use of your data for model training.”
This disclosure is specific to the tool’s function and data handling — not the generic boilerplate that Formal Opinion 512 warns against.
Rule 1.5 — Fees
What it says: A lawyer shall not make an agreement for, charge, or collect an unreasonable fee.
How it applies to AI contract review: If AI reduces the time required to review a contract from 90 minutes to 30 minutes, can you still charge for 90 minutes of work?
How to comply: This is where many lawyers get anxious, but the ethical analysis is straightforward.
Value billing: If you charge flat fees for contract review, AI doesn’t change the fee calculation. The client is paying for the outcome — a thoroughly reviewed contract with flagged risks and recommended changes — not for the hours it took. A faster, more thorough review at the same price is a better deal for the client, not a worse one.
Hourly billing: If you bill hourly, bill for the time you actually spend. That includes time reviewing the AI output, applying professional judgment, and preparing the client deliverable. It does not include billing 90 minutes for a 30-minute AI-assisted review. According to Florida Bar Ethics Opinion 24-1, attorneys must ensure that fees and costs remain reasonable when using AI, and passing along the cost of AI tool subscriptions requires disclosure and client agreement.
The honest answer: AI makes individual reviews faster, which means you can either reduce per-review pricing (competitive advantage) or handle more reviews in the same time (capacity advantage). Either approach is ethical. What’s not ethical is billing as if AI doesn’t exist.
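The Rule 1.5 arithmetic above can be made concrete. This is an illustrative sketch only: the 90-minute and 30-minute figures come from the scenario in this section, and the hourly rate is an assumption, not a recommendation.

```python
# Illustrative Rule 1.5 arithmetic. Figures: the article's 90-minute manual
# review vs. 30-minute AI-assisted review. HOURLY_RATE is an assumed example.

HOURLY_RATE = 300.0  # dollars per hour (assumption for illustration)

def hourly_bill(minutes_actually_worked: float) -> float:
    """Hourly billing: charge only for time actually spent."""
    return HOURLY_RATE * minutes_actually_worked / 60

manual = hourly_bill(90)       # 450.0 -- the pre-AI review
ai_assisted = hourly_bill(30)  # 150.0 -- bill the 30 minutes you worked,
                               #          not the 90 the task used to take

# Flat-fee billing is unchanged by AI: the client pays for the outcome,
# so the same fee covers a faster, more thorough review.
FLAT_FEE = 400.0
```

The point of the sketch: under hourly billing the fee tracks actual time, so AI-driven efficiency flows to the client; under flat fees it flows to capacity. Both are ethical; billing phantom hours is not.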
Rule 1.6 — Confidentiality
What it says: A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent.
How it applies to AI contract review: When you upload a client’s contract to an AI tool, you’re sharing confidential information with a third-party technology provider. This triggers the same analysis you’d apply to any third-party vendor — cloud storage, e-discovery platforms, or outside counsel.
How to comply:
Before uploading any client contract to any AI tool, verify:
- Data encryption: Is client data encrypted in transit and at rest?
- Data retention: Does the tool retain client data after analysis? For how long? Can you request deletion?
- Training data: Is client data used to train AI models? Any tool that trains on your client’s contracts is a confidentiality risk.
- Subprocessors: Who has access to the data? Are there subprocessors with their own data handling policies?
- Compliance certifications: Does the tool have SOC 2, ISO 27001, or equivalent security certifications?
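For firms that evaluate multiple vendors, the five-point checklist above can be tracked systematically. The sketch below is one hypothetical way to do that in Python; the field names, the all-or-nothing pass rule, and the example evaluation are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass, fields

@dataclass
class VendorChecklist:
    """Rule 1.6 due-diligence checklist for an AI contract-review vendor.

    Field names mirror the five criteria above; all are illustrative.
    """
    encrypted_in_transit_and_at_rest: bool
    no_retention_after_analysis: bool   # or a documented window + deletion on request
    no_training_on_client_data: bool
    subprocessors_reviewed: bool
    soc2_or_iso27001: bool

    def passes(self) -> bool:
        # Every criterion must be satisfied before uploading client contracts.
        return all(getattr(self, f.name) for f in fields(self))

    def gaps(self) -> list[str]:
        # Items to raise with the vendor -- or reasons to walk away.
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a consumer chatbot in its default configuration typically fails
# on retention and training-data grounds (a hypothetical evaluation).
consumer_default = VendorChecklist(
    encrypted_in_transit_and_at_rest=True,
    no_retention_after_analysis=False,
    no_training_on_client_data=False,
    subprocessors_reviewed=False,
    soc2_or_iso27001=True,
)
print(consumer_default.passes())  # False
print(consumer_default.gaps())
```

A checklist like this also doubles as documentation: a dated record that you performed the Rule 1.6 analysis before uploading client material.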
Tools that typically pass this analysis: Purpose-built legal AI tools (Clause Labs, Spellbook, LegalOn) that are designed for lawyer workflows and understand confidentiality requirements. These tools typically offer no-retention policies, encryption, and explicit commitments about training data.
Tools that require caution: General-purpose AI chatbots (ChatGPT, Claude, Gemini) when used in their default consumer configurations. OpenAI’s default terms allow data usage for model improvement unless you opt out or use the API. Enterprise tiers with data processing agreements may address this, but you must verify.
Rule 5.3 — Supervision of Nonlawyer Assistance
What it says: A lawyer who employs or retains nonlawyer assistants shall make reasonable efforts to ensure that the person’s conduct is compatible with the professional obligations of the lawyer.
How it applies to AI contract review: The ABA has analogized AI output to work product from a junior associate or paralegal — it must be supervised. You are responsible for what the AI produces, just as you’re responsible for what a first-year associate drafts.
How to comply:
Treat AI contract review output the way you’d treat a junior associate’s first draft:
- Read the AI’s clause identification against the actual contract. Did it categorize correctly?
- Review each flagged risk. Is the risk assessment reasonable given the contract type and business context?
- Check “missing clause” findings. Is the clause actually missing, or did the AI fail to identify it in a different section?
- Apply your professional judgment. The AI doesn’t know whether the client has strong negotiating leverage, whether this is a must-sign deal, or whether the counterparty will walk if you push too hard.
- Sign off on the final work product as your own. It’s your analysis. You’re responsible.
Want to see what an ethically designed AI contract review workflow looks like? Upload any contract to Clause Labs — structured output, no hallucinated citations, full confidentiality protections. The tool is built around the exact framework Formal Opinion 512 requires.
State-by-State Bar Positions on AI in 2026
Beyond the ABA’s national guidance, individual state bars have issued their own opinions and rules. Here’s where the major jurisdictions stand.
States with Specific AI Ethics Guidance
| State | Guidance | Key Requirement | Citation |
|---|---|---|---|
| California | Practical Guidance (Nov 2023) | Competence requires understanding LLMs before use; assess hallucination risks and data privacy | State Bar Board of Trustees |
| Florida | Opinion 24-1 (Jan 2024) | Mandatory disclosure when AI impacts billing or costs; reasonable oversight; confidentiality protections | Florida Bar |
| Texas | Opinion 705 (Feb 2025) | Human oversight of AI-generated work; prevent submission of fabricated citations | Texas Ethics Committee |
| New York | NYSBA AI Task Force Report (2025) | Phased roadmap for AI adoption; requires 2 annual CLE credits in AI competency | NYSBA |
| Oregon | Formal Opinion 2025-205 | Comprehensive coverage: competence, confidentiality, billing disclosure, court filings, supervision | Oregon State Bar |
| D.C. | Rule 1.1 Comment Amendment (Apr 2025) | Adopted technology competence language matching ABA Model Rule 1.1 Comment 8 | D.C. Court of Appeals |
| Puerto Rico | Rule 1.19 (effective Jan 2026) | Goes beyond ABA Model Rules — requires technological competence and diligence as a standalone rule | Supreme Court of Puerto Rico |
The Trend Across All States
According to Justia’s 50-state survey of AI and attorney ethics rules, the trajectory is clear:
- No state bar has prohibited AI use in legal practice
- Multiple states require or are considering mandatory AI disclosure to clients
- Florida is leading on billing transparency for AI-assisted work
- New York is leading on CLE requirements for AI competence
- Every state with published guidance emphasizes the same core principle: AI output must be verified by a licensed attorney
If your state hasn’t published specific AI guidance, the ABA’s Formal Opinion 512 provides the baseline framework — and your state’s adoption of Model Rule 1.1 Comment 8 (technology competence) creates an independent obligation to understand AI tools.
The Mata v. Avianca Problem — And Why It Doesn’t Apply to Contract Review
Every conversation about legal AI eventually circles back to Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023). Attorney Steven Schwartz used ChatGPT to research case law, ChatGPT fabricated six non-existent cases, Schwartz submitted the brief without verifying the citations, and the court imposed $5,000 in sanctions against Schwartz and his co-counsel for violating Federal Rule of Civil Procedure 11.
Mata v. Avianca is the case that made lawyers afraid of AI. But the lesson isn’t “don’t use AI” — it’s “don’t submit AI output without verification.” Schwartz didn’t fail because he used ChatGPT. He failed because he trusted it blindly.
More importantly, contract review AI operates in a fundamentally different risk category than legal research AI:
Legal research AI generates content from scratch — case citations, legal arguments, rule interpretations. This is where hallucination risk is highest, because the AI is creating output that doesn’t exist in the input.
Contract review AI analyzes a specific document you provide. It identifies clauses in text that exists. It flags risks based on what’s actually in the contract. It detects missing provisions by comparing against a framework. It doesn’t generate legal citations, invent case law, or fabricate rules.
The hallucination risk in contract review is not zero — AI might miscategorize a clause, overstate a risk, or miss a nuance. But it’s categorically different from the kind of fabrication that led to sanctions in Mata v. Avianca. There are no citations to verify because the tool doesn’t generate citations. The output is a structured analysis of text you can see.
For a complete analysis of the distinction between research AI and review AI, see our article on the Mata v. Avianca problem and how to avoid it. For a broader comparison of purpose-built tools versus general AI for contract review, see our Clause Labs vs ChatGPT analysis.
Practical Ethics Framework for AI Contract Review
Here’s a usable five-step framework you can implement today.
Step 1: Before Adopting Any AI Tool
Verify data security: Does the tool encrypt data in transit and at rest? Does it retain data after analysis? Does it train on user-uploaded contracts? Is it SOC 2 certified or equivalent?
Understand how it works: What does the tool analyze? What’s its methodology? What are its known limitations? If you can’t explain it to a client, you’re not ready to use it.
Check your state bar guidance: Review the table above and check your state bar’s website for published opinions on AI use. If no guidance exists, use ABA Formal Opinion 512 as your baseline.
Step 2: Before Each Client Use
Assess appropriateness: Is AI contract review suitable for this contract type? Purpose-built tools handle standard commercial agreements (NDAs, MSAs, employment agreements, SaaS agreements) well. Highly bespoke or novel agreements may need more human attention.
Client consent: Does your engagement letter include AI disclosure? Does your jurisdiction require specific consent? When in doubt, disclose.
Step 3: During AI Review
Upload the contract to your approved AI tool.
Review the output clause by clause against the actual contract text. Did the AI identify clauses correctly? Are the risk assessments reasonable?
Verify “missing clause” findings. Is the clause actually missing, or is it addressed in a different section, exhibit, or referenced document?
Step 4: After AI Review
Apply professional judgment. Add business context, client-specific considerations, and negotiation strategy that the AI can’t know.
Prepare the client deliverable. The work product is yours — signed, reviewed, and verified.
Document your process. Keep records of which tools you used, how you reviewed the output, and what professional judgment you applied. This documentation protects you against malpractice claims and bar complaints.
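One lightweight way to implement the documentation step is a per-review audit log. The sketch below is a minimal, hypothetical record format; every field name and the JSON storage choice are assumptions, not a bar requirement.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ReviewLogEntry:
    """One record in an AI-review audit log (all field names illustrative)."""
    matter_id: str
    contract_name: str
    tool_used: str
    review_date: str
    attorney: str
    findings_verified: bool  # every AI flag checked against the contract text
    judgment_applied: str    # business context / strategy the AI couldn't know

entry = ReviewLogEntry(
    matter_id="2026-0142",
    contract_name="Acme MSA v3",
    tool_used="Clause Labs",
    review_date=str(date(2026, 2, 10)),
    attorney="J. Smith",
    findings_verified=True,
    judgment_applied="Adjusted liability-cap flag; client has leverage here.",
)

# Store the record as JSON alongside the matter file.
print(json.dumps(asdict(entry), indent=2))
```

Even a simple log like this gives you contemporaneous evidence of supervision under Rule 5.3 if a bar complaint or malpractice claim ever arises.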
Step 5: Ongoing Compliance
Stay current. Subscribe to your state bar’s updates on AI ethics. Follow the ABA’s professional responsibility publications for updates to Formal Opinion 512 and related guidance.
Review your AI use policy quarterly. State rules, tool capabilities, and best practices are evolving rapidly.
Invest in CLE. New York now requires AI competency credits. Other states are likely to follow. Getting ahead of mandatory CLE requirements is both smart practice and good ethics.
The Ethics of NOT Using AI
This section makes some lawyers uncomfortable, but the argument is increasingly supported by the profession.
Model Rule 1.1 Comment 8 requires lawyers to stay current on the benefits and risks of relevant technology. When AI contract review tools can:
- Identify, in 30 seconds, clause types and risks that a 90-minute manual review might miss
- Detect missing provisions that even experienced lawyers overlook when fatigued or rushed
- Process 10 contracts in the time it takes to manually review one
- Cost less than a single billable hour per month
…the question shifts from “Is it ethical to use AI?” to “Is it ethical to not even evaluate it?”
This doesn’t mean every lawyer must adopt AI tools today. But it means every lawyer should understand what AI contract review tools exist, what they can do, what their limitations are, and whether they might benefit client representation. Willful ignorance of available technology — when that technology could meaningfully improve client outcomes — sits uncomfortably with the duty of competence.
As the National Association of Attorneys General has noted, the ethical duty of technology competence is not about being an early adopter. It’s about being an informed practitioner.
Frequently Asked Questions
Do I need to disclose AI use to clients?
It depends on your jurisdiction. Florida (Opinion 24-1) requires disclosure when AI impacts billing or costs. New York’s Task Force recommends disclosure as best practice. California’s guidance emphasizes understanding the tools but doesn’t mandate specific disclosure language. The ABA’s Formal Opinion 512 recommends informed consent that goes beyond engagement letter boilerplate. When in doubt, disclose — transparency builds client trust, and over-disclosure is never an ethical violation.
Can I use ChatGPT for client contracts?
With significant caveats. ChatGPT’s default consumer tier may use your inputs for model training — a potential Rule 1.6 violation. ChatGPT’s output is inconsistent, unstructured, and prone to hallucination. And it lacks the legal framework, clause identification, and risk scoring that purpose-built tools provide. If you use ChatGPT for contract-related tasks, use the Enterprise or API tier with a data processing agreement, verify every output, and understand that you’re using a general tool for a specialized task. Purpose-built tools like Clause Labs’ free contract analyzer are designed specifically for this workflow — with structured risk output, clause-by-clause analysis, and the data handling safeguards that Rule 1.6 requires.
What if the AI makes a mistake in its analysis?
You’re responsible. Just as you’re responsible when a paralegal misreads a clause or a junior associate misidentifies a risk, you’re responsible for the final work product. This is why Rule 5.3 supervision is not optional. Review every AI output before relying on it. If you catch an error, correct it. If an error gets through because you didn’t review the output, the ethical failure is yours — not the AI’s.
Is there malpractice coverage for AI-assisted work?
Most malpractice policies don’t explicitly address AI — but they also don’t explicitly exclude it. The standard of care is the same: you must exercise the competence, diligence, and judgment expected of a reasonable lawyer. If AI helps you meet that standard (by catching issues you might have missed), it reduces malpractice risk. If you rely on AI without proper supervision and miss something, the malpractice exposure is the same as any other failure of competence. Best practice: notify your insurer that you use AI tools and get written confirmation that your coverage applies to AI-assisted work product.
Can I charge full rates for AI-assisted contract review?
If you bill flat fees: yes. The client is paying for the result, not the methodology. A thoroughly reviewed contract with flagged risks and recommended edits is worth the same to the client whether it took you 90 minutes or 30 minutes.
If you bill hourly: bill for the time you actually spend. That includes AI tool time, output review, professional judgment application, and client deliverable preparation. Do not bill 90 minutes for 30 minutes of work. Under ABA Model Rule 1.5, fees must be reasonable.
The broader trend in the profession is clear: AI-assisted efficiency should benefit both lawyer and client, and transparent billing for AI-assisted work builds trust and competitive advantage.
Ready to see what ethical AI contract review looks like in practice? Try Clause Labs free — upload any contract and get a structured risk analysis in under 60 seconds. No data retention, no model training on your documents, full encryption. Built for lawyers who take their ethical obligations seriously. Start with 3 free reviews per month, no credit card required.
This article is for informational purposes only and does not constitute legal advice. Ethics rules vary by jurisdiction, and the guidance in this article reflects the legal landscape as of February 2026. Consult your state bar’s ethics hotline or a legal ethics attorney for advice specific to your jurisdiction and practice.