
How to Use AI for Contract Review Without Risking Your License


A $5,000 fine and national humiliation. That was the cost for the attorneys in Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023) who submitted ChatGPT-generated fake case citations to a federal court. By late 2025, researchers had tracked over 120 court cases involving AI hallucinations, with 128 lawyers implicated and sanctions ranging from $100 to $31,100. The headlines make AI in legal practice sound dangerous. The reality is more nuanced: the risk is not in using AI. It is in using it wrong.

This article gives you a concrete, seven-rule framework for using AI contract review tools ethically, with practical workflows, disclosure templates, and billing guidance you can implement today. If you follow these rules, you will not be the next cautionary tale.

Try Clause Labs Free — purpose-built for ethical AI contract review with structured output, no hallucinated citations, and no data retention.

The 7 Rules for Ethical AI Contract Review

These rules are grounded in the ABA Model Rules of Professional Conduct, ABA Formal Opinion 512 (issued July 2024), and emerging state bar guidance. They apply regardless of which AI tool you use.

Rule 1: Choose Purpose-Built Over General-Purpose

Using ChatGPT for contract review is not the same as using a dedicated contract review AI. This distinction matters for your ethical obligations and your malpractice exposure.

Purpose-built contract review tools (like Clause Labs, LegalOn, or Spellbook) analyze a specific document you provide. They identify clauses, score risks, flag missing provisions, and generate structured output. They do not fabricate case law, invent statutes, or generate citations from thin air.

General-purpose AI (ChatGPT, Claude, Gemini) generates freeform text based on probability. When asked about legal authority, it can produce confident-sounding citations to cases that do not exist. Every single lawyer sanctioned for AI misuse so far used general-purpose AI for legal research, not purpose-built legal tools.

The practical rule: if a purpose-built tool exists for the task, use it. Save general AI for brainstorming, drafting emails, and summarizing meeting notes. For a detailed comparison of how ChatGPT performs against purpose-built tools on real contracts, see our ChatGPT NDA test case study.

Rule 2: Understand What the AI Does (and Does Not Do)

ABA Model Rule 1.1 requires competent representation. Comment 8 specifically requires lawyers to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” As of early 2026, 42 U.S. jurisdictions have adopted this duty.

ABA Formal Opinion 512 makes this explicit for AI: lawyers must “understand the capacity and limitations of GAI and periodically update that understanding.”

Before using any AI contract review tool, you need clear answers to four questions:

  1. What does it analyze? Clause identification, risk scoring, missing clause detection, redline generation
  2. What does it NOT analyze? Business context, client-specific goals, enforceability in your specific jurisdiction, opposing counsel’s negotiation posture
  3. How does it handle your data? Encryption in transit and at rest, retention policy, whether it trains on user-uploaded documents
  4. What are its limitations? May miss bespoke provisions, cannot evaluate business context, risk scores reflect general standards rather than deal-specific dynamics

The red flag test: if you cannot explain to a client how the tool works in plain English, you are not ready to use it on their matter.

Rule 3: Never Submit AI Output Without Human Review

Every AI-generated analysis must be reviewed by a licensed attorney before it reaches a client or opposing counsel. This is not optional. It is the foundation of ethical AI use and the single rule that would have prevented every AI-related sanction to date.

“Review” means something specific:

  • Read every flagged clause against the actual contract language. Did the AI categorize it correctly?
  • Evaluate risk severity in light of the deal’s context. A one-sided indemnification clause matters more in a $5M MSA than in a $500/month SaaS subscription.
  • Consider what the AI cannot know. Your client’s risk tolerance. The relationship between the parties. Prior course of dealing. Industry customs.
  • Verify “missing clause” findings. Sometimes the provision exists but is in a different section or uses different terminology.
  • Sign off on the final work product as your own. Your name goes on it. Your malpractice insurance covers it.

For a structured approach to what to check in AI output, see our contract red flags checklist. For the specific legal framework underlying competence obligations, see our deep dive on Rule 1.1 and attorney technology competence.

Rule 4: Protect Client Confidentiality

Model Rule 1.6 requires you to protect “information relating to the representation of a client.” Before uploading any client document to any AI tool, verify five things:

| Security Requirement | What to Look For | Red Flag |
| --- | --- | --- |
| Encryption | TLS 1.2+ in transit, AES-256 at rest | No mention of encryption standards |
| Data retention | Clear policy on deletion after processing | “We may retain data to improve our services” |
| Training exclusion | Your data is not used to train the model | No explicit opt-out or unclear training policy |
| Compliance certification | SOC 2 Type II or equivalent | No third-party security audits |
| Privacy policy | Compatible with Rule 1.6 obligations | Vague or overly broad data usage rights |

Florida Bar Opinion 24-1 specifically requires attorneys to evaluate AI tools for confidentiality compliance before use. Formal Opinion 512 directs lawyers to “put in place adequate safeguards to ensure that data processed by GAI is secure.”

Tools to be cautious with: Any free, consumer-grade AI tool that does not clearly state its data handling practices. ChatGPT’s free tier, for instance, may use your input for training unless you opt out.
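A vendor that cannot attest to all five requirements should not receive client documents. For firms that screen tools systematically, the checklist can be sketched as a short script — a minimal illustration, where the checklist keys and the example vendor's attestations are hypothetical, not any real product's claims:

```python
# Hypothetical vendor-screening sketch for the Rule 4 checklist.
# The vendor profile below is illustrative, not a real product's attestations.

REQUIRED_CHECKS = {
    "encryption": "TLS 1.2+ in transit, AES-256 at rest",
    "data_retention": "clear deletion policy after processing",
    "training_exclusion": "uploads never used to train the model",
    "compliance_cert": "SOC 2 Type II or equivalent third-party audit",
    "privacy_policy": "compatible with Rule 1.6 obligations",
}

def screen_vendor(attested: set[str]) -> list[str]:
    """Return the checklist items a vendor fails to attest to."""
    return [check for check in REQUIRED_CHECKS if check not in attested]

# Example: a vendor that documents everything except a third-party audit.
gaps = screen_vendor({
    "encryption", "data_retention", "training_exclusion", "privacy_policy",
})
print(gaps)  # -> ['compliance_cert']  (do not upload client documents yet)
```

Any non-empty result means the tool fails the checklist and should not touch client data until the gap is resolved.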

Rule 5: Disclose AI Use When Required

Disclosure requirements vary by state and are evolving rapidly. As of early 2026, the landscape breaks down roughly as follows:

States with specific AI disclosure guidance:
  • Florida: Opinion 24-1 requires disclosure when AI use impacts billing or costs
  • California: Ethics guidance emphasizes disclosure when AI materially affects representation
  • Texas: Opinion 705 requires human oversight of all AI-generated work product
  • New York: Formal Opinion 2025-6 addresses AI in meeting transcription and client communications

Even where not formally required, voluntary disclosure is increasingly the safer practice. Here is sample language for your engagement letter:

Our firm uses AI-assisted contract review tools to enhance the accuracy and efficiency of our analysis. All AI-generated insights are reviewed and verified by a licensed attorney before being included in any client deliverable. Client data is processed through tools that meet enterprise security standards including encryption and no-data-retention policies.

For a comprehensive state-by-state breakdown, Justia maintains a 50-state AI ethics survey that is updated regularly.

Rule 6: Document Your AI Workflow

If a client or bar disciplinary board ever questions your use of AI, your best defense is documentation. Maintain records of:

  • Which AI tools you use and for what specific tasks
  • Your evaluation process for each tool (security review, accuracy testing, limitations assessment)
  • Your verification process for each use (what you checked, what you confirmed, what you changed)
  • CLE or training related to AI competence
  • Your firm’s AI use policy (create one even if you are a solo practitioner)

This documentation does double duty: it protects against bar complaints and it strengthens your position if a malpractice claim arises. A lawyer who can show a deliberate, documented process for AI oversight is in a far stronger position than one who says, “I just uploaded it and used what it gave me.”
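One lightweight way to keep these records is a running log appended after each AI-assisted review. A minimal sketch, assuming a plain CSV file — the field names and the sample entry are illustrative, not a bar-mandated format:

```python
# Hypothetical AI-use log for Rule 6 documentation.
# Field names and the sample matter are illustrative, not a required format.
import csv
import os
from datetime import date

FIELDS = ["date", "matter", "tool", "task", "verification", "changes_made"]

def log_ai_use(path: str, entry: dict) -> None:
    """Append one AI-use record; write the header row if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

log_ai_use("ai_use_log.csv", {
    "date": date.today().isoformat(),
    "matter": "Acme MSA review",  # illustrative matter name
    "tool": "purpose-built contract review tool",
    "task": "clause identification and risk scoring",
    "verification": "clause-by-clause check against contract text",
    "changes_made": "re-scored indemnification clause; added two flags",
})
```

Even a spreadsheet maintained by hand serves the same purpose; what matters is that each entry records what the tool did, what you checked, and what you changed.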

Rule 7: Stay Current

AI ethics rules are evolving faster than almost any other area of professional responsibility. Formal Opinion 512 was issued in July 2024. Multiple states issued their own opinions throughout 2025. Court standing orders on AI continue to proliferate. What is permitted today may be insufficient tomorrow.

Practical steps:
  • Subscribe to your state bar’s AI-related updates
  • Follow LawSites/LawNext for legal tech ethics coverage
  • Take at least one CLE course annually on AI in legal practice
  • Review and update your AI use policy quarterly

The 5-Step Ethical AI Workflow for Contract Review

Consolidating the seven rules into a practical daily workflow:

Step 1: Receive contract. Assess whether AI review is appropriate for this contract type, this client, and this deal. For bespoke, ultra-high-stakes transactions, manual review may still be warranted.

Step 2: Upload to an approved AI tool. Use a purpose-built contract review tool that passes your security checklist. For most solo and small firm lawyers, this means a tool with documented encryption, no-training policies, and structured output. Clause Labs’s free tier lets you test the workflow on up to 3 contracts per month.

Step 3: Review AI output. Go clause by clause. Check every flagged risk against the actual contract language. Verify missing clause findings. Evaluate risk severity in context.

Step 4: Apply professional judgment. Add the elements AI cannot provide: business context, client objectives, negotiation strategy, jurisdiction-specific considerations, prior course of dealing.

Step 5: Deliver to client. Include appropriate AI disclosure per your jurisdiction’s requirements. Retain documentation of your review process.

Total time for a standard NDA: 10-15 minutes (vs. 1-3 hours fully manual). Total time for a complex MSA: 30-60 minutes (vs. 3-6 hours fully manual). The time savings are real. The ethical framework makes them safe.

What Triggers Bar Complaints About AI Use

Based on documented cases and ethics opinions through 2025, the following behaviors have triggered or could trigger disciplinary action:

Submitting AI output without verification. This is the Mata v. Avianca scenario. The attorney did not check whether the cited cases existed. Every sanction to date involves this failure. The fix: Rule 3 above.

Uploading client documents to insecure AI tools. Using a consumer chatbot with no data protections for client contracts violates Rule 1.6 confidentiality obligations. The fix: Rule 4 above.

Failing to disclose AI use when required. As more states adopt disclosure requirements, ignorance of the rules is not a defense. The fix: Rule 5 above.

Billing for AI time as attorney time without disclosure. Formal Opinion 512 directly addresses this. You may not charge clients for time spent learning a technology for general use. The fix: see the billing section below.

Using AI for tasks beyond your competence to evaluate. If you use an AI tool for estate planning analysis but you are a transactional attorney who cannot evaluate whether the output is correct, you have a competence problem regardless of the AI. The fix: stay in your lane.

The Billing Question: Can You Charge for AI-Assisted Work?

This is the question lawyers worry about privately but rarely ask publicly. ABA Model Rule 1.5 requires that fees be reasonable.

Formal Opinion 512 provides specific guidance: lawyers “may not charge clients for time spent learning a technology to be used for client matters generally.” But it also leaves room for charging for the value of AI-assisted work.

Practical billing approaches that work:

  • Flat fee per contract review. AI speed becomes your efficiency advantage, not a billing problem. The client pays for the outcome (a thorough risk analysis), not the hours.
  • Same rate, fewer hours, faster turnaround. “I can review your MSA in 2 hours at $350/hour instead of 6 hours. You get it back today instead of Thursday.” The client pays $700 instead of $2,100, saving $1,400, and you free up 4 hours for other billable work.
  • Hybrid model. AI-assisted review at a reduced rate ($200/hour) plus attorney analysis at full rate ($400/hour). Transparent, defensible, and client-friendly.

What NOT to do: Bill 3 hours at full rate for work that took 15 minutes with AI assistance. Even if you have not yet been caught, the trend is unmistakably toward transparency in AI-assisted billing.

According to Clio’s 2025 Legal Trends Report, 64% of mid-sized firms now offer flat fees, a trend accelerated by AI adoption. Moving to value-based billing may be the most ethical and profitable response to AI efficiency gains.
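The arithmetic behind these models is worth sanity-checking before you quote a client. A quick sketch using the illustrative figures above (the hybrid hour split is an assumption for demonstration, not a prescribed ratio):

```python
# Billing comparison using the illustrative rates from the examples above.

def bill(hours: float, rate: float) -> float:
    """Total fee for hours billed at a given hourly rate."""
    return hours * rate

manual = bill(6, 350)       # traditional review: 6 hours at $350/hour
ai_assisted = bill(2, 350)  # AI-assisted review: 2 hours at $350/hour

print(f"Manual: ${manual:,.0f}")                      # Manual: $2,100
print(f"AI-assisted: ${ai_assisted:,.0f}")            # AI-assisted: $700
print(f"Client saves: ${manual - ai_assisted:,.0f}")  # Client saves: $1,400

# Hybrid model (assumed split): 0.5 h of AI-assisted review at $200/hour
# plus 1.5 h of attorney analysis at $400/hour.
hybrid = bill(0.5, 200) + bill(1.5, 400)
print(f"Hybrid total: ${hybrid:,.0f}")                # Hybrid total: $700
```

The point of running the numbers is transparency: you can show a client exactly where the savings come from, which is the opposite of billing AI-assisted minutes as attorney hours.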

Malpractice Insurance and AI

Before relying on any AI tool for client work, contact your malpractice insurer and ask:

  1. Does my current policy cover AI-assisted work product?
  2. Are there exclusions for errors originating from AI tools?
  3. Do I need to disclose my use of AI tools?
  4. Are there specific AI tools that are approved or prohibited?

Most carriers have not yet excluded AI-assisted work, but the landscape is shifting. Getting written confirmation of coverage now protects you if a claim arises later. This is especially important for solo practitioners whose malpractice coverage is their only safety net.

When NOT to Use AI for Contract Review

Ethical AI use includes knowing when NOT to use it:

  • Highly bespoke agreements with novel structures the AI may not recognize (e.g., complex multi-party joint venture agreements, unusual earn-out structures). For common contract types like SaaS agreements, NDAs, and MSAs, AI review is well-suited
  • Matters where the client has specifically requested no AI use (respect this even if you disagree)
  • Areas outside your competence where you cannot evaluate whether the AI output is correct
  • Ultra-high-stakes single transactions where the cost of any error vastly exceeds the time savings (though even here, AI as a supplementary check adds value)

The ABA’s 2024 Legal Technology Survey found that 75% of lawyers cite accuracy as their top AI concern. That concern is healthy. It should drive verification practices, not avoidance.

Frequently Asked Questions

Can I lose my license for using AI?

No lawyer has been disbarred for using AI as of early 2026. Sanctions have ranged from $100 to $31,100, and every sanctioned lawyer used general-purpose AI (not purpose-built legal tools) and failed to verify the output. Using AI itself is not the problem. Failing to supervise AI output is.

Do I have to tell clients I use AI?

This depends on your jurisdiction. Formal Opinion 512 does not require blanket client consent, but Rule 1.4 (Communication) may require disclosure when AI use materially affects the representation. Best practice: include AI disclosure in your engagement letter.

What if the AI misses a critical issue?

You bear responsibility. AI is a tool that supplements your judgment; it does not replace it. This is why Rule 3 (human review) is non-negotiable. If you followed a documented verification process and the AI still missed something that a reasonable attorney would also miss, your malpractice position is stronger than if you had no process at all.

Can I delegate AI supervision to a paralegal?

Model Rule 5.3 permits delegation to nonlawyers with appropriate supervision. A paralegal can run the AI tool and organize the output. But a licensed attorney must review and approve the final work product. The supervisory obligation cannot be delegated.

Should I mention AI use in my marketing?

Many firms now highlight AI capabilities as a competitive advantage. If you do, be accurate about what the AI does and does not do. Overstating AI capabilities in marketing could create client expectations you cannot meet, which is both an ethical and a business risk.

For a comprehensive overview of how AI tools compare for contract review, see our best AI contract review tools guide.

Ready to implement this framework? Start with Clause Labs’s free tier — 3 contract reviews per month, structured output designed for attorney verification, and no data retention. It is the safest way to begin building your AI competence.


This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation.

AI ethics,contract review,legal malpractice,Rule 1.1,Rule 1.6,ABA Formal Opinion 512,solo practitioners

Try AI contract review for free

3 free reviews per month. No credit card required.

Start Free