
Ethical AI Use in Legal Practice: A CLE-Eligible Guide


Forty-two jurisdictions have now adopted the technology competence duty from Comment 8 to ABA Model Rule 1.1. That number rose from 40 to 42 in 2025 alone, with the District of Columbia and Puerto Rico joining the list. Puerto Rico went further than any other jurisdiction, creating an entirely new Rule 1.19 dedicated to “Technological Competence and Diligence” rather than burying the duty in a comment.

The direction is unmistakable: every state will eventually require lawyers to understand the technology they use — or choose not to use. And with 26% of legal organizations now actively deploying generative AI (Thomson Reuters 2025 survey), the ethical framework for AI use is no longer a hypothetical CLE topic. It is a daily practice requirement.

This guide provides a rule-by-rule analysis of the ethical obligations governing AI use in legal practice, compiles guidance from major state bars, examines case studies of both ethical and unethical AI use, and offers a decision framework you can apply immediately. Try Clause Labs's free analyzer to see how a purpose-built legal AI tool differs from general chatbots in helping you meet your ethical obligations.


Rule-by-Rule Analysis: How Each Model Rule Applies to AI

Rule 1.1 — Competence: The Dual Obligation

ABA Model Rule 1.1 creates what is effectively a dual obligation for AI use:

Obligation 1: Competence in using AI tools you adopt. If you use an AI contract review tool, you must understand what it does, how it works at a functional level, what its limitations are, and where it is most and least reliable. You need not understand the underlying machine learning architecture. You must understand the tool’s inputs, outputs, and failure modes.

Obligation 2: Competence in knowing what AI tools exist. Comment 8’s requirement to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology” increasingly means that ignorance of AI tools is itself a competence issue — particularly when those tools are widely adopted by peers in your practice area.

What ABA Formal Opinion 512 says: Lawyers using AI must understand “the benefits and risks associated with” the technology, though they need not become AI experts. The standard is a “reasonable understanding of the capabilities and limitations” — enough to evaluate whether the tool’s output is reliable for a given task.

The practical test: Could you explain to a client, in plain language, what the AI tool does with their data, what it is good at, what it misses, and why you trust (or verify) its output? If not, your competence under Rule 1.1 is questionable.

Rule 1.4 — Communication: When and How to Tell Clients About AI

Model Rule 1.4 requires lawyers to keep clients “reasonably informed about the status of the matter” and to “explain a matter to the extent reasonably necessary to permit the client to make informed decisions.”

The disclosure question: Must you tell clients you are using AI?

ABA Formal Opinion 512 does not impose a blanket disclosure requirement, but it strongly recommends disclosure when AI use is material to the representation. The NYSBA Task Force Report goes further, advising lawyers to “disclose to clients when AI tools are employed in their cases.”

When disclosure is clearly required:

  • The AI’s analysis materially affects your advice to the client
  • Client data will be processed by a third-party AI tool
  • The client’s informed consent is needed under Rule 1.6 before uploading confidential information
  • The client specifically asks about your review methodology

When disclosure is good practice but not strictly required:

  • AI is used for initial screening that you independently verify
  • AI assists with non-substantive tasks (formatting, document organization)
  • The AI tool is functionally equivalent to other technology (spell-check, document comparison) that you do not typically disclose

Best practice: Disclose proactively. A brief technology disclosure in your engagement letter costs nothing and prevents problems later. Clients who learn after the fact that AI was used — even appropriately — may lose trust.

Rule 1.5 — Fees: The Billing Ethics of AI Efficiency

The fee implications of AI are more complex than they first appear.

What Opinion 512 prohibits:

  • Billing clients for time spent learning a general AI tool. If you spend 5 hours learning how to use an AI contract review platform, that is overhead, not client work.
  • Billing for hours not actually worked. If AI reduces a 3-hour review to 45 minutes, billing 3 hours is unethical.

What Opinion 512 permits:

  • Charging for time actually spent using AI on a specific client matter
  • Charging reasonable flat fees that reflect the value of the service
  • Passing through reasonable AI subscription costs with prior disclosure
  • Charging a client-requested premium for AI-specific expertise

The value-based billing opportunity: AI creates a compelling case for flat-fee contract review. If you can deliver a thorough NDA review in 40 minutes using AI — the same quality that took 2.5 hours manually — a flat fee of $500-$750 is a win for both you (higher effective hourly rate) and the client (lower total cost, faster turnaround).

Texas Opinion 705 addresses this directly: lawyers “cannot bill for unworked hours, even if AI makes tasks more efficient. However, reasonable costs for AI services — such as subscription fees — may be passed to Texas clients with appropriate prior agreement.”

Rule 1.6 — Confidentiality: The Non-Negotiable Obligation

Model Rule 1.6 is where the most serious risks lie, and where the distinction between general AI tools and purpose-built legal tools matters most.

The core issue: When you upload a client’s contract to an AI tool, you are sharing confidential client information with a third-party technology provider. Rule 1.6(c) requires you to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”

Opinion 512’s guidance: Lawyers must secure “informed consent” before using client confidences in AI tools, and “boilerplate consent included in engagement letters will not be adequate.”

This is the strongest language in Opinion 512. It means:

  1. You cannot simply add “we may use AI tools” to your standard engagement letter and call it informed consent
  2. You must explain specifically what AI tool you are using, what data it processes, and how that data is protected
  3. The client must understand and agree — not just fail to object

The general AI problem: When you paste a contract into ChatGPT, Claude, or similar general-purpose tools, you are sending client data to a platform that may:
– Use the input to train its models (exposing client data to other users’ outputs)
– Store the conversation indefinitely
– Share data with third-party sub-processors
– Not provide any contractual data protection commitments

The purpose-built legal AI solution: Tools designed specifically for legal contract review — like Clause Labs — typically:
– Do not train on client data
– Provide contractual commitments on data handling
– Implement data isolation between users
– Offer defined data retention and deletion policies
– Maintain security certifications (SOC 2 or equivalent)

Practical compliance checklist for Rule 1.6:

  • [ ] Review the AI tool’s terms of service and privacy policy
  • [ ] Confirm the tool does not train on your data
  • [ ] Verify data encryption at rest and in transit
  • [ ] Understand data retention periods and deletion procedures
  • [ ] Obtain informed (not boilerplate) client consent
  • [ ] Document your data protection assessment in the client file
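One way to operationalize the checklist above is to record each completed step in the client file as a structured assessment. The sketch below is purely illustrative (the class and item names are assumptions, not drawn from any bar opinion); it simply enforces the rule that client data should not be uploaded until every item is satisfied and the assessment is documented.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative checklist items mirroring the Rule 1.6 steps above.
RULE_16_CHECKLIST = [
    "Reviewed terms of service and privacy policy",
    "Confirmed tool does not train on client data",
    "Verified encryption at rest and in transit",
    "Understood retention and deletion procedures",
    "Obtained informed (not boilerplate) client consent",
    "Documented the assessment in the client file",
]

@dataclass
class ToolAssessment:
    """Records a data-protection assessment of one AI tool for the client file."""
    tool_name: str
    completed: set = field(default_factory=set)
    assessed_on: date = field(default_factory=date.today)

    def mark(self, item: str) -> None:
        if item not in RULE_16_CHECKLIST:
            raise ValueError(f"Unknown checklist item: {item!r}")
        self.completed.add(item)

    def cleared_for_client_data(self) -> bool:
        # Only upload client data once every checklist item is satisfied.
        return self.completed == set(RULE_16_CHECKLIST)

assessment = ToolAssessment("Example Review Tool")
for item in RULE_16_CHECKLIST[:5]:
    assessment.mark(item)
print(assessment.cleared_for_client_data())  # documentation still missing -> False
```

The point of the structure is that a partially completed assessment blocks the upload; it is a process aid, not a substitute for the underlying legal judgment each item requires.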

Rule 5.3 — Supervision: AI as a “Nonlawyer Assistant”

The 2012 amendment to Rule 5.3 changed “nonlawyer assistants” to “nonlawyer assistance,” expanding the scope to include non-human assistance such as AI tools.

What this means practically:

You must supervise AI output with the same diligence you would apply to work product from a paralegal or junior associate. You would not send a first-year associate’s contract memo to a client without review. You should not send AI-generated analysis to a client without review either.

The firm-level obligation: Partners and managing attorneys must:
– Establish written policies governing AI use (what tools, what tasks, what safeguards)
– Train all attorneys and staff on proper AI use
– Implement review workflows that ensure AI output is verified before use
– Conduct periodic audits of AI-assisted work product

The individual attorney obligation: Every attorney who uses AI tools must:
– Review AI output before relying on it
– Apply professional judgment to AI-generated analysis
– Flag and correct AI errors before they reach clients
– Maintain documentation of the review process

Rules 3.1 and 3.3 — Candor: The Mata v. Avianca Warning

While primarily applicable to litigation, Rules 3.1 (meritorious claims) and 3.3 (candor toward the tribunal) carry a critical lesson for all lawyers using AI.

The case: In Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023), attorneys used ChatGPT to research legal precedent for a court filing. ChatGPT fabricated six non-existent case citations. The attorneys submitted the filing without verifying the citations existed. Judge P. Kevin Castel sanctioned the attorneys $5,000 and required them to notify each judge falsely identified as the author of the fabricated opinions.

Why this matters for contract lawyers: The principle extends beyond litigation. If you rely on AI-generated analysis of contract provisions — including analysis of governing law, statutory references, or case law implications — you must verify it independently. The Stanford study on AI legal tools found hallucination rates of 17% for Lexis+ AI, 33% for Westlaw AI-Assisted Research, and 43% for GPT-4. These are not edge cases. They are systematic failure rates.


State Bar Guidance: A Four-State Comparison

State bars have issued increasingly specific guidance on AI ethics. Here is a comparison of the four most influential state approaches.

California: Practical Principles, Not Prescriptive Rules

The State Bar of California’s Practical Guidance (approved November 2023) provides guiding principles rather than specific mandates:

  • AI is treated as “another technology” subject to existing competence, confidentiality, and supervision rules
  • No specific disclosure requirement, but the guidance emphasizes informed consent for data sharing
  • Treats AI as a living document issue — the guidance is periodically updated as technology evolves
  • Accessible via the Ethics & Technology Resources page

Key takeaway for practitioners: California’s approach gives you flexibility but requires you to think through each ethical issue case by case. There is no safe harbor of “I followed the checklist.”

Florida: Four Clear Ethical Caveats

Florida Bar Opinion 24-1 (January 2024) provides the most structured state guidance, organized around four ethical obligations:

  1. Protect confidentiality: Research the AI tool’s data policies before use
  2. Maintain competence and supervision: Develop policies for oversight; verify AI output
  3. Bill ethically: No double-billing or inflating hours
  4. Comply with advertising rules: AI chatbots on law firm websites must identify themselves and include disclaimers

Key takeaway: Florida’s approach is the most actionable — four clear requirements you can audit against.

New York: The Most Comprehensive Framework

The NYSBA Task Force on Artificial Intelligence Report (April 2024) is the most comprehensive state bar document on AI, with four core recommendations:

  1. Adopt AI guidelines (the report provides detailed guidelines)
  2. Prioritize education over legislation
  3. Identify risks requiring new regulation through expert study
  4. Examine the broader governance role of law in AI development

Key provisions:
– Lawyers should disclose AI use to clients
– AI should not replace professional judgment
– A standing committee should oversee periodic updates
– Education should be the primary response, not restrictive regulation

Key takeaway: New York’s framework is the broadest — it addresses not just practitioner ethics but the structural role of the legal profession in AI governance. Read the full report if you practice in New York or want the most thorough analysis available.

Texas: Competence Before Use, Fair Billing After

Texas Opinion 705 (February 2025) adds specific guidance not found in other states:

  1. Competence before use: Lawyers must “acquire basic technological competence before using any generative AI tool” — not after
  2. Confidentiality as a threshold: Always verify the tool “does not imperil confidential client information” before inputting any data
  3. Mandatory verification: “Always verify the accuracy of any responses received from a generative AI tool”
  4. Billing fairness: Lawyers “should not charge clients for the time ‘saved’ by using a generative AI program”

Key takeaway: Texas’s billing guidance is the most specific. The explicit prohibition on charging for AI-saved time pushes lawyers toward value-based or flat-fee pricing models.

Comparison Table

| Issue | ABA Opinion 512 | California | Florida 24-1 | New York (NYSBA) | Texas 705 |
|---|---|---|---|---|---|
| Competence required | Yes | Yes | Yes | Yes | Yes (before use) |
| Client disclosure | Recommended | Implied | Yes (confidentiality) | Yes (explicit) | Implied |
| Informed consent for data | Required (not boilerplate) | Case-by-case | Yes | Yes | Yes |
| Billing for AI-saved time | Cannot bill unworked hours | Not addressed specifically | No double-billing | Not addressed specifically | Cannot charge for saved time |
| AI tool subscription passthrough | Permitted if reasonable | Not addressed | Not addressed | Not addressed | Permitted with agreement |
| Written AI use policy | Recommended | Not required but implied | Required (develop policies) | Recommended | Implied |
| Verification of AI output | Required | Required | Required | Required | Required (always) |

For more on how these rules apply specifically to contract review, see our CLE course on AI-powered contract review.


Case Studies: Ethical vs. Unethical AI Use

Case Study 1: The Fabricated Citations (Unethical)

What happened: In Mata v. Avianca (2023), attorney Steven Schwartz used ChatGPT to research legal precedent. ChatGPT generated six fabricated case citations. When the opposing party questioned the citations, Schwartz asked ChatGPT to verify them — and ChatGPT confirmed they were real. Schwartz submitted an affidavit attaching the fabricated “decisions.”

Rules violated: Rule 3.3 (candor toward tribunal), Rule 1.1 (competence — failure to understand AI limitations), Rule 3.1 (meritorious claims)

The lesson: Never use AI output without independent verification. ChatGPT’s confirmation that its own citations were real demonstrates a fundamental characteristic of large language models: they generate text that sounds correct regardless of whether it is factually accurate. Verification means checking the source, not asking the AI to verify itself.

Case Study 2: The Proper Contract Review Workflow (Ethical)

Clause Labs is one example of a purpose-built legal AI tool designed for this kind of structured, ethical workflow. The scenario below illustrates what a proper AI-assisted review looks like in practice.

Scenario: A solo practitioner receives a 45-page MSA from a client’s vendor. She uploads it to a purpose-built AI contract review tool. The AI identifies 23 clauses, flags 5 as high risk, identifies 2 missing provisions, and generates suggested redlines.

Her process:
1. Reviews the AI’s contract classification (correct — vendor MSA)
2. Examines each flagged risk against the specific deal context
3. Overrides one AI flag (the liability cap is standard for this industry)
4. Accepts two AI-suggested redlines and modifies a third
5. Adds her own analysis on two provisions the AI did not flag (a jurisdiction-specific payment term issue and a trade secret concern relevant to the client’s industry)
6. Prepares a client memo incorporating her analysis, not the AI’s raw output
7. Documents the AI tool used, its output, and her modifications in the file

Rules satisfied: Rule 1.1 (competent use of tool, independent judgment applied), Rule 1.4 (her engagement letter discloses AI use), Rule 1.5 (she charges a flat fee based on value), Rule 1.6 (she verified the tool’s data practices), Rule 5.3 (she supervised the AI output)

Case Study 3: The Confidentiality Breach (Unethical)

Scenario: An attorney pastes a client’s draft acquisition agreement into ChatGPT with the prompt “Review this contract and identify risks.” The agreement contains sensitive financial terms, the target company’s proprietary valuation data, and personally identifiable information of key employees.

Rules violated: Rule 1.6 (confidentiality — client data shared with a tool that may use it for training, has no data protection agreement, and stores conversations indefinitely), Rule 1.1 (competence — failure to understand the tool’s data practices)

The lesson: General-purpose AI chatbots are not configured for confidential legal work. Using them for client data without understanding their data policies is a Rule 1.6 violation regardless of the quality of the output.


The Ethical Decision Framework

When evaluating whether a specific AI use is ethical, apply this four-question framework:

Question 1: Do I Understand the Tool?

Can you explain to a colleague what the tool does, how it processes data, where it stores information, and what its known limitations are? If not, stop. You need to achieve basic competence before using the tool on client work (Rule 1.1).

Question 2: Is the Client’s Data Protected?

Have you verified the tool’s data practices? Does it train on inputs? Who has access? What are the retention policies? Have you obtained informed (not boilerplate) client consent? If any answer is unclear, do not upload client data until you resolve it (Rule 1.6).

Question 3: Will I Verify the Output?

Are you prepared to independently review the AI’s analysis, apply your professional judgment, and take responsibility for the final work product? If you plan to send the AI’s output to the client without meaningful review, you are not supervising the tool (Rule 5.3) and may be providing incompetent representation (Rule 1.1).

Question 4: Is My Billing Honest?

Are you billing for time actually worked? If AI reduced the task, are you adjusting your bill accordingly? If you are charging a flat fee, is it reasonable for the service provided? Can you justify the fee if questioned? (Rule 1.5)

If all four answers are affirmative, the AI use is likely ethical. If any answer is negative or uncertain, pause and address the gap before proceeding.
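The four-question framework above can be sketched as a simple gating function. This is an illustrative encoding only (the field names and rule mappings are my own labels, not official terminology); its one substantive design choice tracks the text: an unclear answer is treated the same as a negative one, so anything short of an affirmative answer pauses the use.

```python
# Illustrative encoding of the four-question ethical decision framework.
QUESTIONS = {
    "understands_tool": "Rule 1.1",
    "data_protected": "Rule 1.6",
    "will_verify_output": "Rules 5.3 / 1.1",
    "billing_honest": "Rule 1.5",
}

def evaluate_ai_use(answers: dict) -> tuple:
    """Return (proceed, gaps). Proceed only when every answer is affirmatively True;
    a negative OR uncertain answer is a gap that must be resolved first."""
    gaps = [f"{question} ({rule})" for question, rule in QUESTIONS.items()
            if answers.get(question) is not True]
    return (not gaps, gaps)

ok, gaps = evaluate_ai_use({
    "understands_tool": True,
    "data_protected": None,  # unclear data practices count as a gap
    "will_verify_output": True,
    "billing_honest": True,
})
print(ok, gaps)  # not cleared; the Rule 1.6 question remains open
```

Treating `None` (uncertain) identically to `False` is the whole framework in one line: uncertainty is a reason to pause, not to proceed.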

Building Your Ethical AI Practice

The lawyers who will thrive in the AI era are not those who adopt AI fastest or those who resist it longest. They are the ones who adopt AI thoughtfully — with clear ethical frameworks, verified tool selection, documented processes, and unwavering commitment to independent judgment.

The rules have not changed. Competence, confidentiality, communication, and supervision remain the foundation. What has changed is the context in which those rules operate. AI gives you the ability to review contracts faster, catch more issues, and serve more clients. The ethical obligation is to harness that capability while maintaining the standards your clients and the profession demand.

Start with Clause Labs’s free tier — 3 reviews per month, no credit card required — and build your ethical AI workflow on a platform designed to protect your professional obligations from the ground up.

Frequently Asked Questions

Does ABA Formal Opinion 512 require me to use AI?

No. Opinion 512 addresses how to use AI ethically — not whether to use it. However, Comment 8 to Rule 1.1 requires keeping abreast of relevant technology, which increasingly means understanding what AI tools are available even if you choose not to adopt them. The obligation is awareness, not adoption.

Can I be disciplined for using AI tools in my practice?

Using AI tools, per se, is not a basis for discipline. Disciplinary risk arises from how you use AI: sharing confidential client data without consent (Rule 1.6), relying on unverified AI output (Rule 1.1), failing to supervise AI-generated work product (Rule 5.3), or billing dishonestly for AI-assisted work (Rule 1.5). Follow the ethical framework in this guide and document your process.

Should I use a general-purpose chatbot or a purpose-built legal AI tool?

For any task involving confidential client information — including contract review — purpose-built legal tools are strongly preferred. They are designed with Rule 1.6 compliance in mind, provide contractual data protection commitments, and produce structured legal analysis rather than general-purpose text generation. For non-confidential tasks like legal research on public matters, general tools may be appropriate with verification. For a comparison of available tools, see our AI contract review tools guide.

What should my firm’s AI use policy include?

At minimum: (1) approved AI tools, (2) prohibited AI uses, (3) data handling procedures, (4) required review workflows, (5) client disclosure requirements, (6) billing guidelines, (7) documentation requirements, and (8) training schedule. The policy should be reviewed quarterly as tools and guidance evolve. The Clio blog’s overview of AI ethics opinions provides a useful compilation of bar guidance to inform your policy.
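The eight minimum policy components can be kept as a structured document with a review-cadence check, so the quarterly review does not get missed. The skeleton below is a sketch under stated assumptions: the section values and the 91-day cadence are illustrative placeholders, not bar requirements.

```python
from datetime import date

# Illustrative skeleton of the eight minimum policy components listed above.
# All values are placeholders to be replaced with firm-specific content.
AI_USE_POLICY = {
    "approved_tools": ["(list vetted tools here)"],
    "prohibited_uses": ["pasting client data into general-purpose chatbots"],
    "data_handling": "follow the firm's Rule 1.6 assessment procedure",
    "review_workflows": "attorney review before any AI output reaches a client",
    "client_disclosure": "engagement-letter disclosure plus informed consent",
    "billing_guidelines": "bill time actually worked; disclose passthrough costs",
    "documentation": "record tool, output, and modifications in the client file",
    "training_schedule": "onboarding session plus periodic refreshers",
}

def review_overdue(last_review: date, today: date, cadence_days: int = 91) -> bool:
    """Flag the policy for its quarterly review once the cadence has lapsed."""
    return (today - last_review).days > cadence_days

print(review_overdue(date(2025, 1, 1), date(2025, 6, 1)))  # -> True
```

A dictionary of named sections also makes it easy to audit that none of the eight components is missing before the policy is circulated.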

Is using AI for contract review less risky than using it for litigation research?

In some ways, yes. Contract review AI tools provide structured analysis against defined criteria — the risk of wholesale fabrication (as in Mata v. Avianca) is lower because the tool is analyzing a document you provided, not generating citations from scratch. However, contract review AI can still miss critical provisions, misclassify risk levels, or fail to identify jurisdiction-specific issues. The verification obligation applies equally to both use cases. See our analysis of how to review contracts for red flags for the human judgment elements that remain essential.


This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation.

