Legal AI Ethics · 13 min read

AI and Attorney Competence: What Rule 1.1 Means for Contract Review


Forty-two U.S. jurisdictions now require lawyers to understand technology as part of their competence obligation. That number was zero before 2012. The shift started with a single sentence added to ABA Model Rule 1.1, Comment 8: lawyers must “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” Fourteen years later, that sentence has become the legal foundation for AI adoption in the profession, and increasingly, the basis for arguing that ignoring AI may itself be a competence failure.

This article explains what Rule 1.1 technology competence actually requires, how it applies specifically to AI contract review, and what practical steps you can take to demonstrate compliance. If you are a solo or small firm lawyer evaluating AI tools, this is the ethical framework you need.

Try Clause Labs Free — start building your AI competence with a purpose-built contract review tool. 3 reviews per month, no credit card.

The Rule That Changed Everything

ABA Model Rule 1.1 states: “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”

The rule itself has not changed since adoption. What changed in 2012 was Comment 8, which now explicitly includes technology within the scope of competence. The ABA’s amendment clarified that keeping “abreast of changes in the law and its practice” includes understanding “the benefits and risks associated with relevant technology.”

Then in July 2024, ABA Formal Opinion 512 applied this principle directly to generative AI, stating that lawyers must “understand the capacity and limitations of GAI and periodically update that understanding.” This was the ABA’s first comprehensive ethical guidance on AI, and it specifically addressed competence, confidentiality, communication, candor, supervision, and fees.

The trajectory is clear. Technology competence is no longer aspirational. It is a professional obligation with enforcement teeth.

What “Technology Competence” Actually Means

Technology competence does not mean you must use every new tool. It does not mean you need to become a technologist. And it does not mean that failure to adopt AI is automatically an ethics violation.

What it does mean, based on the ABA’s framework and Formal Opinion 512, is a three-part obligation:

1. Awareness

You must know that AI contract review tools exist and understand, at a general level, what they can do. This does not require expertise. It requires the same level of awareness you would apply to any development in legal practice that affects how you serve clients.

The analogy: you did not need to use email the day it was invented. But at some point, not understanding what email is and why it matters to your practice became a competence issue.

2. Evaluation

You must assess whether AI tools are appropriate for your practice. This means looking at the tools available, understanding their capabilities and limitations, evaluating their security posture, and making a reasoned judgment about whether they would benefit your clients.

Critically, “we evaluated AI tools and decided they are not appropriate for our practice at this time” is a defensible position, as long as the evaluation actually occurred and was documented.

3. Implementation (If You Adopt)

If you do adopt AI tools, you must use them competently. This means understanding what the tool does, supervising its output, verifying its analysis, and maintaining your professional judgment as the final decision-maker.

Formal Opinion 512 is explicit on this point: competence “requires the lawyer to have a reasonable understanding” of the technology, not just access to it.

States That Have Adopted Technology Competence

As tracked by LawNext’s Tech Competence Scoreboard, the adoption landscape as of early 2026:

42 jurisdictions have adopted Comment 8 or equivalent language:

This includes 40 states plus the District of Columbia and Puerto Rico. D.C. adopted its amendment in April 2025. Puerto Rico went further with Rule 1.19, effective January 2026, which creates a standalone technology competence requirement that exceeds the ABA Model Rules.

States with Comment 8 PLUS AI-specific guidance:

Several states have gone beyond Comment 8 to address AI specifically:

  • California: Published practical guidance on AI, requiring a competence assessment before use and disclosure to the client when AI use materially affects the representation
  • Florida: Opinion 24-1 addresses AI use with specific requirements for confidentiality and billing
  • Texas: Opinion 705 (February 2025) requires human oversight of AI-generated work
  • North Carolina: 2024 Formal Ethics Opinion 1 provides detailed AI guidance
  • Oregon: Formal Opinion 2025-205 addresses AI tools specifically

Remaining states without Comment 8:

A small number of states have not formally adopted the amended comment. However, their existing competence rules are broad enough that technology competence may be implied. As a practical matter, the direction is uniform: technology competence is expected everywhere.

For a comprehensive 50-state reference, see Justia’s AI and Attorney Ethics Rules survey.

How Rule 1.1 Applies to AI Contract Review

The technology competence framework maps directly onto the decision to use (or not use) AI contract review tools. Here is how each element of Rule 1.1 applies.

The Knowledge Requirement

You must understand what the AI tool does:

  • Clause identification: The tool reads the contract text and categorizes each provision (indemnification, limitation of liability, termination, etc.)
  • Risk scoring: The tool assigns risk levels based on standard practice for the contract type
  • Missing clause detection: The tool identifies provisions that are typically present in this contract type but absent from the document
  • Redline suggestions: The tool generates proposed edits to problematic provisions

You must also understand what the tool does not do:

  • It does not understand your client’s business objectives or risk tolerance
  • It does not evaluate enforceability in a specific court before a specific judge
  • It does not account for the parties’ prior course of dealing
  • It does not replace your professional judgment on how to advise your client

The Skill Requirement

You must be able to evaluate AI output critically:

  • Can you tell when the AI’s clause categorization is wrong?
  • Can you assess whether a flagged risk is actually significant in the context of this deal?
  • Can you determine whether a “missing clause” finding is a genuine gap or just a different structural approach?
  • Can you apply the AI’s suggestions strategically, knowing which battles to fight in negotiation?

This is where your legal expertise intersects with the AI tool. The AI provides the data. You provide the judgment. For a detailed framework on how to review AI-flagged issues, see our guide to reviewing contracts for red flags.

The Thoroughness Requirement

You must use AI as a supplement, not a substitute:

  • AI output must be reviewed before relying on it
  • AI analysis must be cross-referenced against the actual contract text
  • Client-specific context must be layered on top of AI findings
  • The final work product must reflect your professional judgment, not just the AI’s output

The ABA’s 2024 Legal Technology Survey found that 75% of lawyers cite accuracy as their top concern about AI. That concern directly supports the thoroughness requirement: you must verify, not just trust.

The Preparation Requirement

You must learn the tool before using it on client matters:

  • Test the tool on contracts you have already reviewed manually (so you can compare results)
  • Understand the tool’s strengths and weaknesses by contract type
  • Know how the tool handles edge cases and unusual provisions
  • Document your testing process

The Flip Side: Is NOT Using AI a Competence Issue?

This is the question that makes the legal profession uncomfortable. The argument is straightforward:

If AI can identify risks in a 50-page MSA that a manual review might miss… If AI can complete a risk analysis in 60 seconds that would take 3 hours manually… If the cost of AI review ($49/month) is trivial compared to the cost of missing an issue (malpractice claim, client loss, reputational damage)…

Then ignoring AI tools entirely may itself raise competence questions.

This is not hypothetical. The Redgrave LLP analysis of technology competence notes that the duty extends to “understanding what tools exist and evaluating them.” A lawyer who has never looked at AI contract review tools in 2026 has arguably failed the “awareness” prong of technology competence.

Important qualifiers: Not using AI is not malpractice. No lawyer has been disciplined for declining to adopt AI tools. But the trajectory is clear. As AI tools become standard practice, the bar for reasonable competence will shift. The lawyers who evaluated AI, tested it, and made informed decisions — whether to adopt or not — will be in a stronger position than those who simply ignored it.

Thomson Reuters’ 2025 report found that 78% of law firm respondents believe generative AI will become central to legal workflow within five years. If that prediction is even partially correct, the competence implications are significant.

Case Studies: Where Competence and AI Intersect

Scenario 1: The Missed Liability Cap

A solo lawyer reviews a 50-page MSA manually for a client. Under time pressure, she misses a limitation of liability buried inside the definitions section. The cap is set at $10,000 for a $500,000 engagement. The client suffers $200,000 in damages from the vendor’s breach and can only recover $10,000.

Competence analysis: If a readily available, affordable AI tool would have flagged the buried liability cap — and the lawyer never evaluated such tools — there is a credible argument under Comment 8 that the lawyer failed the awareness and evaluation prongs of technology competence. The lawyer’s strongest defense would be documented evidence that she evaluated AI tools and reasonably concluded they were not appropriate for her practice.

Scenario 2: The Unsupervised AI Output

A lawyer uses an AI contract review tool to analyze an employment agreement. The tool flags a non-compete clause as potentially unenforceable. Without checking state-specific law, the lawyer advises the client that the non-compete is void. The client relies on this advice, takes a job with a competitor, and is sued. The non-compete was actually enforceable in their jurisdiction.

Competence analysis: The lawyer failed the thoroughness and skill requirements. The AI provided a general finding. The lawyer’s obligation was to apply jurisdiction-specific analysis — exactly the kind of contextual judgment that AI cannot provide. Using AI is not a defense when the lawyer failed to supervise the output. For more on the ethical framework for AI supervision, see our guide on using AI for contract review ethically.

Scenario 3: The Refusal to Learn

A client specifically asks their lawyer whether they should use AI tools to review the 15 vendor contracts their startup signs each quarter. The lawyer dismisses the question: “I don’t believe in AI for legal work.” The lawyer has never evaluated any AI legal tools, taken any CLE on AI, or read any bar guidance on AI.

Competence analysis: Under Comment 8, the lawyer has a duty to understand the “benefits and risks associated with relevant technology.” Dismissing AI without evaluation is different from evaluating it and concluding it is not appropriate. The former may violate the awareness prong. The latter does not. For specific examples of how AI handles different contract types, see our analysis of common NDA mistakes.

How to Demonstrate AI Competence: 7 Practical Steps

Whether or not you choose to adopt AI tools, these steps demonstrate technology competence under Rule 1.1:

1. Take a CLE course on AI in legal practice. Most state bars now offer AI-specific CLE programs. Complete at least one per year. Keep the certificates.

2. Read your state bar’s AI guidance. Justia’s 50-state survey is a starting point. Check your specific state bar’s website for adopted opinions.

3. Test AI tools on non-client work. Use sample contracts or your own engagement letters. Compare AI output to your manual review. This builds understanding without risking client interests. Clause Labs’s free tier provides 3 reviews per month for this purpose.

4. Document your AI evaluation process. Write down which tools you evaluated, what you learned, and your conclusions. Even a one-page memo to your file demonstrates the awareness and evaluation prongs.

5. Create an AI use policy for your practice. This does not need to be complex. Cover: which tools are approved, how output is verified, how client data is protected, and when AI is not appropriate.

6. Review AI output systematically. If you adopt a tool, develop a consistent verification process. Check every flagged risk against the contract text. Apply your judgment to every recommendation.

7. Stay current on AI developments. Follow LawSites/LawNext and the ABA’s technology resources. Review your AI use policy quarterly. AI is evolving faster than the ethics rules that govern it.

How Purpose-Built Tools Support Rule 1.1 Compliance

The right AI tool makes competence easier, not harder:

Transparency: Purpose-built contract review tools provide structured output (clause-by-clause analysis, risk scores with explanations, confidence indicators). You can see exactly what the AI analyzed and why it flagged specific provisions. This supports the knowledge requirement.

Verifiability: Structured output is easier to verify than freeform text. When a tool tells you “this is an indemnification clause rated High Risk because it is one-sided and uncapped,” you can check that assessment in seconds. This supports the thoroughness requirement.

Human-in-the-loop design: Tools built for lawyers assume the lawyer makes the final decision. They present findings and suggestions, not conclusions. This supports the skill requirement.

Testability: Free tiers and trial periods let you test the tool before using it on client matters. This supports the preparation requirement.

The ABA’s 2024 Legal Technology Survey found that AI adoption among lawyers nearly tripled from 11% in 2023 to 30% in 2024. Among firms with 500+ lawyers, adoption hit 47.8%. The gap between firms using AI and those that are not is widening, and it maps directly onto the competence divide. For a comparison of the tools available, see our best AI contract review tools guide.

If you are evaluating AI contract review tools for the first time, start with Clause Labs’s free analyzer — upload any contract and get a structured risk report in under 60 seconds. No signup required. It is the fastest way to see what AI contract review actually looks like.

Frequently Asked Questions

Can I be disciplined for using AI in my practice?

You can be disciplined for using AI improperly — specifically, for submitting unverified AI output, violating client confidentiality, or failing to supervise AI work product. Using AI itself is not an ethics violation when done within the framework of Rules 1.1 (Competence), 1.6 (Confidentiality), and 5.3 (Supervision). Formal Opinion 512 addresses this comprehensively.

Can I be disciplined for NOT using AI?

Not yet. No lawyer has been disciplined solely for declining to adopt AI tools. However, the competence trajectory is toward expecting lawyers to at least evaluate available technology. The safest position is documented awareness and evaluation, regardless of whether you ultimately adopt.

Do I need CLE credits specifically on AI?

Most states do not yet require AI-specific CLE. However, several states are considering it, and many professional responsibility CLE programs now include AI components. Taking AI-specific CLE voluntarily demonstrates competence and provides documentation.

How do I evaluate whether an AI tool is “competent”?

Apply the same due diligence you would to hiring an associate: What is the tool’s accuracy on the contract types you review? How does it handle edge cases? What are its known limitations? What security certifications does it hold? How responsive is support? Test it on contracts where you already know the answer, and compare the AI’s findings to your own.

What if my client objects to AI use?

Respect the client’s wishes. Rule 1.4 requires communication about the means by which the client’s objectives are to be accomplished. If a client specifically directs you not to use AI, document that instruction and comply. The competence obligation does not override the client’s right to direct the representation.


This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation.

