
Legal

GPT-4 scored in the 90th percentile on the bar exam. Lawyers have been sanctioned for citing AI-hallucinated cases in federal court. Harvey AI raised over $100M and partnered with BigLaw. CoCounsel was acquired by Thomson Reuters. The "robot lawyers" debate is live, the billable hour death spiral is real, and the firms that figure out new pricing models before their clients force the issue will define the next decade of legal services.

Overview

Legal is the industry where AI hallucination carries the harshest professional consequences. The Mata v. Avianca sanctions proved that AI-fabricated citations in federal court filings are not a theoretical risk — they have happened, attorneys were sanctioned, and it made the news. This is not an argument against AI in legal. It is an argument for engineering it with citation grounding and human review checkpoints that match the severity of the failure mode.

···

What AI Is Actually Changing

The near-term transformation is in research and document-intensive work. Harvey AI, CoCounsel, and the LexisNexis AI stack can surface relevant cases, identify conflicts, and draft research memos at a speed no human researcher can match — when grounded in real legal databases. Document review for eDiscovery can be triaged by AI in a fraction of the time required for manual review. Contract review and due diligence are following the same pattern in transactional practice, with Spellbook and Ironclad driving adoption in mid-market firms.

What is not changing quickly is the work that requires judgment, strategy, and client relationships. Depositions, trials, negotiations, and complex regulatory strategy are human work. The AI is handling the substrate — the research, the document analysis, the drafting of routine instruments — so that lawyers can spend more time on the work that actually requires a lawyer.

The Confidentiality Architecture Problem

Rule 1.6 creates data architecture requirements that most SaaS legal tools do not satisfy. Client confidentiality is not just a policy requirement — it is a legal doctrine with privilege implications. If client data from Matter A is accessible when processing Matter B, that is not just a data governance problem; it is potential privilege waiver exposure that can affect the client's legal position.

Rule 1.6 Engineering Requirements
  • Client data must be isolated at the storage layer — application-level access controls are not sufficient
  • AI inference must not leak information across matter boundaries — embeddings and vector indices need per-matter isolation
  • Audit logs must capture every AI interaction with client data — who, what, when, for which matter
  • Third-party AI tool vendors require informed client consent under most bar interpretations — the consent workflow must be built into the onboarding process
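The storage-layer isolation requirement can be made concrete. A minimal sketch (hypothetical `MatterStore` class; a real system would use separate physical collections or per-matter encryption keys, and a vector index rather than substring search):

```python
from dataclasses import dataclass, field

@dataclass
class MatterStore:
    """Per-matter document store: one isolated index per matter.

    Illustrative sketch -- isolation is modeled with separate
    dictionaries; a production system would enforce it with separate
    physical collections, encryption keys, or databases per matter.
    """
    _indices: dict = field(default_factory=dict)

    def add(self, matter_id: str, doc_id: str, text: str) -> None:
        # Each matter gets its own index; nothing is shared.
        self._indices.setdefault(matter_id, {})[doc_id] = text

    def search(self, matter_id: str, query: str) -> list[str]:
        # Queries are scoped to exactly one matter. There is no API
        # that searches across matters, so cross-matter leakage is
        # structurally impossible rather than merely policy-enforced.
        index = self._indices.get(matter_id, {})
        return [d for d, t in index.items() if query.lower() in t.lower()]

store = MatterStore()
store.add("matter-A", "doc-1", "Indemnification clause draft")
store.add("matter-B", "doc-2", "Indemnification precedent memo")
print(store.search("matter-A", "indemnification"))  # ['doc-1']
```

The design point is that matter scoping lives in the storage API itself, not in an application-level permission check that a bug can bypass.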
···

The Supervision Engineering Problem

Rule 5.3 requires lawyers to supervise the work of non-lawyer assistants. Courts and bar associations are extending this duty to AI systems. The ABA's Formal Opinion 512 is explicit: AI outputs in client-facing work require substantive review. "Defensible" means the review is logged, the reviewer is identified, and the review was substantive rather than rubber-stamp.

Designing Compliant Legal AI Workflows

01
Citation Grounding

All AI research outputs must be linked to verifiable primary sources in real legal databases. RAG pipelines with Westlaw or LexisNexis APIs, not general-purpose web search.

02
Confidence Scoring

Surface uncertainty explicitly. When the model's confidence in a legal conclusion is low, the UI must make that visible to the reviewing attorney — not hide it.

03
Review Checkpoints

Build structured review stages into the workflow. AI drafts the memo; the attorney reviews and approves before it goes anywhere. The approval is logged with the attorney identity and timestamp.

04
Audit Trail Persistence

Every AI interaction with client data writes to an append-only log. This is not optional — it is the audit evidence for Rule 5.3 supervision documentation.
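The review-checkpoint and audit-trail steps above can be sketched together as a hash-chained append-only log (hypothetical names; a production system would back this with a WORM store or a database that enforces append-only writes):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only audit trail (sketch): each entry is hash-chained to
    the previous one, so any after-the-fact edit is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, matter_id: str) -> dict:
        # Who, what, when, for which matter -- plus a link to the
        # previous entry's hash.
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"actor": actor, "action": action, "matter": matter_id,
                 "ts": time.time(), "prev": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute the chain; a tampered entry breaks verification.
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("ai:research-agent", "draft_memo", "matter-A")
log.record("attorney:jdoe", "approve_memo", "matter-A")
print(log.verify())  # True
```

Recording the attorney's approval as its own log entry is what makes the review defensible: the reviewer is identified, timestamped, and tied to the specific AI output.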

Domain Challenges
01

Rule 1.6 (confidentiality) creates data isolation requirements that most multi-tenant SaaS legal tools do not satisfy — client data cannot commingle across matters

02

The duty of supervision under Rule 5.3 cannot be delegated to software — every AI output in a legal workflow requires a defensible human review stage

03

AI hallucination in legal research is career-ending — in Mata v. Avianca, attorneys submitted ChatGPT-fabricated citations and were sanctioned by the court; this is a documented production failure mode

04

The billable hour creates structural misalignment: AI doing 10 hours of research in 10 minutes directly reduces firm revenue under traditional pricing, so firms are actively disincentivized to deploy efficiency tools

05

Large litigation document review (millions of documents, privilege review, relevance coding) requires accuracy that general-purpose models do not consistently achieve without fine-tuning

06

Bar association ethics opinions on AI use are jurisdiction-specific and still evolving — what is permitted in one state may be an ethics violation in another

What Sets It Apart

Attorney-client privilege is a legal doctrine with zero tolerance for data leakage — a single breach can waive privilege across an entire matter, exposing clients and the firm to significant liability

The Mata v. Avianca sanctions are a documented production failure — hallucinated citations are not a theoretical risk, they have ended careers and triggered court orders

The billable hour creates misaligned incentives that AI adoption has to navigate explicitly — firms need new pricing models before they can fully adopt efficiency-improving technology

Jurisdictional variation in AI disclosure rules means a multi-state firm needs different AI governance policies for different practice areas and courts

UPL (unauthorized practice of law) concerns with AI tools used by non-lawyers are growing — tools that enable non-lawyers to perform legal work create regulatory exposure for the tool provider

Domain Insights
01

Hallucination Is the Only Bug That Ends Careers

The Mata v. Avianca case is the clearest example: attorneys submitted a brief citing six cases that did not exist, all generated by ChatGPT. The court sanctioned the attorneys personally. The technical response is not "engineer prompts more carefully" — it is citation grounding: AI research outputs must be linked to verifiable sources in real legal databases, confidence-scored, and reviewed against Westlaw or LexisNexis before being included in any filing. RAG pipelines with legal database grounding are now table stakes for any serious legal AI product. General-purpose LLMs used directly for legal research are a malpractice risk.
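A citation-grounding gate can be as simple as refusing to advance any draft whose citations do not resolve in a real legal database. A sketch, with a hypothetical in-memory set standing in for a Westlaw or LexisNexis lookup (the citation strings here are invented placeholders):

```python
# Hypothetical stand-in for a legal database API; in production this
# would be a live lookup against Westlaw or LexisNexis.
KNOWN_CITATIONS = {
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",  # placeholder entry
}

def verify_citations(draft_citations: list[str]) -> dict:
    """Split an AI draft's citations into verified and unresolved.

    Any unresolved citation blocks the draft from advancing toward
    filing; the reviewing attorney must confirm or remove it.
    """
    verified = [c for c in draft_citations if c in KNOWN_CITATIONS]
    unresolved = [c for c in draft_citations if c not in KNOWN_CITATIONS]
    return {"verified": verified, "unresolved": unresolved,
            "can_file": not unresolved}

result = verify_citations([
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",
    "Doe v. Roe, 999 F.4th 1 (hallucinated)",
])
print(result["can_file"])  # False -- one citation did not resolve
```

The point is the hard gate: a fabricated citation cannot pass silently, because failure to resolve is treated as a blocking error rather than a warning.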

02

The Billable Hour Death Spiral Is a Pricing Problem, Not a Technology Problem

Law firms that have deployed AI widely report that associates are struggling to meet billable hour targets because work is getting done faster. A partner who used to bill 30 hours of associate time for a complex research memo now faces a client who has read about Harvey AI and is asking why the bill is not 3 hours. The firms moving fastest on AI adoption are those that have shifted toward flat fees, subscription models, or value-based pricing — because those models do not penalize efficiency. Legal tech builders need to understand that billing model transformation is part of the product design problem. Tools that only work within hourly billing create institutional resistance that kills adoption.

03

eDiscovery Is Where AI Economics Are Most Unambiguous

A review set of one million documents that previously required a team of contract reviewers for weeks can be triaged by AI in hours, with humans reviewing only the flagged uncertain documents. The economics are unambiguous. The engineering challenge is privilege review accuracy — false positives (flagging non-privileged documents as privileged) are costly; false negatives (missing privileged documents in production) can be catastrophic. DISCO, Relativity, and the AI eDiscovery platforms have fine-tuned models for this specific task. General-purpose LLMs without legal fine-tuning will not achieve the accuracy bar that privilege review requires.
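The asymmetry between the two error types can be encoded directly in the triage thresholds. A sketch with illustrative threshold values (a real pipeline would calibrate these against statistically validated review samples):

```python
def triage(doc_id: str, privilege_score: float,
           produce_below: float = 0.02, withhold_above: float = 0.90):
    """Route a document based on a model's privilege probability.

    The thresholds are deliberately asymmetric (values illustrative):
    producing a privileged document is the catastrophic error, so only
    very low scores skip human review, while everything in the wide
    uncertain band goes to a human privilege reviewer.
    """
    if privilege_score >= withhold_above:
        return (doc_id, "withhold_pending_log")  # likely privileged
    if privilege_score <= produce_below:
        return (doc_id, "produce")               # confidently non-privileged
    return (doc_id, "human_review")              # uncertain band

batch = {"d1": 0.95, "d2": 0.01, "d3": 0.40}
routed = {d: triage(d, s)[1] for d, s in batch.items()}
print(routed)  # d1 withheld, d2 produced, d3 routed to human review
```

Humans review only the flagged uncertain band, which is where the economic gain comes from: the model handles the confident bulk, and the costly human hours concentrate on the documents where an error would matter.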

Common Pitfalls
01

Using general-purpose LLMs for legal research without citation grounding against real legal databases — the Mata v. Avianca sanctions are a documented outcome, not a hypothetical

02

Multi-tenant AI tools that commingle client data — violates Rule 1.6 and creates privilege waiver exposure

03

Automating legal workflows without human review stages — Rule 5.3 supervision duty cannot be engineered away

04

Building legal AI without jurisdiction-specific ethics opinion monitoring — what is compliant today may not be compliant after the next ABA formal opinion

05

Ignoring the billing model transition — deploying efficiency tools in firms that have not updated pricing models creates internal political resistance that kills adoption

Our Approach

We engineer legal technology that treats Rule 1.6 as an architecture requirement, not a policy checkbox. Client data isolation is enforced at the data layer — not just at the application layer. Our AI implementations include citation verification against real legal databases, confidence scoring, and structured human review workflows because the consequences of legal AI errors are uniquely severe: sanctions, malpractice exposure, and bar complaints. We build against the platforms law firms actually use (Clio, Relativity, iManage) rather than forcing workflow changes on practitioners who have spent years building their practice around existing tools.

Ready to build for Legal?

We bring domain expertise, not just engineering hours.

Start a Conversation

Free 30-minute scoping call. No obligation.