
In the spring of 2023, OpenAI reported that GPT-4 scored around the 90th percentile on the Uniform Bar Exam. No one was particularly shocked. We've grown numb to these milestones. Another AI beats humans at something. Scroll past. Move on.

But this particular achievement deserves more than a glance. Legal reasoning isn't pattern matching. It's not like identifying cats in photos. It requires understanding nuance, weighing competing arguments, and applying precedent to novel situations. It demands the kind of contextual judgment that we thought would be one of the last bastions of human expertise. Yet here we are.

The question is no longer whether AI can reason about the law. The question is why it works so well, and what happens next.

The Unexpected Gift of Adversarial Training

Here's something counterintuitive: legal argument might be one of the easiest things for AI to master, precisely because law is so structured. Unlike medicine, where new diseases keep producing edge cases, or engineering, where materials behave in unexpected ways, law operates within a bounded system. Laws are written. Precedents are catalogued. Arguments follow templates.

When you train a language model on millions of legal documents—court decisions, briefs, statutes, law review articles—you're giving it something special: a blueprint for how humans reason under constraint. A legal argument isn't creative brainstorming. It's strategic pattern assembly. Here's the relevant precedent. Here's how it applies to our case. Here's why the opposing counsel is wrong.

The AI doesn't truly understand justice the way a judge might. But it doesn't need to. It needs to recognize which arguments have worked before, and which new combinations of existing arguments might work now. That's something transformers excel at.

Vendors are already capitalizing on this. Thomson Reuters launched AI-Assisted Research in Westlaw in 2023, and major law firms soon reported 30-40% time savings on legal research tasks. Harvey, a generative AI platform built specifically for lawyers, raised a $21 million Series A in 2023 as firms discovered that much of their junior associates' workload could be handled by an LLM at a fraction of the cost.

Where AI Lawyers Actually Fail (And Why It Matters)

But—and this is crucial—AI legal systems have spectacular failure modes that humans rarely experience.

Take the case of Mata v. Avianca, Inc. In 2023, a lawyer used ChatGPT to help draft a legal brief. The AI confidently cited six precedents, all of them completely fabricated. They sounded plausible. They followed proper citation format. But neither opposing counsel nor the judge could locate them, and the lawyers were sanctioned.

This is the dirty secret no one likes discussing: AI hallucinations aren't bugs; they're inherent to how these systems work. An AI trained on human text learns to generate plausible-sounding text. Sometimes that text is accurate. Sometimes it's completely fabricated. The model has no way to distinguish between them, because it optimizes for plausibility, not truth; it has no internal ledger of real cases to check against.

For a lawyer, this is catastrophic. A legal argument must cite real cases. Real statutes. Real precedents. You can't get partial credit for a well-reasoned argument based on fictional law.

The current workaround? Human lawyers use AI as a research assistant, not as an expert. You ask it to find relevant cases, then you verify everything yourself. The AI accelerates the grunt work. But the judgment—the actual legal reasoning—remains human.

For now.
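Concretely, that verification step can be scripted. Below is a minimal sketch in Python; the search endpoint and response shape are hypothetical stand-ins for whichever case-law database a firm actually licenses, and the point is simply that nothing gets filed until every citation resolves to a real case.

```python
# Sketch of a pre-filing citation check, assuming a hypothetical
# case-law search API. Swap in whatever service you license
# (CourtListener, Westlaw, Lexis, etc.).
import requests

SEARCH_URL = "https://api.example-caselaw.com/v1/search"  # hypothetical endpoint

def citation_exists(citation: str) -> bool:
    """Return True only if the citation resolves to a real reported case."""
    resp = requests.get(SEARCH_URL, params={"citation": citation}, timeout=10)
    resp.raise_for_status()
    return bool(resp.json().get("results"))  # hypothetical response shape

# Citations pulled from an LLM-drafted brief. The first is one of the
# fabricated cases from Mata v. Avianca; the second is real.
draft_citations = [
    "Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)",
    "Chevron U.S.A., Inc. v. NRDC, 467 U.S. 837 (1984)",
]

for cite in draft_citations:
    verdict = "found" if citation_exists(cite) else "NOT FOUND, do not file"
    print(f"{cite}: {verdict}")
```

A check like this would have flagged every one of the fabricated Avianca citations: they look perfect on the page and resolve to nothing.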

The Emerging Two-Tiered Legal System

Here's where things get uncomfortable. We're creating a bifurcated legal world.

Big law firms with the resources to implement AI systems can handle more cases faster and cheaper. A partner at a firm using Lexis+ AI can oversee far more work with fewer junior associates, because the AI pre-screens documents and surfaces relevant precedents. The firm's hourly billing rates stay the same. Profit margins expand.

Meanwhile, solo practitioners and small firms either adopt the same tools (and compete on price) or get left behind. Legal aid organizations serving low-income clients can't afford sophisticated AI systems. So the people who most need efficient legal representation—because they can't afford traditional lawyers—are precisely the ones who won't benefit from this technology.

We've seen this movie before. When legal research moved from physical law libraries to Westlaw and LexisNexis in the 1980s, the barrier to entry for small firms didn't disappear; it turned into expensive subscriptions they strained to afford. The same thing is happening again, just faster.

The scale of unmet need is staggering: the Legal Services Corporation has found that low-income Americans get no or insufficient legal help for roughly 92% of their substantial civil legal problems. Not because they don't have legitimate cases. Because they can't afford lawyers. AI legal systems might eventually lower costs enough to serve this population, but only if the systems are actually deployed with that goal. So far, there's little market incentive to do so.

What Actually Requires Human Lawyers Now

So what's left for human lawyers? What can't be automated?

Judgment calls that lack clear precedent. Negotiations where you need to read a person's face and intentions. Building relationships with clients who are terrified and need empathy, not just legal advice. Understanding the politics of a local courthouse. Knowing which judge hates a particular argument style and which one respects aggressive advocacy.

These are the things that still require human intuition, experience, and emotional intelligence. But here's the problem: while they drive trust and client loyalty, they're not where the billable hours pile up.

The volume work, research, document review, first-draft legal writing, is precisely what AI excels at, and it's also what fills most of the invoices. So the incentive structure pushes AI into the lucrative parts of legal practice, leaving humans to handle the parts that clients value but don't want to pay for.

The Unresolved Questions

We're still in the early innings. Supreme Court arguments remain a distinctly human affair. Client interviews require human judgment. Ethical dilemmas about how aggressively to pursue a case based on a client's circumstances—these aren't amenable to algorithmic solutions.

But we're heading toward a future where the routine legal work that fills 60-70% of a junior associate's hours gets automated. Law schools are already wrestling with declining applications. Bar passage rates have slipped. Some firms are quietly hiring fewer new graduates.

The legal profession will adapt. It always does. But the transition period will be messy for people pursuing law school today. And the quality of legal services available to people who can't afford premium representation will likely get worse before it gets better.

That's the real argument we should be having.