AI Agents and the Widening Divide in Legal with Zach Abramowitz

 


As artificial intelligence matures inside the legal profession, the conversation is evolving. In this episode of AI and the Future of Law, Jen Leonard and Bridget McCormack are joined by Zach Abramowitz for a 2026 legal market check-in. The conversation explores the shift from AI adoption to measurable ROI, the emergence of AI agents, and the growing divide between lawyers who have deeply integrated AI into their workflows and those who remain hesitant.

 

Zach discusses why 2026 marks a structural shift for law firms, how AI-first boutiques are creating competitive pressure on traditional firms, and why external capital and private equity conversations are resurfacing. The episode also examines how mindset—not just tool usage—determines whether lawyers thrive in this moment of rapid technological change.

Key Takeaways

  • From Adoption to ROI: The profession is moving beyond awareness and experimentation toward measurable impact on pricing, margins, and service design.
  • AI Agents Change the Frame: The shift from assistants to agents signals deeper workflow transformation.
  • The Divide Is Growing: Experience with AI compounds, widening the gap between superusers and skeptics.
  • AI-First Firms Create Pressure: New entrants using AI-enabled models are challenging traditional firm structures.
  • Capital Questions Return: Investment in AI and talent is reviving debates about external financing and ownership.

Final Thoughts

This episode highlights a pivotal moment for the legal profession. AI is no longer a novelty or optional productivity enhancer; it is becoming embedded in competitive strategy. The firms and individuals who move from experimentation to structural change—rethinking pricing, workflows, and capital allocation—will shape the next chapter of legal practice.

The divide is not simply technological. It is strategic and cultural, and it will define the profession’s trajectory in the years ahead.

Transcript

Jen Leonard: Hi, everyone, and welcome back to AI and the Future of Law, the podcast where we explore the accelerating developments in artificial intelligence and what they mean for the future of law, legal education, technology, and legal services.

I’m your co-host, Jen Leonard, founder of Creative Lawyers, here, as always, with the wonderful Bridget McCormack, president and CEO of the American Arbitration Association.
We’re so excited today to have Bridget’s other podcast co-host, Zach Abramowitz, joining us. So I’m going to turn it over to Bridget to introduce him to everyone.

Bridget McCormack: Thanks, Jen. It’s great to see you, and it’s so great to have Zach Abramowitz on the podcast today.

Zach is one of the sharpest—and frankly, most entertaining—voices at the intersection of law, technology, and disruption of the legal profession.

He’s the founder and president of Killer Whale Strategies, a consultancy he launched after a traditional legal career. He went to NYU Law School, worked at a big law firm, and was an M&A lawyer.

Now he helps law firms, in-house teams, legal tech companies, and investors make sense of—and capitalize on—the massive changes reshaping the legal profession.

He’s also a legal tech investor, advisor, writer, speaker, podcaster, and someone who has been thinking out loud and in public about the future of legal services long before it became fashionable to do so.

Full disclosure: I work with Zach and his team whenever I can. They’re incredibly helpful to us at the AAA. It’s a pleasure to have you here today.

And if our audience doesn’t yet know about all of your talents, they should absolutely check out your Hamilton spoof about the billable hour. I didn’t know you could sing until I saw that. I knew you were hilarious and creative—but I didn’t know you could actually sing.

AI Aha!

Bridget McCormack: So, Zach, we always start our podcast with an AI Aha!, which is one way either Jen and I—or our guests—are using AI. It doesn’t have to be professional. It could be a personal use. So I hope you showed up with an AI Aha! for us.

Zach Abramowitz: I use AI for everything.

In the early days of ChatGPT, a lot of people would ask me, “What’s the use case?” They wanted something very pointed—like, can it draft an NDA? Can it redline a contract?
And I would tell people, early in the evolution of ChatGPT: I don’t know what the one use case is. The use case is everything. This is how I plan on thinking through things. This is how I plan on writing. This is going to be a huge part of my day, whether it’s professional or personal. And today, it is my go-to.

Now, what I’ve also found more recently is that there are some times that I don’t want to use AI. And I think that’s part of the evolution.

A few years ago, we might have been talking about AI as something that drafts your emails for you. But I don’t actually need it to draft my emails. And sometimes I feel like my emails are better when they’re very human—when there are some spelling errors, when the punctuation is a little off.

On the other hand, just knowing that the AI is there has been my biggest shift.

I’ve realized recently that when I have a difficult topic, I don’t necessarily immediately go to the AI. But just knowing that I can provides an extra level of confidence.

I’ve gone from AI not having any specific use case because I use it for everything, to now my use case being that even when I don’t use it, I’m using it.

I think there’s going to be confusion right now, because a lot of metrics may show reduced use of AI—or that people aren’t spending as much time in it. And I don’t think that means AI isn’t useful. I think AI can be very useful even if you use it more sporadically.

As our brains evolve and we begin to think differently, we’re going to find that there are certain places where we don’t need it anymore because our starting point is so much higher. And that happened because of AI—even if we don’t give it credit.

Bridget McCormack: That’s super interesting. I completely get it. I feel like it’s always on and around me, even if I’m not using it.

Legal’s 2026 Market Shift

Zach, one reason we wanted to have you on at the start of 2026 was to do a general legal market check-in.

Starting with a broad lens, how should we think about 2026 differently than 2025? What trends are you seeing, thematically, to help our listeners think about what’s happening in legal in the next few months?

Zach Abramowitz: I think that what’s happening in legal is an interesting framing.
I feel like the difference in this technology revolution, as opposed to prior ones, is that before, different verticals had different technology that was designed for them by different entrepreneurs. It was more siloed—AI or machine learning applied in specific industries.

Today, since we’re all working off the same basic group of models, the AI that legal has—is the AI that accounting has—is the AI that manufacturing has—we’re all working with those same tools.

So what I’ve found very useful for understanding what’s going on in legal is to follow what’s happening in the broader marketplace, because legal tends to be a slightly different manifestation of overarching principles. If you follow the whole market, you can see more clearly where legal is going.

When I think about the shift from 2025 to 2026, I definitely think there is a very clear vibe shift.

I actually started writing down a list—here’s the 2025 version, here’s the 2026 version.

So, for example: AI assistants were the key tool of 2025. In 2026, you’re really going to hear more about AI agents.

If you go back and look at every survey from 2025—whether it was about legal or about AI generally—the question every survey was asking was about adoption. How much are you adopting this?

Even in 2024, the metric wasn’t adoption yet. The metric was more awareness. Have you heard of this? What are your feelings about it? What are your thoughts?

We didn’t see real adoption—especially at work—in a pronounced way until 2025.

But I think in 2026, we’re not going to be looking at adoption anymore. We’re going to be measuring impact. We’re going to start measuring ROI.

So where personal productivity and vibes—and how does AI make you feel—were important metrics in 2025, I think we’re going to push much more into: okay, but how does this actually show up in profit margin? How does it show up in pricing and service design?
So I think we’re definitely seeing that kind of shift.

At the same time, one of the overarching themes to be aware of is feeling lost.

I think in 2025, people were saying, “I finally have the hang of this.” In 2026, there’s more of a feeling of, “I thought I had this, but now there’s so much that’s happened just in the last month alone. How can I possibly keep up?”
There was a great post on X from Andrej Karpathy about this—the guy who coined the term “vibe coding,” maybe the greatest expert on LLMs in the world today—saying that he feels lost, that he feels behind.

And I think this is something people are going to start getting used to. There’s just so much happening. It’s such a fast-moving space.

We thought DeepSeek was a big revolution at the beginning of 2025. That kind of upended us at the moment. I think we’re going to start feeling like that happens every week.

Bridget McCormack: Jen and I have been talking about this—and a lot of lab leaders have been talking about it—specifically around coding agents and the improvement they’ve made over the last six weeks or so. The coding agents have taken an enormous leap. 

Some people have called it basically the ChatGPT moment for coding. Most coders are now directing coding agents to code. And people like me, who don’t know how to code, can suddenly code by telling the LLM what to build.

In your view, the advancement of the models in that vertical — what does it mean for legal? Is it signal, or is it noise?

There are lots of lawyers who will say what we do is too bespoke. Every single case and every single matter has its own bespoke aspects. And so we’re not going to have a Karpathy coding “I’m behind” moment in legal with this technology.

What’s your view on that?

Zach Abramowitz: So first of all, when I made my list—my 2025 column and my 2026 column—in 2025 I had “prompt engineering.” In 2026 I had “vibe coding.”

Because I’ve been meeting with the folks at the AAA for a while, we’ve been flagging vibe coding for a while now. You and I were telling anyone who would listen that the biggest deal to watch in 2025 was not Harvey or any of the big funding rounds. It was the Base44 acquisition by Wix.com.

Because that was the vibe coding moment. You had a publicly traded website builder basically saying, we’re dead, we’re irrelevant—unless we adapt.

And for $80 million, less than 10% of our market cap, we can buy a company that is going to be our lifeline into the future.

What they bought with Base44 was a company that had been run essentially by a single person who had coded everything.

Now, he was a technical person. But what he was saying was that today, with all the coding advances, I don’t have to delegate. I don’t have to have a team of 25 developers working under me for a single function.

There’s a concept now at Wix being developed called the “X engineer.” And the idea is you’re no longer a DevOps or a frontend or a backend engineer. You are doing everything. You’re responsible for the feature end to end.

So number one, I think this is a huge theme. It’s going to have massive impact.

But I also think that if you were paying attention to large language models and you had the proper framing for understanding why this was such a massive advance, it’s not surprising.
Dharmesh Shah, the co-founder of HubSpot—the person who bought chat.com and then flipped it to OpenAI—was saying in 2023 that if you wanted to work with the most powerful AI models in the world, you had to know how to write code.

And he said that with large language models, you no longer need to know how to code. You just need to know what you want and how to speak—any language.
If you understood that framework—that we created an intelligence that legitimately understands human language and conceptual reasoning and can perform that reasoning—then it shouldn’t be surprising that this would apply to all languages, including code.

Some people were surprised by how effective it’s been.

You asked how this applies to legal. First of all, you’re seeing in the last few months a real shift of lawyers beginning to write applications.

There’s a lawyer in Hong Kong who’s gone on LinkedIn using Google AI Studio to try to replicate features in Harvey and other big legal tech companies. And it’s working.

And now other lawyers are going on LinkedIn and posting their own applications and saying, “Jamie inspired me. I had to try this.”

We’re following all of these lawyer vibe coders now.

Now, on the bespoke point—lawyers say, “It’s too bespoke. AI can’t possibly do that.” I actually think that’s the reason vibe coding is going to take off within legal.
Because so many law firms say the work we do is so bespoke and custom that the products we’re buying—whether it’s ChatGPT, Copilot, Harvey, or others—we can’t customize them enough to how we work. I hear that a lot.

But if I can take Claude Code and build something very specific around what we do and how we work, that might be more effective.

Now, build versus buy can create analysis paralysis. You spend so much time wondering whether to build or buy that you do neither.

But I do think you’re going to see a lot more lawyers and law firms vibe coding solutions—especially at the fringes or edges of legal work. Things that you wouldn’t just be able to do out of the box with a foundation model or a legal application.

Jen Leonard: It’s so weird and disorienting sometimes to talk with people like you and Bridget who are super users of AI and at the bleeding edge. And then, over the last weekend, I was with friends — two of them are in legal, one is in media — and I was using AI for everything all weekend.

A doorknob was broken. I was like, let’s check with ChatGPT. I needed to write something. I showed them how to make Google NotebookLM Slides.
And all three of them had objections, for different reasons, to even touching AI — ethical reasons, security reasons, fear-of-humanity reasons.

So I wonder how to orient ourselves to where we are in the change when people are all over the map with respect to their awareness levels.

I hear the point about moving beyond attitudes and awareness. But when you talk to people who don’t think about this at all, that’s still what it feels like — feelings about it.

Bridget McCormack: Did you see Kevin Roose’s tweet this weekend? He said he’s never felt more disoriented between the super users and the non-users. He feels like people in San Francisco and Silicon Valley are letting armies of Claude bots loose all over their lives, and then there are people who are still asking if this is a fad.

The distance feels wide.

Zach Abramowitz: I absolutely flagged that post.

I think this is one of the themes of 2026 — you’re going to start to see a widening divide between those who have been using AI and those who haven’t.

Now you might say, well, can’t you just get on and start using it now? Can’t you catch up?
And the truth is, I don’t think AI is entirely about knowing how to press the buttons.

It’s about a mindset.

And I think what you’re feeling, Jen, is that mindset. Wrapping your brain around AI has compounding interest. It’s not something you develop in a day.

I said before, now I know better what AI is not for. But my brain is still operating in AI mode.
If I think, “This is going to be a long night,” I also think, “I’ve got a partner I can work on this with.” And that framing isn’t so simple to create.

That’s why I’ve told lawyers from the beginning: just start using this.
Not so that you can learn how to enter a prompt or attach a document or click the Deep Research button.

It’s about getting your brain wrapped around it.

Jen Leonard: I crossed a Rubicon some time ago where I just can’t imagine not having it.
It’s there as a companion all day to help me do a lot of different things. And my mindset is always: what can I use to help me do this?

And experimentation and learning just become part of how you operate.

It just gives me a little bit of whiplash when I talk with people whose views I really respect — super intelligent people — and they’re not Luddites. They have modern mindsets. But they raise issues they feel strongly about in compelling ways.

And when I talk with you and Bridget, we don’t have those same conversations before talking about AI.

So it’s confusing when you’re trying to help other people understand — especially because there are legitimate concerns people have.

Zach Abramowitz: I think the moment before most people try AI is riddled with anxiety, because there are three what-ifs — and they’re all bad.

The first is: what if it doesn’t work and I’ve wasted my time?

The second is: what if it works and now I need to make a life change?

And the third — maybe the worst one — is: what if it doesn’t work not because it doesn’t work, but because I’m too old to get it?

That anxiety gets worse every day for someone who continues to bury their head in the sand.

The Law360 survey showed this beautifully last year — positive sentiment toward AI increases the more you use it. The people who like it the least are those who haven’t used it.
I also think a lot of people, when they think about software, are thinking about a database or Excel or a calculator — where two plus two equals four every single time.
If you’ve been told about hallucinations without context, you might get frustrated and say, “Oh, they’re just faking it.”
Without appreciating how they actually work — which is, yes, they hallucinate. And so do I. 
I think I’ve become much more aware of my human hallucinations as a result of working with AI. Others just focus on pointing at the tool.

There’s definitely a widening gap. You have startups generating enormous revenue with very small teams on the one hand. And on the other hand, you have larger companies where it’s not having as much visible impact.

There was a Wharton study recently that showed the smaller your company is, the more you’re able to measure the impact and ROI of AI. I think that’s deeply true.

But I also think it’s important to show hesitant people that there are a lot of them. And the best thing you can do is start now.

Waiting until it’s finally “good enough” and then starting — this is a lot for the human brain to wrap around.

This isn’t like matter management software.

Matter management didn’t challenge what you thought about intelligence, or what makes someone human versus AI.

AI hits close to home because it challenges how people think about the world.
Jen Leonard: Speaking of startups, we’re very curious about what’s going on in legal AI startup land.

Generative AI was just a tsunami that hit the legal tech landscape. We saw all of these wild valuations. The last time I looked at the LegalTech Hub poster, there were between 400 and 500 independent legal tech startups. You may know more than that. That might be an undercount.

Zach Abramowitz: I think it’s actually in the thousands at this point.
The good news is that not all of those are created equal.

There’s a lot of activity in our space. And I think what it really comes down to is something Jonathan Levy at Y Combinator said.

He and his wife, Carolynn — who’s head of legal at Y Combinator and famously created the SAFE document that most early-stage startups use to raise money — they’re very influential.
I asked Jonathan last year if Y Combinator was investing in more legal tech companies because they were more bullish on the vertical.

And he said, no, we’re investing in more legal tech companies, but we’re totally vertical agnostic. It’s just that there are more top 1% founders building companies in the legal space right now. That, to me, is a huge change.

I think that’s exactly why you’re seeing so much venture capital investment going into legal tech. It’s not because investors suddenly love legal. It’s because entrepreneurs see legal as a place where they can build generational companies.

And I think the reason that’s true is that legal often has high demand and sometimes low supply of intelligence and reasoning.

Take a company like TrialKit, which sells mostly to criminal defense attorneys.

Criminal defense attorneys traditionally didn’t buy tech. The founder told Bridget and me that when he went to conferences, the only other vendors there were selling suits.

That’s not a group that historically bought a lot of tech.

But that was old tech. That was like a legacy database system that wasn’t very useful to them. That’s not what they actually needed.

What they need is intelligence and reasoning. They need that in the worst way. They’re understaffed. The courts do not have enough intelligence and reasoning capacity to function the way they need to — to make the trains keep running.

We have a massively high demand for intelligence. And I don’t think people realize how intelligence-constrained we are.

I think what’s going to happen is that people are going to look back at the valuations we’re seeing in startups—which look crazy—and realize that what they missed is just how much more valuable AI is than tech. That’s why I’m trying to get away from the word “tech” and focus more on AI.

AI is simply more powerful, and unlocks more, than previous technology did. I think at the end of the day, we’re going to find that these startups are providing significantly more value than we’re giving them credit for right now.

The only comparison we have is to other cloud-based technology — like Salesforce — what we’ll probably look back on as the database technology era.

Most of that technology was about storing your content in a cloud-based system that everyone in your organization, and maybe customers, could access when needed. That’s effectively what it was. This is much more powerful.

And I think we’ll look back and say those valuations were actually fairly low for the amount of value being created.

Now, at the same time, I stopped investing in startups when ChatGPT launched. Because my concern was: what’s the moat anymore? Can anyone do this? I also have a second fear, which is that the venture capital model might just be dead.

The idea of investing a huge amount of money to build a software company — with vibe coding and everything we just talked about — the cost of developing software and the need to staff with big teams is just not what it used to be.

Do you really need venture capital financing to build and scale companies? I think that’s a legitimate question. And I think it’s a structural risk that exists with any startup you invest in.

As someone put it to me: if you’re not up at night thinking about how Claude or OpenAI is going to steamroll you, then you’re just lying to yourself. That risk is real.

But at the same time, there are so many talented teams right now who have decided they’re going after legal.

I’ve never seen anything like it.

Bridget McCormack: What are you seeing inside law firms?

We’ve seen the beginnings of new law firm models — AI-native law firms. But putting those aside, what are you seeing inside the old white-shoe firms — firms that have had a business model that’s worked for a very long time but is clearly impacted by this technology?

Zach Abramowitz: Mixed results.

There are definitely firms that have adopted AI tools, and they did it in smart ways, and they’ve had great traction. And now they’re sort of moving to, okay, what next?

We got our folks onto a personal AI assistant. Now we have to figure out how we’re actually going to make this work so that the client isn’t paying as much, or isn’t waiting as long to get their work back, or is being kept more updated — so that the service is better.

And I can tell you there are other firms where they’re canceling subscriptions because it felt like it didn’t work.

There are firms that have had a heck of a time with training, and the way they did the training just absolutely didn’t take.

I think this goes back to the potential widening of the gap.

At the same time, the AI-first firm is probably the scariest thing law firm leaders have seen.

You won’t get a lot of their attention just by showing them a demo of Harvey. They all know about it at this point.

But if you show them slides about AI-first firms, if you talk to them about the ethos of those firms, if you tell them about the number of lawyers who are leaving their firms right now — not to raise massive venture rounds, not to pour significant R&D into something — but to say the tools that exist today, out of the box, effectively enable me to start a firm that competes directly with my old firm — that’s troubling to them. That model is going to keep the heat on.

So I don’t think you’re going to see firms take their foot off the gas, even if they’ve had mixed results up to this point, because they’re too aware of real threats to their model.

Whether that threat ends up being AI-first firms or traditional firms that execute better — that remains to be seen.

I think it’s interesting not just to note what firms are doing, but what actually drives them — what pushes them to say, we better take this seriously.

Jen Leonard: One thing Bridget and I have seen in presentations that we give is that AI in and of itself isn’t scary anymore.

But private equity — and its ability to capture the benefits of AI — is really scary to them.
We started folding that content into presentations last fall, and I was surprised at how few lawyers in the room were even aware that this was happening in the landscape.

But it seemed to strike fear in the hearts of firm leaders.
So I’m curious how you think AI and private equity could play out in the law firm landscape.

Zach Abramowitz: What’s interesting is hearing it from people who are in private equity.
To me, that’s part of what AI is forcing us to revisit. One of the trends I wrote down in my 2025 versus 2026 list was this: in 2025, we were talking about the AI-first firm. In 2026, we’re going to revisit the concept of external non-lawyer ownership in law firms.

I think that’s where this trend is headed — renewed discussion about whether law firms can or should have external financing. In particular, when so many firms want to invest more in AI, external capital starts to look like a lifeline.

I can tell you that a lot of firms want to spend money on AI. They want to budget for it. They want to invest.

But they’re also saying, we have to spend more money because Kirkland, Latham, Paul Weiss — they’re taking our most talented attorneys. We have to put money into that.

So it’s not that they’re saying AI isn’t worth the investment. It’s that they feel like they can’t only invest in AI. They have to invest in talent as well.

I think external financing presents a really interesting opportunity for a lot of these firms. It remains to be seen whether it works at Big Law scale.

But I can say that toward the end of 2025, when I was in New York at a legal innovators conference, the biggest topic being discussed wasn’t AI.

It was whether the chairman of McDermott Will & Schulte was suggesting that the firm might create an MSO and take in external financing. That was the hot topic that week.

You’re already seeing it in the personal injury space. You’re seeing it in AI-first firms. I do think it comes in, and I do think it becomes a renewed discussion.

I like to think about it this way: are there cultural conversations we had five or ten years ago where we landed in one place — and now AI is going to make us rethink those?
What conversations are we going to have again and say, actually, now with AI, this makes sense?

Bridget McCormack: I want to close with one last question.

Who’s right about AGI — Dario or Demis? And if Dario is right and it’s a year or two away, what should we lawyers be thinking about or doing about that?

Zach Abramowitz: I think the very existence of the term “AGI” reflects a constant moving of the goalposts when it comes to AI.

I think the single biggest development in AI was the launch of ChatGPT.

You could argue that ChatGPT itself is maybe the 2025 version, and that Claude Code or Gemini might be the 2026 versions.

But I still believe the most significant advance was the release of GPT-3.5. That was the moment where I thought, okay, this is real. This feels like a Turing test moment.

Now, once you’ve seen it and gotten used to it, you can often tell the difference. You look at something and say, okay, that was created by AI. But that’s the constant moving of the goalposts. We’re already in this.

To say AGI is the moment when humans get replaced — I don’t see it that way.

As we continue to progress, at every stage there’s a human reaction: okay, if AI does that, then what do I do next?

Think about podcasting. Twenty-five years ago, did you know anyone whose job involved podcasting? Today, a lot of people’s jobs involve recording, producing, editing, or writing about podcasts in some way.

Or think about the number of people who earn money by dancing on camera. That wasn’t considered a job once upon a time. AI is going to continue to raise the bar and set new starting points.

And what humans do is adjust. By the time we get to whatever point people are calling AGI — whether that’s five years from now or sooner — the human brain will have adjusted, and there will be new things we do.

I’ve been using AI for three years. I’m not working less.

Bridget McCormack: Well, thank you so much for joining us. This has been such a fun conversation, as we knew it would be.
And we’ll follow all of your predictions and have you back to talk about how they went next year.

Thanks so much for joining us, Zach.

Zach Abramowitz: Thank you so much. I really enjoyed it — both of you.

Jen Leonard: Thank you so much, Zach. And thanks to everyone out there listening and learning with us.

And thank you to everyone for joining us on this episode of AI and the Future of Law. We look forward to being with you next time, when surely the world will have changed again, and we’ll talk about what that means for lawyers. Until then, be well.

February 24, 2026
