What Just Happened? Google, OpenAI, Anthropic, and the AI Firehose

Summary

In the latest episode of 2030 Vision: AI and the Future of Law, Jen Leonard and Bridget McCormack delivered an engaging and insightful recap of a week that saw explosive developments across the AI landscape. With major announcements from Google, Anthropic, Microsoft, and OpenAI, the hosts decided to forgo their usual format and dive headfirst into “What Just Happened?” — an exploration of cutting-edge news and what it means for the legal profession.

Framed as both a wake-up call and a source of inspiration, this episode also highlighted the increasingly personal integration of AI into daily life — from digital planning and productivity tools to voice-driven AI assistants and automated travel itineraries. The message was clear: these technologies are not only here, but they are advancing faster than many lawyers realize — and the profession must adapt accordingly.

Key Takeaways

1. AI is Already Reshaping Legal Practice

•    Lawyers are using general-purpose AI tools, such as ChatGPT, far more frequently than specialized legal AI software.
•    Tools such as Microsoft Copilot, Gemini, Claude, and Grok are being integrated into professional workflows for strategic planning, research, and delegation of repetitive tasks.
•    Jen and Bridget emphasize that these tools are not just add-ons—they are becoming essential collaborators.

2. Personal AI Use Is Accelerating

•    Bridget’s “Second Brain,” built with ChatGPT and Microsoft OneNote, illustrates how AI can support complex personal and professional planning.
•    Jen’s voice-based research and on-the-go interactions with AI demonstrate how integrated and versatile these tools can be—even in non-work settings like planning a birthday or navigating a language barrier at a bar.

3. Major Industry Shifts: A Roundup

•    Google: Gemini AI is now embedded across Google services, signaling a new era of AI-first design. Their latest video model has reached the point where real vs. generated content is indistinguishable—raising implications for law and digital evidence.
•    Anthropic: Claude Opus 4 is touted as a powerhouse for reasoning and sustained tasks. Researchers discussed AI systems as “grown, not built,” a concept that resonates with the unpredictability and complexity of human development.
•    Microsoft: The company focused on “connecting the plumbing”—helping AI tools work across disparate data sources and systems, which could unlock enormous value in public-sector and legal applications.
•    OpenAI & Jony Ive: OpenAI’s $6.5 billion acquisition of Jony Ive’s hardware startup hints at a future with AI-native devices designed to be unobtrusive and seamlessly embedded in daily life—perhaps wearables that eliminate the need for screens and keyboards.

4. Democratization Over Gatekeeping

•    A central theme of the episode was accessibility. Lawyers don't need exceptional credentials to engage with AI. Tools are publicly available and increasingly user-friendly.
•    The hosts encouraged listeners to adopt the mindset of “we’ll solve for that” — a phrase borrowed from AI researchers — as a way to counter the legal profession’s instinct to resist innovation in the face of imperfection.

5. Cultural Shifts Are Needed in Law

•    The legal profession’s tendency to control, gatekeep, and delay adoption for fear of error or reputational risk is no longer sustainable.
•    Instead, lawyers should embrace experimentation, responsible usage, and continual learning.
•    This isn’t just about technology—it’s about rethinking how legal work gets done and by whom.

Final Thoughts

Leonard and McCormack deliver a compelling case for urgency and optimism. The future isn’t waiting for the legal profession to catch up — it’s here. The episode frames this moment as the legal world’s “World Wide Web” moment: a fundamental transformation rather than a passing trend.

Rather than viewing AI as a threat, the hosts urge listeners to see it as an opportunity to enhance human capability, streamline legal services, and increase access to justice. Whether you’re a seasoned litigator, a solo practitioner, or a law student just starting out, now is the perfect time to jump in.

“You’re arriving at exactly the right moment,” Jen assures listeners. “We played around with the early stuff. Now you can jump right in — and it’s much, much better.”

Transcript

Jen Leonard: Hi, everyone, and welcome back to 2030 Vision: AI and the Future of Law, the podcast where we talk about developments in artificial intelligence and what they mean for lawyers and the legal profession. My name is Jen Leonard, I’m the founder of Creative Lawyers, and I’m thrilled as always to be joined by the phenomenal Bridget McCormack, President and CEO of the American Arbitration Association. Hi, Bridget.

Bridget McCormack: Hi, Jen. It was so good to see so much of you in person last week that we're going to have to figure out how to do that again.

Jen Leonard: I know, you must be sick of me.

Bridget McCormack: No, it was so much fun. We got to hang out in many cities. It was awesome.

Jen Leonard: I'm excited to talk to you today because last week was a wild week in AI developments. People who've listened to the podcast before know we usually have a main topic after we go through updates from the broader AI realm — usually something related to law. But there was so much that happened last week that we thought it might be overwhelming — certainly for me — to cover all the developments and then do a main topic.

So instead, we’re covering a summary of the major AI announcements in our What Just Happened? segment. We’re also framing it as a message to lawyers: whether or not you're paying attention to AI, these developments are important and they’re coming your way. These tools are ones you will soon be able to use in your practice and, if you agree with us, your ethical obligations may soon require that you figure out how to use them to benefit your clients and your work.

That said, there’s a lot to be excited about, and we’re going to dive in. But first — we’ll start the way we always do, with our AI Aha!’s, the segment where we share what we’ve been using AI for in our everyday lives.

AI Aha!

Jen Leonard: Bridget, what’s your AI Aha! this week?

Bridget McCormack: As always, I use it for a lot of things, but I thought I’d share something that feels like two steps forward, one step back and I bet others have had this experience. I don’t have a great system for keeping track of everything I’m working on: talks I need to give, essays I’ve promised, podcast prep, general planning. I’ll read something and think, “Oh, that should go into that talk for Whatchamacallit on Whatchamacallit,” or “That belongs in the article I said I’d write for XYZ.” And then it just sits in my inbox or on my phone.

So I decided to work with ChatGPT again. I’ve done this before, asking about systems for organizing research projects and life planning. And this time, I told it: I need a structure that integrates with Microsoft tools, since we’re a Microsoft shop at AAA. I use a ThinkPad. It doesn’t make sense to find a solution that can’t talk to the rest of my systems.

We started developing something that ChatGPT named “Bridget’s Second Brain.” It’s based on OneNote, Microsoft’s notebook platform, and we’ve been iterating on how to structure it so I can figure out where to put things. At one point it even gave me code to connect it to another tool so it could “see” my notebook. That didn’t go so well, so that was a little two steps back, but we’re recovering. This is like James Clear’s 30-day habit concept, though for me it’s closer to 90 days.
I’m on day four now, and I keep that notebook open all the time. When I get frustrated, I go back to the ChatGPT conversation I’ve been using. I’ve even started organizing my chats into different tabs, each one for a different “project,” so I can jump back in when needed. Sometimes ChatGPT will give me advice, and I’ll say, “This isn’t working,” and it’ll help troubleshoot. It’s been more like having a strategic partner or intern helping me set up a system for my life.

Also, ChatGPT feels very strongly that I should not have separate notebooks for personal and professional projects. It insists everything should live in one place.

Jen Leonard: Which is consistent with all the productivity experts out there, right?

Bridget McCormack: It's like the bullet journal theory: don't separate your to-do lists.

Jen Leonard: But how do you think about the execution step? Like, do you have those strings of conversations where you ask something, get a brilliant answer and then never look at it again?

Bridget McCormack: Absolutely. Some I never return to. But others, I do. I’ll go back to a saved thread when it’s time to focus on that particular task again. Sometimes I’ll ask a question and get a great output, more than a starting point, and even if I can’t execute right away, I’ll revisit it. It’s an amazing idea generator.

Jen Leonard: I’ve wished for a humanoid robot so many times, something that could just take what ChatGPT gave me and go finish it while I’m off doing something else. Maybe that’s the future of work: conducting and coordinating across different AIs. You become a generalist in using AI rather than doing one narrow thing.

Bridget McCormack: We’re definitely heading there. I’ve been using Microsoft Copilot more lately too. I mostly use ChatGPT, Claude, Gemini, and even Grok when I’m on X. But I started testing Copilot’s agents because they connect with other tools I already use, like monday.com, our project management platform.

So I connected those agents, and now I’m seeing if I can delegate more. I want them working on a task while I’m off giving a presentation, so I’m not stuck staying up late doing something repetitive. We'll see how far that goes.

Jen Leonard: Same here. We’re a Google Workspace team, so I’m looking at Gemini and how that integrates. Funny how we each gravitate toward the ecosystem that already runs our lives. But I still use ChatGPT probably 85% of the time.

Bridget McCormack: As does every lawyer I know. The Law360 survey showed that lawyers are using ChatGPT 10x more than even the dedicated legal AI tools. It’s stunning.

Jen Leonard: Yeah and we’ll have a whole episode coming up about how to use these tools, even if your firm doesn’t have fancy enterprise software. In fact, there are some good reasons to use the general-purpose tools.

Bridget McCormack: I have a few more ideas for that future episode. Where’s my second brain when I need her?

Jen Leonard: Put it in your second brain. I want you to have a little hologram that floats beside you and takes notes.

Bridget McCormack: Oh my God. All right — what was your AI Aha this week?

Jen Leonard: So I’m like you. I use it all day, every day, and sometimes I have a hard time remembering specific moments for this segment. But one thing that stood out is something you actually mentioned before — managing multiple projects at once.

Now I feel kind of like a Bond villain, or maybe some kind of hero in a sci-fi movie. I’ll open three or four deep research threads, or reasoning prompts, before I know I’ll be unavailable, like when my kids are getting home and I have to make dinner. I just open everything up, then walk away. When I come back, I have dossiers ready to go. It’s incredibly satisfying.

This week, I also used voice mode in the car. I had a long drive, and I wanted to prep for a podcast segment, so I started interacting with ChatGPT via voice while driving. Then, when I came back to the car later, it gave me all the results I needed. It was like handing off the research and getting a full download when I returned.

And then, we took my son to New York for his 10th birthday last Friday. We bought Amtrak tickets, made dinner reservations, and got Broadway show tickets. I screenshotted all three confirmations from my email and uploaded them to ChatGPT. I just said, “Tell me the perfect way to move through this experience.” And it did.

It read the screenshots and created a full itinerary. It told me what time we should leave the house in Philly, how much buffer to build in for traveling with kids, and even let me know that the restaurant we booked does a sparkler birthday song so I should ask for that in advance. And they actually did it for him. It was awesome.

Then, in a totally different use case, I took ChatGPT with me to the nail salon. I never get my nails done, so I asked it, “What are the most popular OPI nail colors for Summer 2025?” It went through blog posts, gave me a list, and even recommended one. But when I got there, I realized for the first time that I couldn’t read the bottom of the polish bottles anymore because my eyesight is going, so I did all that research and couldn’t use any of it. But I use it for all sorts of situations, big and important ones and smaller, insignificant tasks too.

Bridget McCormack: That’s amazing. I was in an Uber this morning from LaGuardia to the office, and a friend of mine, also a lawyer, happened to be on the same flight from Chicago, so she hopped in the Uber with me. She just started using ChatGPT, and she’s still in that stunned phase, where everything it does feels like magic. She said, “I wonder if it knows the restatements?” And I said, “Let’s ask!”

So I pulled out my phone and asked it a question about tort law and restatement defenses. Even though it hasn’t ingested the restatements directly, I assume because they’re paywalled, it got the answer right anyway. We both know that area really well, and it nailed it. Even when it hasn’t read the source material, it often triangulates from other information it has ingested and gets it right.

Then we saw graffiti on a bridge while stopped in traffic, and she said, “I wonder what that means?” I took a photo, uploaded it, and asked ChatGPT. She was amazed that you could do that. It was like a live demo in the backseat.

Jen Leonard: That reminds me, when I was in New York a couple weeks ago, I went out to dinner alone and got that enormous plate of pasta. While I was eating, there was a couple next to me who clearly didn’t speak English. The bartender asked for their ID, but they didn’t understand. He asked what language they spoke, they said French but no one around them could help.

So I typed into ChatGPT, “How do you say, ‘Can we see your ID so you can order a drink?’ in French?” It translated it, I showed it to them, and they were so relieved. Everyone around us was so impressed.

It’s funny, too: when you do workshops with lawyers, they’ll ask you things like, “How did it know that?” or “Why did it go in that direction?” or “How do I get it to do something else?” And just like with your friend in the cab, my answer is always the same: don’t ask me — ask the AI.

And they’re kind of stunned. Like, “Wait, I can just ask it that?” And they do — and it tells them everything. It’s just such a strange way for them to interact with software. But once they try it, they realize they don’t need an intermediary. It’s right there, ready to help.

Bridget McCormack: And she hadn’t seen o3 yet or its reasoning work. So she was literally watching everything it was doing to answer this torts question and said, “This is unbelievable.”

Jen Leonard: The reasoning process in o3 or Gemini 2.5 is just fun to watch. It’s like watching the smartest, most neurotic intern you’ve ever had just going off. It’s Tracy Flick in AI form.

We’re using it all the time. We’re talking to people in taxis and bars about it and helping them realize they don’t need us as intermediaries.

What Just Happened? Major AI news from Google, Anthropic, Microsoft, and OpenAI

Google News

Jen Leonard: So maybe that brings us to What Just Happened. People you and I interact with are stunned by what the models can already do, and last week felt like a firehose of announcements from all the frontier labs. Want to kick us off with the Google news, Bridget?

Bridget McCormack: Sure. All the major companies made announcements, so we decided to make What Just Happened the main event this week. Google held its big event, I think it's officially called I/O, but I’ve been calling it Demo Day. It's where they show off what they’ve been working on and what’s coming next.

There was so much that you could do a whole podcast just on that. But one of the biggest takeaways was integration. Gemini is now embedded throughout the entire Google suite. If you’re just googling something, you’ve probably seen the AI summaries that appear above the results. If you use Gemini, you can already substitute it for traditional search. And now they’re adding a third option that’s basically an all-AI search right from Google. I haven’t tried that one yet, but it sounds promising.

They’re also bringing Gemini into other products like their mixed reality glasses. It felt like a move to go all in on Gemini across the board, which makes sense. Google has so much information about us. I mean, even though I use Microsoft for work, I still have a personal Gmail, a personal calendar, and tons of docs with you and others. So for Google to unify that with Gemini is smart. Search is still their biggest business, but it's changing fast.

They also launched a new video model. I haven’t used it yet, but I’ve seen a bunch of demos floating around on social media. I think Ethan Mollick posted that we’ve officially reached the point where you can't tell whether a video is AI-generated. It even includes audio, although apparently the audio still gets a bit gibberish-y at times.

That part does raise concerns for people in law. I’ve thought a lot about how judges and arbitrators will need to evaluate whether something is deepfaked. I actually worry more about documents than videos, because there’s far more documentary evidence in cases. But even video is getting harder to verify.

Jen Leonard: I listened to Demis Hassabis from Google DeepMind on Hard Fork. Sergey Brin made a surprise appearance during the announcements and said publicly that DeepMind aims to beat the other labs in achieving AGI. And Demis has always had a 5 to 10 year timeline for AGI, which is longer than what other researchers have suggested. So now he’s on the hook to deliver.

But something he said really stuck with me. Early on, he was against releasing these models publicly. He thought they should stay in academic settings where they were “safe.” Now he’s changed his mind. He believes letting millions of people use them, experiment, and find the flaws actually accelerates progress.

And that got me thinking about law. So many lawyers have this mindset that we have to control everything. We debate and deliberate until it’s “safe” to release. But AI is already here. It’s democratized. We won't be able to contain it. So instead of trying to hold it back, maybe we should be teaching people how to use it responsibly.

Bridget McCormack: I had the same thought. I was also thinking about how terrible legal research platforms have always felt to me. Maybe it’s just that I’m not very good at legal research, but I found the way information is organized in legal systems so unintuitive.

I started to think about what it would mean if someone started fresh. They could build something far more accessible and useful for people who didn’t go to law school. And frankly, I did go to law school, and I still found it frustrating.

Jen Leonard: Same. There was always this cultural thing too. Like, if you were a Westlaw researcher, you were a “real lawyer” because you could master headnotes, even if that didn’t translate to actually helping clients. It’s very on-brand for law, for lawyers to be competitive in all areas.

That was one thing I thought of when I heard Demis talking about opening the models up. But it’s also been really interesting to watch some of the great American tech companies of the last quarter-century shift from being innovators to trying to capture regulation to protect what they already have.

And now they’re being forced to innovate again. That pressure is finally making them figure out what their lane is, and that’s fun to watch.

Bridget McCormack: Was there anything else about the Google releases that you were excited about?

Jen Leonard: I mean, there were a hundred different announcements. But I really liked the way they framed it all as human-first technology. I know they’re a huge company and obviously in it for profit, but that framing really resonated with me. It’s what we talk about all the time, AI as a tool to enhance human experience.

Bridget McCormack: It resonated with a lot of people I follow, even folks who track this more closely than we do. Everyone agreed the theme felt genuine. That Google really sees this technology as human-enhancing. So yeah, I thought the whole day had a very optimistic tone.

Anthropic News

Bridget McCormack: That brings us to Anthropic: they released Claude Opus 4. We knew it was coming, and we’d heard it was going to be a killer coding model. Apparently, it is.

I still don’t know how to code. But I want to. I’ve actually started thinking about blocking off time this summer to figure out “vibe coding.” Not because I want to build apps, but because I want to understand what the coders are doing.

Jen Leonard: Put it in your second brain.

Bridget McCormack: Exactly. I think it’s worth learning, at least at a high level. One of the reasons we’re seeing the models do so well at coding and math and science is because that’s what the engineers building them are interested in. There aren’t a bunch of lawyers in the labs asking the models to write better briefs.

But Claude Opus 4 is apparently excellent at complex tasks. It can sustain its performance over extended periods. It has deep reasoning, and it supports long-running tasks that enhance agent capabilities. There was a lot of chatter on social media about safety, some people said it would “turn you in” if you tried to do something immoral or illegal. It wasn’t clear what that meant exactly — who it would report you to or how it made that decision. A lot of that felt like more talk than reality.

That said, it does sound like there was real internal concern. That might be why it took us longer to see this model from Anthropic than similar ones from other labs. They were reportedly focused on how the model handles oversight and safety, which makes sense for a company that has always branded itself as prioritizing safety first.

Jen Leonard: It seems like that whole release stirred up deeper questions about Anthropic’s founding. Was it really created as a safer alternative to OpenAI, or was that just the public story? Some interviews later suggested the goal was just to move faster and build better products.

And now, it feels like the company is navigating internal pressure, some people want to go slow and stay safe, others want to compete more directly with ChatGPT. It must be a tough culture to lead.

Bridget McCormack: Yeah, and that focus on safety has always been their lane. But they’ve also put out some of the best research I’ve seen, maybe because it’s more accessible or maybe because they’re just good at explaining why the models behave the way they do. They really try to understand their tools, not just optimize them.

I think we’re about to discuss the Dwarkesh interview with Trenton Bricken and Sholto Douglas, the part where one of them said these AI systems are “grown, not built.” That line landed hard for me, especially since I’ve raised a few humans. They had the same food, the same rules, the same bedtime, the same expectations. They all had to play instruments, they all had to do sports. And yet they’re completely different. I can’t explain it. So when he said these models are grown, not built, I got it immediately. That’s why it’s so hard to fully understand them or predict how they’ll behave.

Jen Leonard: It’s such a helpful frame. Why don’t you set up the rest of that interview for us?

Bridget McCormack: So I listen to Dwarkesh’s interviews whenever I can because I think he’s brilliant. But I’ll admit, I often have to rewind and re-listen because the guests are just so smart, and the topics are complex. This one was a long conversation with Trenton Bricken and Sholto Douglas from Anthropic, and it ranged from model capabilities to what’s coming next.

One part that really stood out to me was their discussion about how to advise a young person starting out in their career. Or even someone already well into a career. If the job you’re headed for is going to be changed dramatically by this technology, what do you do with those sunk costs?

It was such a thoughtful conversation. These two researchers were incredibly smart, but also deeply reflective. And then Sholto said something that I texted you about, this one line just stuck with me.

He said, “Even if algorithmic progress stalls out and we never figure out how to keep scaling, the current suite of algorithms is sufficient to automate white-collar work, provided you have enough of the right kinds of data. Compared to the TAM of salaries for all of that work, it is trivially worthwhile.”

And his point was, the real bottleneck isn’t the model capabilities anymore. It’s just getting enough of the right data and focusing humans on solving very specific problems.

Coding came first because it was low-hanging fruit. It brings immediate value to the labs. If you can get AI to help write code, then you’ve got AI teams building AI faster. But legal hasn’t been tackled in the same way yet. 

Jen Leonard: I loved the part where they were joking about whether an AI could do your taxes. Everyone laughed, but then Dwarkesh said, “Okay, but could it do your taxes by the end of 2026?” And they all basically agreed, yes, it absolutely could, if someone just focused on solving that problem.

Bridget McCormack: Right. It’s not about whether it’s possible, it’s about whether someone chooses to make it a priority. And that’s what excited me about the conversation. There’s so much low-hanging fruit, and the models are capable now. We just need more people to apply them.

Jen Leonard: I had the same reaction. We were talking earlier about how some lawyers still haven’t tried a frontier model. But the tools are already here. If they never improved from this point forward, they would still be capable of so much.

The bottleneck is just human bandwidth, how many people can we get to tackle these problems? And that’s exciting. Because we all have things in our own lives and work that we wish we could automate.

Bridget McCormack: Yes. And for any listeners who feel like they’re late to the game, you’re not. If you start now, you can catch up in a matter of weeks. That’s how fast this all moves. You’ll be where everyone else is, trying to figure out what actually helps you.

Jen Leonard: And just to put a fine point on something we were saying earlier, you don’t need us as intermediaries. You can go into ChatGPT right now and say, “What is a GPT and how would I build one?” And it will tell you.

We’ll do a future episode on how to do that safely and ethically. But the barrier to entry is incredibly low. You just need curiosity.

And I think there was another powerful insight in that interview. I’m paraphrasing, but the researchers talked about how chaotic it is inside the labs. From the outside, we see these slick product announcements and think, “Why can’t they get their act together? Why is this rollout so clunky?” But internally, it’s hundreds — maybe thousands — of people all doing original research, pushing ideas forward, and learning in real time.

They’re not sitting around with a clean commercialization strategy. They’re not coordinated in the way we imagine a product team would be. They’re just trying to figure out what’s possible. It’s messy, but that’s how innovation happens.

That gave me so much empathy for what’s happening on the inside. Because you and I, and so many others, have looked at companies like Google and said, “Why haven’t they unified everything yet?” But it turns out they’re merging multiple labs, building new infrastructure, figuring out org charts, and trying to connect researchers with product managers, all while the rest of the world is watching and expecting a seamless user experience.

Bridget McCormack: Exactly. And in the middle of that, you hear researchers on these podcasts openly wondering, are we going to need a new scaling law to reach AGI, or do we already have all the tools we need? Some of them say the tech is already here. That it's just about directing more human energy toward specific problem areas like law, or healthcare, or climate.

It really does feel like we’re in the middle of something enormous. This isn’t a tech update. It’s a transformation. 

Jen Leonard: But seriously, this really is our World Wide Web moment. Like that Jane Pauley segment where she introduced the internet and we were all just stunned that it existed. We’re living through that again.

But it was kind of a lesson in innovation theory and technique generally. I think they were talking about somebody who’s a famous coder, and they were saying his hit rate of success is like five percent. Almost nothing he does works. But he just does so many different things that he has a higher probability of coming up with a new idea that does work.

That mindset is so radically different from what we see in the legal profession. During the conversation, they talked about certain roadblocks, like paywalled content, or regulatory complexity. And instead of getting discouraged, they just said, “We’ll solve for that.”

Jen Leonard: That line, “We’ll solve for that,” stuck with me. Because in law, that’s almost never the response. We get one negative signal and we say, “Well, that’s it. Too many hallucinations. It’s biased. It’s not trustworthy.” And then we stop. We don’t even let ourselves imagine what it could be if we kept going.

Bridget McCormack: This will be fun to talk about when we focus on how lawyers and law firms that don't have enterprise solutions can think about how to use the technology. I've had a couple of conversations with smaller law firms recently, and they're really focused on the latest hallucination stories. You know, it is what everyone is focused on lately. It's like we're in the middle of a surge of hallucination concern.

Jen Leonard: And if we could just channel the engineering vibe of “We’ll solve for that,” and when we do, here are all the amazing things we can do. But we sort of don’t even get to peek over that hilltop because we just want to stay... and it’s our training, right? It’s what we’re trained to do.

Microsoft News

Jen Leonard: So Microsoft also joined the chorus. They were early in the week, so they sort of got drowned out by the end of the week. But what did Microsoft announce, Bridget?

Bridget McCormack: So I think Microsoft basically announced that they want to excel at figuring out how to solve the problem that I care most about. So you might think they're boring. I love them for this. They want to figure out how to connect the plumbing.

Like, all of their announcements were kind of around connecting your data and empowering you to scale solutions across the different places you store your data. So they announced these Copilot enhancements and these new AI agents. And as I told you, I've already started exploring them and trying to figure out whether I can get use out of them. There are so many, I can only pick a couple a day.

But I do like this effort from Microsoft to say: it doesn’t matter how good the tech is if it’s stuck in a little box over here and it can’t work across the data that you have in all these different places. So I’m summarizing, but I think that was the gist of Microsoft’s announcements. Does that seem right to you?

Jen Leonard: Yeah. I mean, I tend to be hard on Microsoft for different reasons. But I have one hope, because you and I have both worked in government. And in my time in government, I found that well-meaning people working in the public interest would frequently get bamboozled by people building custom, bespoke solutions that would serve only the residents of their own city.

And it sounds amazing if you just don't have the experience to know that once you enter into that contract, you're with that vendor for the rest of your life. And it's going to be costly to upgrade. It's not going to interact with anything.
One of the biggest challenges in court systems and government systems — and I know this is a more complicated issue because there’s a lot of concern about connecting data across federal agencies — but there are places in local government where the problem in serving citizens is that the data do not talk to one another.

And it’s so hard to fix that problem. And it’s hard as a member of the public to understand. And it undermines the confidence we have in our government.

So I’m hoping that Microsoft invests some of its resources into going in and helping the public interest organizations figure out how to use this to unlock a lot of value for regular people.

Bridget McCormack: That would be an enormous contribution, honestly. I mean, we're seeing that up close in building some online dispute resolution solutions for trial courts — in caseloads where ODR makes all the sense in the world.
But the court data has to connect to other court data. You need to know if there are any other open cases. And then it has to connect to Secretary of State data if it's a traffic product. And every one of those connections is so, so complicated.
And I feel like if we could just solve for all of that, you could really produce some seamless experiences — both for court users and court staff — that everybody would be thrilled with.

So I don't know. I'm not saying Microsoft's announcements last week tell me we're there. But I do like the focus on the plumbing. I'm really into the plumbers right now.

OpenAI & Jony Ive

Bridget McCormack: All right. So the most exciting announcement was the OpenAI acqui-hire of Jony Ive. So tell our listeners what's happening, and then we’ll dive in.

Jen Leonard: Sam Altman, who is the head of OpenAI, announced a landmark partnership with renowned designer Jony Ive by acquiring his hardware startup, io. The deal was reportedly worth $6.5 billion, the largest acquisition OpenAI has made to date. I think it was an all-stock, no-cash deal.

The collaboration is designed to develop AI-native devices that are unobtrusive and seamlessly integrate into our day-to-day lives. The goal is to move beyond traditional screens. Jony Ive, of course, is famous for designing Apple’s most iconic hardware.

The idea is that they want to figure out how to win the hardware race, especially at a time when Apple is not on this list of announcements. And Apple really seems to be falling behind, which is a shame. Because I thought Apple would be the winner here.

But I was most excited about this announcement because, even as a grumpy Gen Xer, I feel like the technology that we have frequently feels cumbersome to engage with. We’ve talked about Elon Musk’s creepy quote before — that you always have to have your “meat sticks,” which is his adorable way of saying fingers, between what you want to do and the technology you want to do it on.

And I hate email so much. I hate sitting down with a physical keyboard and a screen to get things done. So I don’t know what it’ll look like. I don’t know what it’ll be. I don’t think they know yet. But given his background and the fact that OpenAI really wants to design something totally revolutionary, I was really excited about this announcement.

Bridget McCormack: Yeah, I was too. Sam Altman said really specifically, “I don't like that we now have this magical intelligence available to all of us, but I have to open up my computer, and then open up a browser, and connect to the internet, and log into my account.”

And it's like five steps before I can do the thing I just wanted to do. And even on your phone, you're pulling up the app, making sure you're logged in. And he was saying, "I don't like my relationship with my technology right now." I don't like how much I'm in it, all the time, opening it. I'd rather get the benefit of this new intelligence in some other way than through this bad relationship I've formed with my devices.

I think you use voice mode this way, and I do think that's one of the great use cases, along with learning mode, which I use a lot with ChatGPT when I just want to learn about something, especially if I'm walking or driving. But I think it could be better.

They definitely made it sound like it’s not going to be a phone. I don't know. I mean, I think they wouldn’t commit to what it's going to be. And they said, “We’ll learn next year.” But it definitely sounds like it's going to be some kind of wearable. I mean, it has to be a device. It has to be someplace. Some input. So you're either wearing it around your neck or wearing it on a pin.

But I think the point is, things are moving really quickly. And now they're trying to figure out how to connect a lot of the parts of your life and your work life and your data.

You and I have said this before: I do feel like lawyers and law firms are really much more in the game this year, in 2025, than in previous years. But if you're not yet, please join us. There's too much good stuff ahead.

Jen Leonard: You're arriving at exactly the right moment. We played around with the early stuff. Now you can jump right in and it’s much, much better.

So next time we’ll talk about how to use some of these technologies if you are in a firm that doesn’t have firm-sanctioned software, because there are lots of ways, I think, that you could be trying it out already so that you’re ready when you do have access to safer tools.

Bridget McCormack: Yeah, this was fun. Great to see you.

Jen Leonard: You too. And we look forward to seeing everybody next time on the next episode of 2030 Vision: AI and the Future of Law. Take care.