Over the past few weeks, Microsoft has been rolling out an update to Copilot that it calls Wave 3. I think it might be a lot more important than people realize, because this isn't just another incremental update. It's a shift toward multi-model AI, agent-like capabilities, and something called Copilot Cowork, made possible in part by Microsoft's partnership with Anthropic (the makers of Claude). This could enable Copilot to leap beyond what has felt like tips and tricks and into actual execution of human-plus-agent workflows in enterprise settings. What's even more interesting to me is what it says about where Copilot is heading, and how we may have been thinking about it wrong.
A Year Ago, the Reaction Made Sense
About 6–12 months ago, it felt like a lot of people quietly wrote Copilot off. And honestly, I got it. If you were just comparing chat experiences, tools like ChatGPT and Claude were clearly ahead. Copilot felt inconsistent at times, and in a space that was moving as fast as AI, "good enough later" wasn't very compelling when something better was available right now. So people moved on.
But looking back, I think we may have been evaluating Copilot a little too narrowly. Most of us treated it like a chatbot. Microsoft has been building something closer to an enterprise AI layer. Those are not the same thing, and that distinction is starting to matter more.
The Gap That Drove the Narrative
Early on, the conversation was dominated by one question:
"Which AI is better?"
And if that was the lens, Copilot didn't always win. That wasn't just perception; there were real differences in experience. Responses felt weaker at times. The interaction model wasn't as fluid. And compared side by side with ChatGPT or Claude, it often didn't hold up. So the conclusion people reached was reasonable: Copilot just isn't as good. But that conclusion assumed something important: that Copilot was competing on that dimension.
What's Becoming Clear
Fast forward to now, and a few things are shifting. Copilot is leveraging the latest OpenAI models. Microsoft is also expanding into multi-model experiences, including deeper integration of Claude into the Copilot ecosystem. That doesn't mean everything is instantly equal across the board, but it does mean the gap people felt a year ago is starting to close in meaningful ways. And once that gap narrows, the original comparison starts to matter less. Because if Copilot is "good enough" from a model capability standpoint, then the question changes.
It's no longer just: "Which AI is better?"
It becomes: "Which AI actually fits how work gets done?"
Copilot Was Never Just About the Model
This is the part I think a lot of us (myself included) underweighted. Copilot isn't just a place you go to chat. It's embedded in the tools organizations already trust and rely on: Outlook, Teams, SharePoint, Word, Excel. It understands identity, permissions, and organizational context out of the box. That kind of trust and reliability is vital when an enterprise adopts a new tool. Not to mention it's hard to avoid once it's a button staring at us from the tools we use every day.
Which brings us to the most important point I want to call out. For most companies, the challenge isn't experimenting with AI anymore. It's operationalizing it: integrating it into the enterprise. And that's where things get harder.
- How do we make sure the right people see the right data?
- How do we stay compliant?
- How do we integrate this into existing workflows without creating chaos?
Those aren't model questions. Those are platform questions. Those are enterprise questions. This is where Copilot starts to look different.
The Enterprise Reality
In practice, most organizations are not asking: "What's the absolute best model available?"
They're asking: "What can we actually roll out, trust, and scale?"
That's a much more pragmatic lens. Security, governance, and compliance don't guarantee success, but they remove friction. And in large organizations, reducing friction is often what determines whether something gets adopted at all.
Copilot benefits from something that's easy to overlook when comparing it to other AI tools like ChatGPT or Claude: it's already inside an ecosystem companies trust, understand, and pay for. For many organizations, adopting Copilot doesn't feel like introducing a brand-new tool. It feels like extending an existing one. And that lowers the barrier to entry in a very real way.
This Doesn't Mean Copilot "Wins"
To be clear, I don't think this is a foregone conclusion. There are still real challenges:
- Cost and licensing complexity
- Driving meaningful adoption beyond early use cases
- Proving sustained ROI
- Competing with fast-moving, best-of-breed tools
- Users actually seeing the value in using Copilot
Those things matter, and they'll shape how this plays out. But the conversation is shifting, Copilot's role in the AI landscape is becoming clearer, and AI integration is now table stakes.
A Different Way to Look at It
If the first phase of AI adoption was driven by standalone tools and model comparisons, the next phase is going to be shaped by platforms that are embedded into how work already happens. And in that world, Copilot doesn't need to be the best model. It just needs to be close enough, while being easier to adopt, govern, scale, and trust. That's a very different competitive position than it had a year ago.
Final Thought
I don't think it was wrong to question Copilot six months ago. But I do think it might be a mistake to evaluate it the same way today — especially in direct comparison to ChatGPT or Claude. Because if you're still judging it purely as a chatbot, you're probably missing the bigger picture.