I once watched a senior leader spend four minutes in a meeting explaining to his team why they needed to "embrace the AI revolution."
He read it off a slide. The slide had been written by ChatGPT.
Nobody said anything.
That moment contains basically everything you need to know about where most organisations are right now with AI adoption. Everyone performing confidence. Nobody quite sure what's real.
The actual leadership challenge of 2026 isn't keeping up with AI. It's leading a team where everyone is at a completely different point on the learning curve, and pretending otherwise is already the company culture.
The team you're actually managing
Let me paint the picture, because every manager I talk to recognises this immediately.
On one end: the person who still prints emails. Lovely human. Twenty-three years of institutional knowledge. Currently using AI tools in the same way they use the office scanner: reluctantly, occasionally, and only when someone younger sets it up for them.
On the other end: the person who has automated half their job, lives inside three different AI tools simultaneously, and last Tuesday fed a sensitive client brief into a free platform without a second thought about data security, because it gave better outputs and, honestly, who has time to check the terms.
And in the middle: everyone else. Trying to figure out which of these two people they're supposed to become.
Your job, as the leader, is to hold all of that at once. While also having opinions about the quarterly numbers.
The problem nobody talks about at conferences
Here's the thing the "AI transformation" keynotes don't mention.
The people who learned fastest have their own problem now.
I've been watching it happen. Someone integrates AI deeply into how they work. They get faster, sharper, genuinely more productive. And then they hit their token limit, or the tool goes down, or the company restricts access, and they are completely paralysed. They sit there staring at a blank document like the rest of us used to stare at a blank document before autocomplete existed.
Over-reliance is real. And it's coming for the early adopters first.
Which means the learning curve isn't a line. It's more like one of those climbing walls where you think you've reached the top and then you turn around and there's another wall.
The goal was never to master AI. It was always to master the relationship with it.
What actually works
A few things I keep seeing make a real difference, none of which involve a transformation day or a forty-slide deck:
- Make experimentation visible and communal. The person who found a prompt that saves them two hours a week should be sharing it, not hoarding it. That only happens if sharing is normal, celebrated, and doesn't require a business case.
- Let the hierarchy invert sometimes. The youngest person on the team is probably the most fluent. Building in moments where that's an asset rather than an awkward dynamic is one of the fastest ways to accelerate collective learning. Swallow the pride. It's worth it.
- Make learning tiny and regular. One question at the end of a weekly team meeting: "What's one thing you tried with AI this week?" That's it. Small and consistent builds more than any single training event. It also normalises the admission that everyone is still figuring it out.
The part that actually matters
People feel something when an algorithm can do in five seconds what they spent five hours on.
Sometimes it's relief. Sometimes it's a quiet grief for a skill they spent years building. Sometimes it's a low-level panic about what they're actually good for now.
The leader's job in that moment isn't to explain ROI. It's to help people understand that their value was never the task. It was always the judgment behind it, the questions they knew to ask, the things they noticed that the tool didn't.
That doesn't happen in a one-off training. It happens in small conversations where someone feels safe enough to say what they're actually thinking.
There is no finish line
The leaders handling this moment best are not the ones who figured out AI. They're the ones who stopped waiting to figure it out and started building teams that are genuinely okay with the permanent state of not-quite-knowing.
Not paralysed by the token limit. Not reading out a slide they didn't write.
Just curious enough to keep going. That turns out to be enough. For now.