Storming with AI: Why Most Teams Aren’t Ready for the Conflict AI Brings
Artificial intelligence is showing up in teams everywhere. Leaders are encouraging people to experiment with tools, explore possibilities, and see what tasks AI might make easier or faster. On the surface, it feels like an exciting moment—new capabilities, new efficiencies, new ways of working.
In practice, though, many teams are discovering that AI affects their dynamics in substantial ways.
Not because the technology itself is inherently disruptive, but because it’s exposing something that was already fragile in many workplaces: how teams handle the storming stage of team development.
When teams introduce AI, they quickly find themselves in conversations that feel messy, uncertain, and tense. Teams that aren’t used to navigating that kind of tension together simply aren’t ready for it.
The Storm Most Leaders Didn’t Plan For
Bruce Tuckman’s model of team development describes four stages: forming, storming, norming, and performing. The storming stage is the point where the real work of alignment begins. Assumptions get challenged, expectations collide, and teams have to figure out how they will actually operate.
It’s uncomfortable, but it’s also necessary. Teams have to work through this stage if they ever want to get to performing.
AI isn’t causing the storm. But it is creating the weather pattern that keeps these storms churning.
Suddenly, teams are asking questions they haven’t had to discuss before, much less resolve:
- Is it okay for us to use AI for this?
- What’s our standard for quality?
- Is this efficient, or is it just AI-generated slop?
- Should we try every new tool that appears?
- Who decides what’s appropriate?
Here’s the thing. These questions aren’t really about AI.
They’re about how the team makes decisions, defines standards, and handles disagreement. And if those dynamics were already shaky, AI puts them under a spotlight.
How AI Is Surfacing Cracks in Team Dynamics
I recently worked with Elena, who had been asked to explore AI tools and report back to her team. It sounded straightforward: go check things out and bring some ideas back.
When she started looking into it, the number of tools and possibilities was overwhelming. She got excited about some of them, but she also recognized she was just scratching the surface. Eventually, she settled on a couple of ideas to discuss with the team.
It was an awkward conversation. Some people didn’t want to engage at all. Others wanted to know if they were supposed to take action. Still others started debating the ethical considerations of certain AI tools. Elena didn’t feel particularly attached to the ideas herself, so she wasn’t advocating for them. She also wasn’t sure how she was supposed to lead the conversation.
Her boss told her to “keep working on it,” but that direction wasn’t clear either. Elena knew it would be on the agenda again, and the conversation would be exactly the same because this wasn’t the first time a team meeting had gone in this direction.
On the surface, the issue seemed to be about AI.
In reality, the team was struggling with much deeper dynamics:
- Are we aligned on what we are trying to accomplish?
- Do people feel comfortable saying, “I don’t know”?
- How do we have a conversation where we disagree?
- Who actually has the authority to decide?
These are common questions during the storming stage. They aren’t easy questions, but they are the ones teams need to tackle if they are going to get to the other side of the storm.
When teams avoid the storming stage, they often misidentify the source of the tension. The tool becomes the problem when the real issue is the team’s ability to navigate uncertainty and disagreement.
An Opportunity for Teams to Practice Storming with AI
It’s easy to look at these moments and think something has gone wrong. But this is actually an opportunity.
AI is surfacing the kinds of conversations many teams have been avoiding for years. Questions about standards, authority, experimentation, and accountability were already sitting beneath the surface. AI is simply giving them a reason to show up.
And when teams are willing to engage those questions directly, something important happens: they get better at storming.
Storming isn’t just something teams survive once on their way to performing. It’s a capability. The ability to navigate disagreement, clarify expectations, and work through ambiguity together is what allows teams to handle the next challenge that inevitably comes along.
If a team can learn to storm well around AI, they are also practicing how to storm around strategic shifts, new initiatives, resource constraints, and difficult decisions down the road.
In other words, this isn’t just about figuring out how to use AI tools. It’s about strengthening the team’s ability to work through complexity together.
Here are three examples of the kinds of storming conversations teams can start having right now.
1. What Problem Are We Actually Trying to Solve?
It starts when someone says, “We should probably be using AI more.”
Heads nod. Someone like Elena is asked to explore tools and others offer to try things out. Before long, the team is talking about prompts, platforms, and new features.
But no one has answered the most important question: What problem are we trying to solve?
Without that clarity, teams end up in the AI equivalent of wandering through a hardware store. Every tool looks interesting. Some seem powerful. A few people start picking things up. But no one is entirely sure what they came in to build.
Some people get excited about experimenting. Others question whether this is the best use of time. A few quietly wonder how any of this connects to the team’s priorities.
What the real issue could be:
- Is the team aligned on what we are working toward?
- Are the actions we are taking aligned with the bigger strategy?
- Do we actually agree on what success looks like here?
Where else this can show up:
- New initiatives get launched before the team agrees on the goal.
- Teams jump to solutions before defining the problem.
- People work hard, but in slightly different directions.
Questions the team can discuss about AI:
- What part of our work actually needs improvement right now?
- Are we trying to increase speed, improve quality, reduce workload, or generate new ideas?
- Where could AI meaningfully help our team do better work?
If you really want to lean into the storm here:
- What are the top outcomes this team is responsible for right now?
- Where are we currently spending time that doesn’t actually move those priorities forward?
- When new opportunities show up, how do we decide whether they are worth pursuing as a team?
2. What Counts as Good Work?
AI has introduced a new tension on many teams.
Some people see it as a powerful efficiency tool. If AI can produce a draft in seconds, why spend hours starting from scratch? Others read the same output and think, This is not good.
Some team members start using AI heavily. Others hesitate because they aren’t sure what the standards are. A few quietly edit or rewrite AI output but never talk about it.
Soon, the team is reacting to each other’s work without ever having agreed on what “good” actually looks like.
What the real issue could be:
- Does the team share the same definition of quality?
- Do we value speed, depth, or originality most in this work?
- How much human judgment do we expect before something is considered finished?
Where else this can show up:
- Drafts circulate that feel rushed or inconsistent.
- Team members quietly redo each other’s work, whether AI is involved or not.
- People spend time debating style or polish instead of substance.
Questions the team can discuss about AI:
- When AI tools contribute to a piece of work, what level of human review do we expect before it’s shared?
- What kinds of work feel appropriate to use AI tools with, and what should still start with human thinking?
- How do we give feedback if something feels too “AI-generated”?
If you really want to lean into the storm here:
- What does excellent work look like on this team?
- Where are we willing to be fast, and where do we expect deeper rigor?
- How do we give feedback when work doesn’t meet our standards?
3. Who Gets to Decide?
AI has a way of forcing decisions that teams didn’t realize they had to make.
Should people experiment freely? Should tools be approved first? Should the team move together or individually?
If those questions haven’t been discussed, decisions still happen, just quietly, one person at a time.
What the real issue could be:
- Who has the authority to decide how new approaches get used on this team?
- How do we balance individual experimentation with team consistency?
- When do we want flexibility, and when do we need shared guardrails?
Where else this can show up:
- People adopt new tools without discussing the impact on others.
- Teams wait for leadership to decide something that leadership assumed the team would figure out.
- Decisions get made informally, leaving some people out of the conversation.
Questions the team can discuss about AI:
- Who decides what AI tools are appropriate for our team to use?
- Where do we want to encourage experimentation with new tools?
- When should a tool move from personal experimentation to a shared team practice?
If you really want to lean into the storm here:
- How do we typically make decisions when something new shows up?
- When does the team have authority to decide, and when does leadership need to weigh in?
- How do we make decisions in a way that keeps people on board, even if they disagree with the outcome?
Don’t Try to Wait Out the Storm with AI
Some teams are hoping the AI conversation will settle down once the tools become clearer or the rules get established. That’s unlikely.
AI isn’t a single rollout or a one-time change. It’s an ongoing shift in how work gets done. New tools will keep appearing. Expectations will keep evolving. Teams will keep having to renegotiate how they work.
Which means the storm isn’t going away.
The real question for leaders isn’t how to calm the storm. It’s whether their team knows how to move through it together. The teams that will navigate AI well aren’t necessarily the ones with the best tools or the most advanced policies. They’re the ones who can have the conversations.
The ones willing to pause and ask:
- Are we clear on what we’re trying to achieve?
- Do we agree on what good work looks like?
- Do we know how decisions get made?
The more comfortable a team becomes asking these types of questions (about AI or any other issue), the faster they move toward clarity, alignment, and ultimately performance.
AI happens to be the storm in front of us right now. But the real opportunity for teams is learning how to weather any storm on the horizon.
Want to avoid common mistakes when introducing AI?
Check out our webinar: Making AI Part of the Team: Practical Steps for HR Leaders
Looking for more information about team development? Be sure to check out our Leader’s Guide to Building Stronger Teams.