Most AI initiatives fail in one of two ways. They stall at the top, or they fizzle at the bottom.
Leadership has strategy without execution, or individuals have experiments without coordination. It's critical to take a cross-functional approach to building AI champions on your team.
The firms that break through tend to have one common element: a deliberately structured Champions Team that spans the organization.
The two failure modes
Top-down failure looks like this:
- executive decides “we need AI”
- selects a tool or hires a consultant
- mandates adoption
- team complies minimally or resists
- nobody uses tools effectively
- initiative fades quietly
Why it fails: leadership does not have enough workflow fidelity to pick the highest-value applications, and mandates create compliance, not enthusiasm.
Bottom-up failure looks like this:
- individuals experiment
- some find useful applications
- knowledge stays siloed
- inconsistent approaches spread
- no standards, governance, or scaling
- the organization gains little
Why it fails: individuals lack authority to change processes, budget to scale, and visibility to spread learnings.
The key insight: AI adoption is not “select vendor, deploy, train.” It is continuous capability-building that needs strategic direction and grassroots engagement at the same time.
The Champions Team model
A Champions Team is a cross-functional group tasked with:
- learning and testing AI applications
- identifying opportunities inside their domains
- piloting solutions and measuring results
- spreading adoption across the organization
- providing feedback on what works and what does not
This is not a committee that meets monthly to discuss AI. It is an operating team.
They use AI in real work, daily.
They are empowered to experiment and authorized to change processes, and they stay connected to leadership for resources and to the front line for practical truth.
Team composition that works in real life
The anchoring quote:
“Form a team of people who are interested as well as a good cross-section. So not only like executive leadership, but you want middle layers, supervision, and then also your daily individual contributors, and across a variety of departments.”
You need vertical representation and horizontal representation.
Vertical means hierarchy coverage:
- 1 executive sponsor for budget and air cover
- 2 to 3 middle managers who can change process inside their domains
- 3 to 5 individual contributors who know the real workflows, not the documented ones
Horizontal means functional coverage:
- engineering
- project management
- operations
- admin
- IT, in a supporting role
- sales, if applicable
Team size guidance is clear:
“Anywhere between maybe four to eight is a pretty good starting number. If you’re a bigger company, you’re probably going to be in the 6 to 12 range.”
Who belongs on the team, and who does not
Must-haves:
- genuine interest, curiosity, willingness to experiment
- credibility with peers
- deep workflow knowledge, including workarounds and informal steps
- communication ability across technical and non-technical audiences
Red flags:
- the over-enthusiast who alienates skeptics
- the reluctant volunteer who does minimum compliance
- the IT-only lens that optimizes tools over workflow value
- the executive who cannot delegate and becomes a bottleneck
How the team operates
Run it like a delivery team, not a discussion forum.
The rollout follows five phases:
Phase 1 (Weeks 1 to 2): hands-on foundation training, shared vocabulary, daily usage starts immediately, document quick wins.
Phase 2 (Weeks 2 to 4): workflow discovery, pain points, interviews, initial list of use cases.
Phase 3 (Week 4): prioritize by ROI and feasibility, select 2 to 4 pilots, assign owners and success metrics.
Phase 4 (Weeks 5 to 8+): implement pilots, track time and quality impact, document what works, prep rollout.
Phase 5 (ongoing): scale wins into standard practice, train peers, add the next opportunities to the queue.
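The Phase 3 prioritization step can be sketched as a simple scoring pass: rate each candidate use case for estimated ROI and feasibility, multiply, and take the top few as pilots. The use cases, 1-to-5 scales, and cutoff below are illustrative assumptions, not a prescribed methodology:

```python
# Minimal sketch of Phase 3: rank candidate use cases by a
# combined ROI x feasibility score and select the top pilots.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    roi: int          # estimated value: 1 (low) to 5 (high)
    feasibility: int  # ease of piloting: 1 (hard) to 5 (easy)

    @property
    def score(self) -> int:
        return self.roi * self.feasibility


def select_pilots(candidates: list[UseCase], max_pilots: int = 4) -> list[UseCase]:
    """Rank candidates by score and return the top 2 to 4 for piloting."""
    ranked = sorted(candidates, key=lambda u: u.score, reverse=True)
    return ranked[:max_pilots]


# Hypothetical backlog gathered during Phase 2 workflow discovery.
backlog = [
    UseCase("Draft proposal summaries", roi=4, feasibility=5),
    UseCase("Automate status reports", roi=3, feasibility=4),
    UseCase("Contract clause review", roi=5, feasibility=2),
    UseCase("Meeting-notes action items", roi=3, feasibility=5),
]
for uc in select_pilots(backlog, max_pilots=3):
    print(f"{uc.score:>2}  {uc.name}")
```

Each selected pilot then gets an owner and success metrics, per Phase 3; low-feasibility items stay in the queue for a later wave rather than being discarded.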
Cadence:
- weekly during active phases
- bi-weekly at steady state
- ad-hoc for pilot support
Common pitfalls, and how to avoid them
These are predictable failure modes, and they are avoidable.
Pitfall: all managers, no implementers
Fix: ensure at least 50% individual contributors.
Pitfall: IT takeover
Fix: keep IT supporting, keep workflow owners leading.
Pitfall: pilot purgatory
Fix: set timelines and decision criteria up front.
Pitfall: single champion dependency
Fix: distribute ownership across multiple people.
Pitfall: no executive air cover
Fix: sponsor actively removes obstacles.
Pitfall: shadow IT creep
Fix: proper enterprise licenses from day one.
Measuring success
Early leading indicators:
- champions using AI daily
- growing list of opportunities
- colleagues asking champions for help
- pilots launching on schedule
Later lagging indicators:
- documented time savings
- processes officially changed
- non-champions adopting tools
- second wave of pilots initiated
Quantitative targets:
- each champion saving 3+ hours per week within the first month
- 20 to 50 use cases identified within the first four weeks
- at least 2 pilots showing measurable ROI within 8 weeks
- 50%+ of non-champions using AI weekly within 6 months
Closing: start with the AI Champions Team
AI tools are commodities. The differentiator is adoption and integration.
The Champions Team is the mechanism that makes that real.
The ROI math is straightforward:
8 champions x 4 hours/week x 4 weeks = 128 hours invested. If they find one workflow that saves 10 hours/week across the company, payback takes 128 ÷ 10 ≈ 13 weeks, roughly three months, with ongoing gains after that.
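The back-of-the-envelope payback math above can be checked with a few lines of arithmetic; the inputs are the article's own example figures, not measured data:

```python
# Payback calculation for the Champions Team investment.
def payback_weeks(champions: int, hours_per_week: float, weeks: int,
                  savings_per_week: float) -> float:
    """Weeks until cumulative weekly savings cover the hours invested."""
    invested = champions * hours_per_week * weeks  # total hours spent
    return invested / savings_per_week


# Article's example: 8 champions, 4 hours/week each, for 4 weeks,
# finding one workflow that saves 10 hours/week company-wide.
invested = 8 * 4 * 4
weeks_to_payback = payback_weeks(8, 4, 4, savings_per_week=10)
print(f"Invested {invested} hours; break-even in {weeks_to_payback:.1f} weeks")
# Invested 128 hours; break-even in 12.8 weeks
```

After break-even, every additional workflow the team standardizes adds to the return with no further fixed cost beyond the ongoing cadence meetings.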

