As part of the Obnovit Roadmap Ramp offering, we work directly with technical and operational teams to identify AI opportunities that actually deliver value. Not abstract innovation, but measurable outcomes tied to real workflows.
One of the most reliable ways we have found to do this is by running a structured ICE scoring session.
ICE stands for Impact, Confidence, and Ease. It is a simple prioritization framework, but when facilitated correctly, it creates clarity fast. It helps teams move from scattered ideas and vendor noise to a ranked list of AI initiatives worth investing time, effort, and resources into.
Below is a concrete, end-to-end way to run an ICE session specifically for AI opportunity discovery and prioritization inside a business.
Step 1: Define the Scope and the Definition of Success
Before anyone enters the room, the frame must be clear.
Start by choosing one focused problem space. Examples include:
- Reduce proposal effort by 50 percent
- Reduce time spent finding local, state, and federal ordinances and code requirements on a project
- Reduce error rates in engineering deliverables
Avoid broad goals like “use AI across the business.” That guarantees shallow ideas and poor prioritization.
Next, define what success looks like for the session itself. A strong outcome is:
- A ranked list of 5 to 10 AI initiatives
- Each initiative has a clear owner
- Each initiative has a defined next step
This keeps the session decision-oriented rather than exploratory.
Step 2: Prepare the Right Group and the Right Materials
ICE works best with a small, cross-functional group.
Invite 4 to 8 participants (more only if a key function would otherwise be missing) representing:
- The business or operations side
- IT or data, where applicable
- One executive or sponsor who can anchor priorities and constraints
Prepare a simple shared worksheet or board with the following columns:
- Idea
- Description
- Impact
- Confidence
- Ease
- ICE Score
- Notes
The goal is transparency and speed, not documentation polish. For teams that prefer tracking this in code, the same columns map to a simple record, as sketched below.
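A minimal sketch in Python, assuming the team wants a scriptable worksheet; the dataclass and its field names are our illustration of the columns above, not part of the framework itself:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    """One row of the ICE worksheet; fields mirror the board columns."""
    name: str
    description: str
    impact: int = 0      # 1-10, agreed as a group (Step 6)
    confidence: int = 0  # 1-10, grounded in evidence (Step 7)
    ease: int = 0        # 1-10, set with technical input (Step 8)
    notes: str = ""

    @property
    def ice_score(self) -> int:
        # Multiplicative on purpose: a weak dimension drags the total down.
        return self.impact * self.confidence * self.ease
```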
Step 3: Align on ICE Before Brainstorming
Spend the first 10 to 15 minutes aligning language and scoring rules.
Explain the three ICE dimensions clearly:
- Impact: If this works, how strongly does it move the target metric or outcome?
- Confidence: How much evidence do we have that this will work in our environment?
- Ease: How simple is it to implement given data, integrations, skills, and change management?
Confirm the scoring scale, typically 1 to 10 for each dimension.
Make it explicit that the ICE score is calculated as:
Score = Impact × Confidence × Ease
This multiplication matters: unlike a simple average, it guarantees that a low score in any one dimension drags the whole ranking down, so a weak spot cannot hide behind two strong ones.
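A quick worked example shows the effect (the scores are purely illustrative):

```python
# Flashy but hard to ship vs. modest but balanced (illustrative scores only).
flashy = 9 * 9 * 2   # Impact 9, Confidence 9, Ease 2 -> 162
modest = 6 * 6 * 6   # Impact 6, Confidence 6, Ease 6 -> 216

# An average would favor flashy (6.7 vs. 6.0); multiplication favors balance.
assert modest > flashy
```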
Step 4: Brainstorm AI Use Cases Without Scoring
Separate idea generation from evaluation.
Present the problem statement and any non-negotiable constraints, such as:
- No net new headcount
- Must deliver value within 90 days
- Must use existing systems
Run a 30 to 45 minute brainstorming session where each participant submits their top ideas (target 4 to 8 ideas per person).
Have participants submit ideas individually first, then share the full list.
Prompt specifically for AI-driven use cases, such as:
- Prediction or forecasting
- Classification or tagging
- Content generation
- Routing and triage
- Summarization
- Quality checking
- Decision support
This prevents the session from drifting into generic automation ideas.
Step 5: Cluster and Clean the Idea List
Once ideas are on the board, clean them up.
Group similar ideas into themes like:
- Sales enablement assistants
- Document automation
- Forecasting and planning
- Customer support triage
Merge duplicates, remove items that are clearly out of scope, and rewrite each remaining idea as a one-line outcome tied to a metric.
If an idea cannot be stated as an outcome, it is not ready to be scored.
Step 6: Score Impact as a Group
Score Impact first, and do it together.
For each idea, ask:
If this succeeds, how much does it move our key metric compared to the others?
Have participants score privately on a 1 to 10 scale, then reveal the scores.
If scores differ by more than two points, discuss why. The goal is not consensus through compromise, but shared understanding.
Set one agreed-upon Impact score per idea.
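If private scores are collected in a shared script, the more-than-two-points rule is easy to flag automatically; a minimal sketch, with a helper name of our own choosing:

```python
def needs_discussion(private_scores: list[int], spread: int = 2) -> bool:
    """Flag an idea when privately submitted scores diverge by more than `spread`."""
    return max(private_scores) - min(private_scores) > spread

print(needs_discussion([7, 8, 6]))  # False: close enough to settle quickly
print(needs_discussion([3, 9, 7]))  # True: discuss before setting one score
```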
Step 7: Score Confidence Based on Evidence
Confidence must be grounded in evidence, not optimism.
Ask:
What evidence do we have that this will work in our business, with our data, under our constraints?
Use a simple rubric:
- 1 to 3: Untested or speculative
- 4 to 7: Some analogous internal or external proof
- 8 to 10: Strong evidence from pilots, prior implementations, or close analogs
This step often reshuffles rankings more than teams expect.
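For teams scoring in a shared script, the rubric translates directly into a lookup (band boundaries copied from the rubric above; the helper itself is our illustration):

```python
def confidence_band(score: int) -> str:
    """Map a 1-10 Confidence score back to its rubric language."""
    if not 1 <= score <= 10:
        raise ValueError("Confidence is scored 1 to 10")
    if score <= 3:
        return "Untested or speculative"
    if score <= 7:
        return "Some analogous internal or external proof"
    return "Strong evidence from pilots, prior implementations, or close analogs"
```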
Step 8: Score Ease With Technical Input
Ease is where feasibility becomes real.
Ask:
How easy is this to implement in the next 3 to 12 months given data availability, integrations, skills, and change impact?
Engineering, IT, process owners, and/or an AI advisor should weigh in heavily here. Dependencies, security concerns, and adoption friction all matter.
Again, land on one Ease score per idea.
Step 9: Calculate ICE and Rank the List
Now calculate the ICE score for each idea:
Score = Impact × Confidence × Ease
Sort the list from highest to lowest score.
Typically, the top three to five ideas become the “Now” candidates. The next tier becomes “Next” or “Later.”
This ranking often reveals that some exciting ideas are better deferred, while less flashy ones deliver faster ROI.
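Reusing the Idea record sketched in Step 2, ranking and tiering takes a few lines; the Now cutoff of three to five is a facilitation choice, so it is a parameter here (the example ideas echo the problem statements from Step 1 and are illustrative):

```python
def rank_and_tier(ideas: list[Idea], now_size: int = 4) -> dict[str, list[Idea]]:
    """Sort by ICE score, then split into Now and Next/Later tiers."""
    ranked = sorted(ideas, key=lambda i: i.ice_score, reverse=True)
    return {"Now": ranked[:now_size], "Next/Later": ranked[now_size:]}

ideas = [
    Idea("Proposal draft assistant", "Generate first-draft proposals", 8, 6, 7),
    Idea("Ordinance lookup copilot", "Surface local, state, and federal code requirements", 7, 7, 5),
    Idea("Deliverable QA checker", "Flag errors in engineering deliverables", 9, 4, 3),
]
tiers = rank_and_tier(ideas, now_size=2)
# Scores: 336, 245, 108 -> the QA checker lands in Next/Later despite high Impact.
```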
Step 10: Decide, Document, and Assign Ownership
End the session with decisions, not just numbers.
Pressure test the top candidates against strategy, risk, and capacity. Confirm which initiatives move forward immediately and which remain in the backlog.
Capture a short decision log that includes:
- The selected AI initiatives
- Their ICE scores
- Key assumptions and risks
- Named owners
- Next milestone dates
If it is not documented and owned, it will not happen.
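If the worksheet already lives in code, the decision log can share the same file. A minimal shape, with fields mirroring the bullets above and purely illustrative values:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Decision:
    """One entry in the session's decision log; fields mirror the bullets above."""
    initiative: str
    ice_score: int
    assumptions: list[str]
    risks: list[str]
    owner: str
    next_milestone: date

log = [
    Decision(
        initiative="Proposal draft assistant",
        ice_score=336,
        assumptions=["Historical proposals are accessible as clean text"],
        risks=["Adoption by senior estimators"],
        owner="Director of Operations",   # illustrative, not a real assignment
        next_milestone=date(2025, 3, 1),  # placeholder date
    ),
]
```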

