Picking a prioritization framework
Six frameworks built in. Different decisions need different lenses.
Roadmap OS ships with six prioritization frameworks. Pick the one that matches your situation.
The 6 frameworks
1. RICE
Reach × Impact × Confidence ÷ Effort
Best for:
- Early-stage product when you have rough estimates
- Comparing initiatives where reach varies a lot
Weakness: confidence multiplier hides bias. Easy to game by lowering confidence on competing initiatives.
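If you want to sanity-check scores outside the tool, the RICE arithmetic takes one line. A minimal Python sketch with hypothetical initiatives and made-up numbers (this is not Roadmap OS's API):

```python
# Illustrative RICE scoring, not Roadmap OS's internal implementation.
# RICE = Reach * Impact * Confidence / Effort
initiatives = [
    # (name, reach per quarter, impact 1-10, confidence 0-1, effort in person-weeks)
    ("SSO support",     500, 8, 0.8, 6),
    ("Dark mode",      2000, 3, 0.9, 2),
    ("Billing revamp",  300, 9, 0.5, 10),
]

def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

# Highest RICE score first.
ranked = sorted(initiatives, key=lambda i: rice(*i[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: {rice(*scores):.0f}")
```

Note how the high-reach, low-effort item floats to the top even with modest impact, which is exactly the reach-sensitivity RICE is built for.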
2. ICE
Impact × Confidence × Ease
Best for:
- Rapid-fire prioritization (hackathons, brainstorms)
- When you don't have data and gut-feel scoring is acceptable
Weakness: less rigorous than RICE. Fine for triage, not for committing.
3. MoSCoW
Must / Should / Could / Won't
Best for:
- Stakeholder alignment
- Forcing explicit "Won't" decisions
- Scoping a release ("what must be in v1, what can wait?")
Weakness: not really prioritization — it's bucketing. Doesn't tell you order within a bucket.
4. Kano
Basic / Performance / Delighter
Best for:
- Balancing must-have vs differentiating features
- Customer research-driven prioritization
- Product-market-fit work
Weakness: requires user research to score correctly; not a desk exercise.
5. Cost of Delay (CD3)
Urgency × Value ÷ Job Duration
Best for:
- Date-sensitive opportunities
- When delaying an initiative has measurable revenue / risk cost
Weakness: requires estimates of revenue impact. Most teams don't have this data; the framework rewards rigor that smaller teams can't afford.
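The same kind of back-of-the-envelope check works for CD3. A sketch using the formula above, with invented urgency weights and dollar values:

```python
# Illustrative CD3 scoring (Urgency * Value / Job Duration); all numbers invented.
def cd3(urgency, value, duration_weeks):
    """Cost of delay per unit of delivery time."""
    return urgency * value / duration_weeks

# A small, date-sensitive job can outrank a bigger but slower one:
cert_renewal = cd3(urgency=3, value=50_000, duration_weeks=2)
platform_rewrite = cd3(urgency=1, value=200_000, duration_weeks=12)
print(cert_renewal > platform_rewrite)  # the renewal wins despite lower value
```

Dividing by duration is the point of the framework: it rewards short jobs that unlock value early, not just big numbers.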
6. Value vs Effort matrix
2D scatter: Value (Y) vs Effort (X)
Best for:
- Visual review with stakeholders
- Quick triage of 20+ initiatives
- Identifying Quick Wins, Big Bets, Fillers, Money Pits
Weakness: oversimplifies. A 2×2 hides nuance.
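The four quadrant labels follow mechanically from two thresholds. A sketch assuming 1-10 scales and a cutoff of 5 (the cutoff is an assumption for illustration, not a Roadmap OS default):

```python
# Illustrative quadrant bucketing for a Value vs Effort matrix.
# The threshold of 5 on a 1-10 scale is an assumption, not a product default.
def quadrant(value, effort, threshold=5):
    if value >= threshold and effort < threshold:
        return "Quick Win"
    if value >= threshold:
        return "Big Bet"
    if effort < threshold:
        return "Filler"
    return "Money Pit"

print(quadrant(8, 2))  # Quick Win
print(quadrant(8, 8))  # Big Bet
print(quadrant(2, 2))  # Filler
print(quadrant(2, 9))  # Money Pit
```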
How to use frameworks in Roadmap OS
Prioritization → New Scoring Round → Pick Framework
For each framework, score initiatives across the relevant criteria:
- RICE / ICE — fill in numbers (1-10 typically)
- MoSCoW — drag initiatives into Must / Should / Could / Won't columns
- Kano — score on Basic / Performance / Delighter
- Cost of Delay — fill in urgency, value, effort
- Value-Effort — drag dots on the 2D matrix
The killer feature: compare across frameworks
The same initiative, scored across multiple frameworks, often gets different rankings.
Prioritization → Frameworks tab → select 2-3 frameworks
Example:
| Initiative | RICE rank | MoSCoW | Cost of Delay | Where they disagree |
|---|---|---|---|---|
| Mobile app v2 | #1 | Must | #4 | RICE high, CoD low — investigate |
| Hardware cert renewal | #5 | Must | #1 | CoD high — likely the urgent one |
| Marketing site redesign | #3 | Could | #8 | RICE inflated by reach; CoD says it's not urgent |
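Surfacing disagreement can also be automated. This sketch flags initiatives whose ranks differ by three or more places across two frameworks; the ranks mirror the example table and are hypothetical:

```python
# Hypothetical ranks mirroring the example table; not pulled from Roadmap OS.
rice_rank = {"Mobile app v2": 1, "Hardware cert renewal": 5, "Marketing site redesign": 3}
cod_rank = {"Mobile app v2": 4, "Hardware cert renewal": 1, "Marketing site redesign": 8}

# Flag initiatives whose ranks differ by 3+ places; these are worth a second look.
flagged = sorted(
    name for name in rice_rank if abs(rice_rank[name] - cod_rank[name]) >= 3
)
for name in flagged:
    print(f"{name}: RICE #{rice_rank[name]} vs CoD #{cod_rank[name]}, investigate")
```

A large rank gap doesn't say which framework is right; it says the initiative deserves a conversation before you commit.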
When NOT to score everything
Don't run RICE on a 200-item backlog. Score the top 15-20 — anything below that is noise.
Don't re-score every quarter just because. Re-scoring is useful when:
- The strategic context shifts (market change, new constraint)
- Customer data changes (NPS drop, support ticket spike)
- A team change reshuffles capacity
Recommended cadence
- Quarterly: RICE or Cost of Delay across top 20 initiatives
- Per major decision: MoSCoW or Kano workshop with stakeholders
- Weekly: quick ICE on backlog grooming
- Per release: Value vs Effort matrix to triage what makes the cut
If something is unclear, missing, or wrong — please email hello@pmroadmapper.com. We update help docs based on real questions.