Quick Answer (TL;DR)
RICE is a prioritization framework that scores features using four factors: Reach (how many users are affected), Impact (how much each user is affected), Confidence (how sure you are of your estimates), and Effort (how much work it takes). The formula is (Reach x Impact x Confidence) / Effort = RICE Score. Higher scores indicate higher priority. It was popularized by Intercom and is one of the most widely adopted quantitative prioritization methods in product management.
What Is the RICE Prioritization Framework?
The RICE framework is a scoring model that helps product teams make objective decisions about which features, projects, or initiatives to pursue. Developed by Sean McBride and popularized by Intercom, RICE replaces gut-feel prioritization with a structured, repeatable formula that considers both the potential upside and the cost of each initiative.
RICE stands for:

- Reach: how many people the initiative will affect within a set time period
- Impact: how much it will affect each of those people
- Confidence: how certain you are about your Reach and Impact estimates
- Effort: how much total work it will take to ship
The beauty of RICE lies in its simplicity. By reducing prioritization to a single numerical score, it gives teams a common language for comparing wildly different initiatives -- from a small UX tweak to a major platform overhaul.
The RICE Formula Explained
The core formula is straightforward:
RICE Score = (Reach x Impact x Confidence) / Effort
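To make the arithmetic concrete, here's a minimal sketch of the calculation in Python. The function name is arbitrary, and the example Impact and Confidence values are assumed for illustration:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute a RICE score.

    reach      -- people or accounts affected per time period (e.g., per quarter)
    impact     -- 0.25, 0.5, 1, 2, or 3 on the standardized scale
    confidence -- expressed as a decimal: 0.5, 0.8, or 1.0
    effort     -- total person-months across all disciplines
    """
    if effort <= 0:
        raise ValueError("Effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort


# Example: the CSV export initiative from the tables below (Reach 800/quarter,
# Effort 1 person-month); Impact 1 and Confidence 80% are assumed here.
print(rice_score(reach=800, impact=1, confidence=0.8, effort=1))  # 640.0
```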
Let's break down each component with precise definitions so your team scores consistently.
Reach
Reach measures how many users or customers will be affected by an initiative within a defined time period (typically one quarter). Use real data wherever possible.
How to estimate Reach:

- Pull numbers from product analytics (signups, active users, feature usage) rather than guessing
- When analytics aren't available, count support tickets, feature requests, or accounts in the sales pipeline
- Always scope the estimate to a defined time period, typically one quarter
Examples:
| Initiative | Reach Estimate | Source |
|---|---|---|
| Redesign onboarding flow | 5,000 new signups/quarter | Signup analytics |
| Add CSV export | 800 users requesting/quarter | Support tickets + feature requests |
| Mobile app push notifications | 12,000 active mobile users/quarter | Mobile analytics |
| Enterprise SSO integration | 50 enterprise accounts/quarter | Sales pipeline |
Always express Reach as a number of people or accounts per time period. Avoid vague terms like "a lot" or "most users."
Impact
Impact measures how much this initiative will move the needle for each person reached. Since individual impact is harder to quantify than reach, RICE uses a standardized scale:
| Score | Label | Meaning |
|---|---|---|
| 3 | Massive | Transforms the user experience or eliminates a critical blocker |
| 2 | High | Significant improvement that meaningfully changes behavior |
| 1 | Medium | Noticeable improvement |
| 0.5 | Low | Minor improvement |
| 0.25 | Minimal | Barely noticeable |
Guidelines for scoring Impact:

- Tie Impact to a specific metric you're trying to move: activation rate, retention, NPS, revenue, or time-to-value.
- Stick to the standardized scale rather than inventing in-between values, so scores stay comparable across initiatives.
- Require a written justification for any score of 2 or 3 (see the pitfalls section below).
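If you track scores in a spreadsheet or script, encoding the scale as a fixed lookup table is one way to keep scorers honest; a small sketch (the dictionary name is arbitrary):

```python
# Standardized Impact scale from the table above. Scoring against a fixed
# lookup discourages ad-hoc values like 1.5 that break comparability.
IMPACT_SCALE = {
    "massive": 3,
    "high": 2,
    "medium": 1,
    "low": 0.5,
    "minimal": 0.25,
}
```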
Confidence
Confidence is a percentage that reflects how sure you are about your Reach and Impact estimates. This is the factor that keeps teams honest -- it penalizes wishful thinking.
| Score | Label | Criteria |
|---|---|---|
| 100% | High | Backed by quantitative data (analytics, A/B test results, large sample research) |
| 80% | Medium | Supported by qualitative data (user interviews, surveys, competitive analysis) |
| 50% | Low | Based on intuition, anecdotal feedback, or very small sample sizes |
Rules of thumb:

- If your estimates are backed by quantitative data, you can score up to 100%.
- If you only have qualitative evidence, cap Confidence at 80%.
- If you're working from intuition or anecdotes, cap it at 50%.
- Anything below 50% is a signal that the initiative needs more discovery before it's scored (see the minimum confidence threshold later in this guide).
Effort
Effort is measured in person-months (or person-weeks, or story points -- just be consistent across all initiatives). This is the total effort across all disciplines: engineering, design, QA, data science, marketing, and anything else required.
How to estimate Effort:

- Ask each discipline involved (engineering, design, QA, and so on) for its own estimate, then sum them.
- Use the same unit for every initiative; person-months is the most universally understood.
- Round to rough increments rather than implying precision you don't have.
Examples:
| Initiative | Engineering | Design | QA | Total Effort |
|---|---|---|---|---|
| Redesign onboarding | 2 months | 1 month | 0.5 months | 3.5 person-months |
| CSV export | 0.5 months | 0.25 months | 0.25 months | 1 person-month |
| Push notifications | 1.5 months | 0.5 months | 0.5 months | 2.5 person-months |
| Enterprise SSO | 3 months | 0.5 months | 1 month | 4.5 person-months |
Step-by-Step: How to Run a RICE Scoring Session
Step 1: Prepare Your Candidate List
Gather all features, projects, and initiatives being considered. Aim for 10-25 items -- too few and you don't need a framework, too many and the session becomes exhausting.
Step 2: Align on Definitions
Before scoring, ensure everyone agrees on:

- The time period Reach is measured over (e.g., one quarter)
- The Impact scale and what each level means
- The Confidence criteria for 100%, 80%, and 50%
- The unit of Effort (person-months, person-weeks, or story points)
Step 3: Score Each Initiative
Work through each initiative as a team. For each one:

1. Estimate Reach from real data for the agreed time period.
2. Assign an Impact score from the standardized scale.
3. Assign a Confidence percentage based on the evidence behind your estimates.
4. Have the people doing the work estimate Effort.
5. Calculate the score: (Reach x Impact x Confidence) / Effort.
Step 4: Rank and Discuss
Sort all initiatives by RICE score from highest to lowest. Then have a critical discussion:

- Do the rankings match your intuition? If something feels wrong, dig into the underlying estimates.
- Which assumptions are driving the highest and lowest scores, and how solid are they?
- Are any low-scoring items strategically important enough to override the ranking?
Step 5: Make Decisions
Use the RICE scores as a strong input to your prioritization, not the final word. Adjust for strategic considerations, dependencies, and team capacity.
Real-World RICE Scoring Example
Imagine you're a product manager at a B2B SaaS company with 10,000 active users. Your team has four initiatives to compare:
| Initiative | Reach | Impact | Confidence | Effort | RICE Score |
|---|---|---|---|---|---|
| Smart search with filters | 6,000/quarter | 2 (High) | 80% | 3 person-months | 3,200 |
| Bulk action toolbar | 4,000/quarter | 1 (Medium) | 100% | 1 person-month | 4,000 |
| Dashboard customization | 8,000/quarter | 1 (Medium) | 50% | 4 person-months | 1,000 |
| Slack integration | 2,000/quarter | 2 (High) | 80% | 2 person-months | 1,600 |
Calculations:

- Smart search with filters: (6,000 x 2 x 0.8) / 3 = 3,200
- Bulk action toolbar: (4,000 x 1 x 1.0) / 1 = 4,000
- Dashboard customization: (8,000 x 1 x 0.5) / 4 = 1,000
- Slack integration: (2,000 x 2 x 0.8) / 2 = 1,600
The bulk action toolbar wins despite having lower reach and impact than some alternatives because it's fast to build and the team has high confidence in the estimates. Dashboard customization, despite reaching the most users, ranks last because the low confidence score and high effort drag it down.
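As a sanity check, the same ranking can be reproduced in a few lines of Python; the list below simply mirrors the table above, and the structure is illustrative rather than prescriptive:

```python
initiatives = [
    # (name, reach per quarter, impact, confidence, effort in person-months)
    ("Smart search with filters", 6000, 2, 0.80, 3),
    ("Bulk action toolbar", 4000, 1, 1.00, 1),
    ("Dashboard customization", 8000, 1, 0.50, 4),
    ("Slack integration", 2000, 2, 0.80, 2),
]

scored = [
    (name, (reach * impact * confidence) / effort)
    for name, reach, impact, confidence, effort in initiatives
]

# Rank from highest to lowest RICE score (Step 4 above)
for name, score in sorted(scored, key=lambda item: item[1], reverse=True):
    print(f"{name}: {score:,.0f}")
```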
When to Use RICE (and When Not To)
RICE Works Best When:

- You have a large backlog of diverse initiatives that are otherwise hard to compare
- You have usage data that makes Reach and Impact estimates defensible
- You need a transparent, repeatable process that stakeholders can inspect and challenge
RICE Is Less Effective When:

- You're weighing only a handful of items, where a full scoring exercise is overkill
- You're evaluating long-term strategic bets that won't score well on near-term reach or impact
- You lack the data to estimate Reach or Impact with any confidence, forcing most scores below 50%
RICE vs. Other Prioritization Frameworks
| Factor | RICE | MoSCoW | Weighted Scoring | Kano Model | Value vs. Effort |
|---|---|---|---|---|---|
| Quantitative | Yes | No | Yes | Partially | Partially |
| Accounts for reach | Yes | No | Optional | No | No |
| Accounts for confidence | Yes | No | No | No | No |
| Ease of use | Medium | Easy | Medium | Hard | Easy |
| Best for | Feature backlogs | Release planning | Complex criteria | Customer delight | Quick triage |
| Stakeholder buy-in | High (data-driven) | High (simple) | Medium | Low | Medium |
| Handles strategic alignment | No | Somewhat | Yes (custom criteria) | No | No |
Common Mistakes and Pitfalls
1. Inflating Impact Scores
Teams consistently overestimate Impact because they're emotionally attached to their ideas. Combat this by requiring a written justification for any Impact score of 2 or 3, tied to a specific metric and evidence.
2. Ignoring the Confidence Factor
Some teams set Confidence to 100% for everything, which defeats the purpose. Enforce the rule: if you don't have quantitative data, you can't score above 80%. If you don't have qualitative data, you can't score above 50%.
3. Inconsistent Effort Estimates
One team measures Effort in story points, another in weeks, another in "t-shirt sizes." Pick one unit and stick with it. Person-months is the most universally understood.
4. Scoring in a Vacuum
Never let one person score all initiatives alone. RICE works best when engineers estimate Effort, data analysts inform Reach, and product managers calibrate Impact. Cross-functional input reduces bias.
5. Treating RICE Scores as Gospel
The score is an input to your decision, not the decision itself. A feature with a RICE score of 500 might still be the right thing to build if it's strategically critical. Use RICE to inform, not to dictate.
6. Not Revisiting Scores
Conditions change. A feature you scored six months ago may have very different Reach, Impact, or Effort numbers today. Re-score your top candidates at the start of each planning cycle.
Best Practices for RICE Implementation
Calibrate as a Team
Before your first scoring session, score 3-5 past features that have already shipped. Compare the predicted RICE scores to actual outcomes. This calibration exercise helps the team develop shared intuitions for what "Impact: 2" or "Reach: 5,000" actually means.
Document Your Assumptions
For every initiative, record why you chose each score. "Reach: 6,000 because our funnel shows 6,000 users hit the search page per quarter" is far more valuable than just "6,000." When you revisit scores later, you'll know whether the assumptions still hold.
Use a Spreadsheet or Tool
RICE scoring is best done in a shared spreadsheet or purpose-built tool like IdeaPlan where everyone can see the inputs, challenge assumptions, and track scores over time. Transparency builds trust in the process.
Set a Minimum Confidence Threshold
Establish a rule: no initiative with Confidence below 50% goes into the final ranking. Instead, those items go onto a "research needed" list. This creates a healthy pipeline where discovery work feeds into prioritization.
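One way to enforce a threshold like this in a scoring script is to split candidates before ranking; a minimal sketch, assuming each initiative carries its Confidence value (the second initiative here is hypothetical):

```python
MIN_CONFIDENCE = 0.5  # anything below this goes to discovery, not the ranking

candidates = [
    # (name, reach, impact, confidence, effort) -- illustrative values
    ("Bulk action toolbar", 4000, 1, 1.00, 1),
    ("AI-assisted tagging", 3000, 2, 0.30, 2),  # hypothetical, gut-feel estimate only
]

ranked, research_needed = [], []
for name, reach, impact, confidence, effort in candidates:
    if confidence < MIN_CONFIDENCE:
        research_needed.append(name)  # needs discovery work before it can be scored
    else:
        ranked.append((name, (reach * impact * confidence) / effort))

print("Ranked:", ranked)                    # [('Bulk action toolbar', 4000.0)]
print("Research needed:", research_needed)  # ['AI-assisted tagging']
```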
Combine RICE with Strategic Themes
RICE optimizes for incremental value. To ensure you're also investing in long-term bets, layer strategic themes on top: allocate 70% of capacity to high-RICE items and 30% to strategic initiatives that might not score well on RICE but are critical for your long-term vision.
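One rough way to express that split, assuming capacity is tracked in person-months per quarter and reusing the example ranking from earlier (all numbers illustrative):

```python
quarterly_capacity = 12.0                    # person-months available (illustrative)
rice_budget = quarterly_capacity * 0.7       # 70% reserved for high-RICE items
strategic_budget = quarterly_capacity * 0.3  # 30% held for strategic bets

# Greedily fill the RICE budget from the ranked list: (name, rice_score, effort)
ranked = [
    ("Bulk action toolbar", 4000, 1),
    ("Smart search with filters", 3200, 3),
    ("Slack integration", 1600, 2),
    ("Dashboard customization", 1000, 4),
]

committed, remaining = [], rice_budget
for name, score, effort in ranked:
    if effort <= remaining:
        committed.append(name)
        remaining -= effort

print(committed)         # initiatives funded from the 70% RICE allocation
print(strategic_budget)  # capacity held back for strategic initiatives
```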
Review and Iterate
After shipping a high-scoring feature, compare predicted Reach and Impact against actual results. Did 6,000 users really use the new search? Did activation increase as expected? This feedback loop makes your future RICE estimates more accurate over time.
Getting Started with RICE Today
RICE won't solve every prioritization challenge, but it will give your team a shared vocabulary and a repeatable process for making better decisions. The framework's real power isn't in the formula itself -- it's in the structured conversations it forces your team to have about reach, impact, confidence, and effort.