
Product Management Frameworks Compared: Which One to Use When

A practical comparison of 8 major PM frameworks — RICE, ICE, MoSCoW, Kano, HEART, Jobs-to-be-Done, North Star, and OKRs — with guidance on when each one works best.

By Tim Adair • Published 2026-02-12

Product management has a framework for everything. Prioritization, measurement, discovery, goal-setting, strategy — there is an acronym and a 2x2 matrix for each one. The problem is not a lack of frameworks. The problem is knowing which one to use when and which ones to ignore.

This guide compares the 8 frameworks you are most likely to encounter, with honest assessments of when each one works, when it breaks down, and what it is actually good for.

Quick Answer

No single framework covers all of product management. You need different tools for different jobs: a prioritization framework for deciding what to build next, a measurement framework for tracking success, a discovery framework for understanding user needs, and a goal-setting framework for aligning the team. Start with RICE + HEART + JTBD + OKRs. Adapt from there.

Key Frameworks by Purpose:

  • Prioritization: RICE, ICE, MoSCoW
  • Measurement: HEART, North Star
  • Discovery: Jobs-to-be-Done
  • Goal-setting: OKRs, North Star
  • Feature analysis: Kano Model

Best For: PMs who want to choose the right framework for their situation instead of using the same one for everything


    The Comparison Table

    Framework  | Purpose              | Speed  | Best For                             | Worst For
    RICE       | Prioritization       | Medium | Backlog ranking, stakeholder debates | Early-stage discovery
    ICE        | Prioritization       | Fast   | Quick decisions, small teams         | Complex trade-offs
    MoSCoW     | Scoping              | Fast   | Fixed-deadline projects              | Ongoing prioritization
    Kano       | Feature analysis     | Slow   | Understanding feature value types    | Sprint-level decisions
    HEART      | UX measurement       | Medium | Tracking user experience quality     | Revenue/business metrics
    JTBD       | Discovery            | Slow   | Understanding user needs             | Execution prioritization
    North Star | Strategy/measurement | Medium | Company-wide alignment               | Team-level decisions
    OKRs       | Goal-setting         | Medium | Quarterly planning, alignment        | Day-to-day prioritization

    RICE: The Workhorse Prioritization Framework

    Score = (Reach x Impact x Confidence) / Effort

    The RICE framework was developed at Intercom and has become one of the most widely used prioritization frameworks in product management. Use the RICE Calculator for quick scoring.

    How It Works

  • Reach: How many users will this affect in a given time period? Use actual numbers (not "high/medium/low").
  • Impact: How much will this change behavior for each user? Scale: 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal.
  • Confidence: How sure are you about your reach and impact estimates? 100% = validated with data, 80% = strong evidence, 50% = educated guess.
  • Effort: How many person-months of work? Includes design, engineering, QA, and any other required effort.

    When RICE Works Best

  • You have a backlog of 20+ items and need to rank them
  • Stakeholders are debating priorities based on opinion rather than evidence
  • You need a defensible, data-informed answer to "Why are we building this instead of that?"
  • Your team has at least basic data on user reach and past feature impact

    When RICE Breaks Down

  • Early-stage products with no usage data: You cannot estimate Reach or Impact accurately when you have 50 users. RICE is most useful when you have thousands of users and historical data.
  • Strategic bets: A new product line might score poorly on RICE (low confidence, high effort) but be the right strategic investment. RICE optimizes for incremental improvement, not breakthrough innovation.
  • Comparing apples to oranges: A performance improvement and a new feature are hard to score on the same scale.

    Real Example

    Intercom used RICE to prioritize their messenger product backlog. A feature that reached 80% of users (Reach: 50,000/quarter), with medium impact (1), high confidence (80%), and 2 person-months of effort scored: (50,000 x 1 x 0.8) / 2 = 20,000. This beat a feature reaching only 5% of users even though the smaller feature was more exciting to the team. RICE depersonalized the decision.
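
    The arithmetic is simple enough to script. As a minimal sketch, the function below reproduces the Intercom-style calculation above (the function and argument names are ours, not part of the framework):

```python
def rice_score(reach: float, impact: float, confidence: float, effort_person_months: float) -> float:
    """RICE score: (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort_person_months

# The example above: Reach 50,000 users/quarter, Impact 1 (medium),
# Confidence 0.8 (80%), Effort 2 person-months.
print(rice_score(reach=50_000, impact=1, confidence=0.8, effort_person_months=2))  # 20000.0
```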

    For a deeper look at the framework, see the RICE comparison guide.


    ICE: RICE's Faster, Rougher Cousin

    Score = Impact x Confidence x Ease

    ICE was popularized by Sean Ellis (GrowthHackers) for growth teams that need to move fast.

    How It Works

  • Impact: How much will this move the target metric? Scale 1-10.
  • Confidence: How sure are you? Scale 1-10.
  • Ease: How easy is this to implement? Scale 1-10 (10 = trivial, 1 = massive effort).
  • Multiply the three scores. Higher total = higher priority.
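
    In practice ICE usually lives in a spreadsheet, but a minimal sketch makes the triage mechanical (the ideas and scores below are illustrative, not from any real backlog):

```python
# Each idea gets 1-10 scores for Impact, Confidence, and Ease.
ideas = {
    "Simplify signup form": (7, 6, 8),   # illustrative scores
    "Referral incentive":   (8, 4, 5),
    "Onboarding checklist": (6, 7, 9),
}

# ICE score = Impact x Confidence x Ease; rank highest first.
ranked = sorted(ideas.items(), key=lambda kv: kv[1][0] * kv[1][1] * kv[1][2], reverse=True)
for name, (impact, confidence, ease) in ranked:
    print(f"{impact * confidence * ease:4d}  {name}")
```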

    When ICE Works Best

  • Growth teams running rapid experiments (test 10 ideas per week)
  • Small teams where everyone has shared context and does not need detailed justification
  • Early prioritization passes where you need to quickly separate promising ideas from long shots

    When ICE Breaks Down

  • Stakeholder presentations: ICE's 1-10 scores are arbitrary and hard to defend. "I gave this a 7 on Impact" invites "Why not an 8?" debates that never converge.
  • Comparing across teams: One team's "8 Impact" is another team's "5." ICE scores are not calibrated across groups.
  • Anything requiring nuance: The difference between a 6 and a 7 on any dimension is meaningless, but it changes the ranking.

    ICE vs. RICE

    Dimension     | ICE                             | RICE
    Speed         | Faster (intuitive scoring)      | Slower (requires data)
    Rigor         | Lower (subjective 1-10 scales)  | Higher (actual reach numbers, calibrated impact)
    Defensibility | Weak (hard to justify scores)   | Strong (data-backed)
    Best for      | Growth experiments, small teams | Backlog prioritization, stakeholder alignment

    Use ICE for quick triage. Switch to RICE when you need rigor.


    MoSCoW: The Scope Control Framework

    Must Have / Should Have / Could Have / Won't Have

    MoSCoW was developed by Dai Clegg for DSDM (Dynamic Systems Development Method) in 1994. It is a scoping tool, not a ranking tool.

    How It Works

    Categorize every item into one of four buckets:

  • Must Have: The release is broken without this. Regulatory requirements, core functionality, blockers.
  • Should Have: Important but the release can ship without it. Significant value, but there are workarounds.
  • Could Have: Nice to have. Will be included if there is time, cut if there is not.
  • Won't Have (this time): Explicitly out of scope. Important for managing expectations.
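
    Because MoSCoW is just bucketing, it can be captured in a few lines; a minimal sketch with illustrative items (note that nothing here ranks items within a bucket, which is exactly the limitation discussed below):

```python
from collections import defaultdict

# Each backlog item is assigned to exactly one MoSCoW bucket.
backlog = [
    ("GDPR consent banner", "Must"),       # illustrative items
    ("SSO login",           "Should"),
    ("Dark mode",           "Could"),
    ("Mobile offline mode", "Won't"),
]

buckets = defaultdict(list)
for item, category in backlog:
    buckets[category].append(item)

for category in ("Must", "Should", "Could", "Won't"):
    print(f"{category}: {buckets[category]}")
```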

    When MoSCoW Works Best

  • Fixed-deadline projects (conference launch, regulatory deadline, contractual commitment)
  • MVP scoping — what is the smallest thing that works?
  • When the team is struggling with scope creep and needs clear boundaries

    When MoSCoW Breaks Down

  • Ongoing prioritization: MoSCoW does not rank items within categories. If you have 15 "Must Haves" and can only build 10, MoSCoW does not help you choose.
  • Stakeholder inflation: Everyone wants their feature to be a "Must Have." Without clear criteria for what qualifies, MoSCoW degenerates into a negotiation.
  • Continuous delivery: MoSCoW assumes a fixed release scope. If you ship continuously, the concept of "this release" does not apply.

    Real Example

    When the UK Government Digital Service built GOV.UK, they used MoSCoW extensively to scope each phase. Must Haves included accessibility compliance (legal requirement) and basic navigation. Could Haves included personalization features that would be added post-launch. This kept a massive government project from expanding indefinitely.


    Kano Model: Understanding Feature Value Types

    The Kano Model was developed by Professor Noriaki Kano in 1984. It classifies features based on how they affect customer satisfaction. Use the Kano Analyzer for interactive analysis.

    The Five Categories

  • Must-Be (Basic): Expected by customers. Their presence does not delight; their absence causes frustration. Example: a login page on a SaaS product. No one is excited about login, but everyone is frustrated if it breaks.
  • Performance (One-Dimensional): More is better, linearly. Faster load times, more storage, lower pricing. Customer satisfaction scales proportionally with the feature's quality.
  • Attractive (Delighters): Features customers did not ask for but love when they discover them. Example: Slack's custom emoji. No one requested it, but it became a core part of the product's appeal.
  • Indifferent: Customers do not care whether it exists or not. These are features you should not build.
  • Reverse: Features that some customers actively dislike. Example: auto-playing videos. Some users tolerate them; many find them annoying.

    When Kano Works Best

  • Deciding what to include in a new product or major release
  • Understanding why some features drive satisfaction and others do not
  • Preventing over-investment in Must-Be features at the expense of delighters

    When Kano Breaks Down

  • Requires survey data: Proper Kano analysis involves structured questionnaires with functional/dysfunctional question pairs. This takes time and effort.
  • Categories shift over time: Today's delighter is tomorrow's Must-Be. Auto-save was a delighter in 2005; it is expected now. Kano categorization has a shelf life.
  • Does not help with prioritization within categories: Knowing that 5 features are all "Attractive" does not tell you which one to build first.

    Real Example

    When Apple designed the original iPhone, their must-be features were phone calls, contacts, and messaging. Their performance features were browser speed and screen responsiveness. Their delighters were multitouch gestures (pinch to zoom, swipe to scroll). The delighters are what made the iPhone iconic, but the must-be features had to work first.
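
    Proper Kano analysis pairs a functional question ("How would you feel if the product had this feature?") with a dysfunctional one ("How would you feel if it did not?") and maps each answer pair to a category. The sketch below is a condensed version of the standard Kano evaluation table, assuming the usual five answer options; real studies aggregate answers from many respondents per feature:

```python
# Shared answer scale for the functional ("if the product had X...") and
# dysfunctional ("if the product did not have X...") questions.
LIKE, EXPECT, NEUTRAL, TOLERATE, DISLIKE = range(5)

def kano_category(functional: int, dysfunctional: int) -> str:
    """Condensed Kano evaluation-table lookup for a single respondent."""
    if functional == dysfunctional and functional in (LIKE, DISLIKE):
        return "Questionable"   # contradictory answers
    if functional == LIKE:
        return "Performance" if dysfunctional == DISLIKE else "Attractive"
    if dysfunctional == DISLIKE:
        return "Must-Be"
    if functional == DISLIKE or dysfunctional == LIKE:
        return "Reverse"
    return "Indifferent"

# A respondent who loves the feature's presence and merely tolerates its absence.
print(kano_category(LIKE, TOLERATE))  # Attractive
```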


    HEART: Measuring User Experience

    Happiness / Engagement / Adoption / Retention / Task Success

    The HEART framework was developed by Kerry Rodden at Google to measure user experience at scale.

    The Five Dimensions

    Dimension    | What It Measures          | Example Metric
    Happiness    | User satisfaction         | NPS score, CSAT, satisfaction survey
    Engagement   | Depth of interaction      | Sessions per user per week, features used per session
    Adoption     | New user uptake           | New signups, percentage using a new feature
    Retention    | Users coming back         | D7/D30 retention rate, churn rate
    Task Success | Ability to complete goals | Task completion rate, time to complete, error rate

    When HEART Works Best

  • You need a structured way to measure product quality beyond revenue metrics
  • You are evaluating the impact of a redesign, new feature, or UX improvement
  • You want to balance user satisfaction metrics against business metrics
  • Product reviews where you need to show how the user experience is trending

    When HEART Breaks Down

  • Measuring everything at once: HEART has 5 dimensions, and tracking all of them for every feature is excessive. Pick 2-3 dimensions that matter most for your current focus.
  • B2B products with low user volume: Happiness surveys need sample size. If you have 200 users, NPS results are noisy.
  • Revenue-focused conversations: Executives care about MRR and churn. HEART metrics need to be connected to business outcomes to resonate with leadership.
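
    The original HEART write-up pairs each dimension with a goal, a signal, and a metric, and most teams track only the two or three dimensions that matter right now. A minimal sketch of such a scorecard, with illustrative goals and metrics:

```python
# Goals -> signals -> metrics for the dimensions chosen for this quarter.
heart_scorecard = {
    "Adoption": {
        "goal":   "New users try the redesigned editor",                  # illustrative
        "signal": "First document created in the new editor",
        "metric": "% of new signups creating a document within 7 days",
    },
    "Task Success": {
        "goal":   "Users can publish without help",
        "signal": "Publish flow completed without a support ticket",
        "metric": "Publish completion rate and error rate",
    },
}

for dimension, row in heart_scorecard.items():
    print(f"{dimension}: track '{row['metric']}'")
```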


    Jobs-to-be-Done: Understanding User Needs

    Jobs-to-be-Done (JTBD) is a theory of customer behavior developed by Clayton Christensen and refined by Bob Moesta. The core idea: customers do not buy products — they "hire" products to do a job in their lives.

    How It Works

    Instead of asking "What features do users want?", ask "What job are users trying to get done?"

    The canonical example: A fast-food chain wanted to sell more milkshakes. Traditional research said: make them cheaper, add flavors, make them bigger. JTBD research discovered that most milkshakes were sold before 8 AM to commuters who "hired" the milkshake to make a boring drive interesting and keep them full until lunch. The job was not "eat a milkshake" — it was "make my commute bearable."

    The JTBD Interview

    JTBD research uses structured interviews focused on recent purchases or product adoptions:

  • What triggered the search? "I was frustrated with [old solution] because..."
  • What was the old way? "Before this product, I used to..."
  • What was the moment of decision? "I decided to switch when..."
  • What jobs is this product doing? "I use this to..." (functional), "It makes me feel..." (emotional), "People see me as..." (social)

    When JTBD Works Best

  • Discovery: understanding what to build before you prioritize
  • Product positioning: crafting messaging that speaks to the real job
  • Competitive analysis: understanding why customers switch between products
  • Innovation: finding opportunities that traditional feature research misses

    When JTBD Breaks Down

  • Execution prioritization: JTBD tells you what jobs matter. It does not rank specific features. Pair it with RICE or ICE for execution.
  • Requires skilled interviewers: JTBD interviews are harder than standard user interviews. The interviewer needs to probe for the "job" without leading the participant.
  • Slow process: Proper JTBD research takes 15-20 interviews and weeks of synthesis. It is a discovery activity, not a sprint activity.


    North Star Framework

    The North Star framework asks every product team to identify a single metric that best captures the value they deliver to customers. All other metrics ladder up to this one.

    How It Works

  • Identify your North Star Metric: A single number that reflects the core value your product delivers. Examples:
    - Spotify: Time spent listening

    - Airbnb: Nights booked

    - Slack: Messages sent within an organization (Slack famously tied activation to 2,000 messages sent)

    - Facebook: Daily Active Users (in its growth phase)

  • Identify input metrics: The 3-5 metrics that drive the North Star. For Spotify, input metrics might be: new playlist creations, new podcast subscriptions, daily recommendation engagement.
  • Align teams: Each team owns one or more input metrics. Their roadmap priorities should move their input metrics, which in turn move the North Star.
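
    One way to make the "ladder" explicit is a small metric tree: the North Star at the root, input metrics beneath it, and an owning team for each input. A sketch using the Spotify-style inputs mentioned above (the team assignments are illustrative):

```python
north_star = {
    "metric": "Time spent listening",   # the North Star
    "inputs": [
        {"metric": "New playlist creations",          "owner": "Library team"},    # illustrative owners
        {"metric": "New podcast subscriptions",       "owner": "Podcast team"},
        {"metric": "Daily recommendation engagement", "owner": "Discovery team"},
    ],
}

for inp in north_star["inputs"]:
    print(f'{inp["owner"]} moves "{inp["metric"]}", which moves "{north_star["metric"]}"')
```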

    When North Star Works Best

  • Company-wide alignment: when teams are pulling in different directions
  • Product-led companies where the core metric is clear (engagement, transactions, content creation)
  • Communicating product success to executives and the board

    When North Star Breaks Down

  • Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." Teams optimize for the metric at the expense of broader product health. Facebook's DAU obsession contributed to engagement-farming content.
  • B2B enterprise products: A single metric often does not capture the value of a complex enterprise product. Net Revenue Retention might be better.
  • Multi-product companies: Each product might need its own North Star, making company-wide alignment harder.

    Learn more about the North Star framework and use the North Star Finder tool to identify your metric.


    OKRs: The Goal-Setting Standard

    Objectives and Key Results were developed by Andy Grove at Intel and popularized by John Doerr at Google. They are the most widely adopted goal-setting framework in tech.

    How It Works

  • Objective: A qualitative, inspiring goal. "Make our onboarding the best in the industry."
  • Key Results (2-4 per objective): Quantitative outcomes that prove the objective is achieved. "Increase 7-day activation rate from 40% to 55%." "Reduce support tickets about onboarding by 30%."
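
    Because Key Results are quantitative, progress reviews can be mechanical: score each KR as the fraction of the distance from start to target, then average. A minimal sketch using the onboarding example above (the "current" values are illustrative):

```python
okr = {
    "objective": "Make our onboarding the best in the industry",
    "key_results": [
        # (description, start, target, current) -- "current" values are illustrative
        ("7-day activation rate (%)",              40.0,  55.0,  47.0),
        ("Onboarding support tickets (% change)",   0.0, -30.0, -12.0),
    ],
}

def kr_progress(start: float, target: float, current: float) -> float:
    """Fraction of the way from start to target, clamped to [0, 1]."""
    return max(0.0, min(1.0, (current - start) / (target - start)))

scores = [kr_progress(start, target, current) for _, start, target, current in okr["key_results"]]
print(f'"{okr["objective"]}": {sum(scores) / len(scores):.0%} complete')
```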

    When OKRs Work Best

  • Quarterly planning: aligning what teams will focus on
  • Cross-functional alignment: connecting engineering, design, marketing, and sales efforts
  • Accountability: creating measurable commitments without micromanaging tactics

    When OKRs Break Down

  • Output disguised as outcomes: "Ship the new onboarding flow" is an output, not an outcome. Good Key Results measure results, not activities.
  • Too many OKRs: If a team has 5 objectives with 4 key results each, that is 20 targets. No one can focus on 20 things. Limit to 2-3 objectives per quarter.
  • Cascade overload: Company OKRs cascade to department OKRs cascade to team OKRs. By the time it reaches the team, the connection to company strategy is lost in translation.
  • Day-to-day prioritization: OKRs set quarterly direction but do not tell you which ticket to work on today. Pair with RICE or a similar framework for sprint-level decisions.

    Learn more about OKRs and how they connect to product strategy.


    Framework Selection Guide

    By Situation

    Your Situation                                         | Use This Framework
    "What should we build next?"                           | RICE or ICE
    "What can we cut from this release?"                   | MoSCoW
    "What features will delight vs. what is table stakes?" | Kano
    "How do we measure UX quality?"                        | HEART
    "What do our users actually need?"                     | JTBD
    "What is our single most important metric?"            | North Star
    "What are our goals for this quarter?"                 | OKRs
    "Why should we build X instead of Y?"                  | RICE (with stakeholder data)

    By Team Maturity

    Early-stage (1-3 PMs, pre-PMF):

  • Start with JTBD for discovery and ICE for fast prioritization
  • Skip RICE (not enough data), HEART (not enough users), and OKRs (too much process)

    Growth-stage (3-8 PMs, post-PMF):

  • Add RICE for rigorous prioritization, HEART for UX measurement, and OKRs for alignment
  • JTBD continues for discovery on new product areas

    Scale-stage (8+ PMs, multiple products):

  • Full framework stack: RICE + HEART + JTBD + OKRs + North Star
  • MoSCoW for fixed-deadline projects
  • Kano for periodic feature value analysis

    Key Takeaways

  • No single framework does everything — you need different tools for prioritization, measurement, discovery, and goal-setting.
  • RICE is the best default prioritization framework — it balances rigor and practicality.
  • JTBD is the most underused framework — most teams skip discovery and jump straight to prioritization, which means they are efficiently building the wrong things.
  • OKRs work best in moderation — 2-3 objectives per quarter. More than that defeats the purpose.
  • Frameworks are tools, not religions — use them when they help, modify them when they do not, and drop them when they become bureaucratic overhead.
  • The best PMs adapt frameworks to their context — the framework police will not come for you if you modify RICE or skip one dimension of HEART.

    Frequently Asked Questions

    Which prioritization framework should I use?
    For most teams, start with RICE. It is structured enough to depersonalize debates, flexible enough to adapt to different contexts, and familiar enough that stakeholders will accept it. Switch to ICE for speed, MoSCoW for fixed-scope projects, or Kano when you need to understand which features create delight vs. which are table stakes.

    How many frameworks should a PM know?
    Know 3-4 well enough to use in practice: one prioritization framework (RICE or ICE), one measurement framework (HEART or North Star), one discovery framework (JTBD or Opportunity Solution Trees), and OKRs for goal-setting. Knowing about all 8 frameworks in this guide is useful for conversations and interviews, but you do not need to use all of them simultaneously.

    Can you combine frameworks?
    Yes, and most teams do. A common combination is: OKRs for quarterly goal-setting, RICE for backlog prioritization, HEART for measuring user experience, and JTBD for discovery research. The key is that each framework serves a different purpose — you are not running the same decision through 4 frameworks.