Prioritization in Practice: How Three PMs Approach It Differently

Three PM profiles — early-stage, growth, enterprise — and how their prioritization approach differs. Context determines the right framework.

By Tim Adair • Published 2025-11-20 • Last updated 2026-02-12

The Framework Is Not the Answer

Every PM conference has a session on prioritization frameworks. RICE, ICE, MoSCoW, Weighted Scoring, Value vs. Effort — the options are endless. But here is the thing: the framework matters far less than the context.

A RICE score that makes perfect sense at a growth-stage SaaS company is useless at a pre-revenue startup. A weighted scoring model that serves an enterprise PM well would paralyze a team of four. The right prioritization approach depends on your stage, your constraints, and the type of decisions you are actually making.

To make this concrete, here are three fictional PMs — based on composites of real people I have worked with — and how they each approach prioritization in their specific context.

Profile 1: Maya — Early-Stage Startup PM

The context

Maya is the first PM at a 12-person startup. They have $2M in seed funding, 8 months of runway, and about 200 beta users. The product is a collaboration tool for design teams. Maya reports directly to the CEO-founder and works with 4 engineers and 1 designer.

Her prioritization problem

Maya does not have too many features to choose from. She has too many directions the product could go. Should they double down on the file-sharing workflow that beta users love? Build the review-and-approval flow that three enterprise prospects asked about? Or invest in integrations with Figma and Sketch that seem like table stakes?

At this stage, the question is not "which feature next?" It is "which bet gives us the best chance of finding product-market fit before the money runs out?"

Her approach: The One-Metric Focus

Maya uses a single metric as her prioritization filter: weekly active users who complete at least one design review. She chose this metric because it represents the core value prop — if users are reviewing designs in the product weekly, the product is working.

Every feature candidate gets one question: "Will this measurably increase weekly active reviewers within 6 weeks?"

  • File sharing improvements: Maybe. Current users like it, but it does not drive new activation.
  • Review-and-approval flow: Yes. This is the specific workflow that makes users come back weekly.
  • Figma integration: Maybe later. Nice-to-have but not blocking anyone from using the product.

Maya does not use a scoring framework. She does not need one. With 4 engineers and 8 months of runway, every sprint is a strategic bet. She runs rapid hypothesis-driven development cycles: build the smallest version, ship it to beta users, measure whether it moves the one metric.
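
Because the filter is one precisely defined metric, it can be computed straight from an event log. Here is a minimal sketch in Python; the event stream, the `review_completed` event name, and the tuple schema are all invented for illustration, not taken from Maya's actual product.

```python
from datetime import date, timedelta

# Hypothetical event log: (user_id, event_name, day the event happened).
events = [
    ("u1", "review_completed", date(2025, 11, 3)),
    ("u1", "file_shared",      date(2025, 11, 4)),
    ("u2", "review_completed", date(2025, 11, 5)),
    ("u3", "review_completed", date(2025, 11, 12)),
]

def weekly_active_reviewers(events, week_start):
    """Count distinct users who completed at least one design review
    in the 7-day window starting at week_start."""
    week_end = week_start + timedelta(days=7)
    return len({
        user for user, name, day in events
        if name == "review_completed" and week_start <= day < week_end
    })

print(weekly_active_reviewers(events, date(2025, 11, 3)))  # -> 2 (u1 and u2)
```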

What makes this approach work at this stage

  • Speed over precision. Spending 2 hours scoring features with RICE is 2 hours not building. At this stage, the cost of analysis is higher than the cost of a wrong bet (because you can recover from a wrong bet in 2 weeks).
  • Founder alignment. One metric that the PM and CEO agree on eliminates daily prioritization debates.
  • Focus. Early-stage teams that try to do three things at once do none of them well.

What would break this approach

If Maya had 50 engineers and 500 feature requests, a single metric would not help her allocate across multiple teams and timelines. She would need a framework that works at that scale. This is a stepping stone, not a permanent system.

Profile 2: Jordan — Growth-Stage PM

The context

Jordan is one of 6 PMs at a B2B SaaS company with 150 employees, $30M ARR, and 2,000 customers. Jordan owns the "onboarding and activation" product area with a team of 8 engineers and 2 designers. The company has product-market fit and is focused on efficient growth.

His prioritization problem

Jordan has more validated ideas than capacity. His backlog contains 40 items: improvements to the onboarding flow, new activation triggers, integrations requested by sales, accessibility improvements, and technical debt from three years of rapid building. His team can realistically ship 12-15 things per quarter.

The question is not "what should we build?" It is "in what order should we build the things we already know are valuable?"

His approach: Modified RICE with Team Input

Jordan uses the RICE framework as a starting point, but with two modifications:

Modification 1: Engineers contribute to the effort estimate collaboratively. Instead of the PM guessing effort, Jordan runs a quarterly estimation session where engineers t-shirt size every item in the backlog. This takes 90 minutes and produces dramatically more accurate estimates than PM guesswork.

Modification 2: Confidence is earned, not assumed. Jordan requires specific evidence for each confidence score:

Confidence level   Evidence required
High (100%)        Customer research + quantitative validation (A/B test, analytics data)
Medium (80%)       Customer research OR quantitative data, but not both
Low (50%)          PM intuition or single anecdote

This prevents the common RICE failure mode where PMs give everything 80% confidence because it feels reasonable. Scoring a few features, as in the sketch below, shows how confidence adjustments change the ranking.
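
Here is a minimal sketch of that scoring with invented backlog items; the confidence multipliers mirror the evidence table above, while the reach, impact (on the conventional 0.25-3 scale), and effort numbers are made up for illustration.

```python
# Confidence is gated by evidence tier (see the table above), not chosen freely.
CONFIDENCE = {"high": 1.0, "medium": 0.8, "low": 0.5}

def rice(reach, impact, evidence, effort):
    """Classic RICE: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * CONFIDENCE[evidence] / effort

# Hypothetical items: (name, reach/quarter, impact, evidence tier, effort in person-weeks).
backlog = [
    ("Onboarding checklist",   1000, 2.0, "medium", 4),
    ("Salesforce integration",  800, 3.0, "low",    4),
    ("Empty-state redesign",    600, 1.0, "high",   2),
]

for name, reach, impact, evidence, effort in sorted(
        backlog, key=lambda item: -rice(*item[1:])):
    print(f"{name:24s} RICE = {rice(reach, impact, evidence, effort):6.1f}")
# Onboarding checklist     RICE =  400.0
# Salesforce integration   RICE =  300.0
# Empty-state redesign     RICE =  300.0
```

Bump the Salesforce item from "low" to "medium" evidence, say by actually running the customer research, and its score rises to 480 and overtakes the checklist: under this system, gathering evidence literally buys ranking position.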

The quarterly planning ritual:

  • Jordan scores the top 30 backlog items using RICE (2 hours)
  • The team reviews the ranked list together (1 hour)
  • They identify items where the ranking feels wrong and discuss why (30 minutes)
  • They commit to the top 12-15 items, grouped into 3 monthly themes (30 minutes)

What makes this approach work at this stage

  • Transparency. When a stakeholder asks "why didn't you build X?", Jordan can show the RICE score and explain the trade-off.
  • Team ownership. Engineers who participate in estimation feel ownership over the plan, not just assignment to tasks.
  • Calibration over time. After each quarter, Jordan compares predicted RICE scores against actual outcomes (a simple version of this check is sketched below). This feedback loop makes the scoring more accurate each cycle.
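
A minimal version of that feedback loop is a predicted-versus-actual comparison over last quarter's shipped items. Everything below is invented, field names included; the shape of the check is the point, and deflating next quarter's estimates is one way to use the bias, not necessarily Jordan's.

```python
# Predicted vs. actual reach for shipped items (invented numbers).
shipped = [
    ("Onboarding checklist", {"predicted_reach": 1000, "actual_reach": 640}),
    ("Empty-state redesign", {"predicted_reach":  600, "actual_reach": 580}),
]

for name, r in shipped:
    ratio = r["predicted_reach"] / r["actual_reach"]
    print(f"{name:24s} over-predicted by x{ratio:.2f}")

# A consistent bias (say ~1.3x over-prediction) becomes a deflation factor
# applied to next quarter's reach estimates before scoring.
```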

What would break this approach

If Jordan were managing across multiple teams or stakeholder groups with competing priorities, RICE alone would not resolve conflicts. Political alignment and executive decision-making would be needed alongside the framework. RICE is a tool for within-team prioritization, not for cross-organizational negotiation.

Profile 3: Priya — Enterprise PM

The context

Priya is a Senior PM at a public enterprise software company with 3,000 employees. She manages a platform area that other product teams build on. Her "customers" are both external users (large enterprises) and internal teams (5 product teams that depend on her platform). She has 20 engineers and 4 designers.

Her prioritization problem

Priya's prioritization challenges are multi-dimensional:

  • External customers need reliability, compliance features, and specific integrations.
  • Internal teams need new platform capabilities to ship their own features.
  • Engineering needs time for technical debt and infrastructure upgrades.
  • Leadership has strategic initiatives tied to company OKRs.

She cannot optimize for one dimension without under-serving the others.

Her approach: Portfolio Allocation + Weighted Scoring Within Buckets

Priya uses a two-level system:

Level 1: Portfolio allocation (quarterly)

She allocates her team's capacity across four buckets:

Bucket                     % of capacity   Decision maker
Customer-facing features   40%             Priya + customer advisory board
Internal platform work     25%             Priya + dependent PM teams
Technical health           20%             Engineering lead
Strategic initiatives      15%             VP of Product

These percentages are negotiated each quarter based on business context. If the company is pushing for enterprise compliance certifications, the customer-facing bucket grows. If platform stability is degrading, the technical health bucket grows.
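
The allocation itself is plain arithmetic over team capacity, which makes the quarterly negotiation easy to sanity-check. A sketch using the percentages from the table above; the assumption of 20 engineers working a 12-week quarter is illustrative, not a fixed rule.

```python
ENGINEERS, WEEKS = 20, 12
capacity = ENGINEERS * WEEKS  # 240 engineer-weeks per quarter

buckets = {  # shares from the allocation table above
    "Customer-facing features": 0.40,
    "Internal platform work":   0.25,
    "Technical health":         0.20,
    "Strategic initiatives":    0.15,
}
assert abs(sum(buckets.values()) - 1.0) < 1e-9  # shares must cover all capacity

for name, share in buckets.items():
    print(f"{name:26s} {share * capacity:5.0f} engineer-weeks")
```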

Level 2: Weighted scoring within each bucket

Within each bucket, Priya uses a weighted scoring model with criteria specific to that bucket:

Customer-facing features are scored on: revenue impact (30%), customer breadth (25%), competitive necessity (20%), effort (15%), strategic alignment (10%).

Internal platform work is scored on: number of teams unblocked (40%), effort (30%), architectural impact (30%).

Technical health items are scored by the engineering lead using their own criteria (severity, blast radius, fix difficulty).
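
Within a bucket, the weighted score is just a dot product of criterion scores and weights. A sketch using the customer-facing weights above; the 1-5 item scores are invented, and inverting effort so that cheaper work scores higher is one reasonable convention, not necessarily Priya's.

```python
# Customer-facing bucket weights, as listed above.
WEIGHTS = {
    "revenue_impact":        0.30,
    "customer_breadth":      0.25,
    "competitive_necessity": 0.20,
    "effort":                0.15,  # scored inverted: 5 = trivial, 1 = huge
    "strategic_alignment":   0.10,
}

def weighted_score(scores):
    """Dot product of 1-5 criterion scores and the bucket's weights."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Hypothetical feature, scored 1-5 on each criterion.
audit_log_export = {
    "revenue_impact": 4, "customer_breadth": 3,
    "competitive_necessity": 5, "effort": 2, "strategic_alignment": 4,
}
print(f"{weighted_score(audit_log_export):.2f}")  # -> 3.65
```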

What makes this approach work at this stage

  • Stakeholder management. Each bucket has a clear owner and clear criteria. When the sales VP asks why their request is not prioritized, Priya can show the customer-facing scoring and explain the trade-off against higher-scoring items.
  • Engineering autonomy. The 20% technical health bucket is fully delegated to the engineering lead. This builds trust and ensures technical debt is addressed systematically, not as an afterthought.
  • Cross-team fairness. Internal teams can see the platform backlog and understand why their request is at position 7, not position 2.

What would break this approach

At a startup, the overhead of maintaining four scoring models, running quarterly allocation negotiations, and managing multi-stakeholder input would consume more time than the decisions warrant. This system makes sense when you have 20 engineers and 4 dimensions of demand. It is overkill for a team of 5.

The Meta-Lesson

The right prioritization approach is determined by three factors:

1. Team size

Team size        Approach
1-8 engineers    Single metric focus or simple rank-ordering
8-20 engineers   Scoring framework (RICE, ICE, weighted)
20+ engineers    Portfolio allocation + scoring within buckets

2. Decision frequency

If you are making prioritization decisions weekly (early stage), you need a lightweight system. If you are making them quarterly (enterprise), you can afford a heavier process because the decision has to hold for 3 months.

3. Stakeholder complexity

Solo PM with one team and one founder: just decide. PM with multiple stakeholder groups: you need a system that creates transparency and perceived fairness.

How to Choose Your Approach

Ask yourself (the sketch after this list turns the answers into a rough decision rule):

  • How many items am I choosing between? If fewer than 10, rank them. If 10-50, score them. If 50+, categorize first, then score within categories.
  • Who needs to buy into the decision? If just you and the team, any system works. If multiple stakeholders, the system needs to be visible and defensible.
  • How long do decisions need to hold? If you will re-evaluate in 2 weeks, be fast and imprecise. If the decision sets the quarter's work, invest more in the analysis.
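
Taken together, the three questions compose into a rough starting rule. The thresholds below come straight from the checklist; the output labels are shorthand, not formal framework names.

```python
def suggest_approach(n_items: int, stakeholder_groups: int, holds_for_weeks: int) -> str:
    """Map the three checklist questions to a starting prioritization approach."""
    if n_items < 10:
        approach = "rank-order the list"
    elif n_items <= 50:
        approach = "score items (RICE/ICE/weighted)"
    else:
        approach = "categorize first, then score within categories"
    if stakeholder_groups > 1:
        approach += "; publish the scoring so decisions are visible and defensible"
    if holds_for_weeks <= 2:
        approach += "; stay fast and imprecise"
    return approach

print(suggest_approach(n_items=40, stakeholder_groups=3, holds_for_weeks=12))
```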

The worst prioritization approach is the one you do not actually use. A simple system applied consistently beats a sophisticated system applied sporadically. Start with the lightest-weight approach that works for your context, and add complexity only when the lightweight approach starts to fail.

Tim Adair

Strategic executive leader and author of all content on IdeaPlan. Background in product management, organizational development, and AI product strategy.
