Why Most OKRs Fail
Andy Grove invented OKRs at Intel in the 1970s. John Doerr brought them to Google in 1999. Since then, thousands of companies have adopted the framework, and most of them get it wrong.
The failure mode is almost always the same: teams rename their task lists as "objectives" and their deadlines as "key results." The result is a project plan wearing an OKR costume. Nothing about the team's behavior actually changes.
This guide covers how to write OKRs that shift a product team's focus from output to outcomes, with real examples, templates, and the specific mistakes to avoid.
What Makes a Good Objective
An objective answers one question: what do we want to be true at the end of this quarter that is not true today?
Good objectives share a few traits: they describe outcomes rather than activities, they are specific enough to put some work clearly out of scope, and they are ambitious but achievable within the quarter. The contrast is easiest to see in examples:
Before and After: Fixing Weak Objectives
| Weak Objective | Problem | Stronger Objective |
|---|---|---|
| Increase revenue | Too vague; does not guide the team | Make self-serve upgrades the primary revenue driver |
| Launch the new dashboard | Output, not outcome | Give users real-time visibility into their most important metrics |
| Improve performance | No specificity | Make every page load feel instant |
| Be more data-driven | Unmeasurable aspiration | Ensure every product decision is backed by quantitative evidence |
Notice the pattern: strong objectives describe a future state that someone could take a photo of. "Launch the new dashboard" is an activity. "Give users real-time visibility into their most important metrics" is a world you can picture.
How to Write Key Results That Measure What Matters
Key results answer: how will we know we achieved the objective?
Every key result needs three components: a metric, a baseline, and a target. The format is simple: [Metric] from [baseline] to [target].
The Output Test
Before finalizing any key result, run this test: could the team hit this KR and still have failed at the objective?
A key result passes the test when the answer is no: hitting the KR necessarily means progress on the objective. If the answer is yes, the KR is measuring output rather than outcome -- rewrite it.
How Many Key Results Per Objective
Two to four. Fewer than two suggests the objective is too narrow. More than four means the team will lose focus on what actually matters. Stripe reportedly caps objectives at three key results to enforce ruthless focus.
Types of Key Results
Not every KR has to be a product metric. The best OKR sets mix different types of measures: usage, speed, support load, and customer sentiment can all serve as key results.
Pick KRs that triangulate on the objective from different angles. If the objective is "Make onboarding so good that new users succeed on day one," you might measure activation rate (did they reach the aha moment?), time-to-first-value (how fast?), and onboarding support tickets (did they need help?).
Real OKR Examples by Team Type
Growth Team
Objective: Make free-to-paid conversion predictable and scalable
KR1: Increase free-to-paid conversion from 3.2% to 5.5%
KR2: Reduce median days from signup to first payment from 14 to 7
KR3: Grow organic signups from 2,000/month to 3,500/month
Why this works: Each KR attacks a different dimension -- conversion efficiency, speed, and volume. The objective is specific enough that the team knows paid acquisition campaigns are out of scope.
Platform / Infrastructure Team
Objective: Make reliability a product differentiator, not just a maintenance task
KR1: Reduce P1 incidents from 4/month to 1/month
KR2: Increase uptime from 99.5% to 99.95% during business hours
KR3: Reduce mean time to recovery from 45 minutes to 12 minutes
Why this works: Platform teams often struggle with OKRs because their work feels like "keeping the lights on." This objective reframes reliability as a strategic advantage. Spotify's platform team used a similar framing when they set OKRs around developer experience in 2019.
B2B SaaS Product Team
Objective: Give customers the insights they need to make decisions without leaving our product
KR1: 40% weekly active usage of reporting among paid accounts within 8 weeks of launch
KR2: Reduce "export to Excel" events by 50%
KR3: Reporting feature NPS of 40+ (minimum 100 survey responses)
Why this works: The team is not measured on shipping the feature -- they are measured on whether customers actually use it, whether it replaces their current workflow, and whether they find it valuable.
Consumer Product Team
Objective: Make the mobile app the way most users experience the product
KR1: Increase mobile DAU/MAU from 28% to 42%
KR2: Reach mobile task completion parity with desktop (currently 64% vs 89%)
KR3: Grow mobile-first signups from 35% to 55% of total
Cascading OKRs Without Creating Bureaucracy
OKRs connect product strategy to team execution. The cascade works like this:
```
COMPANY OBJECTIVE
"Become the default tool for mid-market product teams"
 |
 +-- PRODUCT TEAM
 |     "Reduce time-to-value so new teams see results on day one"
 |     KR: Activation rate from 23% to 45%
 |
 +-- ENGINEERING TEAM
 |     "Make the product fast enough that speed is a competitive edge"
 |     KR: P95 page load from 3.2s to 0.8s
 |
 +-- SALES TEAM
       "Win mid-market deals by proving faster time-to-value"
       KR: Mid-market win rate from 18% to 30%
```
Three Rules for Cascading
1. Align, do not copy. Team OKRs should contribute to company OKRs, not duplicate them. The product team does not need "increase revenue" as an objective. The product team's objective should describe the product change that enables revenue growth.
2. Limit depth to two levels. Company to team is enough for most organizations. Company to department to team to individual creates overhead that kills velocity. Linear reportedly uses just company and team-level OKRs with no individual OKRs.
3. Allow bottom-up input. Leadership provides direction; teams propose how they will contribute. Then negotiate. The people closest to the work write the best key results because they know what is actually measurable.
Not everything needs to cascade. Some team OKRs address team-specific problems (technical debt, tooling, hiring) that do not map to a company objective. That is fine. Aim for 60-70% alignment, not 100%.
The Quarterly OKR Cycle
Weeks -2 to -1: Draft and Align
Teams draft their OKRs against the company objectives, leadership reviews the drafts, and both sides negotiate targets before the quarter starts.
Week 1: Kick Off
Share finalized OKRs with every person on the team. Set up tracking -- a shared doc, a dashboard, a Notion page. The format does not matter. Visibility does.
Weeks 2-11: Execute and Check In
Run a short weekly check-in: update each KR's current value, flag blockers, and adjust tactics. Five minutes is enough to keep the OKRs alive.
Weeks 12-13: Score and Reflect
Grade each KR, run a retrospective, and feed learnings into next quarter's draft. The retrospective should answer three questions: What drove success? What blocked progress? What will we do differently?
Grading: Keep It Simple
Google uses a 0.0 to 1.0 scale where 0.6-0.7 is considered success (because they set intentionally aspirational targets). This works at Google. It confuses most other teams.
For most product teams, a three-level system is clearer:
| Grade | Range | What It Signals |
|---|---|---|
| Hit | 80% or more of target | Targets may have been too safe -- push harder next quarter |
| Progressed | 40-79% of target | Healthy result -- ambitious target with real effort |
| Missed | Below 40% | Needs a retrospective -- wrong OKR, wrong tactics, or wrong priority |
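The three-level grading is simple arithmetic: measure how much of the baseline-to-target gap was closed, then bucket the result. A minimal sketch in Python (a hypothetical helper, not part of any OKR tool; assumes progress is linear between baseline and target):

```python
def kr_progress(baseline: float, target: float, current: float) -> float:
    """Fraction of the baseline-to-target gap closed so far.

    Works for both increasing targets (e.g. conversion 3.2% -> 5.5%)
    and decreasing targets (e.g. P1 incidents 4/month -> 1/month).
    """
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return (current - baseline) / (target - baseline)


def grade(progress: float) -> str:
    """Bucket progress into the three-level system from the table above."""
    if progress >= 0.80:
        return "Hit"
    if progress >= 0.40:
        return "Progressed"
    return "Missed"


# Example: activation rate moved from 23% to 38% against a 45% target.
p = kr_progress(baseline=23, target=45, current=38)
print(f"{p:.0%} of target -> {grade(p)}")  # 68% of target -> Progressed
```

Note that a "reduce" KR needs no special handling: moving incidents from 4/month to 2/month against a target of 1/month scores the same 67% as the equivalent upward move.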
Two critical rules about grading:
Never tie OKR scores to compensation. The moment KR scores affect bonuses, every PM in the building will set easy targets. Google explicitly separates OKR grading from performance reviews.
Never use scores to assign blame. A missed OKR should trigger a learning conversation: "What did we learn?" not "Whose fault is it?"
Eight Common Mistakes
1. Writing output OKRs. "Ship feature X by March" is a project milestone, not an OKR. Always ask what user or business outcome the feature creates.
2. Setting too many objectives. More than five per team means you have not prioritized. If everything is important, nothing is. Use a framework like RICE to force-rank before setting OKRs.
3. Skipping baselines. "Improve activation to 45%" is meaningless if you do not state that you are starting from 23%. Baselines make progress visible and targets credible.
4. Setting and forgetting. Writing OKRs in January and checking them in March is filing paperwork, not managing outcomes. Weekly check-ins take five minutes and keep OKRs alive.
5. Cascading too aggressively. Individual-level OKRs create bureaucracy and gaming. Keep OKRs at the team level. Individuals contribute through their sprint work, not through personal OKR scores.
6. Confusing OKRs and KPIs. "Maintain 99.9% uptime" is a KPI -- an ongoing health metric you monitor continuously. An OKR would be "Improve uptime from 99.5% to 99.9%." OKRs drive change; KPIs monitor steady state.
7. Writing fuzzy key results. "Improve customer satisfaction" is not a KR. "Increase NPS from 32 to 45" is. Every KR needs a number.
8. Quarterly OKRs for annual problems. Platform migrations, market entry, and major pivots take multiple quarters. Use annual objectives with quarterly milestones as key results, or accept that one quarter's OKR will only cover a phase of the larger effort.
Connecting OKRs to Your Product System
OKRs do not exist in isolation. They sit between strategy and execution:
When the system works, it looks like this: strategy sets direction, OKRs define quarterly targets, prioritization selects the work, and the team ships against those priorities with clear success criteria.
Getting Started
If your team has never used OKRs, do not roll them out company-wide. Start with one team for one quarter. Use three objectives, each with two to three key results. Run the full cycle: draft, align, execute with weekly check-ins, grade, retrospect.
After one quarter, you will know what works for your team's context. Then expand. Google did not start with company-wide OKRs in 1999 -- they started with one team and iterated.
For a ready-to-use starting structure, see our OKR template.