
Revenue-Weighted Feedback Scorecard for Prioritizing Feature Requests

by Alex

Build a scorecard that reflects demand and revenue, not just volume

Feature request backlogs usually fail for one reason: they treat every “+1” as equal. In practice, a request from a 500-seat customer who will churn without it is not the same as a request from a free user who is simply curious. The Revenue-Weighted Feedback Scorecard is a lightweight method to rank requests by two things product teams can defend in planning: segment demand and ARR impact.

This approach works whether you manage feedback in spreadsheets, a CRM, or a dedicated tool. The goal is to translate messy qualitative input into a repeatable scoring system that still leaves room for judgment.

What “Revenue-Weighted Feedback” means in plain terms

A Revenue-Weighted Feedback score is the product of two signals:

  • Demand by segment: how strongly each customer segment is asking for the capability.
  • Revenue impact: how much ARR is at risk or available if you build (or don’t build) the feature.

It’s deliberately practical. It doesn’t require perfect forecasting. It requires consistent definitions and disciplined tagging of requests by segment and account value.

The common failure mode it avoids

Many teams prioritize based on raw vote count. That rewards broad-but-shallow requests and penalizes requests that only appear in a smaller segment (for example, enterprise) where each account is worth more and where requirements are less “vote-y” and more contractual.

Step 1: Define segments you can actually operationalize

Start with 3–6 segments that reflect how your business sells and retains revenue. Over-segmentation makes the method fragile; under-segmentation hides meaningful differences.

Common segment options include:

  • Plan tier (Free / Pro / Business / Enterprise)
  • ARR band (e.g., <$2k, $2k–$10k, $10k+)
  • Industry (if needs vary sharply)
  • Use case / job-to-be-done (when it affects workflow requirements)

Choose the dimension you can reliably tag for most accounts. If you can’t tag it, you can’t score it.

Step 2: Capture demand in a way that respects duplicates and intensity

Demand is not just “number of tickets.” It’s the combination of unique accounts requesting the feature and how strongly they need it. A practical compromise is to track two fields per feature request:

  • Requesting accounts by segment (unique logos, not total comments)
  • Intensity (a simple 1–3 scale: Nice-to-have / Important / Blocking)
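As a sketch, the two demand fields could be captured per request like this (the field and class names are illustrative, not from any specific tool):

```python
from dataclasses import dataclass, field

# Intensity scale from this article: 1 = Nice-to-have, 2 = Important, 3 = Blocking
INTENSITY = {"nice-to-have": 1, "important": 2, "blocking": 3}

@dataclass
class FeatureRequest:
    title: str
    # unique requesting accounts (logos) per segment, not total comments
    accounts_by_segment: dict = field(default_factory=dict)
    # dominant intensity per segment, on the 1-3 scale above
    intensity_by_segment: dict = field(default_factory=dict)

req = FeatureRequest(
    title="SAML + SCIM improvements",
    accounts_by_segment={"enterprise": 6, "mid-market": 2},
    intensity_by_segment={"enterprise": 3, "mid-market": 2},
)
```

Storing unique accounts per segment (rather than a single vote count) is what lets the later scoring step weight segments differently.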

If you already collect feedback from multiple channels (support, sales calls, in-app portals), you’ll quickly run into duplicates and fragmented context. A platform like Canny helps centralize and deduplicate requests while keeping each one tied to its underlying accounts and segments, giving you a single place to operationalize this scoring so the scorecard doesn’t drift over time.

Step 3: Quantify ARR impact with two numbers

ARR impact sounds intimidating, but you can keep it simple. For each request, estimate:

  • Retention ARR at risk: ARR that may churn (or downgrade) if the feature isn’t addressed.
  • Expansion / new ARR potential: ARR likely to be won if the feature exists (sales-assisted or self-serve).

You don’t need precision; you need consistency. A good rule is to use bands (e.g., $0, <$5k, $5k–$25k, $25k+) and keep the criteria visible so sales and CS can apply them the same way.
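A minimal sketch of band assignment, using the example thresholds above (the bands are this article's illustrative ones, so adjust them to your own pricing):

```python
def arr_band(amount: float) -> str:
    """Map a dollar ARR estimate to a consistent band label."""
    if amount <= 0:
        return "$0"
    if amount < 5_000:
        return "<$5k"
    if amount < 25_000:
        return "$5k-$25k"
    return "$25k+"
```

Keeping this as a single shared function (or lookup table) is the easy way to make sure sales and CS apply the bands identically.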

Use “confidence” to avoid false certainty

Add a confidence multiplier (for example 0.6 / 0.8 / 1.0) based on evidence quality:

  • 1.0: written commitment (contract language, signed order form dependency)
  • 0.8: consistent signal across multiple accounts and channels
  • 0.6: anecdotal or single-thread request

This keeps “big ARR” claims from overpowering the ranking when the underlying signal is shaky.

Step 4: Calculate the Revenue-Weighted Feedback score

Here’s a straightforward formula that works for most teams:

  • Segment Demand Score = sum over segments of (accounts requesting in segment × intensity × segment weight)
  • ARR Impact Score = (retention ARR at risk + expansion ARR potential) × confidence
  • Revenue-Weighted Feedback Score = Segment Demand Score × ARR Impact Score

Segment weights are where you encode strategy. If enterprise retention is your near-term focus, you may weight enterprise requests higher than self-serve. The key is to agree on weights quarterly rather than tuning them ad hoc to justify a favorite initiative.

Example (simplified)

Suppose “SAML + SCIM improvements” gets requests from 6 enterprise accounts (intensity 3) and 2 mid-market accounts (intensity 2). If enterprise weight is 1.5 and mid-market weight is 1.0:

  • Segment Demand Score = (6 × 3 × 1.5) + (2 × 2 × 1.0) = 27 + 4 = 31
  • ARR Impact Score = ($120k at risk + $60k expansion) × 0.8 confidence = $144k
  • Revenue-Weighted Feedback Score = 31 × 144 = 4,464 (with ARR impact expressed in $ thousands; the result is a relative ranking value, not dollars)

You can keep ARR in dollars or convert to an indexed scale; what matters is that all requests are scored the same way.
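The formula and the worked example above can be sketched in a few lines (the segment weights and confidence value are the illustrative ones from this article):

```python
def demand_score(requests: dict, weights: dict) -> float:
    """Sum over segments of (requesting accounts x intensity x segment weight)."""
    return sum(accounts * intensity * weights[seg]
               for seg, (accounts, intensity) in requests.items())

def arr_impact_k(retention_at_risk: float, expansion: float, confidence: float) -> float:
    """ARR impact in $ thousands, discounted by evidence confidence."""
    return (retention_at_risk + expansion) / 1_000 * confidence

# "SAML + SCIM improvements" example: segment -> (unique accounts, intensity 1-3)
requests = {"enterprise": (6, 3), "mid-market": (2, 2)}
weights = {"enterprise": 1.5, "mid-market": 1.0}

demand = demand_score(requests, weights)     # (6*3*1.5) + (2*2*1.0) = 31
impact = arr_impact_k(120_000, 60_000, 0.8)  # (120k + 60k) * 0.8 = 144 ($k)
score = demand * impact                      # 31 * 144 = 4464
```

Because every request runs through the same two functions, the resulting scores are comparable across the whole backlog even though the absolute number has no unit.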

Step 5: Add guardrails so the score doesn’t become the decision

A scorecard should surface tradeoffs, not replace product judgment. Add a few check fields that don’t directly multiply into the number but influence final prioritization:

  • Time-to-value: how quickly customers benefit after launch
  • Engineering effort: small / medium / large (or t-shirt sizing)
  • Strategic fit: does it reinforce your positioning?
  • Risk: security, compliance, reliability implications

A high score with “large effort” might still lose to two medium-score items that are small and unblock a roadmap milestone.

Operational tips for making it stick

Make segment tagging part of the intake process

Any feedback entry that can’t be tied to an account (or at least a segment proxy) will degrade the score over time. Treat “segment + ARR band” as required fields for sales and CS-submitted requests.

Review weights and confidence definitions on a cadence

Quarterly is usually enough. If weights change every month, stakeholders will stop trusting the system.

Close the loop with public-facing context

When you publish what you’re building, include the “why” in plain language: who it helps, what it unlocks, and what’s changing.

Where this method fits best

The Revenue-Weighted Feedback Scorecard is most useful when you have:

  • Multiple segments with different revenue profiles
  • High feedback volume from many channels
  • A need to justify prioritization to sales, CS, and leadership with shared math

It’s less useful for very early-stage products with little revenue diversity, where speed of learning can outweigh formal scoring.
