RICE Model - Your Guide to RICE Scoring and Prioritization

Learn the basics of the RICE model for prioritization, and find out how to calculate the RICE score to rank your initiatives.


What is the RICE Scoring Model?

The RICE acronym stands for Reach, Impact, Confidence, and Effort. Together, these four factors form the RICE model for prioritization, a framework product managers use to prioritize competing initiatives.

Given the number of tasks that keep coming up, it is necessary to choose which ones are worth working on. A prioritization model such as RICE proves handy here: it helps product managers make well-informed decisions, reduces bias in the process, and, when priorities have to be defended to stakeholders, lets them show exactly why they decided the way they did.

This matters because it is tempting to run with ideas as they come up without checking whether they are really worth the effort. Using a framework keeps the project's goals in view at all times, so the actions and initiatives you take up stay aligned with the overall aim. Quite a few frameworks are available for this purpose.

In the sections below, we'll look at the RICE model for prioritization in more detail: how it came into being, how it works in practice, and how the RICE score formula breaks down.

Origins of the RICE Scoring Model

Intercom, the American messaging-software company, introduced the RICE model. Its original purpose was to aid Intercom's internal decision-making process. At the time, the company's product team knew of and used a number of other frameworks, but these didn't work well for prioritizing Intercom's unique set of project ideas.

To overcome this challenge, the team developed its own prioritization model built on four factors: Reach, Impact, Confidence, and Effort. They devised a formula that quantifies these factors and combines them, so that, given estimates for each factor, it produces a single score for every idea.

Scores are then compared across ideas and initiatives are picked up accordingly. The score serves as an objective way of rating decisions.

How does the RICE model work?


The RICE model combines four factors: Reach, Impact, Confidence, and Effort. Let's look at each of these in detail.

Reach

The first factor, Reach, is the number of paying customers or users who would be directly affected by the new feature during a set period; in other words, how many people you estimate the initiative will reach in a given time.

To get to a number, first decide what you mean by 'reach' and the time frame you want to consider. For example, reach could mean the number of customer sign-ups or the number of people starting a free trial. The time frame can be whatever you decide: a week, a month, a quarter, and the like.

Then, to calculate the Reach score, make a rough estimate of how many customers or users you expect the initiative to reach in that period.

Taking a specific example, say one item on your list is a new free trial of a program inside an existing app. If you take a three-month period and expect about 4000 users to sign up for the trial, your Reach score is 4000.

Impact

Impact, the second factor in the RICE model, measures how much a feature contributes to your product: the benefit your users will get from it. It can be quantitative, such as how many new conversions the feature drives when users encounter it, and customer satisfaction can also be factored into impact.

Impact isn't easy to calculate, because many other things could contribute to the same outcome. If you ship a new feature and users then buy the product, there's no definite way to say it was because of the feature; you can't isolate the feature as the reason for the sign-ups, and it may not even be one of the reasons.

Estimating Impact can therefore be quite tough, so discrete scales are used instead. Intercom developed the following:

  • 0.25 - minimal impact
  • 0.5 - low impact
  • 1 - medium impact
  • 2 - high impact
  • 3 - massive impact

Choosing an impact score from this scale is admittedly unscientific, but it keeps estimates consistent and feeds cleanly into the final score calculation.

Confidence

The Confidence metric captures how confident you are in the estimates you have made so far. You may believe a feature will have a certain impact but lack any data to back that estimate up. In such cases, when your team is relying on intuition for a factor, you assign a confidence percentage to discount it.

Similar to the scale used for Impact, Intercom also has a percentage scale for Confidence. The following options are available:

  1. 50% - low confidence
  2. 80% - medium confidence
  3. 100% - high confidence

If your confidence score for an initiative isn't even 50%, it's best to scrap it entirely; anything below 50% amounts to taking a shot in the dark and could be a waste of effort.

Effort

Effort is the amount of work your team would have to put in to build a feature. In the formula we'll see shortly, Effort is the denominator: it represents the cost you put in, while the other three factors are proportional to the value you get out of an initiative.

To quantify effort, estimate the resources needed and measure them in person-months, where one person-month is the amount of work one person can do in a month. So if a project would take you 3 months working alone, the Effort score is 3 person-months. You'd estimate the effort for any project the same way.
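For instance (to take a made-up figure), if two engineers would each spend about two months on a feature, the Effort score would be roughly 4 person-months.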

How to calculate the RICE score

Once you've determined the four individual scores, calculating the final RICE score is quite easy.

The following formula is used for this:

RICE Score = (Reach * Impact * Confidence) / Effort

Once you have a score for each initiative, take them up in order of highest score first.
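To see the formula in action, here is a minimal sketch in Python. It scores the free-trial example from the Reach section (4000 users over a quarter) alongside a second, made-up initiative; the impact, confidence, and effort figures are illustrative assumptions, not numbers from any real project.

    # Minimal sketch of the RICE calculation; all figures are illustrative.
    def rice_score(reach, impact, confidence, effort):
        """RICE Score = (Reach * Impact * Confidence) / Effort"""
        return (reach * impact * confidence) / effort

    # Hypothetical initiatives: reach over a quarter, impact on the 0.25-3 scale,
    # confidence as a fraction, effort in person-months.
    initiatives = {
        "Free trial inside existing app": rice_score(4000, 2, 0.8, 3),  # about 2133
        "Redesign onboarding flow": rice_score(1500, 3, 0.5, 4),        # 562.5
    }

    # Take up initiatives in order of highest score first.
    for name, score in sorted(initiatives.items(), key=lambda item: item[1], reverse=True):
        print(f"{name}: {score:.1f}")

In this sketch the free trial comes out well ahead: its larger reach and lower effort outweigh the onboarding redesign's higher per-user impact.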
