# How to sort rated content

If you are a web developer, or at least a webmaster, and you have rateable content on your website, you have probably faced this common problem at least once: how should I sort my content using the collected rating data?

Being a common problem, it has many common solutions, most of which are wrong. We will analyse two cases: up/down rating and 5-star rating.

#### Up/Down rating case

This is the simplest case. Everyone can positively or negatively rate the content. So, every content will have:

• Np = number of positive ratings
• Nn = number of negative ratings

###### Solution #1 (Definitely wrong)

The first attempt is: let’s use the raw difference between positive and negative ratings.

`rating = Np - Nn`

This approach is widely used, but it is still wrong. Suppose you have two items:

1. Item 1 has 500 positive ratings and 200 negative ratings
2. Item 2 has 150 positive ratings and 10 negative ratings

Just by analysing the data, you would conclude that Item 2 is more appreciated by the public. But this algorithm computes:

1. Item 1 rating = 300
2. Item 2 rating = 140

So, Item 1 ranks higher than Item 2. This is not what we expected.
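As a quick sketch (in Ruby, the same language used for the snippet later in this post), the naive difference reproduces exactly this inversion:

```ruby
# Naive score: raw difference between positive and negative ratings.
def naive_score(np, nn)
  np - nn
end

naive_score(500, 200) # Item 1 => 300
naive_score(150, 10)  # Item 2 => 140
```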

###### Solution #2 (Quite wrong)

This solution is widespread across the web. It consists of computing the arithmetic mean score for each item.

`score = Np / (Np + Nn) = Am (Arithmetic mean)`

If we consider the same previous situation, we’ll have:

1. Item 1 score = 500/700 = 71.43%
2. Item 2 score = 150/160 = 93.75%

This result seems a little bit fairer. But let’s face this other situation:

1. Item 1: Np = 90, Nn = 10, score = 90%
2. Item 2: Np = 9, Nn = 1, score = 90%

The score is the same for the two items, even though we would probably all agree that the first item's valuation is more reliable than the second's.

Sadly, even with this second solution the results are too biased to be acceptable.
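The tie above is easy to verify with a minimal Ruby sketch:

```ruby
# Arithmetic mean: fraction of positive ratings.
def mean_score(np, nn)
  np.to_f / (np + nn)
end

mean_score(90, 10) # => 0.9
mean_score(9, 1)   # => 0.9
```

The two items are indistinguishable, even though the first score is backed by ten times as many ratings.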

###### Solution #3 (best?)

If we want to compute a completely unbiased score, taking into account just the item's own ratings, we have to approach the question from a probabilistic standpoint.

What we are trying to do is the following: we want to compute the best estimate of the item’s real score, considering that our measures carry a variable uncertainty. The uncertainty decreases as the number of ratings grows. Each rating contributes to the definition of the real score of the item, which is our unknown value, and with each new rating the estimated score gets closer to the real score.

The fundamental question is: “Given the amount of data that I have, what is the lowest score for the item that has a minimum 95% probability of being the real one?”

So, welcome the lower bound of the Wilson score confidence interval for a Bernoulli parameter, that is to say:

`s = (p + z²/2n − z·√[(p(1−p) + z²/4n) / n]) / (1 + z²/n)`

In this formula, we have:

• s = estimate of the real scoring
• p = Np/(Np+Nn) = fraction of the positive ratings
• z = z₁₋α/₂ = the (1 − α/2) percentile of the standard normal distribution
• n = Np + Nn = number of samples (ratings)

To get the lower bound of the estimate, we use the minus sign before the square root. If we want a 95% confidence level, z will be equal to 1.96.

This formula gives good results even for a very low number of available ratings, as well as for a heavily rated item. Its implementation is very simple too. A Ruby implementation could be the following (warning: it requires the abscondment-statistics2 gem):

```ruby
require 'statistics2'

# Lower bound of the Wilson score confidence interval.
# np    = number of positive ratings
# n     = total number of ratings
# power = 1 - confidence level (e.g. 0.05 for 95% confidence)
def lower_bound_estimate(np, n, power)
  return 0 if n == 0

  z = Statistics2.pnormaldist(1 - power / 2)
  p = np.to_f / n
  (p + z*z/(2*n) - z * Math.sqrt((p*(1-p) + z*z/(4*n)) / n)) / (1 + z*z/n)
end
```

#### 5 star rating – General case

The previous three solutions work well for up/down ratings, but adapt poorly to the more common 5-star rating approach. In this latter case we should find a solution that doesn’t cost too much in terms of computational time, is not too biased, and is rating-scale independent. That is, we have to find something that adapts to the up/down case as well as to the 5-star case.

###### Solution #4 (a fairly good compromise)

This solution is the one currently used by IMDB to sort its top 250 rated titles. It is a Bayesian estimate of the item’s score. We have to define:

• Am = arithmetic mean for the item
• N = total number of votes
• m = minimum number of votes for the item to be taken into account
• ATm = arithmetic total mean when considering the collection of all the items

The Am parameter, for the simplest case, is computed as shown in Solution #2. For the 5-star case, we will use this formula:

`Am = Σ(ratings) / N`

Eg. if an item has 3 scores, let’s say 1, 4 and 5, Am will be (1 + 4 + 5) / 3 = 3.33.

Using these parameters, we define the weighted score as follows:

`Ws = (N / (N + m)) × Am + (m / (N + m)) × ATm`

We are answering this question: “What is the score of the item, given all the ratings I have collected till now, for this item and for the others?” (= posterior expected value)

How does this work? Well, if we set m = 0 we get Ws = Am, which is what we just analysed in Solution #2. If m >> N or N ~= 0, we get Ws = ATm, that is to say every item’s score equals the global mean score.

Using a fair value for m, which surely depends on the average number of ratings per item (IMDB currently uses something around 3000), every item’s score will be pulled towards the global mean rating. Items with few ratings will have a weighted score very close to ATm, while items with lots of ratings will tend to have Ws ~= Am.

This is actually an acceptable solution, since items with a low number of ratings will get a score coherent with the rating of the whole collection, avoiding the situation described for Solution #2. But, still, this is a biased solution. The best amongst the biased solutions, I’d say.
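The formula is a one-liner in Ruby. In the sketch below, the values m = 100 and ATm = 3.2 are made up purely for illustration:

```ruby
# Bayesian weighted score (the IMDB-style formula above).
# n   = number of votes for the item
# am  = arithmetic mean score of the item
# m   = minimum-votes parameter
# atm = arithmetic mean over the whole collection
def weighted_score(n, am, m, atm)
  (n.to_f / (n + m)) * am + (m.to_f / (n + m)) * atm
end

weighted_score(10, 4.5, 100, 3.2) # => ~3.32, pulled towards the global mean
weighted_score(10, 4.5, 0, 3.2)   # m = 0 => plain Am = 4.5
```

Note how an item with only 10 votes and a stellar 4.5 mean ends up just above the global mean, exactly the damping behaviour described above.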

#### Conclusions

Every problem has its own best solution. If you have the simplest up/down rating implementation, Solution #3 will probably be your best choice. For every other case I’d suggest using an implementation of Solution #4, since it’s the one that performs best across various rating systems, for both low and high numbers of ratings.

## 10 thoughts on “How to sort rated content”

1. Great job, great article!

2. marzapower

Thank you! Travelling by train gives you plenty of time to research on the internet …

3. MgSaimon:

Especially when there are delays, very common with FS 🙂

4. Rodrigo Bortolon

Hi,

Great article !! Congratulations !!

Sorry but I don’t understand how to calculate the value of ATm.

ATm = arithmetic total mean when considering the collection of all the items (??)

How to obtain this value ? Could you show me an example, please ?

Thank You,
Rodrigo Bortolon

• marzapower

Rodrigo, you can compute the ATm considering the whole collection of objects you have, rather than the single one. Eg. you compute the ATm for all the posts in the blog (considering all the scores for all the posts), while you compute the Am for a single post (considering only the scores for that one post).

Eg. Post 1 has two scores, 1 and 3, Post 2 has one score, 4. The Am for Post 1 is 2 (= (1+3)/2), whilst the ATm (for all the posts) will be 2.67 (= (1+3+4) / 3).

Hope this helps!

5. Rodrigo Bortolon

Thank you marzapower for the answer.

I made some calcs here to understand and now I can use into my project.

Best regards,
RB

6. alanna

I am trying to work out this calculation for a product I am working on. Currently each item has a very low number of ratings. If I make m=0 then there’s obviously no point in doing this method. Is there a point in making m=1? Will that really improve the quality of my scores?

Thank you!

• marzapower

Hi Alanna,
you’ve correctly stated that putting m=0 is useless (Ws = Am in that case). Also, as seen in the article, you do not want m >> N or N ~= 0. In your case I assume it’s not N ~= 0, but we are close (N is the “total” number of votes, eg. the total votes for all the items you are sorting), so you can try a couple of different small values for m, keeping m no greater than the average number of votes you have for your items. I think m=1 or m=2 should be fine. The trick is to try different values and see if the results are what you expect them to be. Obviously, with many more votes it’ll be easier to test the rating system.
Let me know if m=1 or m=2 work fine.

• VanYen

Hi marzapower,
In the formula for the lower bound of the Wilson score confidence interval, if we want the 97th or 90th percentile (or another one), how can I find the right value of z? I don’t know where I can find information about it. Please help me with detailed information, I am a beginner. If you have any document about it, please share it. Thank you very much.

• marzapower

Hi VanYen,
if you are just looking for values, this website could be a great help for you! If you are looking for a formula, things are a little bit more complicated.

http://www.measuringusability.com/zcalcp.php

If you put 0.95 as the percentile, you will find a z-score of 1.96.
For 0.90 you have 1.6449.
For 0.97 you have 2.17, and so on.