Recall, Recall rate@k and precision in top-k recommendation

Luisa Hernández · Nov 13, 2015 · Viewed 22.7k times

According to authors in 1, 2, and 3, Recall is the percentage of relevant items selected out of all the relevant items in the repository, while Precision is the percentage of relevant items out of those items selected by the query.

Therefore, assuming user U gets a top-k recommended list of items, the formulas would be something like:

Recall= (Relevant_Items_Recommended in top-k) / (Relevant_Items)

Precision= (Relevant_Items_Recommended in top-k) / (k_Items_Recommended)
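To make the two formulas concrete, here is a minimal sketch of how they could be computed for a single user's top-k list. The function name and the example inputs (recommended, relevant) are hypothetical, not taken from any of the cited papers:

    # Sketch: precision and recall for one user's top-k recommendation list.
    # `recommended` is the ordered recommendation list, `relevant` is the set
    # of all items actually relevant to the user (hypothetical example data).
    def precision_recall_at_k(recommended, relevant, k):
        top_k = recommended[:k]
        hits = sum(1 for item in top_k if item in relevant)
        precision = hits / k                                 # relevant recommended / k recommended
        recall = hits / len(relevant) if relevant else 0.0   # relevant recommended / all relevant items
        return precision, recall

    # Example: 2 of the 5 recommended items are relevant, out of 4 relevant items overall.
    p, r = precision_recall_at_k(["a", "b", "c", "d", "e"], {"b", "d", "x", "y"}, k=5)
    print(p, r)  # 0.4 0.5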

Up to that point everything is clear, but I do not understand the difference between these metrics and recall rate@k. What would be the formula to compute recall rate@k?

Answer

Luisa Hernández · Nov 24, 2015

Finally, I received an explanation from Prof. Yuri Malheiros (paper 1). Although recall rate@k, as used in the papers cited in the question, looks like the normal recall metric applied to a top-k list, they are not the same. This metric is also used in paper 2 and paper 3.

The recall rate@k is a percentage that depends on the tests made, i.e., on a set of recommendations, where each recommendation is a list of k items, some correct and some not. Suppose we made 50 different recommendations; let us call that number R (regardless of the number of items in each recommendation). To calculate the recall rate, it is necessary to look at each of the R recommendations: if at least one recommended item in a list is correct, increment a counter, let us call it N. The recall rate@k is then N/R.
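Below is a minimal sketch of that procedure (my own illustration, not code from the papers). Each test case pairs a top-k recommendation list with the set of items that would have been correct for it; the test data is hypothetical:

    # Sketch: recall rate@k over R recommendation lists, as described above.
    # N counts the lists that contain at least one correct item; the metric is N / R.
    def recall_rate_at_k(test_cases):
        R = len(test_cases)
        N = sum(1 for top_k, correct in test_cases
                if any(item in correct for item in top_k))
        return N / R if R else 0.0

    # Example with R = 3 recommendations: two of them contain at least one hit.
    tests = [
        (["a", "b", "c"], {"b"}),       # hit
        (["d", "e", "f"], {"x"}),       # miss
        (["g", "h", "i"], {"h", "i"}),  # hit
    ]
    print(recall_rate_at_k(tests))  # 0.6666...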