I am plotting ROCs and measuring partial AUC as a metric of ecological niche model quality. As I am working in R, I am using the ROCR and the pROC packages. I'll settle on one to use, but for now, I just wanted to see how they performed, and if one met my needs better.
One thing that confuses me is that, when plotting a ROC, the axes are as follows:
ROCR
x axis: 'true positive rate', 0 -> 1
y axis: 'false positive rate', 0 -> 1
pROC
x axis: 'sensitivity', 0 -> 1
y axis: 'specificity', 1 -> 0
But if I plot the ROC using both methods, they look identical. So I just want to confirm that:
true positive rate = sensitivity
false positive rate = 1 - specificity.
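Both identities can be checked numerically at a single threshold with base R alone (no ROC package needed). This is a minimal sketch: the threshold of 0.5 and the seed are arbitrary choices for illustration.

```r
# Simulate labels and scores as in the question (seed added for repeatability)
obs  <- rep(0:1, each = 50)
set.seed(1)
pred <- c(runif(50, min = 0, max = 0.8), runif(50, min = 0.3, max = 0.6))

thr <- 0.5                           # one arbitrary classification threshold
tp <- sum(pred >= thr & obs == 1)    # true positives
fn <- sum(pred <  thr & obs == 1)    # false negatives
fp <- sum(pred >= thr & obs == 0)    # false positives
tn <- sum(pred <  thr & obs == 0)    # true negatives

sensitivity <- tp / (tp + fn)
specificity <- tn / (tn + fp)
tpr <- tp / (tp + fn)                # true positive rate
fpr <- fp / (fp + tn)                # false positive rate

stopifnot(isTRUE(all.equal(tpr, sensitivity)),
          isTRUE(all.equal(fpr, 1 - specificity)))
```

The identities are definitional, so the `stopifnot` passes at any threshold, not just 0.5.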
Here is a reproducible example:
library(pROC)
library(ROCR)
set.seed(123)  # make the simulated predictions reproducible
obs  <- rep(0:1, each = 50)
pred <- c(runif(50, min = 0, max = 0.8), runif(50, min = 0.3, max = 0.6))
plot(roc(obs, pred))                        # pROC
ROCRpred <- prediction(pred, obs)
plot(performance(ROCRpred, 'tpr', 'fpr'))   # ROCR
To confirm: you are right that true positive rate = sensitivity and false positive rate = 1 - specificity. In your example, the order in which you pass performance measures to performance() from the ROCR package is key. In the last line, the first measure, true positive rate ('tpr'), is plotted on the y-axis (measure = 'tpr'), and the second measure, false positive rate ('fpr'), is plotted on the x-axis (x.measure = 'fpr'):
plot(performance(ROCRpred, measure = 'tpr', x.measure = 'fpr'))
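If you want pROC to draw its curve on the same axes as ROCR (1 - specificity on the x-axis, running 0 -> 1), pROC's plot method accepts a legacy.axes argument for exactly this. A short sketch, reusing simulated data in the same shape as the question's:

```r
library(pROC)

# Simulated labels and scores, as in the question
obs  <- rep(0:1, each = 50)
set.seed(1)
pred <- c(runif(50, min = 0, max = 0.8), runif(50, min = 0.3, max = 0.6))

r <- roc(obs, pred)
plot(r, legacy.axes = TRUE)  # x-axis becomes 1 - specificity, 0 -> 1
```

With legacy.axes = TRUE the two packages' plots are labelled identically as well as looking identical.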