
ROCR - best sensitivity/specificity tradeoff?

3 messages · Christian Meesters, David Winsemius, Claudia Beleites

#
Hi,

My question concerns the ROCR package, and I hope somebody here on the list can help - or point me to some better place.

When evaluating a model's performance, like this:


pred1 <- predict(model, ..., type="response")
pred2 <- prediction(pred1, binary_classifier_vector)
perf  <- performance(pred2, "sens", "spec")

(Where "prediction" and "performance" are ROCR-functions.)

How can I then retrieve the cutoff value for the best sensitivity/specificity tradeoff with regard to the data in the model (e.g. model = glm(binary_classifier_vector ~ data, family="binomial", data=some_dataset))? Perhaps I missed something in the manual? Or do I need an entirely different approach? Or is there an alternative solution?

Thanks,
Christian


--
#
On Apr 6, 2011, at 2:27 PM, Christian Meesters wrote:

Or perhaps the gap is in your learning phase regarding decision theory? You have not indicated that you understand the need to assign a cost to errors of either type before you can talk about a preferred cutoff value.
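To make the cost point concrete, here is a hedged sketch: once you assign unit costs to false positives and false negatives (the cost values below are pure assumptions, not something the thread specifies), the preferred cutoff is the one minimizing expected cost rather than any fixed sensitivity/specificity tradeoff. ROCR itself offers a "cost" performance measure for exactly this:

```r
library(ROCR)  # CRAN package; install.packages("ROCR") if needed

## Simulated stand-in data -- illustrative only
set.seed(1)
scores <- runif(200)
labels <- rbinom(200, 1, scores)

cost_fp <- 1    # cost of a false positive (assumed)
cost_fn <- 5    # cost of a false negative (assumed)

pred <- prediction(scores, labels)
## performance() can compute an explicit cost curve over all cutoffs
perf <- performance(pred, "cost", cost.fp = cost_fp, cost.fn = cost_fn)

cutoffs <- perf@x.values[[1]]   # candidate cutoffs
costs   <- perf@y.values[[1]]   # expected cost at each cutoff
cutoffs[which.min(costs)]       # cutoff minimizing expected cost
```

Changing the cost ratio shifts the preferred cutoff, which is David's point: without those costs there is no single "best" tradeoff.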

#
Christian,
a) look into the performance object; you will find all the values there

b) have a look at this thread
https://stat.ethz.ch/pipermail/r-help/attachments/20100523/51ec813f/attachment.pl
http://finzi.psych.upenn.edu/Rhelp10/2010-May/240021.html
http://finzi.psych.upenn.edu/Rhelp10/2010-May/240043.html
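A minimal sketch of point (a), with simulated stand-in data (the model and variable names are illustrative, not from the original post): the performance object is an S4 object whose slots hold the cutoffs alongside the sensitivity and specificity values, so a "best tradeoff" cutoff can be read off directly, e.g. by maximizing sens + spec (Youden's J):

```r
library(ROCR)  # CRAN package; install.packages("ROCR") if needed

## Simulated stand-in data -- illustrative only
set.seed(1)
x      <- rnorm(200)
labels <- rbinom(200, 1, plogis(2 * x))
model  <- glm(labels ~ x, family = "binomial")

pred1 <- predict(model, type = "response")
pred2 <- prediction(pred1, labels)
perf  <- performance(pred2, "sens", "spec")

## The performance object stores everything in S4 slots:
cutoffs <- perf@alpha.values[[1]]  # candidate cutoff values
spec    <- perf@x.values[[1]]      # specificity at each cutoff
sens    <- perf@y.values[[1]]      # sensitivity at each cutoff

## One common (cost-free) choice: maximize sens + spec (Youden's J)
best <- which.max(sens + spec)
cutoffs[best]
```

Note this treats both error types as equally costly; with unequal costs, David's decision-theoretic objection applies.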

Claudia