
SF-36 questionnaire scoring for R?

10 messages · Barry Rowlingson, Frank E Harrell Jr, Marc Schwartz +3 more

#
On Mon, Sep 13, 2010 at 11:16 AM, Dieter Menne
<dieter.menne at menne-biomed.de> wrote:
I do love SAS code for a good chuckle on a wet Monday morning...

http://gim.med.ucla.edu/FacultyPages/Hays/UTILS/SF36/sf36.sas

* SAS CODE FOR SCORING 36-ITEM HEALTH SURVEY 1.0
* WRITTEN BY RON D. HAYS, RAND, 310-393-0411 (EXT. 7581) ***;
DATA TEMP1;
 SET TEMP;
RENAME
I1=I1
I2=I2
I3=I3
I4=I4
I5=I5
I6=I6
I7=I7
I8=I8
I9=I9
I10=I10
I11=I11
I12=I12
I13=I13
I14=I14
I15=I15
I16=I16
I17=I17

[etc etc]

 The rest of it appears to be pages and pages of nested IF statements
which could be translated into R fairly easily, but without a test set
(and while currently ROFL'ing over those first 40 lines) I can't attempt it.
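
For what it's worth, nested SAS IF/ELSE recodes of this kind usually collapse to a named-vector lookup in R. A minimal sketch, where the value mapping is invented for illustration and is not the official SF-36 recoding:

```r
## Illustrative only: the mapping below is made up for this example,
## not the licensed SF-36 scoring. A nested IF/ELSE recode in SAS
## becomes a single named-vector lookup in R.
recode_item <- function(x, map) unname(map[as.character(x)])

## hypothetical 5-point item recoded onto a 0-100 scale
map_5pt <- c(`1` = 100, `2` = 75, `3` = 50, `4` = 25, `5` = 0)
recode_item(c(1, 3, 5), map_5pt)
```

Responses outside the map (or NA) come back as NA, which is usually what you want for out-of-range codes.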

Barry
#
Barry Rowlingson wrote:
Thanks, Barry, but there was a mistake from my side: I am looking for SF-8.

Anyone else? Google was not successful for me.

Dieter
#
I know someone who has R code for SF-36 and perhaps SF-12.  Aren't there
copyright issues relating to SF-* even if it is reprogrammed?
Frank


-----
Frank Harrell
Department of Biostatistics, Vanderbilt University
#
Frank Harrell wrote:
You are right. I was not aware of this, and I could not believe at first
that a company holds the rights to an almost trivial algorithm.

(BTW: I just heard from a colleague that they needed SIX months to send a
quotation.)

Dieter
#
Yes, the company behind that probably received federal funds for some of the
research and has been very careful to minimize its contribution to the
community.

I didn't understand your parenthetical remark.
Frank


-----
Frank Harrell
Department of Biostatistics, Vanderbilt University
#
If it's not possible to use their particular algorithms, does anyone
think it would be helpful/practical to try to write a general scoring
system?  I imagine a function with arguments for column names, a list
where each element is a vector that indicates the numbers that
correspond to various subscales, an argument that could handle any
reverse scoring, etc.

I am willing to have a go at this if people think it would be
worthwhile (read: if someone wiser than me thinks it is not a waste of
time).

Josh
On Mon, Sep 13, 2010 at 6:19 AM, Marc Schwartz <marc_schwartz at me.com> wrote:

#
On Sep 13, 2010, at 8:59 AM, Joshua Wiley wrote:

It's not clear to me what you are proposing. 

The SF-* instruments are validated scoring systems that have been demonstrated to correlate with quality of life and, in turn, with healthcare resource utilization and cost.

Are you proposing to develop an algorithm that performs the same set of functions? If so, note that you would have to go through the same scoring-system validation that RAND originally, and now QualityMetric, have gone through.

Of course, you could use RAND's original implementation of the SF-36 (RAND-36):

  http://www.rand.org/health/surveys_tools/mos/mos_core_36item.html

which is in the public domain. However, there are material differences in the scoring systems now used by QM and the original RAND scoring mechanism, as I understand it, is almost never used these days.

Regards,

Marc Schwartz
#
On Sep 13, 2010, at 15:59, Joshua Wiley wrote:

I don't think that's the issue at all. It is a matter of being able to say that you did it "The Standard Way" (i.e. their way, by the book/manual) or not. It really doesn't matter how trivial the procedure is, or even whether it is the right thing to do. Even if you do a complete clean-room implementation of their scoring system, they can claim either that what you do is not SF-36 or, if you say that it is, that you owe them money.

#
On Mon, Sep 13, 2010 at 8:02 AM, Marc Schwartz <marc_schwartz at me.com> wrote:
Apologies for not being clearer; I certainly did not mean to develop and
validate an alternate scoring system for the SF-*.  I was thinking that
many scales have steps in common (like overall vs. subscale scores,
reverse coding, z-scoring items, weighting items by difficulty), and
that these could be automated in a function.  Ideally, once these
initial steps were done, the user would have little left to do to
score the instrument.  I can think of a few scales in psychology where the
subscales are essentially just summed (after appropriate reversing and
scaling).  This pseudo-code is an example of what I was thinking:

function(dataset = mydata, variables = colnames(mydata)[1:20],
  reverse.code = c(3, 6, 8, 12, 14),
  subscales = list("A" = c(1:6, 11:14), "B" = c(7:10, 15:20)),
  scale = FALSE, weights = NULL, algorithm = NULL)
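
To make that concrete, here is a minimal runnable sketch of such a generic scorer. All names, arguments, and defaults are invented for illustration, and it does not implement any licensed SF-* algorithm:

```r
## Illustrative sketch only: a generic sum-scorer for simple scales,
## not an implementation of any licensed SF-* scoring system.
score_scale <- function(dataset, variables = colnames(dataset),
                        reverse.code = integer(0), max.score = 5,
                        subscales = list()) {
  d <- dataset[, variables, drop = FALSE]
  ## flip reverse-coded items, assuming responses run 1..max.score
  d[reverse.code] <- (max.score + 1) - d[reverse.code]
  ## sum the items belonging to each subscale
  sapply(subscales, function(idx) rowSums(d[, idx, drop = FALSE]))
}

set.seed(1)
mydata <- as.data.frame(matrix(sample(1:5, 40, replace = TRUE), nrow = 4))
score_scale(mydata, reverse.code = c(3, 6),
            subscales = list(A = 1:5, B = 6:10))
```

Reverse coding here is done as `(max.score + 1) - x`, which assumes consecutive integer response options starting at 1; weighted or non-linear scorings would need more than this.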

But the more I think about it, the first part is trivial to
implement, and in the cases where scoring is more complex, users are
not just going to be able to pass one line of code to the mystical
algorithm argument and be done -- which answers my question about whether it
would be useful.

Sincerely,

Josh