39 Decisional Analysis and Statistics

Performance Measures

Incidence Rate

Prevalence

Sensitivity

Specificity

False-Negative Rate

False-Positive Rate

Positive Predictive Value

Negative Predictive Value

Overall Accuracy/Diagnostic Efficiency

Deriving Missing Performance Measures

Youden's Index

Bayes's Theorem and Modifications

Odds and Likelihood Ratios

Odds and Likelihood Ratios for Sequential Testing

Risk Sensitivity and Risk Specificity

Statistics for Use in Quality Control

Mean

Standard Deviation (SD)

Coefficient of Variation (CV)

Standard Deviation Interval (SDI)

Coefficient of Variation Interval (CVI)

Total Allowable Error

Chi Square

Outcome Comparison of Two Groups

Comparison of Two Observers

Receiver Operating Characteristic Plots

Z-score

Westgard Control Rules

Series of Control Rules

Evaluating the Medical Literature

Assessing the Methodologic Quality of Clinical Studies

Measures of the Consequence of Treatment

Number Needed to Treat

Corrected Risk Ratio and Estimating Relative Risk

Confidence Intervals

Confidence Interval for a Single Mean

Confidence Interval for the Difference between Two Means

Confidence Interval for a Single Proportion

Confidence Interval When Observations is 0 or 1

Confidence Interval for an Odds Ratio

Confidence Interval for a Difference, etc.

Odds and Percentages

Benefit, Risk and Threshold for an Action

Testing and Test Treatment Thresholds

39.01  Performance Measures    
Overview: The usefulness of a test is often judged by how well it identifies the presence or absence of a disease.

A person with the disease who has a "positive" test is termed a true positive, whereas a person with the disease but a "negative" test result is termed a false negative.

A person without disease who has a "positive" result is termed a false positive, while a person without disease having a "negative" result is termed a true negative.

In practice things are not always clear-cut: the distinction between a positive and a negative test result is sometimes artificial, and it is not always possible to say whether a person does or does not have a disease.

 

                        Positive for Disease (+)    Negative for Disease (-)
Result Positive (+)     a = true positive           b = false positive
Result Negative (-)     c = false negative          d = true negative
References

Braunwald E, Isselbacher KJ, et al (editors). Harrison's Principles of Internal Medicine, 11th edition. McGraw-Hill Book Publishers. 1987. page 7

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Panzer RJ, Black ER, Griner PF. Interpretation of diagnostic tests and strategies for their use in quantitative decision making. pages 17-28. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. pages 50-51 and 210

39.01.01  Incidence Rate
Overview:
The incidence rate is the number of new cases of a disease in the total population per unit time.

incidence rate =

(A / (a + b + c + d)) / T

where:

  • A = number of new cases of a disease for a given time period, which is a subset of all people with the disease (a + c);
  • (a + b + c + d) = sum of (true positives, false positives, false negatives, true negatives) = total population
  • T = unit of time
References:

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. page 51

39.01.02  Prevalence

Overview:

Prevalence is the number of patients with the disease divided by all patients tested. This is also termed the "prior probability."

prevalence =

(a + c) / (a + b + c + d)

where:

  • a + c = true positives + false negatives = all people with disease
  • (a + b + c + d) = sum of (true positives, false positives, false negatives, true negatives) = total population
References

Braunwald E, Isselbacher KJ, et al (editors). Harrison's Principles of Internal Medicine, 11th edition. McGraw-Hill Book Publishers. 1987. page 7

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. pages 50-51 and 210

 

39.01.03  Sensitivity        

Overview:

Sensitivity is the number of true-positive test results divided by all patients with the disease.

sensitivity =

 (a / (a + c))

where:

 
  • a  = true positives
  • a + c = true positives + false negatives = all people with disease

Comments

  • The better the seNsitivity of the test, the fewer the false Negatives.
References

Braunwald E, Isselbacher KJ, et al (editors). Harrison's Principles of Internal Medicine, 11th edition. McGraw-Hill Book Publishers. 1987. page 7

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Panzer RJ, Black ER, Griner PF. Interpretation of diagnostic tests and strategies for their use in quantitative decision making. pages 17-28. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. pages 50-51 and 210

39.01.04  Specificity

Overview:

The specificity of a test is the number of true-negative test results divided by all patients without the disease.

specificity =

(d / (b + d))

where:

  • d = true negatives
  • (b + d) = sum of ( false positives, true negatives) = all people without disease

Comments

  • The better the sPecificity of the test, the fewer the false Positives.

 

References

Braunwald E, Isselbacher KJ, et al (editors). Harrison's Principles of Internal Medicine, 11th edition. McGraw-Hill Book Publishers. 1987. page 7

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Panzer RJ, Black ER, Griner PF. Interpretation of diagnostic tests and strategies for their use in quantitative decision making. pages 17-28. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. pages 50-51 and 210

39.01.05  False-Negative Rate

Overview:

The false-negative rate for a test is the number of false-negative test results divided by all patients with the disease.

false-negative rate =

(c / (a + c))

where:

  • c =  false negatives
  • (a + c ) = sum of (true positives,  false negatives) = all people with disease
References

Braunwald E, Isselbacher KJ, et al (editors). Harrison's Principles of Internal Medicine, 11th edition. McGraw-Hill Book Publishers. 1987. page 7

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Panzer RJ, Black ER, Griner PF. Interpretation of diagnostic tests and strategies for their use in quantitative decision making. pages 17-28. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. pages 50-51 and 210

39.01.06  False-Positive Rate

Overview:

The false-positive rate for a test is the number of false-positive test results divided by all patients without the disease.

false-positive rate =

(b / (b + d))

where:

  • b =  false positives
  • (b + d) = sum of (false positives, true negatives) = all people without disease
References

Braunwald E, Isselbacher KJ, et al (editors). Harrison's Principles of Internal Medicine, 11th edition. McGraw-Hill Book Publishers. 1987. page 7

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Panzer RJ, Black ER, Griner PF. Interpretation of diagnostic tests and strategies for their use in quantitative decision making. pages 17-28. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. pages 50-51 and 210

39.01.07  Positive Predictive Value

Overview:

The positive predictive value is the number of true-positive test results divided by all positive test results. This is also referred to as the predictive value of a positive test, and it is equivalent to Bayes's formula for the post-test probability given a positive result.

positive predictive value =

(a / (a + b))

where:

  • a = true positives
  • (a + b ) = sum of (true positives, false positives) = all positive test results
References

Braunwald E, Isselbacher KJ, et al (editors). Harrison's Principles of Internal Medicine, 11th edition. McGraw-Hill Book Publishers. 1987. page 7

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Panzer RJ, Black ER, Griner PF. Interpretation of diagnostic tests and strategies for their use in quantitative decision making. pages 17-28. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. pages 50-51 and 210

39.01.08  Negative Predictive Value

Overview:

The negative predictive value is the number of true-negative test results divided by all negative test results. This is also referred to as the predictive value of a negative test; it equals the post-test probability that disease is absent given a negative result.

negative predictive value =

(d / (c + d))

where:

  • d =  true negatives
  • (c + d) = sum of ( false negatives, true negatives) = all negative test results
References

Braunwald E, Isselbacher KJ, et al (editors). Harrison's Principles of Internal Medicine, 11th edition. McGraw-Hill Book Publishers. 1987. page 7

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Panzer RJ, Black ER, Griner PF. Interpretation of diagnostic tests and strategies for their use in quantitative decision making. pages 17-28. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. pages 50-51 and 210

39.01.09  Overall Accuracy, or Diagnostic Efficiency

Overview:

The overall accuracy of a test is the number of "true" findings (true-positive + true-negative results) divided by all test results. This is also termed the "efficiency" of the test.

overall accuracy =

((a+d) / (a + b + c + d))

where:

  • a + d = true positives + true negatives = all people correctly classified by testing
  • (a + b + c + d) = sum of (true positives, false positives, false negatives, true negatives) = total population
References

Braunwald E, Isselbacher KJ, et al (editors). Harrison's Principles of Internal Medicine, 11th edition. McGraw-Hill Book Publishers. 1987. page 7

Goldman L. Chapter 10: Quantitative aspects of clinical reasoning. pages 43-48. IN: Isselbacher KJ, Braunwald E, et al. Harrison's Principles of Internal Medicine, Thirteenth Edition. McGraw-Hill. 1994.

Clave P, Guillaumes S, Blanco I, et al. Amylase, Lipase, Pancreatic Isoamylase, and Phospholipase A in Diagnosis of Acute Pancreatitis. Clin Chem. 1995; 41:1129-1134. <with ROC>

Speicher C, Smith JW Jr. Choosing Effective Laboratory Tests. WB Saunders. 1983. pages 50-51 and 210

39.01.10  Deriving Missing Performance Measures When Only Some Are Known

Overview:

If some performance measures for a test are known but others are not, it is often possible to calculate the missing values from those that are known.

Key for equations
  • SE = sensitivity
  • SP = specificity
  • PPV = positive predictive value
  • NPV = negative predictive value
  • ACC = accuracy

(1)sensitivity =

 (( 1 + (((NPV^(-1)) - 1) * (((SP^(-1)) - 1)^(-1)) * ((PPV^(-1)) - 1)))^(-1))

(2) specificity =

 (( 1 + (((PPV^(-1)) - 1) * (((SE^(-1)) - 1)^(-1)) * ((NPV^(-1)) - 1)))^(-1))

(3) positive predictive value =

(( 1 + (((SP^(-1)) - 1) * (((NPV^(-1)) - 1)^(-1)) * ((SE^(-1)) - 1)))^(-1))

(4) negative predictive value =

 (( 1 + (((SE^(-1)) - 1) * (((PPV^(-1)) - 1)^(-1)) * ((SP^(-1)) - 1)))^(-1))

(5) accuracy =

((1 + (((((PPV ^ (-1)) - 1) ^ (-1) ) + (((SP ^ (-1)) - 1) ^ (-1)))^(-1)) + (((((SE ^ (-1)) - 1) ^ (-1) ) + (((NPV ^ (-1)  ) - 1) ^ (-1)))^(-1))) ^ (-1))

(6) positive predictive value =

((1 + (((SE ^ (-1)) - (ACC ^ (-1))) * (((((ACC ^ (-1)) - 1) * (((SP ^ (-1)) - 1) ^ (-1))) - 1) ^(-1)))) ^ (-1))
where:
  • The equation does not apply if SE = SP = ACC

(7) sensitivity =

 ((1 + (((PPV ^ (-1)) - (ACC ^ (-1))) * (((((ACC ^ (-1)) - 1) * (((NPV ^ (-1)) - 1) ^ (-1))) - 1) ^(-1)))) ^ (-1))

where:

  • The equation does not apply if PPV = NPV = ACC

(8) specificity =

 (( 1 + ((((SE ^ (-1)) + (PPV ^ (-1)) - (ACC ^ (-1)) - 1) ^ (-1)) *  ((ACC ^ (-1)) - 1) * ((PPV ^ (-1)) -1))) ^ (-1))

(9) positive predictive value =

 (( 1 + ((((SP ^ (-1)) + (NPV ^ (-1)) - (ACC ^ (-1)) - 1) ^ (-1)) *  ((ACC ^ (-1)) - 1) * ((SP ^ (-1)) -1))) ^ (-1))

(10) specificity =

 ((((((ACC ^ (-1)) - 1) * (((SE ^ (-1)) - 1) ^ (-1)) * ((NPV ^ (-1)) - 1)) + (ACC ^ (-1)) - (NPV ^ (-1)) + 1)) ^ (-1))

(11) sensitivity =

 ((((((ACC ^ (-1)) - 1) * (((SP ^ (-1)) - 1) ^ (-1)) * ((PPV ^ (-1)) - 1)) + (ACC ^ (-1)) - (PPV ^ (-1)) + 1)) ^ (-1))

Parameters Known (Y/N)              Equations to Apply

SE    SP    PPV   NPV   ACC
Y     Y     Y     N     N           4 (NPV), 5 (ACC)
Y     Y     N     Y     N           3 (PPV), 5 (ACC)
Y     Y     N     N     Y           6 (PPV), 4 (NPV)
Y     N     Y     Y     N           2 (SP), 5 (ACC)
Y     N     Y     N     Y           8 (SP), 4 (NPV)
Y     N     N     Y     Y           10 (SP), 3 (PPV)
N     Y     Y     Y     N           1 (SE), 5 (ACC)
N     Y     Y     N     Y           11 (SE), 4 (NPV)
N     Y     N     Y     Y           9 (PPV), 1 (SE)
N     N     Y     Y     Y           7 (SE), 2 (SP)

Implementation Notes
  • Some of the equation numbers differ from those in Einstein et al (1997); equations with similar structure are grouped together.
  • Substituting variables for some of the more complex structures makes implementing the equations somewhat easier.
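Following the implementation note about substituting variables, equations (1) and (5) can be sketched in Python and checked against values derived from a hypothetical 2 x 2 table (the counts are assumptions for illustration, not from the source):

```python
def se_from(npv, sp, ppv):
    # Equation (1): recover sensitivity from NPV, specificity, and PPV
    return 1.0 / (1.0 + (1.0 / npv - 1.0)
                  * (1.0 / (1.0 / sp - 1.0))
                  * (1.0 / ppv - 1.0))

def acc_from(se, sp, ppv, npv):
    # Equation (5): recover accuracy from the other four measures
    pos = 1.0 / (1.0 / (1.0 / ppv - 1.0) + 1.0 / (1.0 / sp - 1.0))
    neg = 1.0 / (1.0 / (1.0 / se - 1.0) + 1.0 / (1.0 / npv - 1.0))
    return 1.0 / (1.0 + pos + neg)

# Check against a table with a=90 TP, b=30 FP, c=10 FN, d=870 TN
se, sp = 90 / 100, 870 / 900
ppv, npv = 90 / 120, 870 / 880
assert abs(se_from(npv, sp, ppv) - se) < 1e-9
assert abs(acc_from(se, sp, ppv, npv) - 0.96) < 1e-9
```

The remaining equations follow the same substitution pattern.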
References:
Einstein AJ, Bodian CA, Gil J. The relationship among performance measures in the selection of diagnostic tests. Arch Pathol Lab Med. 1997; 121: 110-117.

39.01.11  Youden's Index

Overview:

Youden's index is one way to summarize test accuracy in a single numeric value.

Youden's index =

 1 - ((false positive rate) + (false negative rate))

=

1 - ((1 - (sensitivity)) + (1 - (specificity))) =

(sensitivity) + (specificity) - 1
It may also be expressed as:

Youden's index =

( a / (a + b)) + (d / (c + d)) - 1 =
((a * d) - (b * c)) / ((a + b) * (c + d))

where (note that cells b and c here are assigned differently than in the 2 x 2 table of section 39.01):

a = people with disease identified by the test (true positives)

b = people with disease not identified by the test (false negatives)

c = people without disease identified by the test (false positives)

d = people without disease not identified by the test (true negatives)

a + b = all people with disease

c + d = all people without disease

Interpretation

minimum index: -1

maximum index: +1

A perfect test would have a Youden index of +1.

Limitation

By itself, the index does not indicate whether a shortcoming lies in sensitivity or in specificity.
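As a one-line Python sketch of the formula above (the sample sensitivity and specificity values are illustrative assumptions):

```python
def youden_index(sensitivity, specificity):
    # J = sensitivity + specificity - 1; ranges from -1 to +1
    return sensitivity + specificity - 1.0

# A perfect test scores +1; a completely uninformative test scores 0
assert youden_index(1.0, 1.0) == 1.0
assert abs(youden_index(0.5, 0.5)) < 1e-12
```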
References:

Hausen H. Caries prediction - state of the art. Community Dentistry and Oral Epidemiology. 1997; 25: 87-96.

Hilden J, Glasziou P. Regret graphs, diagnostic uncertainty and Youden's index. Statistics in Medicine. 1996; 15: 969-986.

Youden WJ. Index for rating diagnostic tests. Cancer. 1950; 3: 32-35.

39.02  Bayes's Theorem and Modifications    
39.02.01   Bayes's Theorem  

Overview:

Bayes's theorem gives the probability of disease in a patient being tested based on disease prevalence and test performance.

post-test probability disease present given a positive test result =

= ((pretest probability that disease present) * (probability test positive if disease present)) / (((pretest probability that disease present) * (probability test positive if disease present)) + ((pretest probability that disease absent) * (probability test positive if disease absent)))

post-test probability disease present given a negative test result =
= ((pretest probability that disease present) * (probability test negative if disease present)) / (((pretest probability that disease absent) * (probability test negative if disease absent)) + ((pretest probability that disease present) * (probability test negative if disease present)))
   
Variable                                        Alternative Statement
pretest probability that disease present        prevalence
probability test positive if disease present    sensitivity
pretest probability that disease absent         (1 - (prevalence))
probability test positive if disease absent     false positive rate = (1 - (specificity))
probability test negative if disease present    false negative rate = (1 - (sensitivity))
probability test negative if disease absent     specificity
 
Bayes's formula can also be expressed in the positive and negative predictive values:

post-test probability given a positive result =

= positive predictive value =

= (true positives) / (all positives) =

= (true positives) / ((true positives) + (false positives))

 

post-test probability of disease given a negative result =

= 1 - (negative predictive value) =

= (false negatives) / (all negatives) =

= (false negatives) / ((true negatives) + (false negatives))

Limitations of Bayes's theorem

Bayes's theorem assumes that tests are independent, which may not hold when multiple tests are used for diagnosis
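The two post-test probability formulas above can be sketched directly in Python; the prevalence, sensitivity, and specificity values in the checks are illustrative assumptions:

```python
def post_test_prob_given_positive(prevalence, sensitivity, specificity):
    # numerator: (pretest probability present) * P(test+ | disease present)
    tp = prevalence * sensitivity
    fp = (1.0 - prevalence) * (1.0 - specificity)   # P(test+ | disease absent)
    return tp / (tp + fp)

def post_test_prob_given_negative(prevalence, sensitivity, specificity):
    fn = prevalence * (1.0 - sensitivity)           # P(test- | disease present)
    tn = (1.0 - prevalence) * specificity           # P(test- | disease absent)
    return fn / (fn + tn)

# Example: prevalence 10%, sensitivity 90%, specificity ~96.7%
p_pos = post_test_prob_given_positive(0.1, 0.9, 29.0 / 30.0)   # 0.75
p_neg = post_test_prob_given_negative(0.1, 0.9, 29.0 / 30.0)   # ~0.011
```

Note that `p_pos` equals the positive predictive value and `p_neg` equals 1 - (negative predictive value) for the same underlying 2 x 2 counts.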

References
Einstein AJ, Bodian CA, Gil J. The relationship among performance measures in the selection of diagnostic tests. Arch Pathol Lab Med. 1997; 121: 110-117.
Nicoll D, Detmer WM. Chapter 1: Basic principles and diagnostic test use and interpretation. pages 1 - 16. IN: Nicoll D, McPhee SJ, et al. Pocket Guide to Diagnostic Tests, Second Edition. Appleton & Lange. 1997.
Noe DA. Chapter 3: Diagnostic Classification. pages 27-43. IN: Noe DA, Rock RC (Editors). Laboratory Medicine. Williams and Wilkins. 1994.
Schultz EK. Chapter 14: Analytical goals and clinical interpretation of laboratory procedures, pages 485-507. IN: Burtis C, Ashwood E. Tietz Textbook of Clinical Chemistry, Second edition. W.B. Saunders Company. 1994.
Scott TE. Chapter 2: Decision making in pediatric trauma. pages 20-40. IN: Ford EG, Andrassy RJ. Pediatric Trauma - Initial Assessment and Management. W.B. Saunders. 1994
Suchman AL, Dolan JG. Odds and likelihood ratios. pages 29-34. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.
Weissler AM. Chapter 11: Assessment and use of cardiovascular tests in clinical prediction. pages 400-421. IN: Giuliani ER, Gersh BJ, et al. Mayo Clinic Practice of Cardiology, Third Edition. Mosby. 1996
39.02.02 Odds and Likelihood Ratios      

Overview:

One form of Bayes's theorem is to calculate the post-test odds for a disorder from the pre-test odds and performance characteristics for the test.

odds =

 (probability of disease) / (1 - (probability of disease))

likelihood ratio =

 (probability of a test result in a person with the disease) / (probability of a test result in a person without the disease)

post-test odds =

  (pre-test odds) * (likelihood ratio)

where:

the pre-test odds can be derived from the disease prevalence in the population: (prevalence) / (1 - (prevalence))

likelihood ratios can be expressed in terms of the sensitivity and specificity of the test for the diagnosis

positive likelihood ratio is the likelihood ratio for a positive test result; it is the true-positive rate divided by the false-positive rate, or (sensitivity) / (1 - (specificity))

negative likelihood ratio is the likelihood ratio for a negative test result; it is the false negative rate divided by the true negative rate, or (1 - (sensitivity)) / (specificity)

post-test odds that the person has the disease if there is a positive test result  =

 (pre-test odds) * (positive likelihood ratio)

post-test odds that the person has the disease if there is a negative test result  =

 (pre-test odds) * (negative likelihood ratio)
Calculating Post-Test Odds

Step 1: Calculate the positive and negative likelihood ratios for the test

positive likelihood ratio =

= (sensitivity) / (1 - (specificity))

negative likelihood ratio =

= (1 - (sensitivity)) / (specificity)

Step 2: Convert the prior probability to prior odds:

((prior probability) * 10) : ((1 - (prior probability)) * 10)

Step 3: Multiply the prior odds by the likelihood ratios to obtain the post-test odds

((positive likelihood ratio) * (prior probability) * 10) : ((1 - (prior probability)) * 10)

((negative likelihood ratio) * (prior probability) * 10) : ((1 - (prior probability)) * 10)

Step 4: Convert the post-test odds to post-test probabilities

positive post-test probability =

= ((positive likelihood ratio) * (prior probability) * 10) / (((positive likelihood ratio) * (prior probability) * 10) + ((1 - (prior probability)) * 10))

negative post-test probability =

= ((negative likelihood ratio) * (prior probability) * 10) / (((negative likelihood ratio) * (prior probability) * 10) + ((1 - (prior probability)) * 10))
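Steps 1-4 can be sketched as a single Python helper (the factor of 10 used for display above cancels and is omitted; the sensitivity, specificity, and prior probability are illustrative assumptions):

```python
def post_test_probability(prior_probability, likelihood_ratio):
    # Steps 2-4: probability -> odds, multiply by LR, odds -> probability
    prior_odds = prior_probability / (1.0 - prior_probability)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

sensitivity, specificity = 0.9, 29.0 / 30.0
lr_positive = sensitivity / (1.0 - specificity)        # Step 1: = 27
lr_negative = (1.0 - sensitivity) / specificity

p_pos = post_test_probability(0.1, lr_positive)        # 0.75
p_neg = post_test_probability(0.1, lr_negative)        # ~0.011
```

The results match the Bayes's theorem route: `p_pos` equals the positive predictive value for the same test characteristics.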

References:

Einstein AJ, Bodian CA, Gil J. The relationship among performance measures in the selection of diagnostic tests. Arch Pathol Lab Med. 1997; 121: 110-117.

Noe DA. Chapter 3: Diagnostic Classification. pages 27-43. IN: Noe DA, Rock RC (Editors). Laboratory Medicine. Williams and Wilkins. 1994.

Scott TE. Chapter 2: Decision making in pediatric trauma. pages 20-40. IN: Ford EG, Andrassy RJ. Pediatric Trauma - Initial Assessment and Management. W.B. Saunders. 1994

Suchman AL, Dolan JG. Odds and likelihood ratios. pages 29-34. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.

Weissler AM. Chapter 11: Assessment and use of cardiovascular tests in clinical prediction. pages 400-421. IN: Giuliani ER, Gersh BJ, et al. Mayo Clinic Practice of Cardiology, Third Edition. Mosby. 1996

39.02.03 Odds and Likelihood Ratios for Sequential Testing  

Overview:

If more than one test or finding is used for diagnosis, the final post-test probability can be calculated by combining the likelihood ratio for each test.

post-test odds =

 (pre-test odds) * (likelihood ratio for test 1) * (likelihood ratio for test 2) * .... * (likelihood ratio for test n)

Limitation

For valid results, tests must be conditionally independent of each other; that is, within the diseased and the non-diseased groups, the result of one test is not associated with the result of another.

If conditionally dependent tests are treated as independent, the calculated post-test probability will be overestimated.
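The chained-multiplication formula above can be sketched as follows (likelihood ratios and prior probability are illustrative assumptions):

```python
def sequential_post_test_probability(prior_probability, likelihood_ratios):
    # Assumes the tests are conditionally independent
    odds = prior_probability / (1.0 - prior_probability)
    for lr in likelihood_ratios:
        odds *= lr                       # multiply in each test's LR
    return odds / (1.0 + odds)

# Two tests with LRs of 2 and 3, starting from a 50% prior:
# odds go 1 -> 6, so the post-test probability is 6/7
p = sequential_post_test_probability(0.5, [2.0, 3.0])
```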

References:
Nicoll D, Detmer WM. Chapter 1: Basic principles and diagnostic test use and interpretation. pages 1 - 16. IN: Nicoll D, McPhee SJ, et al. Pocket Guide to Diagnostic Tests, Second Edition. Appleton & Lange. 1997.
Schultz EK. Chapter 14: Analytical goals and clinical interpretation of laboratory procedures, pages 485-507. IN: Burtis C, Ashwood E. Tietz Textbook of Clinical Chemistry, Second edition. W.B. Saunders Company. 1994.
Suchman AL, Dolan JG. Odds and likelihood ratios. pages 29-34. IN: Panzer RJ, Black ER, et al. Diagnostic Strategies for Common Medical Problems. American College of Physicians. 1991.
Weissler AM. Chapter 11: Assessment and use of cardiovascular tests in clinical prediction. pages 400-421. IN: Giuliani ER, Gersh BJ, et al. Mayo Clinic Practice of Cardiology, Third Edition. Mosby. 1996
39.03  Risk Sensitivity and Risk Specificity  
Overview:
Risk sensitivity and risk specificity can be used to evaluate how well a risk factor predicts mortality in a population.
Risk sensitivity is the proportion of people who die during the follow-up period who were identified as high risk.
Risk specificity is the proportion of people who survive during the follow-up period who were identified as low risk.
Patient subgroups
high risk fraction = those with risk factor
low risk fraction = those without risk factor

risk sensitivity in percent =

 (mortality for high risk subgroup in percent) * (percent of population identified as high risk) / (cumulative mortality in percent for the whole population)

risk specificity in percent =

 (survival for low risk subgroup in percent) * (percent of population identified as low risk) / (cumulative survival in percent for the whole population)

percent of population in high risk group =

 100 - (percent of population in low risk group)

cumulative survival of high risk group =

 100 - (cumulative mortality of high risk group)

cumulative survival of low risk group =

 100 - (cumulative mortality of low risk group)

cumulative survival of population =

 100 - (cumulative mortality of population)
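The two formulas above can be sketched in Python against a hypothetical cohort (all cohort figures below are assumptions for illustration):

```python
def risk_sensitivity_pct(high_risk_mortality_pct, pct_high_risk,
                         population_mortality_pct):
    # Fraction of all deaths that occurred in the high-risk group, in percent
    return high_risk_mortality_pct * pct_high_risk / population_mortality_pct

def risk_specificity_pct(low_risk_survival_pct, pct_low_risk,
                         population_survival_pct):
    # Fraction of all survivors that were in the low-risk group, in percent
    return low_risk_survival_pct * pct_low_risk / population_survival_pct

# Hypothetical cohort of 1000: 200 high risk (30% die), 800 low risk (5% die)
# -> 100 deaths overall (10% mortality, 90% survival)
rs = risk_sensitivity_pct(30.0, 20.0, 10.0)    # 60 of 100 deaths were high risk
rsp = risk_specificity_pct(95.0, 80.0, 90.0)   # 760 of 900 survivors were low risk
```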
References:
Weissler AM. Chapter 11: Assessment and use of cardiovascular tests in clinical prediction. pages 400-421. IN: Giuliani ER, Gersh BJ, et al. Mayo Clinic Practice of Cardiology, Third Edition. Mosby. 1996
39.04  Statistics for the Normal Distribution and Use in Quality Control  
39.04.01   Mean of Values in a Normal Distribution  
Overview:
The mean is the arithmetic average of a set of values; for data following a normal (Gaussian) distribution, it estimates the center of the distribution.

mean of values =

 (sum of all values) / (number of values)
References:
Woo J, Henry JB. Chapter 6: Quality management. pages 125-136 (128). IN: Henry JB (editor-in-chief). Clinical Diagnosis and Management by Laboratory Methods, 19th edition. WB Saunders.1996.
39.04.02  Standard Deviation (SD)      
Overview:
The standard deviation is a measure of the dispersion of data about the mean.

standard deviation =

square root of the variance
where
variance =  ((sum of ((each value) - (mean of values))^2) / ((number of values) - 1))
References:
Barnett RN. Clinical Laboratory Statistics, Second Edition. Little, Brown and Company. 1979. page 4
Woo J, Henry JB. Chapter 6: Quality management. pages 125-136 (128). IN: Henry JB (editor-in-chief). Clinical Diagnosis and Management by Laboratory Methods, 19th edition. WB Saunders.1996.
39.04.03  Coefficient of Variation (CV)      
Overview:
The coefficient of variation (CV) expresses the standard deviation as a percentage of the mean, allowing dispersion to be compared across measurements and methodologies.

CV (expressed as a percent) =

((standard deviation) * 100 / (mean))
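The mean, standard deviation, and CV formulas from 39.04.01-39.04.03 can be sketched together (the QC values are made up for illustration):

```python
import math

def mean(values):
    return sum(values) / len(values)

def sample_sd(values):
    # Uses the (n - 1) denominator, as in the variance formula above
    m = mean(values)
    variance = sum((x - m) ** 2 for x in values) / (len(values) - 1)
    return math.sqrt(variance)

def cv_percent(values):
    return sample_sd(values) * 100.0 / mean(values)

control_runs = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical QC results
# mean = 5, SD = sqrt(32/7) ~ 2.14, CV ~ 42.8%
```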
References:
Dharan, Murali. Total Quality Control in the Clinical Laboratory. C.V. Mosby Co. 1977. page 22
39.04.04 Standard Deviation Interval (SDI)    
Overview:
The Standard Deviation Interval gives information about how a given laboratory's mean differs from the mean of a group of comparable laboratories, taking into account the variation among the laboratories. This is also called the Standard Deviation Index. This is a measure of accuracy.

SDI =

 (((mean) - (average of all means)) / (standard deviation of all means))
where:
average of all means = ((sum of all means) / (number of means))
Interpretation
Values > +2.0 or < -2.0 need to be investigated.
References:
College of American Pathologists QAS Program
39.04.05  Coefficient of Variation Interval (CVI)    
Overview:
The CVI is a measure of precision, although its definition varies among sources. It may also be called the Coefficient of Variation Index, or the CVR.

CVI =

 (CV for laboratory) / (pool CV)

  or    

 (CV for laboratory for time period) / (peer group CV for time period)
Interpretation:
Values > +2.0 or < -2.0 need to be investigated.
References:
The Interlaboratory Quality Assurance Program. Coulter Diagnostics. 1988.
39.05 Total Allowable Error      
Overview:
Analysis of the total allowable error (TEa) can help a laboratory meet its goals for precision performance.

Variables

 laboratory mean =

 mean at laboratory for period of stability in reagents & controls

"true" mean =

 mean for all methods & laboratories

laboratory standard deviation =

 standard deviation noted at laboratory

method standard deviation =

 standard deviation reported by vendor
CLIA limit

given as a range, either as a percent or as an absolute value

if both are specified, use whichever is greater

Calculations

calculated bias =

 laboratory's deviation (based on site and method) from mean of all sites =
((laboratory mean) - (true mean))

laboratory imprecision  =

 (factor) * (laboratory standard deviation)
where
factor is 1.96 for 95%
factor is 2.50 for 99%
Total allowable error = TEa =  ((CLIA limit) * (true mean))

bias as a fraction of the total allowable error =

 ((calculated bias) / ((CLIA limit) * (true mean))) =
 (((laboratory mean) - (true mean)) / (total allowable error)) =
 (((laboratory mean) - (true mean)) / ((CLIA limit) * (true mean)))

total error =

 (calculated bias) + (laboratory imprecision) =
 (((laboratory mean) - (true mean)) + (laboratory imprecision)) =
 (((laboratory mean) - (true mean)) + ((factor) * (laboratory standard deviation)))

assessment of performance =

 (total error) / (total allowable error) * 100 =
(((laboratory mean) - (true mean))+ (laboratory imprecision)) / (((CLIA limit) * (true mean)))* 100 =
(((laboratory mean) - (true mean))+ (1.96 * (laboratory standard deviation))) / (((CLIA limit) * (true mean))) * 100

systematic error (critical) = SEc =

( ( ( (total allowable error) - (calculated bias) ) / (laboratory standard deviation) ) - 1.65) =
( ( ( ((CLIA limit) * (true mean)) - ((laboratory mean) - (true mean))) / (laboratory standard deviation)) - 1.65)
Use the critical systematic error when selecting which QC control rules to apply.

standard deviation to use = ((laboratory standard deviation) * ((denominator of primary rule) / 2))

Example: If using 1:3s rule, where the denominator = 3

standard deviation to use =

 (laboratory standard deviation) * (3 / 2) = 1.5 * (laboratory standard deviation)  
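The TEa calculations above can be sketched in Python; the lab mean, true mean, SD, and CLIA limit below are illustrative assumptions:

```python
def total_allowable_error(clia_limit, true_mean):
    # CLIA limit expressed as a fraction (e.g. 0.10 for 10%)
    return clia_limit * true_mean

def total_error(lab_mean, true_mean, lab_sd, factor=1.96):
    # bias plus imprecision; factor 1.96 corresponds to 95%
    bias = lab_mean - true_mean
    return bias + factor * lab_sd

def critical_systematic_error(tea, bias, lab_sd):
    return (tea - bias) / lab_sd - 1.65

# Hypothetical analyte: true mean 100, CLIA limit 10%, lab mean 102, lab SD 2
tea = total_allowable_error(0.10, 100.0)            # 10.0
te = total_error(102.0, 100.0, 2.0)                 # 2 + 1.96*2 = 5.92
performance = te / tea * 100.0                      # 59.2% of TEa consumed
sec = critical_systematic_error(tea, 2.0, 2.0)      # (10-2)/2 - 1.65 = 2.35
```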
References
Blanchard J-M, O'Grady M. Application of the Westgard quality control selection grids (QCSG) to the Kodak Ektachem 700 analyzer. Abstract presented at the 43rd AACC National Meeting, Washington, DC. July 30-August 1, 1991.
 
Westgard JO, Bawa N, et al. Laboratory precision performance. Arch Pathol Lab Med. 1996; 120: 621-625.
 
Westgard JO. Error budgets for quality management: Practical tools for planning and assuring the analytical quality of laboratory testing processes. Clinical Laboratory Management Review. July/August, 1996. pages 377-403.
 
Westgard JO. Chapter 150: Planning statistical quality control procedures. pages 1191-1200. IN: Rose NR, de Macario EC, et al (editors). Manual of Clinical Laboratory Immunology, Fifth Edition. ASM Press. 1997.
39.06  Chi Square        
 
39.06.01 Outcome Comparison of Two Groups    
Overview:
When 2 different groups receive different treatments, the number in each group improved and not improved can be compared as follows:

                  Group A                   Group B                   Total
Improved          group A improved          group B improved          total improved
Not improved      group A not improved      group B not improved      total not improved
Total             total group A             total group B             total patients

 

This table has 1 degree of freedom.

chi square value using Yates correction for 1 degree of freedom =

 ((total number) * ((ABS(((number of group A improved) * (number group B not improved)) - ((number of group A not improved) *  (number of group B improved))) - ((total number) / 2))^2)) / (((number of group A improved) + (number of group B improved)) * ((number of group A not improved) + (number of group B not improved)) * (total number of group A) * (total number of group B))
From the chi-square value, the probability that a difference is due to chance can be calculated. The Excel function CHIDIST will give the probability of the difference being due to chance for the chi-square value. The probability that the difference is not due to chance is then (1 - (probability due to chance)).
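A minimal Python sketch of the Yates-corrected calculation (function and argument names are illustrative):

```python
def yates_chi_square(a_improved, a_not, b_improved, b_not):
    # chi-square with Yates continuity correction for a 2x2 table (1 df)
    n = a_improved + a_not + b_improved + b_not
    cross = abs(a_improved * b_not - a_not * b_improved)
    numerator = n * (cross - n / 2) ** 2
    denominator = ((a_improved + b_improved) * (a_not + b_not)
                   * (a_improved + a_not) * (b_improved + b_not))
    return numerator / denominator
```

For example, with 10 of 20 improved in group A and 15 of 20 in group B, the corrected chi-square is about 1.71, below the 3.84 needed for significance at the 0.05 level with 1 degree of freedom.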
TOP References:      
Barnett RN. Clinical Laboratory Statistics, 2nd edition. Little, Brown and Company. 1979. pages 26-29
Beyer WH. CRC Standard Mathematical Tables, 25th edition. CRC Press. 1978. page 537
Keeping ES. Introduction to Statistical Inference. Dover Publications. 1995 printing of 1962 work. pages 314-322
39.06.02 Comparison of Two Observers      
Overview:
When 2 observers tally data from the same material, it is useful to see whether the differences in their tabulations are due to chance or to observer variation.
For more than 2 observations:

chi square =

(summation from 1 to number of observations ( (((observer A value) - (observer B value)) ^ 2) / ((observer A value) + (observer B value)) ) )
From this value, the probability that the differences between the 2 observers are due to chance can be calculated. The equation simplifies from an integral in different ways depending on whether the number of degrees of freedom is even or odd.
Even Degrees of Freedom
For even degrees of freedom, this is relatively simple.
probability due to chance (chisquare, degrees of freedom) =  ((e) ^ ((-1) * (chisquare) / 2)) * (summation of i from 0 to I of ( (((chisquare) / 2) ^ (i)) / (factorial (i)) ) )
where:
I = (1/2 * ((degree of freedom) - 2))
 
2 degrees of freedom
probability = e^((-1) * (chisquare) / 2)
 
4 degrees of freedom
probability = (e^((-1) * (chisquare) / 2)) * (1 + ((chisquare) / 2) )
 
6 degrees of freedom
probability = (e^((-1) * (chisquare) / 2)) * (1 + ((chisquare) / 2) + (((chisquare) ^ 2) / 8))
 
8 degrees of freedom
probability = (e^((-1) * (chisquare) / 2)) * (1 + ((chisquare) / 2) + (((chisquare) ^ 2) / 8) + (((chisquare) ^ 3) / 48))
 
10 degrees of freedom
probability = (e^((-1) * (chisquare) / 2)) * (1 + ((chisquare) / 2) + (((chisquare) ^ 2) / 8) + (((chisquare) ^ 3) / 48) + (((chisquare) ^ 4) / 384))
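For even degrees of freedom the series is easy to evaluate directly; a Python sketch (function name illustrative):

```python
import math

def chi_square_p_even_df(chisquare, df):
    # upper-tail probability for an even number of degrees of freedom:
    # P = e^(-x/2) * sum over i from 0 to I of (x/2)^i / i!, with I = (df - 2)/2
    if df % 2 != 0 or df < 2:
        raise ValueError("only even degrees of freedom >= 2")
    upper_i = (df - 2) // 2
    half = chisquare / 2
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(upper_i + 1))
```

For chi-square = 5.991 with 2 degrees of freedom this returns approximately 0.05, matching standard tables.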

Odd Degrees of Freedom

For odd degrees of freedom, this is quite complex, and it is easier to use the Excel function CHIDIST.

probability due to chance (chisquare, degree of freedom) =

 1 - ( (1 / (gamma function (I + 1))) * (summation of i from 0 to infinity of ( ((-1) ^ i) * (((chisquare) / 2) ^ (I + i + 1)) / ((factorial (i)) * (I + i + 1)) ) ) )
where
I = (1/2 * ((degree of freedom) - 2))
TOP References:      
Barnett RN. Clinical Laboratory Statistics, 2nd edition. Little, Brown and Company. 1979. pages 26-29
Beyer WH. CRC Standard Mathematical Tables, 25th edition. CRC Press. 1978. page 537
Keeping ES. Introduction to Statistical Inference. Dover Publications. 1995 printing of 1962 work. pages 314-322
39.07  Test Comparison Using Receiver Operating Characteristics (ROC) Plots  
Overview:
The receiver operating characteristic (ROC) curve originated during World War II with the use of radar in signal detection. The approach was later extended to diagnostic tests for identifying disease states, using plots of sensitivity versus specificity across the range of possible test results. The area under a ROC curve serves as a measure of the diagnostic accuracy (discrimination performance) of a test.

Receiver Operating Curve

To generate a receiver operating curve it is first necessary to determine the sensitivity and specificity for each test result in the diagnosis of the disorder in question.
The x axis ranges from 0 to 1, or 0% to 100%, and can be either the
false positive rate (1 - (specificity)), or
true negative rate (specificity)
The false positive rate is the one typically used.
The y axis ranges from 0 to 1, or 0% to 100%, and is the
true positive rate (sensitivity)
When the x-axis is the false positive rate (1 - (specificity)), the curve starts at (0, 0) and rises towards (1, 1). When the x-axis is the true negative rate (specificity), the curve starts at (0, 1) and drops towards (1, 0). The endpoints of the curve run to these corner points.
 

Area under Curve

One way of measuring the area under a curve is by measuring subcomponent trapezoids. Data points can be connected by straight lines defined by:

 

y = ((slope) * x) + intercept

The area under each line can be determined by integration of (y * dx) over the interval of x1 to x2:
area = (((slope) / 2) * ((x2 ^ 2) - (x1 ^2))) + ((intercept) * (x2 - x1))
 
By summing the areas under each segment, an approximation of the area under the entire curve is obtained. However, the trapezoidal method tends to underestimate areas (Hanley, 1983), so other techniques for measuring area should be used if greater accuracy is required.
The maximum area under ROC curve is 1 and is seen with the ideal test. The closer the area under the ROC curve is to 1, the better (more accurate) the test.
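The trapezoid summation can be sketched in Python; each point is a (false positive rate, true positive rate) pair, and the function name is illustrative. Note that integrating the line segment (slope and intercept form) between two points reduces exactly to the trapezoid rule:

```python
def roc_auc_trapezoid(points):
    # area under a ROC curve by summing trapezoids between adjacent points;
    # each point is (false positive rate, true positive rate)
    pts = sorted(points)
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        # integral of the connecting line over [x1, x2] = (x2 - x1)*(y1 + y2)/2
        area += (x2 - x1) * (y1 + y2) / 2
    return area
```

An ideal test, passing through (0, 1), gives an area of 1.0; a diagonal (uninformative) ROC curve gives 0.5.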
Comparison of Two Methods
Two methods can be compared by the area under their respective ROC curves. The method with the larger area under the ROC curve is preferable over one with a smaller area, allowing for variability, as being more accurate.
TOP References:      
Bamber D. The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. J Math Psych. 1975; 12: 387-415.
Beck JR, Shultz EK. The use of relative operating characteristic (ROC) curves in test performance evaluation. Arch Pathol Lab Med. 1986; 110: 13-20.
Dorfman DD. Maximum-likelihood estimation of parameters of signal-detection theory and determination of confidence intervals - rating method data. J Math Psychol. 1969; 6: 487-496.
Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982; 143: 29-36.
Hanley JA, McNeil BJ. A method of comparing the areas under receiver operating characteristics curves derived from the same cases. Radiology. 1983; 148: 839-843.
Henderson AR. Assessing test accuracy and its clinical consequences: a primer for receiver operating characteristic curve analysis. Ann Clin Biochem. 1993; 30: 521-539.
Henderson AR, Bhayana V. A modest proposal for the consistent presentation of ROC plots in Clinical Chemistry (Letter to the Editor). Clin Chem. 1995; 41: 1205-1206.
Lett RR, Hanley JA, Smith JS. The comparison of injury severity instrument performance using likelihood ratio and ROC curve analysis. J Trauma. 1995; 38: 142-148.
Pellar TG, Leung FY, Henderson AR. A computer program for rapid generation of receiver operating characteristic curves and likelihood ratios in the evaluation of diagnostic tests. Ann Clin Biochem. 1988; 25: 411-416.
Pritchard ML, Woosley JT. Comparison of two prognostic models predicting survival in patients with malignant melanoma. Hum Pathol. 1995; 26: 1028-1031.
Raab SS, Thomas PA, et al. Pathology and probability: likelihood ratios and receiver operating characteristic curves in the interpretation of bronchial brush specimens. Am J Clin Pathol. 1995; 103: 588-593.
Schoonjans F, Depuydt C, Comhaire F. Presentation of receiver-operating characteristic (ROC) plots (Letter to the Editor). Clin Chem. 1996; 42: 986-987.
Shultz EK. Multivariate receiver-operating characteristic curve analysis: Prostate cancer screening as an example. Clin Chem. 1995; 41: 1248-1255.
Vida S. A computer program for non-parametric receiver operating characteristic analysis. Comput Meth Prog Biomed. 1993; 40: 95-101.
Zweig MH. Evaluation of the clinical accuracy of laboratory tests. Arch Pathol Lab Med. 1988; 112: 383-386.
Zweig MH, Campbell G. Receiver-operating characteristic (ROC) plots: A fundamental evaluation tool in clinical medicine. Clin Chem. 1993; 39: 561-577.
Zweig MH, Ashwood ER, et al. Assessment of the clinical accuracy of laboratory tests using receiver operating characteristics (ROC) plots: Approved guideline. NCCLS. 1995; 15 (19).

 

39.08  Z-score        
Overview:
The Z-score can be used to put a patient result in perspective relative to reference values from a control population. It gives the number of standard deviations that a given value lies from the reference population mean.

Z-score =

((patient value) - (mean for reference population)) / (standard deviation for reference population)
This shares features with the Standard Deviation Interval (SDI).
TOP References      
Withold W, Schulte U, Reinauer H. Methods for determination of bone alkaline phosphatase activity: analytical performance and clinical usefulness in patients with metabolic and malignant bone diseases. Clin Chem. 1996; 42: 210-217.
39.09 Westgard Rules and the Multirule Shewhart Procedure  
 
39.09.01  Westgard Control Rules      
Overview:
Westgard et al have proposed a series of multiple rules (multirules) for interpreting quality control data. The rules are sensitive to random and systematic errors, and they are selected to keep the probability of false rejection low.
Procedure

1. Starting with a stable testing system and a stable control material, the control material is analyzed on at least 20 different days. These data are used to calculate a mean and standard deviation for the control material.

2. Usually 2 control materials are analyzed (one with a low value, one with a higher value in the analytical range). Sometimes 3 or more control materials may be used, and rarely only 1.

3. The controls are included with each analytical run of the test system.

4. A Levey-Jennings control chart is prepared to graphically represent the data for each control relative to the mean and multiples of the standard deviation.

5. With each analytical run, the pattern of the current and previous control results are analyzed using all of the selected Westgard control rules.

6. If none of the rules fail, then the run is accepted. If one or more rules fail, then different responses may occur. This may include rejecting the run, adjusting the stated mean, and/or recalibrating the test.

 

Westgard Control Rule    Definition
1:2S    control result is outside +/- 2 standard deviations of the mean
1:3S    control result is outside +/- 3 standard deviations of the mean
2:2S    2 consecutive control results are more than 2 standard deviations from the mean on the same side
R:4S    either (a) one control is more than 2 SD above the mean and the other is more than 2 SD below the mean; or (b) the range between 2 controls exceeds 4 SD
4:1S    the last 4 consecutive control results all exceed either 1 SD above the mean or 1 SD below the mean
10:X    the last 10 consecutive control results all lie on the same side of the mean

 

Rule Failure    Detects Systematic Error    Detects Random Error
1:3S            yes                         yes
2:2S            yes                         no
R:4S            no                          yes
4:1S            yes                         no
10:X            yes                         no

 

TOP References:      

Lott JA. Chapter 18: Process control and method evaluation. pages 293-325 (Figure 18-4, page 302). IN: Snyder JR, Wilkinson DS. Management in Laboratory Medicine, Third Edition. Lippincott. 1998.

Westgard JO. Chapter 150: Planning statistical quality control procedures. pages 1191-1200. IN: Rose NR, de Macario EC, et al (editors). Manual of Clinical Laboratory Immunology, Fifth Edition. ASM Press. 1997.

Westgard JO, Klee GG. Chapter 17: Quality management. pages 384-418. IN: Burtis CA, Ashwood ER. Tietz Textbook of Clinical Chemistry, Third Edition. WB Saunders Company. 1999 (1998).

 

39.09.02  Using a Series of Control Rules in the Multirule Shewhart Procedure  
Overview:
Westgard et al have developed a series of rules for evaluating controls which can be used to judge if the data from an analysis is acceptable. The results of the rule analysis can be employed sequentially in a multirule Shewhart procedure to determine whether to accept or reject an analytic run.

Westgard Rule Failed?    Yes                           No
1:2S                     go to next rule               in control, accept run
1:3S                     out of control, reject run    go to next rule
2:2S                     out of control, reject run    go to next rule
R:4S                     out of control, reject run    go to next rule
4:1S                     out of control, reject run    go to next rule
10:X                     out of control, reject run    in control, accept run
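The rule sequence can be sketched in Python for a single control material, with results expressed in standard deviations from the mean (most recent last). This is an illustrative simplification; real implementations track multiple control materials and across-run patterns:

```python
def multirule_shewhart(z_scores):
    # z_scores: control results in SD units from the mean, most recent last
    z = z_scores[-1]
    if abs(z) <= 2:
        return "accept"                     # 1:2S warning rule not violated
    if abs(z) > 3:
        return "reject"                     # 1:3S
    prev = z_scores[-2] if len(z_scores) >= 2 else 0.0
    if abs(prev) > 2 and prev * z > 0:
        return "reject"                     # 2:2S (same side of the mean)
    if len(z_scores) >= 2 and abs(z - z_scores[-2]) > 4:
        return "reject"                     # R:4S (range exceeds 4 SD)
    last4 = z_scores[-4:]
    if len(last4) == 4 and (all(v > 1 for v in last4)
                            or all(v < -1 for v in last4)):
        return "reject"                     # 4:1S
    last10 = z_scores[-10:]
    if len(last10) == 10 and (all(v > 0 for v in last10)
                              or all(v < 0 for v in last10)):
        return "reject"                     # 10:X
    return "accept"
```

A run whose latest control is within 2 SD is accepted outright; a 1:2S warning with no confirming rule failure is also accepted, matching the table above.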

 

TOP References:      

Lott JA. Chapter 18: Process control and method evaluation. pages 293-325 (Figure 18-4, page 302). IN: Snyder JR, Wilkinson DS. Management in Laboratory Medicine, Third Edition. Lippincott. 1998.

Westgard JO, Barry PL, Hunt MR. A multi-rule Shewhart chart for quality control in clinical chemistry. Clin Chem. 1981; 27: 493-501.

39.10  Evaluating Reports in the Medical Literature    
 
39.10.01 Criteria for Assessing the Methodologic Quality of Clinical Studies  
Overview:
The methodologic quality of a clinical study or trial can be evaluated by examining its design and implementation. A score based on the key parameters can be used to evaluate the study and to compare it with other similar studies.
Parameters
(1) randomization
(2) blinding
(3) analysis
(4) patient selection
(5) comparability of groups at baseline
(6) extent of follow-up
(7) description of treatment protocol
(8) cointerventions
(9) description of outcomes

Parameter                              Finding                                Points
randomization                          not concealed or not sure              1
                                       concealed randomization                2
blinding                               not blinded                            0
                                       adjudicators blinded                   2
analysis                               other                                  0
                                       intention to treat                     2
patient selection                      selected patients or unable to tell    0
                                       consecutive eligible patients          1
comparability of groups at baseline    no or not sure                         0
                                       yes                                    1
extent of follow-up                    < 100%                                 0
                                       100%                                   1
treatment protocol                     poorly described                       0
                                       reproducibly described                 1
cointerventions (extent to which       not described                          0
interventions applied equally          described but not equal or not sure    1
across groups)                         well described and all equal           2
outcomes                               not described                          0
                                       partially described                    1
                                       objectively defined                    2

 

Interpretation

minimum score: 0

maximum score: 14

The higher the score, the higher the quality in the study design and implementation.

TOP References:      

Heyland DK, Cook D, et al. Maximizing oxygen delivery in critically ill patients: a methodologic appraisal of the evidence. Crit Care Med. 1996; 24: 517-524.

Heyland DK, MacDonald S, et al. Total parenteral nutrition in the critically ill patient. JAMA. 1998; 280: 2013-2019.

39.11 Measures of the Consequences of Treatment    
 
39.11.01 Number Needed to Treat  
Overview:
The number needed to treat is a simple method of looking at the benefit of a treatment intervention to prevent a condition or complication. It is the inverse of the absolute risk reduction for the treated versus untreated control populations. It can be used to extrapolate findings in the literature to a given patient at an arbitrary specified baseline risk when the relative risk reduction associated with treatment is constant for all levels of risk.

Variables:

number of people in control group

number of people in control group who develop condition of interest during time interval

number of people in active treatment group

number of people in active treatment group who develop condition of interest during time interval

event rate in control group = 

 (number of people in control group with condition) / (number of people in control group)

event rate in active treatment group =

 (number of people in active treatment group with condition) / (number of people in active treatment group)

relative risk reduction =

 ((event rate in control group) - (event rate in active treatment group)) / (event rate in control group)

absolute risk reduction =

 (event rate in control group) - (event rate in active treatment group)

number needed to treat =

 1 / (absolute risk reduction) =
 1 / ((event rate in control group) - (event rate in active treatment group))
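The chain of definitions above, sketched in Python (names illustrative):

```python
def number_needed_to_treat(control_events, control_n, treated_events, treated_n):
    control_rate = control_events / control_n      # event rate, control group
    treated_rate = treated_events / treated_n      # event rate, treated group
    absolute_risk_reduction = control_rate - treated_rate
    relative_risk_reduction = absolute_risk_reduction / control_rate
    nnt = 1 / absolute_risk_reduction
    return absolute_risk_reduction, relative_risk_reduction, nnt
```

If 20 of 100 control patients and 10 of 100 treated patients develop the condition, the absolute risk reduction is 0.10 and 10 patients must be treated to prevent one event.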

Interpretation

The number needed to treat indicates the number of patients who need to be treated to prevent the condition of interest during the time interval.

The smaller the number needed to treat, the greater the benefit of the treatment to prevent the condition.

The number needed to treat should be considered together with other factors such as the seriousness of the condition to be prevented and the risk of adverse side effects from the treatment.

TOP References:      

Altman DG. Confidence intervals for the number needed to treat. BMJ. 1998; 317: 1309-1312.

Cook RJ, Sackett DL. The number needed to treat: a clinically useful measure of treatment effect. BMJ. 1995; 310: 452-454.

Laupacis A, Sackett DL, Roberts RS. An assessment of clinically useful measures of the consequences of treatment. N Engl J Med. 1988; 318: 1728-1733.

39.12 The Corrected Risk Ratio and Estimating Relative Risk  
Overview:
The corrected risk ratio can be used to derive an estimate of an association or treatment effect that better represents the true relative risk.

Odds ratio and relative risk (see Figure on page 1690 of Zhang and Yu)

If the incidence of an outcome in the study population is < 10%, then the odds ratio is close to the risk ratio.

As the incidence of the outcome increases, the odds ratio overestimates the relative risk if the relative risk is more than 1, and underestimates the relative risk if it is less than 1.

Situations when desirable to perform correction

if the incidence of the outcome in the nonexposed population is more than 10%, AND

if the odds ratio is > 2.5 or < 0.5

 

incidence of outcome in nonexposed group = N =

 (number with outcome in nonexposed group) / (number in nonexposed group)

incidence of outcome in exposed group = E =

 (number with outcome in exposed group) / (number in exposed group)

risk ratio =

 E / N

odds ratio =

 (E / (1 - E)) / (N / (1 - N))

E / N =

 (odds ratio) / [(1 - N) + (N * (odds ratio))]

corrected risk ratio =

 (odds ratio) / [(1 - N) + (N * (odds ratio))]
This equation can be used to correct the adjusted odds ratio obtained from logistic regression.
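A one-line Python sketch of the Zhang-Yu correction (argument names illustrative; nonexposed_incidence is N above):

```python
def corrected_risk_ratio(odds_ratio, nonexposed_incidence):
    # corrected RR = OR / ((1 - N) + (N * OR))
    return odds_ratio / ((1 - nonexposed_incidence)
                         + nonexposed_incidence * odds_ratio)
```

With an odds ratio of 3.0 and a 20% incidence in the nonexposed group, the corrected risk ratio is about 2.14, illustrating how the odds ratio overstates a relative risk above 1.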
TOP References      

Wacholder S. Binomial regression in GLIM: Estimating risk ratios and risk differences. Am J Epidemiol. 1986; 123: 174-184.

Zhang J, Yu KF. What's the relative risk? A method for correcting the odds ratio in cohort studies of common outcomes. JAMA. 1998; 280: 1690-1691.

39.13  Confidence Intervals      
 
39.13.01  Confidence Interval for a Single Mean  
Overview:
The confidence interval for a series of findings can be calculated from the number of values, the mean, standard deviation and standard statistical tables.
Data assumptions: single mean, symmetrical distribution

confidence interval =

 (mean) +/- ((one-tailed value of Student's t distribution) * (standard deviation) / ((number of values) ^ (0.5)))

where:

for a 95% confidence interval, the one-tailed value is for 2.5% (F 0.975, t 0.025)

degrees of freedom = (number of values) - 1

as the number of values increases, the one-tailed value for t = 0.025 approaches 1.96; at 120 degrees of freedom it is 1.98
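A Python sketch of the interval; since the standard library has no Student's t quantile function, the t value is passed in (1.96 is the large-sample value for 95% confidence):

```python
import math

def mean_confidence_interval(mean, sd, n, t_value=1.96):
    # CI = mean +/- t * SD / sqrt(n); t depends on n - 1 degrees of freedom
    half_width = t_value * sd / math.sqrt(n)
    return mean - half_width, mean + half_width
```

For a mean of 100 with SD 10 from 25 values, the approximate 95% interval is (96.08, 103.92).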

TOP References:      

Beyer WH. CRC Standard Mathematical Tables, 25th Edition. CRC Press. 1978. Section: Probability and Statistics. Percentage points, Student's t-distribution. page 536.

Young KD. Lewis RJ. What is confidence? Part 2: Detailed definition and determination of confidence intervals. Ann Emerg Med. 1997; 30: 311-318.

39.13.02 Confidence Interval for the Difference Between Two Means  
Overview:
The confidence interval for the observed difference in the means for two sets of data can be calculated from standard statistical tables and data characteristics (number of values, mean, standard deviation) for the two data sets.
Data assumptions: 2 sets of data with symmetrical distribution

confidence interval for the difference in the means between 2 sets of data =

 ABS((mean first group) - (mean second group)) +/- (factor)

factor =

 (one-sided value of Student's t-distribution) * (pooled standard deviation) * (((1 / (number in first set)) + (1 / (number in second set))) ^ (0.5))

degrees of freedom =

 (number in first set) + (number in second set) - 2

pooled standard deviation =

 ((A + B) / (degrees of freedom)) ^ (0.5)
A = ((number in first set) - 1) * ((standard deviation of first set) ^ 2)
B = ((number in second set) - 1) * ((standard deviation of second set) ^ 2)

where:

for a 95% confidence interval, the one-tailed value is for 2.5% (F 0.975, t 0.025)

as the number of values increases, the one-tailed value for t = 0.025 approaches 1.96; at 120 degrees of freedom it is 1.98

TOP References:      
Beyer WH. CRC Standard Mathematical Tables, 25th Edition. CRC Press. 1978. Section: Probability and Statistics. Percentage points, Student's t-distribution. page 536.

Young KD. Lewis RJ. What is confidence? Part 2: Detailed definition and determination of confidence intervals. Ann Emerg Med. 1997; 30: 311-318.

39.13.03  Confidence Interval for a Single Proportion    
Overview:
When a certain event occurs several times in a series of observations, then its proportion and confidence interval can be calculated.

Variables

N observations

X events of interest

Distribution used

F distribution, with F = 0.975 for the 95% confidence interval

uses m and n as degrees of freedom

proportion of events =

 X / N

lower limit for the 95% confidence interval =

 X / (X + ((N - X + 1) * (F distribution for m and n)))

where

m = 2 * (N - X + 1)

n = 2 * X

upper limit for the 95% confidence interval =

 ((X + 1) * (F distribution for m and n)) / (N - X + ((X + 1) * (F distribution for m and n)))

where

m = 2 * (X + 1) = n + 2

n = 2 * (N - X) = m - 2

TOP References      

Beyer WH. CRC Standard Mathematical Tables, 25th Edition. CRC Press. 1978. Section: Probability and Statistics. F-distribution. page 540.

Young KD. Lewis RJ. What is confidence? Part 2: Detailed definition and determination of confidence intervals. Ann Emerg Med. 1997; 30: 311-318.

39.13.04 Confidence Interval When the Proportion in N Observations is 0 or 1  
Overview:
If either 0 or n events occur in n observations, then the limits of the confidence interval can be calculated based on the confidence interval and the number of observations.

X =

 1 - ((confidence interval in percent) / 100)

If 0 events occur in n observations

lower limit for the confidence interval: 0

upper limit for the confidence interval: 1 - ((X/2) ^ (1/n))

 

If n events occur in n observations

lower limit for the confidence interval: (X/2) ^ (1/n)

upper limit for the confidence interval: 1 (100%)
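These limits are simple enough to compute directly; a Python sketch (names illustrative):

```python
def ci_limit_zero_or_all(n, confidence_percent=95):
    # X = 1 - (confidence in percent / 100); the nontrivial limit is (X/2)^(1/n)
    x = 1 - confidence_percent / 100
    boundary = (x / 2) ** (1 / n)
    upper_when_zero_events = 1 - boundary   # the other limit is 0
    lower_when_all_events = boundary        # the other limit is 1
    return upper_when_zero_events, lower_when_all_events
```

For 0 events in 30 observations at 95% confidence, the upper limit is about 0.116 (compare the rough "rule of three" estimate of 3/n = 0.10).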

TOP References      
Young KD. Lewis RJ. What is confidence? Part 2: Detailed definition and determination of confidence intervals. Ann Emerg Med. 1997; 30: 311-318.
39.13.05 Confidence Interval for the Difference Between Two Proportions Based on the Odds Ratio  
Overview:
When comparing two populations for an event, the odds ratio and 95% confidence intervals can be calculated from looking at the number in each group positive and negative for the event.

 

            Group 1    Group 2
Negative    A          B
Positive    C          D

 

(Table page 316, Young 1997)

odds for the event in group 1 =

 C / A

odds for the event in group 2 =

 D / B

odds ratio for group 2 relative to group 1 =

 (odds group 2) / (odds group 1) =
 (A * D) / (B * C)

confidence interval for 95% =

 EXP( X +/- Y)

X = 

 LN ((A * D) / (B * C))

Y =

 1.96 * SQRT((1/A) + (1/B) + (1/C) + (1/D))

where:

1.96 is the value for Z from the standard normal distribution with F(Z) = 0.975

If the odds ratio is 1.0, then there is no difference between the two groups. If the 2 groups are comparing an intervention, then this is equivalent to a null hypothesis of no intervention difference.

Small Sample Sizes

If sample sizes are small (less than 10 or 20), then 0.5 is added to each cell count.

odds ratio =

 (odds group 2) / (odds group 1) =
 ((A+0.5) * (D+0.5)) / ((B+0.5) * (C+0.5))

confidence interval for 95% =

 EXP( X +/- Y)

X =

 LN (((A+0.5) * (D+0.5)) / ((B+0.5) * (C+0.5)))

Y =

 1.96 * SQRT((1/ (A+0.5)) + (1/ (B+0.5)) + (1/ (C+0.5)) + (1/ (D+0.5)))
NOTE: I am using sample size as (A + B + C + D).
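A Python sketch combining both forms (cell labels follow the table above; small_sample applies the 0.5 correction):

```python
import math

def odds_ratio_ci95(a, b, c, d, small_sample=False):
    # A, B = negative counts; C, D = positive counts in groups 1 and 2
    if small_sample:
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    odds_ratio = (a * d) / (b * c)
    x = math.log(odds_ratio)
    y = 1.96 * math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return odds_ratio, math.exp(x - y), math.exp(x + y)
```

An interval that includes 1.0 is consistent with no difference between the groups.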
TOP References      

Beyer WH. CRC Standard Mathematical Tables, 25th Edition. CRC Press. 1978. Section: Probability and Statistics. F-distribution. page 524.

Young KD. Lewis RJ. What is confidence? Part 2: Detailed definition and determination of confidence intervals. Ann Emerg Med. 1997; 30: 311-318.

39.13.06 Confidence Interval for the Difference Between Two Proportions Using the Normal Approximation  
Overview:
If the events for two proportions are normally distributed, then the confidence interval for the difference between the two proportions can be calculated using the normal approximation.

Requirements

(1) events occur with a normal distribution

(2) populations and events are sufficiently large

(3) the proportions for the 2 populations are not too close to 0 or 1

 

                           Population 1    Population 2
total number               N1              N2
number showing response    R1              R2

 

proportion responding  in population 1 = P1 =

 (R1) / (N1)

proportion responding  in population 2 = P2 =

 (R2) / (N2)

confidence interval =

 (P1 - P2) +/- ((one-tailed value of the standard normal distribution) * (SQRT (((P1 * (1 - P1)) / N1) + ((P2 * (1 - P2)) / N2))))
where:
The one tailed values for standard normal distributions with two-tailed confidence intervals, assuming an infinite degree of freedom:

Confidence Interval    One-Tailed Value
80%                    1.282
90%                    1.645
95%                    1.960
98%                    2.326
99%                    2.576
99.8%                  3.090

 

Interpretation
If the confidence interval includes 0, then the data shows no statistically significant difference between the 2 proportions.
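A Python sketch of the normal approximation (the z value defaults to 1.960 for a 95% interval):

```python
import math

def proportion_difference_ci(r1, n1, r2, n2, z=1.960):
    # CI for P1 - P2 using the normal approximation
    p1, p2 = r1 / n1, r2 / n2
    standard_error = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    difference = p1 - p2
    return difference - z * standard_error, difference + z * standard_error
```

With 30/100 responders in population 1 versus 20/100 in population 2, the 95% interval is roughly (-0.019, 0.219); because it includes 0, the difference is not statistically significant.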
TOP References      

Beyer WH. CRC Standard Mathematical Tables, 25th Edition. CRC Press. 1978. page 524.

Young KD. Lewis RJ. What is confidence? Part 2: Detailed definition and determination of confidence intervals. Ann Emerg Med. 1997; 30: 311-318.

39.14  Odds and Percentages  
Overview:
The rate of occurrence for a condition can be expressed as the odds or the percentage of a population involved.

total population =

 (number affected) + (number unaffected)

odds denominator =

 (total population) / (number affected) =
 1 + ((number unaffected) / (number affected))

odds =

 1 in (odds denominator)
 (number affected) to (number unaffected)

percent affected =

 (number affected) / (total population) * 100% =
 1 / (odds denominator)
TOP References      
Harper PS. Practical Genetic Counselling, Fifth Edition. Butterworth Heinemann. 1999. Table 1.1, page 10.
39.15  Benefit, Risk and Threshold for an Action  
 
39.15.01  Benefit-to-Risk Ratio and Treatment Threshold for Using a Treatment Strategy  
Overview:
Each treatment strategy has potential risks and benefits. The treatment threshold uses a treatment's benefit and risk when used for a given condition to help decide if and when to treat.

benefit for treatment =

 (risk of adverse outcome from the disease in those untreated) - (risk of adverse outcome from the disease with treatment)

risk of treatment =

 (risk of significant adverse complication due to treatment)

benefit-to-risk ratio =

 (benefit for treatment) / (risk for treatment)

treatment threshold =

 1 / ((benefit-to-risk ratio) + 1) =
 (risk) / ((benefit) + (risk))

Interpretation

Treatment should be given when the risk of having the condition exceeds the treatment threshold.

Treatment should be withheld if the risk of having the condition is less than the treatment threshold.
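A small Python sketch of the threshold calculation (names illustrative):

```python
def treatment_threshold(benefit, risk):
    # threshold probability of disease = risk / (benefit + risk)
    return risk / (benefit + risk)
```

With a treatment benefit of 0.30 and a treatment risk of 0.10, the threshold is 0.25: treat when the probability of disease exceeds 25%.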

TOP References      
Beers MH, Berkow R, et al (editors). The Merck Manual of Diagnosis and Therapy, Seventeenth Edition. Merck Research Laboratories. 1999. Chapter 295. Clinical Decision Making. page 2523.
39.15.02 Testing and Test Treatment Thresholds  
Overview:
If a test is performed to determine whether a treatment strategy is used, then the testing and test treatment thresholds can help decide if the test should be done.

Test features

performance characteristics (sensitivity and specificity) for the condition are known

assume that the test has no direct adverse risk to the patient

benefit for treatment =

 (risk of adverse outcome from the disease in those untreated) - (risk of adverse outcome from the disease with treatment)

risk of treatment =

 (risk of significant adverse complication due to treatment)

testing threshold =

 ((1 - (specificity of test)) * (risk of treatment)) / (((1 - (specificity of test)) * (risk of treatment)) + ((sensitivity of test) * (benefit of treatment)))

test treatment threshold =

 ((specificity of test) * (risk of treatment)) / (((specificity of test) * (risk of treatment)) + ((1 - (sensitivity of test)) * (benefit of treatment)))

Interpretation

If the probability of disease is equal or more than the testing threshold and equal or less than the test treatment threshold, then the test should be done.

If the probability

TOP