If the answer to any of these questions is no, do not use this source.
Do the authors have proper credentials in the area that they are writing about?
Different degrees denote different types of expertise. For example, an MD will not have had as much experience in research methodology as a PhD.
Check that the authors did not work for the company that makes the drug they are researching, which would introduce bias into the research.
Are there references at the end of the paper?
Are the references reflective of the time period in which the study was done?
Are they missing references or studies that would apply to this material (especially in the background)?
Evaluating - Background review
Is the journal considered reputable? Is the journal appropriate to find an article relating to this particular subject?
The journal should be peer-reviewed; you can usually find specifics about peer review by looking up the journal's author submission guidelines.
What was the source of financial support for the study?
Check to see whether financing was provided by a company, the NIH, etc.
Was the study approved by an institutional review board (IRB) or ethics review board?
An institutional review board is a committee that protects the rights and welfare of human research subjects recruited to take part in studies by reviewing research at its institution to ensure the research adheres to ethical research standards.
Does the article mention a Safety committee or a committee to evaluate interim data to determine if the study needs to be stopped early either for good results or safety considerations?
Are the purpose and objectives clearly stated and free from bias?
The purpose is what the researchers will do.
The objectives explain how the researchers will carry out the research.
Evaluating - Study - Hypothesis and Population/Subjects
Does the investigator state a null hypothesis? Is the alternative hypothesis stated?
Null hypothesis – “Statement of no difference between the intervention and control”. You can assume a null hypothesis if no hypothesis was stated.
Alternative hypothesis is a hypothesis that there will be a difference between the therapy/drug under investigation and the control.
If the researchers do not state a null hypothesis, what would the null hypothesis be (based on the primary outcome variable)?
Is the sample size large enough? Is the sample representative of the population?
The primary outcome should be powered - authors should provide a sample size calculation. Sample size calculations include alpha error, beta error, clinical difference detected between groups (delta), and variability (standard deviation).
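As an illustrative sketch (the numbers here are hypothetical, and published studies may use more refined methods), the normal-approximation formula behind a two-group sample size calculation can be written in Python:

```python
import math
from scipy.stats import norm

def n_per_group(alpha, beta, delta, sd):
    """Normal-approximation sample size per group for comparing two means.

    alpha - Type I error rate (two-sided); beta - Type II error rate;
    delta - smallest clinically important difference between groups;
    sd    - expected standard deviation of the outcome.
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 when alpha = 0.05
    z_beta = norm.ppf(1 - beta)        # 0.84 when power = 1 - beta = 0.80
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Detect a 5 mmHg difference in blood pressure (SD 10 mmHg)
# with alpha = 0.05 and 80% power (beta = 0.20):
print(n_per_group(0.05, 0.20, delta=5, sd=10))  # 63 per group
```

Note how all four ingredients listed above (alpha, beta, delta, and variability) appear in the formula; if the authors report a sample size without them, be skeptical.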
Are the inclusion and exclusion criteria clearly stated and are they appropriate?
Does it make sense who they excluded? Is it an efficacy study that enrolled only patients with the one condition and no comorbidities? If they excluded drugs or disease states, does that make sense for the condition (e.g., corticosteroids for osteoporosis)?
Evaluating - Study Design and Methods
What is the study design? Is it appropriate?
Preference is to use randomized controlled trials (RCTs), although this is sometimes not possible. RCTs are the highest level of evidence, either alone or combined in a meta-analysis. Some things cannot be studied as an RCT, such as drug exposure in pregnancy; in that case an observational study design would be preferred. If you do not think the design was appropriate, what other design would have been preferred or would have more power?
Was the study randomized correctly? Even if the study is adequately randomized, are the groups (treatment and control) equivalent?
Review the study carefully; just because the authors say the study is randomized does not always mean it was done appropriately.
A good double check is to look at the demographic (baseline) characteristics of each group. If they are similar, then there is a good chance the randomization was done correctly. A good demographic table will do a statistical comparison to see if the groups are equal (meaning that the p value is GREATER than alpha).
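For instance, the baseline comparison described above can be done with a two-sample t-test; the ages below are entirely hypothetical:

```python
from scipy.stats import ttest_ind

# Hypothetical baseline ages from a treatment and a control group
treatment_ages = [54, 61, 58, 47, 65, 59, 52, 60, 56, 63]
control_ages   = [55, 59, 57, 50, 62, 58, 53, 61, 54, 64]

t_stat, p_value = ttest_ind(treatment_ages, control_ages)
# p GREATER than alpha (0.05) means no detectable baseline difference,
# which is consistent with successful randomization
print(f"p = {p_value:.3f}; groups similar: {p_value > 0.05}")
```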
Was the study adequately controlled? Were the controls adequate and appropriate? Was a control even used?
Was a control used?
Was the control a placebo group or a comparative group?
The comparative group (if a drug) should be the gold standard or drug of first choice currently.
Was the study adequately blinded?
Four types of blinded studies:
No blinding - both the researchers and subjects know who has been assigned the control and the intervention.
Single-blinding - either researchers or subjects but not both know who has been assigned to the intervention and who has been assigned the control.
Double-blinding - Neither researchers nor subjects know who has been assigned to the intervention and the control.
Triple-blinding - Researchers, subjects and those involved with data analysis are unaware of who is assigned to the intervention and control.
Were the appropriate doses and regimens used for the disease state under study?
Check Lexi-Comp or Micromedex for this information.
Was the length of the study adequate to observe outcomes?
Did the study go long enough to see ECHO (economic, clinical, or humanistic) outcomes?
If the study is a cross-over study, was the washout period adequate?
A cross-over study is a study in which the subjects receive all treatments (including the control).
A washout period is the time period between treatments. Before starting a new treatment, there should be a period of time where the patient receives no treatment so no remaining drugs are in the subject’s system when they start the next drug. For drug therapy, typically it takes 5 half-lives for a drug to clear the body (pharmacokinetic clearance). It might take longer for the pharmacodynamic action of the drug to washout or go away.
In a well-designed cross-over study, ALL treatments should be started on the first day. For example: half of the group is treated with drug A and half with drug B on the first day. Then the groups are crossed over, with the A group now receiving B and the B group now receiving A. This controls for maturation and history bias.
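The five-half-life rule of thumb can be verified with a quick calculation (a generic sketch, not specific to any drug):

```python
# Fraction of drug remaining after n half-lives: (1/2) ** n
for n in range(1, 6):
    print(f"after {n} half-lives: {0.5 ** n:.1%} remaining")
# After 5 half-lives only about 3.1% remains,
# which is conventionally treated as "cleared".
```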
Were operational definitions (describe how they were measuring variables) given?
Operational definitions--how did they define the variables they are measuring? For example, if hyperglycemia is the outcome variable, how are they defining it? Is that a blood sugar > 100, > 110, or > 120 mg/dL, or a postprandial blood sugar > 180 mg/dL? The variables should be clearly defined.
Were appropriate statistical tests chosen to assess the data? Was the level of alpha and beta error chosen before the data were gathered? Were multiple statistical tests applied until a significant result was achieved?
Refer to the study guide on statistical tests. Remember that continuous variables (interval or ratio) with a sample size greater than 30 (Central Limit Theorem) can be analyzed with parametric statistics; discrete data (nominal or ordinal) must be analyzed using non-parametric statistics.
Be cautious if multiple statistical tests were run. Is it appropriate or was it an attempt to find something statistically significant?
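The parametric/non-parametric distinction above can be illustrated with scipy (all data here are hypothetical):

```python
from scipy import stats

# Continuous (interval/ratio) outcome, n > 30 per group -> parametric t-test
sbp_a = [120, 118, 125, 130, 122, 119, 127] * 5   # 35 hypothetical readings
sbp_b = [118, 116, 121, 127, 119, 117, 124] * 5
t_stat, p_parametric = stats.ttest_ind(sbp_a, sbp_b)

# Ordinal outcome (0-10 pain score) -> non-parametric Mann-Whitney U test
pain_a = [3, 4, 2, 5, 4, 3, 6, 2]
pain_b = [5, 6, 4, 7, 5, 6, 7, 4]
u_stat, p_nonparametric = stats.mannwhitneyu(pain_a, pain_b)

print(f"t-test p = {p_parametric:.3f}; "
      f"Mann-Whitney p = {p_nonparametric:.3f}")
```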
Was patient adherence monitored?
Did they determine whether patients actually took the medication as prescribed and/or complied with dietary restrictions, etc.?
If multiple observers were collecting data, did the authors describe how variations in measurements were avoided?
How did they ensure that all data were collected the same way? This is critical in multi-center studies.
Did the authors justify the instrumentation used in the study?
Instrumentation is the instrument that they used to measure the variables. For example, the instrumentation could be a blood pressure cuff or a scale or a survey question. Instrumentation is the tool used to gather/measure the data.
Were measurements or assessments of effects made at the appropriate times and frequency?
If measuring weight, did the researchers have all of the subjects complete the measurement first thing in the morning or if measuring blood pressure, did the researchers have the subjects sit down for 15 minutes with no caffeine, etc.
Evaluating - Results
Are the data presented in an appropriate, understandable format?
Tables, figures, and charts should have appropriate labels and definitions. They should not be distorted (such as with breaks in the axis). Can you figure them out? Some studies will overwhelm you with data to distract you from the fact that they did not really find anything worthwhile or that the methodology was flawed.
Are standard deviations or confidence intervals shown along with mean values?
Means should be expressed with a standard deviation, NOT the standard error of the mean (SEM). The SEM is used when studying more than one study (meta-analysis), so it is not used with most studies.
Medians should be expressed with a 95% confidence interval. This also applies to relative risks (RR) and odds ratios (OR).
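As a sketch of how such an interval is computed, here is the standard log-based 95% confidence interval for an odds ratio from a hypothetical 2x2 table:

```python
import math
from scipy.stats import norm

# Hypothetical 2x2 table: rows = group, columns = event yes/no
a, b = 30, 70   # treatment: 30 events, 70 non-events
c, d = 15, 85   # control:   15 events, 85 non-events

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
z = norm.ppf(0.975)                     # 1.96 for a 95% CI
lo = math.exp(math.log(odds_ratio) - z * se_log_or)
hi = math.exp(math.log(odds_ratio) + z * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# A CI that excludes 1.0 indicates a statistically significant OR
```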
Are there any problems with Type I (alpha) or Type II (beta) errors?
If the researchers reject the null hypothesis, they have the potential of a Type I or alpha error.
If the researchers fail to reject (accept) the null hypothesis, they have the potential of a Type II or beta error.
Remember that Power is equal to 1 minus beta error.
Are there any potential problems with internal validity or external validity?
Internal validity refers to the quality of the study design; a strong design should lead to reliable results (look at the materials and methods section).
Threats to internal validity include history, maturation, instrumentation, selection, and morbidity and mortality. Look for large numbers of dropouts or subjects lost to follow-up.
External validity is the degree to which study results can be applied to patients in clinical practice - the generalizability of the study results to the general population, external to the study.
Are adverse reactions reported in sufficient detail?
Did they discuss safety issues or adverse effects? How were they evaluated and reported? Did they have a Safety Committee overseeing it?
Are the conclusions supported by the data? Is some factor other than the study treatment responsible for the outcomes?
Make sure the conclusions fit who the researchers actually studied. Do they make broad, sweeping conclusions that do not fit the study parameters?
Could anything else explain the results besides what the researchers studied?
Are the results both statistically and clinically significant?
Statistical significance is when the p value is LESS than alpha.
Do not trust a study that states the results were "trending toward significance"; this is not statistically sound. Remember that alpha is the amount of error you are willing to accept (0.05 means you allow a 5% chance of the wrong result and conclusion), while the p value is the ACTUAL probability from that study of a Type I (alpha) error. So if the p value is 0.059, the result is not statistically significant, and it IS NOT "trending toward" it; the error rate is too high to answer the question.
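The decision rule is strictly binary, as a two-line check makes plain:

```python
alpha = 0.05      # error rate chosen before the study began
p_value = 0.059   # probability of a Type I error actually observed

significant = p_value < alpha
print("statistically significant" if significant
      else "not statistically significant (there is no 'trending')")
# prints: not statistically significant (there is no 'trending')
```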
Do the authors discuss study limitations in their conclusions?
Every study has limitations. The authors should recognize their own limitations and explain why. Reading the study, you should always look for limitations above and beyond what is stated by the authors.
Evaluating - References
Were appropriate references used? Are references timely and reputable? Have any of the studies been disproven or updated? Do references cited represent a complete background?
Do the authors cite studies completed by others? (not just themselves)
Evaluating - Final Question
Would this article change clinical practice or a recommendation that you would give to a patient or health care professional?