11.6 Evaluating the Literature
Evaluating published evidence comes in three flavours, cascading from conventional markers of quality, to how a study is reported, and finally to the study design itself.
The first is via proxy measures, which are usually the first thing we learn about. Examples include peer review, author affiliations, and society versus commercial publishers. These are proxy measures because the literature itself is not being evaluated directly.
The second is via reporting: how much information does a given publication provide on how the study was conducted? This might include publication of a protocol and availability of data and analysis scripts. Such reporting allows the reader to benchmark bias (protocol) and verify the reproducibility of findings (data and code). An increasingly large number of reporting frameworks are available. In the biological sciences, two common frameworks are the Materials Design Analysis Reporting (MDAR) Framework for primary research and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses in Ecology and Evolutionary Biology, an extension of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines originally developed for health research.
The third is via explicit evaluation of the study design and, potentially, of each reported outcome. These tools ask questions about the appropriateness of the study design and analysis as a way of evaluating study bias, such as: Was the study design appropriate for the research question? Were the data collected in the most appropriate way? Were the statistical tests appropriate for the data? There are many such tools to choose from, not least because what is appropriate for an experimental study differs from what is appropriate for an observational study, let alone whether the experimental study used randomization or not.
These tools are generally used for large knowledge synthesis activities (systematic reviews and meta-analyses), but they can also be useful guides for systematically evaluating individual studies, especially as a means of building critical literacies. A sketch of what such a checklist-style evaluation might look like in practice follows below.
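As a concrete, entirely hypothetical illustration, the Python sketch below encodes a handful of appraisal questions as a simple data structure and tallies yes/no/unclear judgements for a single study. The questions, the judgement scale, and the `appraise` helper are assumptions for illustration only; they are not taken from any published appraisal tool.

```python
# A minimal sketch, assuming a checklist is an ordered list of
# questions with one judgement ("yes", "no", "unclear") per question.
# The questions and the `appraise` helper are illustrative assumptions,
# not drawn from any published tool.
from collections import Counter

CHECKLIST = [
    "Was the study design appropriate for the research question?",
    "Were the data collected in the most appropriate way?",
    "Were the statistical tests appropriate for the data?",
    "Was a protocol published before the study was conducted?",
    "Are the data and analysis scripts openly available?",
]

def appraise(answers: dict) -> Counter:
    """Tally judgements for one study.

    Questions missing from `answers` are counted as "unclear".
    """
    return Counter(answers.get(q, "unclear") for q in CHECKLIST)

# Appraising a single, hypothetical study:
study = {
    CHECKLIST[0]: "yes",
    CHECKLIST[1]: "yes",
    CHECKLIST[2]: "unclear",
    CHECKLIST[3]: "no",
}
print(appraise(study))  # 2 yes, 2 unclear (one unanswered), 1 no
```

In practice, the questions and any scoring rules would come from a published tool matched to the study type being evaluated; the value of encoding them explicitly is that the same questions are applied, in the same way, to every study.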
The National Health and Medical Research Council of Australia hosts a great list of tools for a variety of study types.
An evaluation of a variety of tools used in review protocols is available in: Farrah, K., Young, K., Tunis, M. C., et al. (2019). Risk of bias tools in systematic reviews of health interventions: an analysis of PROSPERO-registered protocols. Systematic Reviews, 8, 280. https://doi.org/10.1186/s13643-019-1172-8