We performed a systematic review of articles published during a 2-year period in 4 journals in the field of infectious diseases to determine the extent to which the quasi-experimental study design is used to evaluate infection control and antibiotic resistance. We evaluated studies on the basis of the following criteria: type of quasi-experimental study design used, justification of the use of the design, use of correct nomenclature to describe the design, and recognition of potential limitations of the design. A total of 73 articles featured a quasi-experimental study design. Twelve (16%) used a quasi-experimental design involving a control group. Three (4%) provided justification for the use of the quasi-experimental study design. Sixteen (22%) used correct nomenclature to describe the study. Seventeen (23%) mentioned at least 1 of the potential limitations of the use of a quasi-experimental study design. The quasi-experimental study design is used frequently in studies of infection control and antibiotic resistance. Efforts to improve the conduct and presentation of quasi-experimental studies are urgently needed to more rigorously evaluate interventions.
In the study of infectious diseases and, in particular, the study of infection control and antibiotic resistance, the quasi-experimental study design, sometimes called the “pre-post intervention” or “before-after intervention” study design, is often used to evaluate the effectiveness of specific interventions. In the social sciences, studies of the methodological principles for optimizing quasi-experimental studies and the relative hierarchy of quasi-experimental study designs have been published during the past several decades [1, 2]. However, little has been written about the optimal use of the quasi-experimental design in the field of infectious diseases. In a recent article, we reviewed the existing literature featuring quasi-experimental study designs and proposed a relative hierarchy of the specific quasi-experimental study designs and epidemiological design methods to improve the internal validity of quasi-experimental studies in the field of infectious diseases. The aim of the previous article was to identify methods for optimizing the use of the quasi-experimental design when seeking to establish stronger causal associations between interventions and outcomes. These improvements could lead to more valid conclusions about the effectiveness of interventions in the areas of infection control and antibiotic resistance.
The prevalence of the use of the quasi-experimental study design in the literature on infection control and antibiotic resistance is not known. In addition, it is unknown how well studies describe the type of quasi-experimental design used or its potential limitations. Such data are important for better evaluating the results of such studies and the interventions they seek to explore.
In this article, we report the results of a systematic review of articles published during a 2-year period in 4 journals in the field of infectious diseases to determine the extent to which quasi-experimental study designs are used in reports about infection control and antibiotic resistance. In addition, we sought to analyze variations in the use of quasi-experimental study designs and in the description and acknowledgment of the potential advantages and disadvantages of such designs. By evaluating the use and knowledge of quasi experiments, we aimed to highlight potential limitations in the existing literature and areas for potential improvement.
We systematically reviewed articles published during a 2-year period (1 January 2002 through 31 December 2003) in 4 major journals in the field of infectious diseases (Infection Control and Hospital Epidemiology, American Journal of Infection Control, Clinical Infectious Diseases, and Emerging Infectious Diseases) to determine the number of quasi-experimental studies. Two of the 3 authors reviewed the titles and abstracts of every article published in the 2-year period to determine whether the article featured a quasi-experimental study design. Only quasi-experimental studies that focused on infection control or antibiotic resistance were included in the subsequent review. Two of the 3 authors then reviewed the articles and classified them on the basis of the following 4 criteria: type of quasi-experimental study design used, justification of the use of the design, use of correct nomenclature to describe the design, and recognition of potential limitations of the design.
Type of quasi-experimental study design used (criterion 1). We first reviewed the previously published hierarchy of quasi-experimental designs in the field of infectious diseases. We slightly modified a priori the quasi-experimental study designs listed in the hierarchy by adding a category B study design that uses control groups but no pretests (study design B0 in figure 1). The result, summarized in figure 1, was then used to classify the studies according to criterion 1. In general, a quasi-experimental design that uses a control group (category B) is preferable to a design that does not use a control group (category A). In addition, the quality of the study designs increases as one moves down within each category (e.g., in category A, the quality of study design 5 is higher than that of study design 4).
Justification of the use of the quasi-experimental study design (criterion 2). We reviewed the articles to see whether the authors explained or justified why they chose the quasi-experimental study design instead of other study designs. We rated satisfaction of this criterion as “yes” or “no.” If the authors made any mention as to why they chose the quasi-experimental design (e.g., because a randomized, controlled trial would be unethical or infeasible), the design was rated as “yes” for this criterion.
Use of correct nomenclature to describe the quasi-experimental study design (criterion 3). We noted whether authors correctly identified their study as a quasi-experimental study. Articles that used the terms “quasi-experimental,” “before-after study,” and/or “pre-post study” were deemed to have used correct nomenclature. Satisfaction of this criterion was rated as “yes” or “no.”
Recognition of potential limitations of the quasi-experimental study design (criterion 4). Because the quasi-experimental study design has recognized limitations, we sought to determine whether authors acknowledged the potential limitations of this design. Examples of acknowledgment included mention of the lack of randomization, the potential for regression to the mean, the presence of temporal and/or seasonal confounders, and the presence of maturation effects [1, 3]. Satisfaction of this criterion was rated as “yes” or “no.” If the authors mentioned at least 1 limitation, the design was rated as “yes” for this criterion.
Studies in their entirety were then reviewed independently by 2 of us and were assessed on the basis of criteria 1–4. If the 2 reviewers disagreed on any classifications of the 4 criteria, the third investigator reviewed the article, and a group discussion resolved disagreements.
We identified and reviewed 73 articles published during a 2-year period in 4 journals in the field of infectious diseases that used quasi-experimental study designs and involved the topics of infection control or antibiotic resistance [4–76].
Criterion 1 involved determination of the type of quasi-experimental design used. Of the 73 articles, 57 (78%) featured category A study designs, and 16 (22%) featured category B designs. A total of 39 (53%) had A1 study designs, 16 (22%) had A2 designs, 2 (3%) had A5 designs, 4 (5%) had B0 designs, 10 (14%) had B1 designs, and 2 (3%) had B3 designs. No study designs were characterized as A3, A4, or B2. Of the 16 that were classified as A2, 9 used a time-series method of regression analysis, which is the preferred analytic method for A2 study designs with at least 3 data points before and 3 data points after the intervention [77, 78].
Criterion 2 involved determination of whether the authors justified their choice of the quasi-experimental study design. Authors of 3 articles (4%) justified their use of the quasi-experimental study design [15, 17, 25].
Criterion 3 involved determination of whether the authors used the correct nomenclature to describe the quasi-experimental study design. Authors of 16 articles (22%) correctly described the study design [11, 13, 17, 25, 33, 34, 40, 41, 43, 48, 52, 54, 55, 63, 68, 76]. Common, nonstandard, and inaccurate nomenclature used to describe apparent quasi experiments included names such as “descriptive study design” and “controlled intervention trial.”
Criterion 4 involved determination of whether the authors recognized some of the limitations of the quasi-experimental study design. Authors of 17 articles (23%) mentioned at least 1 of the potential limitations associated with quasi-experimental study designs [11, 13, 15, 17, 18, 25, 29, 34, 37, 38, 48, 50, 56, 57, 63, 68, 71]. Potential limitations that were mentioned included the preferability of a randomized trial, the possibility that temporal confounders were present, and the problems associated with use of data for nonrandomized, historical control subjects.
Quasi-experimental studies aim to demonstrate causality between an intervention and an outcome and encompass a broad range of nonrandomized intervention studies. These designs are frequently used when it is not logistically feasible or ethical to conduct a randomized, controlled trial.
As we have outlined elsewhere and as others have noted in numerous articles and books in the social sciences literature, the major threats to establishing causal connections in quasi-experimental studies arise from the nonrandomization of the interventions. Some of these problems include difficulty in controlling for important confounding variables, results that may be explained by the statistical principle of regression to the mean, and maturation effects [1–3].
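Regression to the mean, one of the threats noted above, can be demonstrated with a small simulation: when units are selected for intervention because of an extreme baseline measurement, their follow-up measurements tend to drift back toward the average even if nothing changes. The scenario below (wards with a common "true" infection rate and random measurement noise) is a hypothetical illustration, not data from any study.

```python
import random

random.seed(0)

# Toy model: every ward has the same true infection rate (10), and each
# observed rate is the true rate plus random noise. Nothing changes
# between the baseline and follow-up periods.
true_rate = 10.0
baseline = [true_rate + random.gauss(0, 2) for _ in range(1000)]
followup = [true_rate + random.gauss(0, 2) for _ in range(1000)]

# "Intervene" only on the wards whose baseline rate fell in the worst
# quartile, mimicking a pre-post study targeted at high-rate units.
threshold = sorted(baseline)[750]
selected = [i for i in range(1000) if baseline[i] >= threshold]

before = sum(baseline[i] for i in selected) / len(selected)
after = sum(followup[i] for i in selected) / len(selected)
print(f"mean rate before: {before:.1f}, after: {after:.1f}")
```

The selected wards show a sizable apparent improvement purely because their extreme baseline values were partly noise; a pre-post design without a control group cannot distinguish this artifact from a genuine intervention effect.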
The use of study designs with higher relative quality (which, in general, increases as one moves down figure 1) aims to address some of these potential major threats. The hierarchy is not absolute because, in some cases, it may be infeasible to use a higher-quality study design. In addition, in some cases, an A5 design may be a better choice than a B0 design. The nonabsolute nature of the hierarchy proposed in figure 1 is similar to the relative hierarchy in the evidence-based literature that assigns a hierarchy to randomized controlled trials, cohort studies, case-control studies, and case series [79–81]. In addition, we acknowledge that, in some studies (such as outbreak investigations), use of a control group or inclusion of more data points from the preintervention period may not be possible, because of the urgency to end the outbreak. We also acknowledge that the relative hierarchy proposed by us is not the product of a consensus panel, nor has the hierarchy been statistically validated.
In this article, we demonstrate that the quasi-experimental study design is used frequently in studies of infection control and antibiotic resistance. Unfortunately, as our data for criterion 1 show, very few studies used designs with control groups, and few used the higher-quality study designs. In addition, the results for criteria 2 and 3 demonstrate that few authors identified their research as quasi-experimental or justified their choice of study design. This lack of standard nomenclature makes it difficult for readers to understand these studies. Perhaps of paramount concern, very few of the authors outlined the limitations of the quasi-experimental study designs they used and thus failed to acknowledge the potential threats to their conclusions.
Thus, on the basis of our review, we recommend the following. Future quasi-experimental studies should use a more standard nomenclature to describe their studies. We suggest that before-after studies and pre-post intervention studies should be uniformly referred to as quasi-experimental studies. In addition, authors should justify why randomization was not used. In general, researchers should aim to choose quasi-experimental study designs that are higher in the hierarchy, include control groups, and perform increased numbers of measurements before and after the intervention, all of which, we hope, will establish a stronger causal connection between intervention and outcome. In addition, strengths and weaknesses of the quasi-experimental study design chosen should be outlined, so that readers can assess the findings with appropriate caution. In the literature, a number of articles have outlined important epidemiological or statistical principles and have thus aimed to improve the quality of future studies [82–86]. In a similar vein, we hope that, as the quality of quasi-experimental study designs improves, more-effective interventions will be tested and implemented to address the complicated problems of infection control and antibiotic resistance that we currently face.
Financial support. National Institutes of Health (K23 AI01752-01A1 to A.D.H.), Veterans Affairs Health Services Research and Development Service (RCD-02026-1 to E.P.), and National Institutes of Health Public Health Service (DK-02987-01 to E.L.).
Potential conflicts of interest. All authors: no conflicts.