Abstract

Background

The Consolidated Standards of Reporting Trials (CONSORT) guidelines were developed in the mid-1990s for the explicit purpose of improving clinical trial reporting. However, there is little information regarding the adherence of recent publications of randomized controlled trials (RCTs) in oncology to the CONSORT guidelines.

Methods

All phase III RCTs published between 2005 and 2009 were reviewed using an 18-point overall quality score for reporting based on the 2001 CONSORT statement. Multivariable linear regression was used to identify features associated with improved reporting quality. To provide baseline data for future evaluations of reporting quality, RCTs were also assessed according to the 2010 revised CONSORT statement. All statistical tests were two-sided.

Results

A total of 357 RCTs were reviewed. The mean 2001 overall quality score was 13.4 on a scale of 0–18, whereas the mean 2010 overall quality score was 19.3 on a scale of 0–27. The overall RCT reporting quality score improved by 0.21 points per year from 2005 to 2009. Poorly reported items included method used to generate the random allocation (adequately reported in 29% of trials), whether and how blinding was applied (41%), method of allocation concealment (51%), and participant flow (59%). High impact factor (IF, P = .003), recent publication date (P = .008), and geographic origin of RCTs (P = .003) were independent factors statistically significantly associated with higher reporting quality in a multivariable regression model. Sample size, tumor type, and positivity of trial results were not associated with higher reporting quality, whereas funding source and treatment type had a borderline statistically significant impact.

Conclusion

The results show that numerous items remained unreported for many trials. Thus, given the potential impact of poorly reported trials, oncology journals should require even stricter adherence to the CONSORT guidelines.

Randomized controlled trials (RCTs) are the gold standard for evaluating new therapies or strategies in medicine (1). However, the results of poorly designed or poorly reported RCTs can have widespread detrimental consequences for routine clinical practice and may impair the quality of pooled analyses such as meta-analyses (2,3). Therefore, the Consolidated Standards of Reporting Trials (CONSORT) statement was developed by trial methodologists and editors of biomedical journals in the mid-1990s for the explicit purpose of improving clinical trial reporting (4). The CONSORT statement, which provides guidance to authors regarding essential items that should be included in RCT reports, was updated in 2001 (and again in 2010) to incorporate new elements (5,6). Since its original publication in 1996, CONSORT has been supported by more than 400 journals (www.consort-statement.org) and editorial groups, including the International Committee of Medical Journal Editors (7). As a result, the overall quality of RCT reporting has improved (8–11).

There is little information regarding the adherence of oncology trials to the CONSORT guidelines. Although RCT reporting quality has previously been assessed in rare tumor subtypes (8,10,12), it has never been systematically and comprehensively investigated across all tumor types. Therefore, the primary objective of this study was to assess the overall reporting quality of oncology RCTs published between 2005 and 2009 according to the 2001 CONSORT statement. In addition, we investigated manuscript characteristics associated with better reporting quality. To provide baseline data for future evaluations of reporting quality, each RCT was also assessed against the more recent 2010 CONSORT guidelines.

Methods

Study Selection

We searched MEDLINE via PubMed to identify all publications of RCTs assessing systemic anticancer therapies published between January 2005 and December 2009 in 10 journals that are thought to publish the majority of oncology RCTs: Annals of Oncology; British Journal of Cancer; Breast Cancer Research and Treatment; Cancer; European Journal of Cancer; Journal of Clinical Oncology; Journal of the National Cancer Institute; Lancet; Lancet Oncology; and New England Journal of Medicine. Exclusion criteria were pediatric studies; treatment solely with radiotherapy or surgery; phase I, II, or IV trials; supportive care, palliative care or prevention trials; meta-analyses, overviews, or publications using pooled data from two or more trials; and secondary reports of previously published trials. The dataset has previously been used to investigate consistency in the reporting and analysis of primary outcomes of oncology RCTs, from registration to publication (13).

Rating of Overall Reporting Quality

Based on the methodology used in previous studies (8,10,12,14), an overall quality score (OQS) was developed. The OQS consisted of 18 of the 22 items detailed in the 2001 CONSORT statement (Table 1). Each item was scored 1 if it was well reported or 0 if it was not clearly reported or not stated; all items were weighted equally. Concurrently, each RCT was also assessed using a 27-item OQS based on the more recent 2010 CONSORT guidelines (Table 2). Items 20 through 22, which assess the discussion section of RCTs, were excluded from the OQS because they were too difficult to evaluate objectively during the primary analysis. Nevertheless, some data related to items 20–22 were collected to secondarily evaluate their association with the overall quality of reporting. Item 10 from the 2001 and 2010 guidelines, which assesses the separation of the generator and executor of the allocation sequence, was not evaluated separately but was included in the assessment of allocation concealment (item 9, Tables 1 and 2). Although authors of other studies of reporting adequacy have singled out certain items for special emphasis as "key methodologic factors," including item 9 (allocation concealment), item 11 (blinding), and item 16 (intent-to-treat analysis, ITT) (8,10,14), the rationale for doing so was not clearly justified; therefore, these three items were included in the overall OQS in our analysis.
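
As an illustration only, the short Python sketch below restates the scoring rule described above: 18 binary CONSORT items, equally weighted and summed into a per-trial OQS. The item names, function name, and example record are hypothetical; this is not the authors' data extraction tool.

```python
# Minimal sketch of the 2001 OQS scoring logic: each retained CONSORT item is
# scored 1 (clearly reported) or 0 (unclear/not stated), with equal weight,
# and the OQS is their sum. Item names below are hypothetical labels.

CONSORT_2001_ITEMS = [
    "randomized_in_title_abstract", "background", "participants", "intervention",
    "objectives", "outcome_definition", "sample_size", "sequence_generation",
    "allocation_concealment", "blinding", "statistical_methods", "participant_flow",
    "recruitment_dates", "baseline_data", "intention_to_treat", "outcome_results",
    "ancillary_analyses", "adverse_events",
]

def oqs_2001(item_scores: dict) -> int:
    """Sum of binary item scores; a missing item counts as 0 (not clearly reported)."""
    for item, score in item_scores.items():
        if item not in CONSORT_2001_ITEMS or score not in (0, 1):
            raise ValueError(f"unexpected item or score: {item}={score}")
    return sum(item_scores.get(item, 0) for item in CONSORT_2001_ITEMS)

# Example: a trial reporting everything except sequence generation and blinding.
example = {item: 1 for item in CONSORT_2001_ITEMS}
example["sequence_generation"] = 0
example["blinding"] = 0
print(oqs_2001(example))  # 16
```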

The rating methodology of each item was discussed among three authors (JP, DM, and BY) based on explanation and elaboration of the 2001 CONSORT statement (5). The data from the first 20 trials (360 items) were independently rated by two investigators (JP and DM) who were blinded to each other’s results. Among these 360 items, 37 discrepancies were identified; however, all were successfully resolved by consensus [23 items in favor of the principal extractor (JP) and 14 against]. The error rate of the principal data extractor was therefore 3.9% (14/360). Based on this finding, an updated data extraction form was then used by the primary extractor (JP) to capture the remaining data in this study.

Although most items in the OQS of the 2001 CONSORT guidelines (Table 1) were unambiguous, a few items were open to interpretation. To avoid any ambiguity, the following definitions were used for the purposes of this review. Allocation concealment is the method of masking upcoming treatment assignments from individuals involved in patient enrollment. Allocation concealment was considered adequately reported if there was a description of central randomization (eg, numbered or coded vehicles, opaque sealed envelopes, or sequentially numbered envelopes). These criteria were similar to both criteria recommended for Cochrane reviews and criteria used in previous reports (15). ITT was considered adequately reported if the method was described, regardless of the actual definition of ITT used. Finally, blinding was considered adequately reported if the trial was clearly stated as an open trial or if the blinded population was clearly described (patients, physicians, and/or those assessing the outcomes) and if the similarities in the characteristics of true and sham interventions were stated (such as appearance, taste, smell, and method of administration).

Definition of Trial Characteristics

A “positive” trial was defined as one in which the experimental arm was deemed by trial investigators to be superior to the standard arm in superiority trials, not inferior in noninferiority trials, or equivalent in equivalence trials. A “negative” study was defined as one in which the experimental arm was deemed not superior, inferior, or not equivalent to the reference arm. All others were defined as “unclear.” Trials were considered industry funded if at least partial funding by an industry was identified in the publication. The geographic regions where RCTs were led were derived from the institutional addresses of the first authors.
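
The classification rule above can be written out explicitly. The Python sketch below is a hypothetical restatement of these definitions (the function name and label strings are invented), not the coding scheme actually used by the investigators.

```python
def classify_trial_result(design: str, investigator_conclusion: str) -> str:
    """Map trial design and the investigators' stated conclusion to positive/negative/unclear.

    design: "superiority", "noninferiority", or "equivalence" (hypothetical labels).
    investigator_conclusion: the investigators' verdict on the experimental arm versus
    the reference arm, eg "superior", "not_superior", "noninferior", "inferior",
    "equivalent", "not_equivalent".
    """
    positive = {"superiority": "superior",
                "noninferiority": "noninferior",
                "equivalence": "equivalent"}
    negative = {"superiority": "not_superior",
                "noninferiority": "inferior",
                "equivalence": "not_equivalent"}
    if investigator_conclusion == positive.get(design):
        return "positive"
    if investigator_conclusion == negative.get(design):
        return "negative"
    return "unclear"

print(classify_trial_result("noninferiority", "noninferior"))  # positive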

Statistical Analysis

The 2001 and 2010 OQSs were the sum of the number of items that were well reported, based on the 2001 and 2010 CONSORT criteria, respectively (Tables 1 and 2). The OQS was reported as an integer between 0 and 18 for the 2001 OQS and between 0 and 27 for the 2010 OQS. OQSs were summarized across trials using descriptive statistics such as median, minimum, maximum, and interquartile (IQ) range. The primary analysis was based on the 2001 OQS.
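
For concreteness, a minimal Python/pandas sketch of this summary step is shown below; the DataFrame, column name, and example scores are hypothetical (the published analysis was performed in SAS).

```python
import pandas as pd

# Hypothetical data frame of trials; in the study, each row would hold the
# 2001 and 2010 OQS of one of the 357 publications.
trials = pd.DataFrame({"oqs_2001": [14, 12, 17, 13, 15, 11, 16, 14]})

s = trials["oqs_2001"]
summary = {
    "mean": s.mean(),
    "median": s.median(),
    "min": s.min(),
    "max": s.max(),
    "iqr": s.quantile(0.75) - s.quantile(0.25),  # interquartile range
}
print(summary)
```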

Cochran–Armitage tests for trend were used to identify measurable changes over the years in the reporting frequency of individual items. Changes over time in continuous covariables were assessed with univariate linear regression models.
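
For readers who want to reproduce this kind of analysis, the sketch below implements a textbook two-sided Cochran–Armitage trend test for a binary reporting indicator across ordered publication years; the function and the example counts are illustrative assumptions, not the authors' SAS code.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(successes, totals, scores=None):
    """Two-sided Cochran-Armitage test for trend in proportions across ordered groups.

    successes[j] = number of trials reporting the item in year j
    totals[j]    = number of trials published in year j
    scores[j]    = ordinal score of year j (defaults to 0, 1, 2, ...)
    """
    r = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    t = np.arange(len(n), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    p = r.sum() / n.sum()                        # pooled proportion
    num = np.sum(t * (r - n * p))                # observed linear trend
    var = p * (1 - p) * (np.sum(n * t ** 2) - np.sum(n * t) ** 2 / n.sum())
    z = num / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))                # z statistic and two-sided P value

# Invented example: item reported in 25/91, 22/68, 28/71, 35/74, 29/53 trials (2005-2009).
z, p_value = cochran_armitage_trend([25, 22, 28, 35, 29], [91, 68, 71, 74, 53])
print(round(z, 2), round(p_value, 4))
```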

Univariate and multivariable linear regression analyses were used to identify factors associated with higher OQS. The following trial characteristics were investigated: year of publication, tumor site, source of trial funding, journal impact factor (IF), geographic region, positivity of primary outcome, and sample size. The multivariable model included every covariate associated with a P value lower than .10 in the univariate analysis. A relatively high α level was deemed acceptable at this screening stage to reduce the risk of excluding potential independent factors from the multivariable analysis.
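
A minimal sketch of this two-step model-building strategy, using statsmodels' ordinary least squares on a hypothetical trials DataFrame, is shown below; all column names are assumptions, and per-term Wald P values stand in for the factor-level tests reported in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: oqs_2001 (outcome) plus candidate trial characteristics.
candidates = ["year", "C(tumor_site)", "C(funding)", "C(if_category)",
              "C(region)", "C(outcome_positive)", "sample_size_per_100"]

def select_and_fit(trials: pd.DataFrame, alpha: float = 0.10):
    """Keep covariates with any univariate P < alpha, then fit one multivariable OLS model.

    Simplification: a categorical covariate is retained if any of its level-wise
    Wald P values is < alpha (the paper reports a single P value per factor).
    Assumes at least one covariate passes the screen.
    """
    keep = []
    for cov in candidates:
        uni = smf.ols(f"oqs_2001 ~ {cov}", data=trials).fit()
        if (uni.pvalues.drop("Intercept") < alpha).any():
            keep.append(cov)
    multi = smf.ols("oqs_2001 ~ " + " + ".join(keep), data=trials).fit()
    return keep, multi

# keep, model = select_and_fit(trials)   # 'trials' would hold the 357 records
# print(model.summary())
```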

It was also hypothesized that manuscripts from the same journal might have OQSs that were more closely correlated with each other than manuscripts from different journals. Therefore, generalized estimating equations were used as a supportive regression analysis, incorporating an unstructured correlation term for manuscripts from the same journal. Results were similar to those of the linear model without the assumption of correlation; therefore, for simplicity, only the results of the linear model are reported here. Statistical analyses were performed using SAS version 9.1 (SAS Institute, Cary, NC). All statistical tests were two-sided.
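
A sketch of such a clustered supporting analysis with statsmodels GEE is shown below; the column names are assumptions, and an exchangeable within-journal correlation structure is used as a tractable stand-in for the unstructured term described above.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_gee(trials):
    """Linear (Gaussian) GEE with manuscripts clustered by journal.

    The published analysis used an unstructured within-journal correlation term;
    an exchangeable structure is used here as a simpler stand-in.
    Assumed columns: oqs_2001, year, if_category, region, journal.
    """
    model = smf.gee(
        "oqs_2001 ~ year + C(if_category) + C(region)",
        groups="journal",
        data=trials,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    return model.fit()

# result = fit_gee(trials)
# print(result.summary())
```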

Results

Characteristics of Selected RCTs

Of the 659 trials initially screened, 357 primary RCT reports met the inclusion criteria and were included in this analysis (Figure 1; Table 3). Details of these trials have been reported previously (13). Overall, 61% of trials were at least partially funded by the pharmaceutical industry (Table 3); this proportion increased over time, from 49% in 2006 to 74% in 2009. Median RCT sample size was 437 participants. Although sample size tended to increase over time (from 369 in 2005 to 586 in 2009), this increase did not achieve statistical significance in a univariate linear regression (P = .775). Three journals (Journal of Clinical Oncology, New England Journal of Medicine, and Annals of Oncology) published more than 70% of all RCTs. Seventy-five percent of RCTs were published in journals with high IFs (>10).

Rating of Overall Quality Score

The mean 2001 OQS for all items was 13.4 on a 0 to 18 scale [range: 6–18, 95% confidence interval (CI) = 9 to 17], demonstrating that one-half of RCTs clearly reported at least 78% of the items listed in the 2001 CONSORT guidelines. The overall RCT reporting quality score improved by 0.21 points per year from 2005 to 2009. Only seven trials received a perfect score of 18, and one-third (34%) of RCTs had a 2001 OQS of 12 or less. With regard to correct reporting frequencies of individual items from the 2001 criteria (Table 1), the items most often poorly reported were as follows: method used to generate the randomization (adequately reported in 29% [104] of RCTs, 95% CI = 24% to 34%), use of blinding (41% [146], 95% CI = 36% to 46%), method of allocation concealment (51% [182], 95% CI = 46% to 56%), and participant flow (59% [212], 95% CI = 54% to 65%). Among the poorly reported items, only items 11 (blinding) and 13 (flow) showed improvement over time by the Cochran–Armitage test for trend: adequate reporting of blinding increased from 27% of RCTs in 2005 to 55% in 2009, whereas flow was correctly reported in 47% of RCTs in 2005 compared with 83% in 2009. Although a CONSORT diagram for reporting participant flow was recommended as early as the 1996 CONSORT statement, it was observed in only 60% of publications. However, use of the CONSORT diagram did increase statistically significantly over time, from 42% in 2005 to 83% in 2009 (Cochran–Armitage test for trend P < .001), and the diagram was more commonly observed in high-IF journals (P = .015). Among publications that described the blinding process, most were open-label trials (76%, n = 111). The most common description of the method of allocation concealment was centralized randomization (95%, n = 172). Among the 104 RCTs with adequate reporting of random allocation, the sequence was generated by dynamic allocation based on baseline factors in 72 RCTs.

Mean 2010 OQS was 19.3 on a 0 to 27 scale (Table 2). Items that were additions or were redefined in the 2010 CONSORT statement were more likely to be poorly reported. The exception was abstract structure (Table 2), which was clearly reported in 90% of RCTs.

Factors Associated With Reporting Quality

In univariate analyses, the following trial characteristics were associated with higher 2001 OQS: recent year of publication (P = .005), at least partial funding from the pharmaceutical industry (P = .004), larger sample size (P = .052), high journal IF (P < .001), trials of adjuvant treatment (P = .087), trials with positive results (P = .007), and RCTs conducted outside North America (P = .029) (Table 4).

The multivariable regression model subsequently revealed that year of publication (P = .008), journal IF (P = .003), and geographic region (P = .003) were independent predictors of 2001 OQS, whereas funding source (P = .079) and treatment type (P = .085) had a borderline statistically significant impact. Sample size, tumor type, and positivity of trial results were not associated with higher reporting quality. Compared with low-IF journals (IF <10), the OQS of journals with IFs between 10 and 20 was higher by a mean value of 0.54 (95% CI = 0.01 to 1.07), and the OQS of high-IF journals (IF >20) was higher by a mean value of 1.35 (95% CI = 0.57 to 2.13). Compared with studies published in 2005, there was a mean improvement in OQS of 0.20 (95% CI = 0.04 to 0.36) for each subsequent year. OQSs were 0.68 (95% CI = −0.28 to 1.64) points higher in adjuvant trials than in neoadjuvant trials or trials of unknown settings. Trials with at least some industry funding had a mean OQS 0.31 (95% CI = −0.24 to 0.86) points higher than that of trials without any industry funding. Finally, North American-led trials had a mean OQS 0.79 (95% CI = 0.30 to 1.28) points lower than that of trials led from Europe.

Discussion

To our knowledge, this is the first large-scale study of the conformity of oncology randomized clinical trial publications to the CONSORT criteria. Poor reporting of RCT findings can result in overestimation of treatment effects and may potentially lead to erroneous conclusions (2,16–18). It is encouraging that this study shows an improvement in the reporting quality of oncology RCTs between 2005 and 2009, which is consistent with smaller previously published reports in various oncology subspecialties (8–10,12,19,20). Median OQS rose from 65%–70% (9.5/15 to 10/15) in previous studies of oncology subspecialties or other medical specialties (10,14) to 77% (14/18) in this study. More recent publications were more likely to adhere to the CONSORT criteria, possibly reflecting the increased uptake of the CONSORT guidelines by the International Committee of Medical Journal Editors and journal editorial boards since 2001 (21). A number of other factors were also associated with improved reporting quality, including a high IF of the publishing journal. This finding is similar to the results of other studies (8,10,14,22) and may be explained by stricter peer review or greater scrutiny before submission to high-IF journals. Furthermore, rejection of poorly reported RCTs by high-IF journals may explain why such reports appear in lower-IF journals. Industry funding was another factor associated with improved reporting quality in this study. Although industry funding has been associated with better reporting quality in some reports (8,14), others have found industry funding to be associated with publication biases (23–25). In this study, complex funding arrangements may not have been clearly reported and could have biased the analysis, although the reporting quality of industry-funded RCTs appeared to be at least equivalent to that of government- or academic-funded trials. Finally, trials led from outside North America and Europe were better reported. Because some trials are international, the geographic origin of the first author may differ from that of the sponsor; however, except for intergroup trials performed on several continents, the sponsor and the first author are frequently from the same geographic region.

Although we observed an overall improvement in RCT reporting over time, some items remained poorly reported. Items pertaining to clinical features such as outcome measures, eligibility criteria, intervention, or baseline data (75%, 95%, 90%, and 99%, respectively) were usually better reported than methodological items such as randomization sequence generation, sample size, allocation concealment, blinding, and ITT (29%, 67%, 51%, 41%, and 62% of studies, respectively). This finding may result from a perception that the clinical aspects of RCTs are of greater importance and interest, particularly because many authors of such articles are clinicians. Therefore, the paucity of data about the methodological aspects of RCTs may reflect a deliberate de-emphasis, especially when article length is limited, rather than shortcomings in trial design or execution (3,20,26–28). Several authors have demonstrated a strong association between poorly reported trials and poorly designed or conducted trials (20,27), and another group found that poorly reported allocation concealment is frequently related to an unclear allocation concealment procedure in the trial protocol (3). However, most of these observations were made during the 1980s and 1990s, before trial data management and trial design were widely computerized. Nevertheless, adequate reporting of trial methodology is critical to avoid publication biases and to help readers decide whether the study conclusions are valid. Therefore, efforts should be made to improve the reporting of CONSORT items in future publications.

The current study has a number of limitations. Our analysis was limited to published studies and is therefore potentially subject to publication bias; indeed, it is known that some RCTs, especially those with negative results, are never published (24). In addition, some RCTs may have been poorly designed, or manuscripts may have been so poorly written that they were rejected for publication. Such manuscripts would likely have low OQSs, and their inclusion would have lowered the estimated overall reporting quality of oncology RCTs. Furthermore, although we report on the adequacy of reporting, as defined by the number of CONSORT-mandated items reported, we are unable to comment on the accuracy of reporting because we were unable to compare the publications with the actual trial protocols (29). The recent decision of the Journal of Clinical Oncology to require access to the full study protocol for the submission of all RCTs might facilitate this type of analysis in the future (30). Finally, a single investigator rated most publications, which could have introduced subjectivity into the process. Although explanatory notes are provided in the CONSORT statement (31), the requirements for clear reporting of some items, such as allocation concealment, blinding, or settings and locations, are complex. To reduce subjectivity, a data collection template was designed and piloted before its use for data collection in this study.

The rating process considered only the blinding of patients and physicians and not of those involved in outcome assessment and data analysis. The definition and interpretation of blinding vary considerably among publications (32); however, blinding of patients and treating physicians is the most important safeguard of methodological quality, even if a full blinding process does not guarantee real blinding (16,26). ITT was considered well reported if readers had enough information about the procedure used to determine the analyzed population and if this procedure was consistent with the text; we did not assess whether the ITT definition was concordant with standard criteria (17,33). The CONSORT statement is a set of recommendations for reporting RCTs and does not dictate how trials should be conducted. In addition, it was too challenging to rate reproducibly the reporting of qualitative items such as trial limitations, external validity, and general interpretation of the results, and therefore those items were not included in the primary analysis of the 2001 OQS.

In previous studies (8,10,14,27), three key methodological items (ITT analysis, blinding, and allocation concealment) were singled out as being associated with increased overall reporting quality (27) and were often analyzed separately from the other CONSORT items (8,10,14). Although these three items are likely associated with improved overall reporting quality, every CONSORT item contributes to overall reporting quality, and the CONSORT statements themselves make no distinction among items. Moreover, these three items reflect only the methodological aspects of trials, whereas it is also important to adequately report clinical aspects. Because it was unclear whether there was sufficient rationale for separating these three items from the remaining items, the primary analysis included all items in a single score.

In conclusion, our findings show that the reporting quality of oncology RCTs has improved statistically significantly over time and is associated with publication in high-IF journals. However, the fact that some items remain very poorly reported is a concern, given data showing that failures in reporting may conceal important problems in trial design or conduct. Therefore, we recommend that oncology journals require even stricter adherence to the CONSORT guidelines and that manuscripts be prepared in full collaboration among methodologists, statisticians, and clinicians.

Table 1. Overall quality of reporting, rating using items from the 2001 CONSORT statement*


Item no. | Criterion | Description | No. (%) of trials in which item was clearly reported
1 | "Randomized" stated in abstract | Study identified as a randomized trial in the title or abstract | 345 (97)
2 | Background | Adequate description of the scientific background and explanation of rationale | 340 (95)
3 | Participants | Description of the eligibility criteria for participants | 338 (95)
4 | Intervention | Details of the interventions intended for each group | 321 (90)
5 | Objectives | Description of the specific objectives or scientific hypotheses in the methods section | 284 (80)
6 | Outcome: how | Definition of primary outcome measures | 267 (75)
7 | Sample size | Description of sample size calculation | 241 (67)
8 | Randomization | Description of the method used to generate the random allocation sequence, including details of any restriction | 104 (29)
9 | Allocation concealment | Description of the method used to implement the random allocation sequence, assuring concealment until interventions were assigned | 182 (51)
11 | Blinding | Whether or not participants or those administering the interventions were blinded to group assignment; if relevant, description of the similarity of interventions | 146 (41)
12 | Statistical methods | Description of the statistical methods used to compare groups for primary outcomes | 323 (90)
13 | Flow | Description of the flow of participants through each stage (number of participants randomly assigned, receiving intended treatment, and analyzed for the primary outcome) | 212 (59)
14 | Recruitment | Dates defining the periods of recruitment and follow-up | 243 (68)
15 | Baseline data | Description of baseline demographics and clinical characteristics of each group | 354 (99)
16 | Intent-to-treat analysis | Number of participants in each group included in each analysis and whether "intent-to-treat" analysis was conducted | 220 (62)
17 | Outcome measures | For the primary outcome, a summary of results for each group, the estimated effect size, and its precision (eg, 95% CI) are provided | 236 (66)
18 | Ancillary analysis | Clear statement of whether subgroup/adjusted analyses were prespecified or exploratory | 291 (81)
19 | Adverse events | Description of all important adverse events in each group | 337 (94)

* N = 357.

Table 2. Overall quality of reporting: rating using items from the 2010 CONSORT statement*


Item no. | Criterion | Description | No. (%) of trials in which item was clearly reported
1a | "Randomized" stated in title | Study identified as a randomized trial in the title | 189 (53)
1b | Abstract structure | Structured summary of trial design, methods, results, and conclusions | 323 (90)
2a | Background | Adequate description of the scientific background and explanation of rationale | 340 (95)
2b | Objectives | Description of the specific objectives or scientific hypotheses in the introduction | 131 (37)
3a | Trial design | Description of trial design, including allocation ratio | 120 (34)
4a | Participants | Description of the eligibility criteria for participants | 338 (95)
4b | Settings and location | Settings and locations where the data were collected | 246 (69)
5 | Intervention | Details of the interventions intended for each group | 321 (90)
6a | Outcome: how | Definition of primary outcome measures | 267 (75)
6b | Outcome: when | Definition of the time when the primary outcome was measured | 260 (73)
7a | Sample size | Description of sample size calculation | 241 (67)
7b | Interim analysis | When applicable, explanation of any interim analyses and stopping guidelines | 320 (90)
8a | Randomization, sequence generation | Description of the method used to generate the random allocation sequence | 109 (31)
8b | Randomization, restriction | Type of randomization; details of any restriction | 298 (83)
9 | Allocation concealment | Description of the method used to implement the random allocation sequence, assuring concealment until interventions were assigned | 182 (51)
11 | Blinding | Whether or not participants or those administering the interventions were blinded to group assignment; if relevant, description of the similarity of interventions | 146 (41)
12a | Statistical methods | Description of the statistical methods used to compare groups for primary outcomes | 323 (90)
12b | Additional analysis, method | Methods for additional analyses, such as subgroup analysis and adjusted analysis | 227 (64)
13a | Diagram | Flow of participants is represented with a CONSORT diagram | 215 (60)
13b | Flow | Description of the flow of participants through each stage (number of participants randomly assigned, receiving intended treatment, and analyzed for the primary outcome) | 212 (59)
14 | Recruitment | Dates defining the periods of recruitment and follow-up | 243 (68)
15 | Baseline data | Description of baseline demographics and clinical characteristics of each group | 354 (99)
16 | Intent-to-treat analysis | Number of participants in each group included in each analysis and whether "intent-to-treat" analysis was conducted | 220 (62)
17 | Outcome measures | For the primary outcome, a summary of results for each group, the estimated effect size, and its precision (eg, 95% CI) are provided | 236 (66)
18 | Ancillary analysis | Clear statement of whether subgroup/adjusted analyses were prespecified or exploratory | 291 (81)
19 | Adverse event classification | Description of all important adverse events in each group, with classification | 282 (79)
25 | Funding | Sources of funding and other support | 305 (86)

* N = 357.

Table 3. Trial Characteristics*

Characteristic | Studies, No. (%)
Year of publication
  2005 | 91 (26)
  2006 | 68 (19)
  2007 | 71 (20)
  2008 | 74 (21)
  2009 | 53 (15)
Tumor site
  Lung | 77 (22)
  Breast | 92 (26)
  Urinary system | 32 (9)
  Colon/Rectum | 52 (15)
  Others | 104 (29)
Sources of trial funding
  Government/foundation | 89 (25)
  Completely funded by industry | 133 (37)
  Partially funded by industry | 84 (24)
  Funding not reported | 51 (14)
Journal
  Journal of Clinical Oncology | 186 (51)
  Annals of Oncology | 41 (12)
  New England Journal of Medicine | 31 (9)
  British Journal of Cancer | 21 (6)
  European Journal of Cancer | 20 (6)
  Other journals | 58 (16)
Journal impact factor
  <10 | 95 (27)
  10–20 | 212 (59)
  >20 | 50 (14)
Region in which RCT was led
  Europe | 214 (60)
  North America | 107 (30)
  Asia | 28 (8)
  Others | 8 (2)
Primary outcome
  Positive | 149 (42)
  Negative | 195 (55)
  Unclear | 13 (4)
Sample size
  Median | 437
  Interquartile range | 258–754
* N = 357.

Table 4. Results of regression analyses of factors predicting 2001 OQS*





Study characteristics | Mean OQS (95% CI) | Univariate linear regression: estimate (95% CI)† | P | Multivariable linear regression: estimate (95% CI)† | P
Year of publication‡ | NA | 0.22 (0.07 to 0.38) | .005 | 0.20 (0.04 to 0.36) | .008
Journal impact factor
  <10 | 12.88 (12.43 to 13.34) | Referent | <.001 | Referent | .003
  10–20 | 13.39 (13.12 to 13.66) | 0.50 (0.00 to 1.00) | | 0.54 (0.01 to 1.07) |
  >20 | 14.46 (13.88 to 15.04) | 1.58 (0.87 to 2.29) | | 1.35 (0.57 to 2.13) |
Source of trial funding
  No industry funding | 13.15 (12.73 to 13.57) | Referent | .004 | Referent | .079
  Some industry funding | 13.68 (13.39 to 13.96) | 0.53 (0.01 to 1.05) | | 0.31 (−0.24 to 0.86) |
  Unknown | 12.69 (12.15 to 13.30) | −0.46 (−1.14 to −0.31) | | −0.40 (−1.13 to 0.33) |
Treatment type
  Neoadjuvant/unknown | 13.20 (12.91 to 13.52) | Referent | .087 | Referent | .085
  Metastatic disease | 13.25 (12.65 to 13.99) | 0.05 (−0.91 to 1.02) | | 0.16 (−0.78 to 1.10) |
  Adjuvant | 13.79 (13.37 to 14.22) | 0.59 (−0.42 to 1.60) | | 0.68 (−0.28 to 1.64) |
Sample size/100‡ | NA | 0.03 (0.00 to 0.05) | .052 | 0.00 (0.00 to 0.01) | .65
Number of treatment arms
  2 | 13.44 (13.22 to 13.69) | Referent | .34 | Referent | .29
  >2 | 13.12 (12.45 to 13.78) | −0.33 (−1.00 to 0.35) | | −0.36 (−1.00 to 0.29) |
Results of primary outcome
  Negative | 13.26 (12.97 to 13.57) | Referent | .007 | Referent | .63
  Positive | 13.72 (13.38 to 14.06) | 0.46 (0.01 to 0.91) | | 0.18 (−0.58 to 0.94) |
  Unclear | 12.00 (10.90 to 13.10) | −1.26 (−2.43 to −0.08) | | NA |
Results of primary outcome, by authors
  Negative | 13.30 (12.99 to 13.64) | Referent | .003 | Referent | .81
  Positive | 13.59 (13.28 to 13.90) | 0.29 (−0.16 to 0.73) | | −0.09 (−0.83 to 0.65) |
  Unclear | 12.00 (10.90 to 13.10) | −1.30 (−2.49 to −0.11) | | −1.12 (−2.26 to 0.02) |
Tumor site
  Lung | 13.10 (12.65 to 13.56) | Referent | .24 | Referent | .15
  Breast | 13.32 (12.86 to 13.77) | 0.21 (−0.43 to 0.85) | | 0.14 (−0.49 to 0.77) |
  Urinary system | 13.06 (12.22 to 14.04) | −0.04 (−0.91 to 0.83) | | −0.20 (−1.04 to 0.64) |
  Colon/rectum | 13.52 (12.98 to 14.06) | 0.42 (−0.33 to 1.16) | | 0.35 (−0.38 to 1.08) |
  Others | 13.75 (13.35 to 14.75) | 0.65 (0.02 to 1.27) | | 0.64 (0.03 to 1.25) |
Region in which RCT was led
  Europe | 13.56 (13.26 to 13.85) | Referent | .029 | Referent | .003
  North America | 12.96 (12.59 to 13.34) | −0.59 (−1.08 to −0.11) | | −0.79 (−1.28 to −0.30) |
  Others | 13.81 (12.98 to 14.51) | 0.25 (−0.49 to 0.99) | | 0.25 (−0.49 to 0.99) |




* 0 to 18 scale. OQS = overall quality score; NS = statistically nonsignificant.
† The estimates shown indicate the incremental benefit observed compared with the reference level. Any positive value indicates benefit compared with reference, whereas any negative value indicates detriment compared with reference.
‡ Continuous.

Figure 1.

Selection of randomized clinical trials in the systematic review. Records were screened by readers: BY = B. You; HG = H. K. Gan; JP = Julien Péron; DM = Denis Maillet.


References

1. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924–926.
2. Moher D, Pham B, Jones A, et al. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet. 1998;352(9128):609–613.
3. Pildal J, Chan AW, Hrobjartsson A, Forfang E, Altman DG, Gotzsche PC. Comparison of descriptions of allocation concealment in trial protocols and the published reports: cohort study. BMJ. 2005;330(7499):1049–1052.
4. Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996;276(8):637–639.
5. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357(9263):1191–1194.
6. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332.
7. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869.
8. Lai R, Chu R, Fraumeni M, Thabane L. Quality of randomized controlled trials reporting in the primary treatment of brain tumors. J Clin Oncol. 2006;24(7):1136–1144.
9. Moher D, Jones A, Lepage L. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA. 2001;285(15):1992–1995.
10. Toulmonde M, Bellera C, Mathoulin-Pelissier S, Debled M, Bui B, Italiano A. Quality of randomized controlled trials reporting in the treatment of sarcomas. J Clin Oncol. 2011;29(9):1204–1209.
11. Hopewell S, Dutton S, Yu LM, Chan AW, Altman DG. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ. 2010;340:c723.
12. Kober T, Trelle S, Engert A. Reporting of randomized controlled trials in Hodgkin lymphoma in biomedical journals. J Natl Cancer Inst. 2006;98(9):620–625.
13. You B, Gan H, Pond G, Chen XE. Consistency in the analysis and reporting of primary endpoints in oncology randomized controlled trials from registration to publication: a systematic review. J Clin Oncol. 2012;30(2):210–216.
14. Rios LP, Odueyungbo A, Moitri MO, Rahman MO, Thabane L. Quality of reporting of randomized controlled trials in general endocrinology literature. J Clin Endocrinol Metab. 2008;93(10):3810–3816.
15. Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions: Version 5.0.2. Oxford, UK: The Cochrane Collaboration; 2011. http://www.cochrane-handbook.org.
16. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273(5):408–412.
17. Nuesch E, Trelle S, Reichenbach S, et al. The effects of excluding patients from the analysis in randomised controlled trials: meta-epidemiological study. BMJ. 2009;339:b3244.
18. Wood L, Egger M, Gluud LL, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ. 2008;336(7644):601–615.
19. Plint AC, Moher D, Morrison A, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185(5):263–267.
20. Liberati A, Himel HN, Chalmers TC. A quality assessment of randomized control trials of primary treatment of breast cancer. J Clin Oncol. 1986;4(6):942–951.
21. International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals. N Engl J Med. 1997;336(4):309–315.
22. Mills E, Wu P, Gagnier J, Heels-Ansdell D, Montori VM. An analysis of general medical and specialist journals that endorse CONSORT found that reporting was not enforced consistently. J Clin Epidemiol. 2005;58(7):662–667.
23. Djulbegovic B, Lacevic M, Cantor A, et al. The uncertainty principle and industry-sponsored research. Lancet. 2000;356(9230):635–638.
24. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet. 1991;337(8746):867–872.
25. Rochon PA, Gurwitz JH, Simms RW, et al. A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med. 1994;154(2):157–163.
26. Devereaux PJ, Choi PT, El-Dika S, et al. An observational study found that authors of randomized controlled trials frequently use concealment of randomization and blinding, despite the failure to report these methods. J Clin Epidemiol. 2004;57(12):1232–1236.
27. Huwiler-Muntener K, Juni P, Junker C, Egger M. Quality of reporting of randomized trials as a measure of methodologic quality. JAMA. 2002;287(21):2801–2804.
28. Soares HP, Daniels S, Kumar A, et al. Bad reporting does not mean bad methods for randomised trials: observational study of randomised controlled trials performed by the Radiation Therapy Oncology Group. BMJ. 2004;328(7430):22–24.
29. Jones MN, Mewhort DJ. Representing word meaning and order information in a composite holographic lexicon. Psychol Rev. 2007;114(1):1–37.
30. Haller DG. Providing protocol information for Journal of Clinical Oncology readers: what practicing clinicians need to know. J Clin Oncol. 2011;29(9):1091.
31. Altman DG, Schulz KF, Moher D, et al. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134(8):663–694.
32. Devereaux PJ, Manns BJ, Ghali WA, et al. Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials. JAMA. 2001;285(15):2000–2003.
33. Hollis S, Campbell F. What is meant by intention to treat analysis? Survey of published randomised controlled trials. BMJ. 1999;319(7211):670–674.

Funding

HKG is the recipient of a Victorian Cancer Agency Research Fellowship from the Victorian State Government.

Notes

The funders did not have any involvement in the design of the study; the collection, analysis, and interpretation of the data; the writing of the manuscript; or the decision to submit the manuscript for publication. The authors thank Dr Marc Buyse, ScD, for his help and active review of the study. The authors have no conflicts of interest to disclose.