Laleh Amiri-Kordestani, Tito Fojo, Why Do Phase III Clinical Trials in Oncology Fail so Often?, JNCI: Journal of the National Cancer Institute, Volume 104, Issue 8, 18 April 2012, Pages 568–569, https://doi.org/10.1093/jnci/djs180
Achieving success in the development of a cancer drug continues to be challenging. Given the increasing costs (1) and the small number of drugs that gain regulatory approval (2), it is crucial to understand these failures. In this issue of the Journal, Gan et al. (3) reviewed 235 recently published phase III randomized clinical trials (RCTs). They report that 62% of the trials did not achieve statistically significant results. In trying to explain the high failure rate, they note that the actual magnitude of benefit achieved in a clinical trial (designated B) is nearly always less than what was predicted at the time the trial was designed (designated δ) and conclude, “investigators consistently make overly-optimistic assumptions regarding treatment benefits when designing RCTs.”
But should we really be surprised that phase III trials, the venue for detecting “small” differences, so often disappoint? Almost by definition, phase III studies are designed to detect small differences (4,5). The problem is that small has given way to “marginal” as outcomes have fallen below our already modest expectations. And who or what is to blame? Are investigators really overly optimistic regarding experimental therapies and, as the authors suggest, responsible for the large number of negative studies? Although we agree that optimism regarding clinical benefit may lead to an underpowered trial, we disagree that optimistic investigators are those we should blame. We would ask, how do Gan et al. (3) define optimism? Where do they place the line between an optimistic and a realistic expectation? The authors demonstrated a poor correlation between the expected and observed benefits, but in the majority of trials they also found the “expected benefits” were less than 4 months—a duration many would argue represents a modest and defensible expected benefit for the majority of solid tumors. Thus, rather than excessive optimism, we believe several factors, including inaccurate assessments of “limited data from early phase trials and/or investigators’ experience” interpreted in what the authors themselves acknowledge “is usually an empirical process,” lead to the differences that Gan et al. (3) found between the actual (B) and predicted (δ) benefit. Although there are models that use the results of phase I/II trials to predict the outcome of phase III studies, no model is perfect (6–8). For example, the response rate, which is part of the limited dataset available in designing phase III trials, has been correlated with survival and clinical benefit (9,10). But other factors, such as the duration of response, make response rate less reliable and lead to discrepancies between the results of phase II and III studies (11).
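The statistical consequence of an overestimated δ can be made concrete. Below is a minimal sketch, assuming exponential survival, a 1:1 log-rank test via Schoenfeld's events approximation, and purely illustrative numbers (a hypothetical 12-month control median and a hoped-for 4-month gain, i.e. roughly the sub-4-month expected benefits the authors describe): if the true gain turns out to be only 2 months, a trial sized for 80% power delivers only about a third of that power.

```python
from math import ceil, log, sqrt
from statistics import NormalDist  # standard-library normal distribution

Z = NormalDist()

def required_events(hr, alpha=0.05, power=0.80):
    """Schoenfeld's approximation: events needed for a 1:1 log-rank test
    powered to detect hazard ratio `hr` at two-sided level `alpha`."""
    z = Z.inv_cdf(1 - alpha / 2) + Z.inv_cdf(power)
    return ceil(4 * z**2 / log(hr) ** 2)

def achieved_power(events, true_hr, alpha=0.05):
    """Power actually attained with `events` events if the true hazard
    ratio is `true_hr` rather than the design assumption."""
    return Z.cdf(sqrt(events) * abs(log(true_hr)) / 2 - Z.inv_cdf(1 - alpha / 2))

# Hypothetical design: 12-month control median, hoped-for 4-month gain.
# Under exponential survival, HR = 12 / 16 = 0.75.
events = required_events(12 / 16)        # roughly 380 events

# If the true gain is only 2 months (HR = 12/14), power collapses
# to roughly a third of the planned 80%.
power = achieved_power(events, 12 / 14)
```

The asymmetry is the point: because required events scale with 1/(ln HR)², even a modest shortfall of B relative to δ turns a nominally well-powered trial into one likely to miss statistical significance.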
Similarly, the rate of stable disease and the “clinical benefit rate,” two measures increasingly reported in early phase studies, have never been shown to correlate with outcomes, yet they are regarded by many as measures of efficacy (12–14). Hence, we would argue that inaccurate assessments of limited data and reliance on endpoints that have not been validated are likely more important than overoptimism.