Extract

Historically, phase II trials in oncology were generally single-arm studies, constructed to distinguish between a tumor response rate felt to indicate a lack of promise (often 5%) and a rate that would indicate potential benefit (often 20%), with a one-sided type I error rate of 5%–10% and a type II error rate of 10%–20% (1). The dominant use of this design was based on the premise that an agent that could not produce a tumor response rate of 20% was unlikely to produce a clinically meaningful overall survival (OS) or progression-free survival (PFS) benefit in subsequent phase III testing. Recent trends in oncology drug development have challenged this paradigm. Many phase II trials are now designed to assess the promise of a molecularly targeted agent, given either alone or in combination with another regimen. In many cases, these agents are not anticipated to produce or improve tumor response rates; rather, the desired outcome is improved PFS or OS through means other than the direct cell killing evidenced by tumor shrinkage (2). In general, PFS is the preferred endpoint for such phase II trials: it is statistically more efficient than OS because the time to reach the PFS endpoint is substantially shorter, and the treatment effect is not diluted by salvage treatment. However, when no effective salvage therapy exists, or when the timing of progression assessment in a disease is a concern, OS may be chosen as the endpoint. Such trials can be single-arm studies compared with historical controls, or they can be randomized.
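The historical design described above amounts to an exact binomial hypothesis test: treat n patients and declare the agent promising if at least r responses are observed. As a minimal sketch (not the specific design procedure of reference 1), the Python snippet below searches for the smallest single-stage, single-arm design distinguishing an unpromising response rate p0 = 5% from a promising rate p1 = 20%; the function name is hypothetical, and the choices of alpha = 0.05 and beta = 0.10 are illustrative values within the quoted ranges.

```python
from scipy.stats import binom

def single_stage_design(p0=0.05, p1=0.20, alpha=0.05, beta=0.10, max_n=100):
    """Find the smallest single-stage single-arm design (n, r):
    enroll n patients and declare the agent promising if >= r respond,
    subject to type I error <= alpha and type II error <= beta."""
    for n in range(1, max_n + 1):
        for r in range(n + 1):
            # Type I error: P(X >= r | p0), falsely declaring promise
            type1 = 1 - binom.cdf(r - 1, n, p0)
            # Type II error: P(X < r | p1), missing a truly active agent
            type2 = binom.cdf(r - 1, n, p1)
            if type1 <= alpha and type2 <= beta:
                return n, r, type1, type2
    return None

n, r, a, b = single_stage_design()
print(f"Enroll n={n} patients; declare promising if >= {r} responses "
      f"(type I = {a:.3f}, type II = {b:.3f})")
```

In practice such designs are often run in two stages (e.g., Simon's optimal and minimax designs) so that an unpromising agent can be stopped early for futility, but the single-stage search above captures the underlying error-rate calculation.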
