If you want to improve R&D yield in the pharma industry, focus on Phase 2 clinical trials. A recent paper in Nature Biotechnology (sub. req'd; and others cited therein) shows that >60% of drugs that enter Phase 2 fail to advance to Phase 3, making this the highest-attrition point along the path from the clinic to final approval. A major reason for the high Phase 2 failure rate is the trial design strategy. Most companies use Phase 2 trials to get an "early read" on efficacy at a lower cost than a full Phase 3 study, by using surrogate endpoints that can be analyzed more quickly and cheaply than those needed for registration. In cancer, for example, it's common to use progression-free survival as the Phase 2 endpoint, and then switch to overall survival for Phase 3. R&D leaders like this approach because they are typically trying to spread a fixed annual budget across the portfolio and want to avoid making too many "big bets".
But this strategy is fundamentally flawed, because in many clinical areas (like oncology) there is little or no demonstrated relationship between surrogate and registration endpoints. R&D teams know this "dirty little secret" of clinical development, and try to counter it by setting extremely stringent Phase 2 success criteria - after all, they reason, even without a demonstrated correlation, a very strong signal on a surrogate endpoint must surely predict clinical activity. But because the endpoints are divorced from one another, this high efficacy bar is essentially arbitrary: it merely increases the number of "false negative" failures (and depresses the overall Phase 2 success rate) without improving the odds of success downstream. This approach may appear on its face to "kill the losers" (to quote David Grainger's thoughtful post), but it kills potentially effective drugs as well, in the service of constraining near-term R&D costs.
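To see why an arbitrarily high bar on a weakly correlated surrogate mostly manufactures false negatives, here is a minimal simulation sketch. Everything in it is a labeled assumption: true drug effects and surrogate readouts are modeled as standard normal variables with an assumed correlation of 0.1, and the thresholds are illustrative - nothing is calibrated to real trial data.

```python
import random

random.seed(0)

def simulate(n_drugs=100_000, correlation=0.1, threshold=1.0):
    """Return (phase2_pass_rate, phase3_success_rate_among_passers)."""
    passed = succeeded = 0
    for _ in range(n_drugs):
        true_effect = random.gauss(0.0, 1.0)   # drives the "real" clinical outcome
        noise = random.gauss(0.0, 1.0)
        # The surrogate readout only weakly tracks the true effect
        surrogate = (correlation * true_effect
                     + (1 - correlation ** 2) ** 0.5 * noise)
        if surrogate > threshold:              # clears the Phase 2 efficacy bar
            passed += 1
            if true_effect > 0:                # would succeed on the real endpoint
                succeeded += 1
    return passed / n_drugs, (succeeded / passed if passed else 0.0)

lenient_pass, lenient_success = simulate(threshold=1.0)
stringent_pass, stringent_success = simulate(threshold=2.0)
```

Tightening the bar from one to two standard deviations slashes the Phase 2 pass rate severalfold, but the success rate among the survivors barely moves - the extra stringency buys almost no predictive power when the correlation is weak, which is exactly the "false negative" trap described above.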
The alternative approach is to "save the winners" with Phase 2 trials that actually predict Phase 3 success - and adaptive trials are one way to do this (as the Nature Biotechnology authors observe). As one example (illustrated here), a combined Phase 2/3 trial could pre-specify criteria for an interim analysis that would classify the findings as "positive", "negative" or "indeterminate", and include provisions to increase the sample size (to allow detection of a smaller, but still clinically meaningful, effect size) in the "indeterminate" case. This sort of trial is truly predictive and de-risking (because it uses "real" clinical and regulatory criteria), and it avoids senselessly terminating projects because they failed on an irrelevant endpoint.
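As a concrete sketch of what such pre-specified interim criteria might look like, here is some illustrative Python. The boundaries, effect sizes, and the two-fold sample-size cap are all made-up placeholders - real group-sequential boundaries must be derived to control the overall type I error, which this sketch does not attempt.

```python
from math import ceil

# Illustrative pre-specified interim rules for a combined Phase 2/3 trial.
# These boundaries are placeholders, not validated group-sequential values.
EFFICACY_BOUND = 2.8   # interim z-statistic above this -> "positive"
FUTILITY_BOUND = 0.5   # interim z-statistic below this -> "negative"

def sample_size_per_arm(effect_size, z_alpha=1.96, z_beta=0.84):
    """Textbook two-arm formula: n = 2 * (z_alpha + z_beta)^2 / d^2 per arm."""
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

def interim_decision(z_interim, planned_n_per_arm, min_effect=0.25):
    """Classify an interim result; if "indeterminate", re-estimate the sample
    size to detect a smaller but still clinically meaningful effect
    (min_effect), capped at twice the original plan."""
    if z_interim >= EFFICACY_BOUND:
        return "positive", planned_n_per_arm    # strong signal: continue as designed
    if z_interim <= FUTILITY_BOUND:
        return "negative", 0                    # stop for futility
    new_n = min(sample_size_per_arm(min_effect), 2 * planned_n_per_arm)
    return "indeterminate", max(new_n, planned_n_per_arm)
```

For example, with 150 patients per arm planned, an interim z of 1.5 lands in the indeterminate zone, and this rule would expand the trial to 251 per arm to resolve a standardized effect of 0.25 - rather than declaring failure on the spot.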
Adaptive trials aren't a new idea in clinical development, but they've been used sparingly - mainly, I believe, because of organizational structure and decision-making. Although external analysts and investors look at R&D productivity across the entire value chain from basic science to registration, in large pharma organizations many separate individuals and groups within R&D have their own interests - and, more importantly, their own budgets. Even if the risk-adjusted value to the company of a "traditional" Phase 2 study followed by a separate Phase 3 trial is lower than that of an adaptive Phase 2/3 approach, the near-term cost of the adaptive trial's Phase 2 component is often higher than that of a standalone Phase 2 study. And if the "owner" of the Phase 2 development budget isn't also responsible for the program's overall risk-adjusted value, then the cheaper near-term option will often win.
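The incentive mismatch is easy to see in a toy risk-adjusted value calculation. Every cost, probability, and payoff below is hypothetical, chosen only to illustrate the shape of the problem, not drawn from any real program:

```python
# All figures are hypothetical ($M and probabilities), purely illustrative.
P2_COST, P3_COST = 30, 120         # traditional Phase 2, then Phase 3
ADAPTIVE_COST = 60                 # combined Phase 2/3: larger up-front spend
P_PASS_P2, P_PASS_P3 = 0.40, 0.50  # traditional transition probabilities
P_ADAPTIVE_SUCCESS = 0.30          # adaptive design, judged on "real" endpoints
PAYOFF = 600                       # value of an approved drug

# Risk-adjusted (expected) value of each development path
traditional_value = -P2_COST + P_PASS_P2 * (-P3_COST + P_PASS_P3 * PAYOFF)
adaptive_value = -ADAPTIVE_COST + P_ADAPTIVE_SUCCESS * PAYOFF
```

With these made-up numbers, the adaptive path is worth almost three times as much to the company on a risk-adjusted basis, yet its up-front cost is double that of a standalone Phase 2 study - exactly the asymmetry that a Phase 2 budget owner, measured on near-term spend, is set up to reject.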
This is a challenge, but it's not insurmountable. The first step to improving R&D productivity is to acknowledge that we need to focus on "saving the winners" in Phase 2. That's the "why" and the "what" of boosting success rates in clinical development - more on the "how" in a later post.