
Clinical trials have long been the gold standard for determining the efficacy and safety of new therapies. However, the populations involved in these trials often do not reflect the diversity or complexity of real-world patients, particularly those in community care settings. This mismatch can slow the adoption of new therapies post-approval, especially among underrepresented populations.
Real-world evidence (RWE) studies are emerging as a crucial tool for bridging this divide. These studies help build physician confidence by providing data that mirrors everyday clinical practice. Access to representative datasets that extend beyond academic centers is therefore critical, and emerging technologies such as federated data models and AI-driven harmonization are helping overcome current limitations.
The Clinical Trial Divide
New therapies often enter the market with robust clinical trial data, yet their adoption in routine care can be slower than expected. One reason is that trial populations frequently differ from the patients physicians actually see, particularly in community-based settings. Trial cohorts are often drawn from academic institutions, with stricter eligibility criteria and more uniform care delivery. In contrast, routine practice involves broader patient demographics, variable access to diagnostic testing, and differences in care delivery across providers and regions.
Expanding adoption requires more than just education or outreach. It necessitates generating evidence that supports the therapy’s effectiveness in real-world populations that better reflect day-to-day clinical practice. Unlike guideline inclusion, which requires structured evidence through formal committee review, broader adoption often hinges on physicians seeing outcomes in patients who reflect their everyday practice.
Filling the Gaps with Real-World Evidence
Clinical trials are designed to demonstrate safety and efficacy under controlled conditions. However, the same controls that make results statistically sound also limit the diversity of enrolled patients. Strict inclusion and exclusion criteria often leave out patients with certain comorbidities, older adults, or those treated in community settings.
As a result, once a therapy enters the market, physicians practicing outside large academic medical centers may not see their patient populations reflected in the published trial results. This can create uncertainty about whether the same outcomes apply to patients in their care, especially when clinical presentations are more complex or diagnostic workflows differ from those in the trial.
The Role of Representative Real-World Evidence
To build confidence beyond the trial setting, pharmaceutical companies frequently turn to follow-on studies using real-world data. A post-approval study can demonstrate a therapy’s effectiveness in broader patient groups, especially those not well represented in the original trial. These studies are commonly led by principal investigators in partnership with community and academic research sites. Findings from these studies are published in peer-reviewed journals to support physician confidence and policy updates.
“Up to 80% of oncology patients in the US are treated outside academic centers.”
When done well, these studies address the clinical questions that trials were not set up to answer. They may show whether a therapy performs consistently in different demographic groups, in non-academic settings, or when delivered alongside varying standards of care. In doing so, they can support more confident prescribing and inform broader inclusion in guidelines or coverage policies. However, the value of these studies depends on the quality and representativeness of the data used.
Overcoming Data Access Challenges
For teams focused on expanding therapy adoption, the challenge is less about acquiring data and more about accessing datasets that reflect real-world care. Many widely used platforms draw heavily from academic medical centers, where patients, workflows, and diagnostic access differ from those in community settings.
This limits the ability to study how therapies perform across diverse populations or care environments. When datasets overrepresent one segment of the healthcare system, it becomes difficult to build evidence that supports broader clinical decision-making or addresses variation in real-world adoption.
Emerging technologies are helping overcome these limitations. Federated data models, artificial intelligence-driven data harmonization, and synthetic control arms allow researchers to generate robust, privacy-preserving insights across multiple care settings without centralizing sensitive patient data. These innovations make it possible to study therapy performance in truly diverse populations and unlock broader clinical utility.
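To make the federated idea concrete, here is a minimal, hypothetical sketch of the pattern: each care site computes summary statistics locally, and only those aggregates, never row-level patient records, are pooled centrally. The site names, response values, and function names are illustrative assumptions, not any vendor's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SiteSummary:
    n: int           # number of patients summarized at the site
    responders: int  # patients who responded to the therapy

def summarize_site(responses: list) -> SiteSummary:
    """Runs locally at each site; raw patient records never leave."""
    return SiteSummary(n=len(responses), responders=sum(bool(r) for r in responses))

def pooled_response_rate(summaries: list) -> float:
    """The central aggregator sees only counts, preserving privacy."""
    total_n = sum(s.n for s in summaries)
    total_responders = sum(s.responders for s in summaries)
    return total_responders / total_n

# Illustrative data: one academic center and two community sites
academic = summarize_site([True, True, False, True])
community_a = summarize_site([True, False, False])
community_b = summarize_site([False, True, True])

rate = pooled_response_rate([academic, community_a, community_b])
print(f"Pooled response rate: {rate:.2f}")  # 6 responders / 10 patients = 0.60
```

Real federated platforms add far more (secure aggregation, differential privacy, harmonized data models), but the design choice is the same: move the computation to the data rather than the data to the computation.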
Bridging the Gap Between Trials and Practice
Regulatory approval confirms that a therapy is safe and effective in a defined trial population. However, translating that success into real-world adoption can be more complex. For therapies to reach broader patient populations, especially those underrepresented in trials, pharmaceutical teams often need to invest in generating evidence that mirrors real-world care. These studies play a critical role in filling the gaps left by clinical trials, helping physicians understand how a therapy performs in settings and patient groups they see every day.
As the oncology landscape continues to evolve, the ability to assess performance across diverse clinical environments is becoming a key factor in driving adoption. Generating this type of evidence is a strategic investment for ensuring that innovations in care translate to real-world benefit. As precision therapies grow more targeted and complex, the need for population-level, representative evidence will only increase. Bridging the divide between clinical trials and the real world is no longer a post-market task; it is a prerequisite for scalable innovation.
According to Noah Nasser, CEO of datma, “Bridging this gap is essential for ensuring that new therapies deliver on their promise to all patients, not just those who fit narrow trial criteria.”