Gap timing: Year 2 · when data quality starts to drift
Root cause: Infrastructure · under-resourced after approval
Fix: Continuity · planned from day one

Real-World Evidence After Approval: The Gaps Teams Discover Late

Post-approval RWE studies are often planned with clinical trial rigor and executed with far less operational infrastructure. The gaps that derail them are predictable — patient disengagement, data completeness drift, site handoff failures — and they are almost always discovered later than they should be.

Data continuity · Patient retention · Regulatory-grade evidence

[Diagram] Post-Approval RWE Model: Enroll · Sustain · Monitor · Report

"We thought the study was going well until we ran the year-two data pull."
Year two is when under-resourced RWE programs reveal their gaps. The teams that avoid this outcome plan for long-duration continuity from the start.

Why Post-Approval Studies Fail Differently Than Pre-Approval Trials

Pre-approval trials run with urgency. Regulatory timelines, milestone payments, and organizational attention all concentrate resources and scrutiny on execution quality. Sites are engaged. Patient support is funded. Compliance monitoring is active.

Post-approval RWE studies often start with similar intentions and quickly diverge. The urgency is gone. The device or drug is already approved. Internal teams redirect their attention to the next pre-approval program. Sites become less attentive as the novelty of the study fades. The operational infrastructure that was fully funded for the pivotal trial gets quietly reduced for the post-approval follow-up.

The result appears in year-two data: missing assessments, lost-to-follow-up rates the protocol did not anticipate, data completeness rates that cannot support the planned analyses, and a scramble to reconstruct why the gaps occurred rather than documented evidence of active follow-up attempts.

See also: Post-Market Studies · Post-Market Compliance

The Operational Gaps That Appear in Year Two

These are the specific failure modes that characterize post-approval RWE studies when the operational infrastructure was not built for the full study arc.

Patient disengagement without detection

Patients who stop responding to contact attempts are often not flagged as lost-to-follow-up until the next scheduled assessment window, months later. Without between-visit engagement and early detection of communication breakdown, disengagement hardens into loss to follow-up before the team knows there is a problem.
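
As an illustration only, here is a minimal sketch in Python of the kind of between-visit rule that surfaces disengagement early. The contact-log structure and the 45-day threshold are assumptions for the sketch, not features of any specific platform:

    from datetime import date, timedelta

    # Hypothetical contact-log records: each entry holds the date of the
    # patient's last confirmed response. Field names are illustrative.
    contact_log = [
        {"patient_id": "P-001", "last_response": date(2024, 1, 10)},
        {"patient_id": "P-002", "last_response": date(2023, 8, 2)},
    ]

    STALE_AFTER = timedelta(days=45)  # assumed escalation threshold

    def flag_disengaging(log, today):
        # Flag patients whose last response predates the threshold so
        # outreach can begin months before the next assessment window.
        return [r["patient_id"] for r in log
                if today - r["last_response"] > STALE_AFTER]

    print(flag_disengaging(contact_log, date(2024, 2, 1)))  # ['P-002']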

Data completeness drift

Assessment completion rates that were strong in year one decline gradually in year two and beyond. This is not a sudden failure — it is a slow erosion that is easy to miss in aggregate metrics until it has already compromised the endpoint's analytic validity.
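
To make the erosion concrete, here is a short sketch with invented numbers showing how a per-window completion rate can fall sharply while the cumulative rate still looks acceptable:

    # Invented numbers: expected vs. completed assessments per follow-up
    # window. A slow per-window decline hides inside a healthy-looking
    # cumulative rate until the drop is already severe.
    windows = {
        "Month 6":  (200, 192),
        "Month 12": (200, 184),
        "Month 18": (200, 168),
        "Month 24": (200, 146),
    }

    total_expected = total_completed = 0
    for window, (expected, completed) in windows.items():
        total_expected += expected
        total_completed += completed
        print(f"{window}: {completed / expected:.0%} in window, "
              f"{total_completed / total_expected:.0%} cumulative")

Under these assumed numbers, the month-24 window has already fallen to 73% while the cumulative figure still reads 86%, which is exactly how aggregate metrics hide the drift.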

Site handoff failures

Staff turnover at sites (coordinators leaving, investigators moving to different roles) breaks continuity in site relationships, and those breaks are rarely managed actively. New site staff may not have the same investment in a long-running post-approval study.

Contact record decay

Patient addresses, phone numbers, and email addresses change. Studies that captured this information at enrollment and never verified it lose the ability to reach patients as time passes. Contact record maintenance is operational infrastructure that is routinely de-prioritized.

Device and technology obsolescence

Post-approval studies using connected devices or eDiary platforms face technology lifecycle risk. App updates break compatibility, device batteries degrade, and platforms may sunset features or require patient re-onboarding. Without a proactive management plan, each of these events interrupts data collection.

Regulatory documentation gaps

Post-approval studies must demonstrate active follow-up effort when data is missing. Studies without systematic documentation of contact attempts, reasons for non-completion, and recovery steps taken cannot satisfy regulatory documentation requirements — regardless of how much effort was actually made.
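
As a hedged sketch of what systematic documentation can look like in practice, the record layout below is purely illustrative; actual field requirements depend on the protocol and the reviewing authority:

    from dataclasses import dataclass
    from datetime import datetime

    # Illustrative schema for a follow-up attempt record. Field names are
    # assumptions, not a regulator-defined format; the point is that every
    # missing data point carries a documented trail of attempts.
    @dataclass
    class ContactAttempt:
        patient_id: str
        attempted_at: datetime
        channel: str     # e.g. "phone", "email", "portal"
        outcome: str     # e.g. "no answer", "declined", "completed"
        next_step: str   # documented recovery action, if any

    attempt = ContactAttempt(
        patient_id="P-002",
        attempted_at=datetime(2024, 2, 3, 10, 15),
        channel="phone",
        outcome="no answer",
        next_step="retry in 7 days, then escalate to site coordinator",
    )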

FAQ

Is RWE from post-approval studies accepted by regulators for label extensions?

Real-world evidence is increasingly accepted by regulators for certain purposes, including label extensions, pediatric studies, and rare disease applications. The data quality requirements are similar to those for clinical trials — incomplete or poorly documented RWE is unlikely to support a regulatory submission.

How should post-approval study resources be planned differently than pre-approval?

Post-approval studies need consistent funding across the full study arc, not a front-loaded model that reduces resources as the study matures. Patient support, compliance monitoring, and site engagement need to be sustained across all follow-up years — with explicit planning for years three, four, and five, not just year one.

What technology is best suited for long-duration post-approval data collection?

Platforms with a track record of long-term stability, low patient burden, and active patient support are best suited for post-approval programs. Studies that used complex technology in pre-approval phases often need to simplify for post-approval follow-up to maintain participation rates over multi-year periods.

Managing a Post-Approval Study or RWE Program?

Delve provides the patient engagement, compliance monitoring, and data continuity infrastructure that post-approval programs need to sustain evidence quality across multi-year follow-up periods.

Talk to Our Post-Market Team

See Delve Post-Market Studies →