eCOA completion: 92–98%, across programs combining the platform with concierge follow-up.

Retention uplift: up to 63%, reported in long-duration and rare-disease cohorts.

Sites supported globally: multi-region, multi-device studies.

Outcomes that move the trial forward

Compliance & Retention Outcomes We See
Outcomes are not a marketing line. They are the numbers our customers report when they combine Delve's wearable, eCOA, and concierge layers in their studies. We publish ranges, not averages, and we attribute them to studies — not to the brand.

Documented · Per-study · Defensible


Outcomes Workflow

Run → Measure → Document → Compare

Will my study see results like the case studies?
Results vary by population, protocol, and operating model. We publish ranges and per-study outcomes so sponsors can compare their context to ours.
No 'industry average' marketing. Just what each study actually produced.
Real outcomes, per study. Documented, defensible, comparable.

What 'Outcomes' Actually Means Here

When we talk about outcomes on this page, we mean operational compliance and retention numbers reported by Delve customers across their studies. We publish ranges rather than averages, and we attribute outcomes to studies — not to the platform in the abstract.

All numbers are documented per study in case studies. Your study's results will reflect your study's protocol, population, and operating model.

Related pages: Case Studies · Concierge

[Image: compliance and retention outcomes dashboard from clinical trials]

Why 'Industry Average' Numbers Mislead

Most vendor outcome claims are misleading not because the numbers are fake, but because the context is missing. An industry-average compliance number doesn't tell you what to expect in your study.

Aggregated 'industry averages'

Hide the variation that actually predicts your study's experience.

Cherry-picked studies

Top-decile results presented as typical — without disclosure that they aren't.

No attribution

Numbers presented without saying which study, what population, or what protocol they came from.

Static benchmarks

Old numbers from old protocols repeated for years without re-measurement.

No comparable context

Compliance from a healthy-volunteer study compared to a chronic-disease trial.

Confusing 'completion' types

Form completion ≠ wear-time ≠ retention ≠ adherence. Vendors quote whichever flatters them; the sketch after this list shows how one study can yield four different numbers.

Outcomes are useful only when the context that produced them is published alongside them.
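
To make that distinction concrete, here is a minimal sketch of how a single hypothetical study month produces four different "completion" numbers. Everything in it (names, data, arithmetic) is illustrative only; it is not Delve's schema or API.

    # One hypothetical participant-month; all values are made up for illustration.
    scheduled_forms = 30            # eCOA forms the protocol scheduled
    submitted_forms = 28            # forms actually submitted

    expected_wear_hours = 30 * 16   # protocol expects 16 h/day of wearable wear
    logged_wear_hours = 30 * 11     # hours the device actually recorded on-body

    enrolled_at_start = 120         # cohort size at the start of the window
    still_enrolled_at_end = 102     # participants who did not drop out

    doses_prescribed = 60
    doses_taken = 51

    metrics = {
        "form completion": submitted_forms / scheduled_forms,          # ~93%
        "wear-time": logged_wear_hours / expected_wear_hours,          # ~69%
        "retention": still_enrolled_at_end / enrolled_at_start,        # 85%
        "adherence": doses_taken / doses_prescribed,                   # 85%
    }
    for name, value in metrics.items():
        print(f"{name}: {value:.0%}")

Same study, four defensible-sounding numbers ranging from roughly 69% to 93%. A vendor quoting only the most flattering one isn't lying, but it isn't informative either.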

Delve Outcomes Disclosure vs Marketing 'Benchmarks'

Typical marketing 'benchmarks'

  • Single numbers without attribution
  • Averages without ranges
  • No protocol or population context
  • Compliance types blurred together
  • No per-study disclosure

Delve outcomes reporting

  • Ranges with attribution to specific studies
  • Per-study results in case studies
  • Protocol and population context published
  • Distinct metrics for completion / wear-time / retention
  • Confidence in numbers because they're defensible

We don't claim to beat industry averages. We publish numbers customers can compare to their own context.

What Outcomes Reporting Should Include

Useful outcomes reporting follows the same structure regulators expect of endpoint reporting: attribution, context, and reproducibility.
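
As a rough illustration, a per-study outcome record that satisfies those three properties might carry fields like the ones below. This is our sketch, not a published Delve schema; every field name is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class StudyOutcome:
        # Hypothetical record structure, for illustration only.
        study_id: str                  # attribution: which study produced the number
        metric: str                    # e.g. "ecoa_completion", "wear_time", "retention"
        value_low: float               # reported as a range, not a single average
        value_high: float
        population: str                # context: e.g. "rare-disease, adult"
        protocol_duration_weeks: int   # context a sponsor can compare against
        operating_model: str           # e.g. "platform + concierge"
        measurement_window: str        # reproducibility: when and how it was measured

    record = StudyOutcome(
        study_id="case-study-07",
        metric="ecoa_completion",
        value_low=0.92,
        value_high=0.98,
        population="rare-disease, adult",
        protocol_duration_weeks=52,
        operating_model="platform + concierge",
        measurement_window="weekly ePRO over the full study",
    )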

Numbers presented this way let a sponsor compare their context to ours — which is the only useful comparison.

See related pages: Case Studies · Closed-Loop Compliance · Concierge

[Image: per-study compliance and retention outcome reporting with protocol context]

Where Outcomes Show Up in a Trial

Operational outcomes show up across the same layers Delve operates: data collection, compliance, retention, site experience, and downstream evidence.

eCOA / ePRO completion

High completion rates across programs that combine the platform with concierge follow-up.

Wearable continuity

Measurable reduction in non-wear and sync gaps in long-duration studies.

Retention

Reduced drop-out in long-duration, rare-disease, and post-market cohorts.

Site burden

Coordinator-reported reduction in compliance-related escalation.

Time to clean data

Faster path from collection to analyzable dataset through harmonization + QC.

Submission readiness

Validation and documentation built alongside the study, not retrofitted at filing.

Each of these is documented per study in case studies — including the protocol, population, and operating context.

FAQ

Why do you publish ranges instead of single numbers?

Single numbers hide the variation that actually predicts a sponsor's own experience. Ranges with attribution let you compare your study's context to ours — which is the only useful comparison.

Are these results dependent on using concierge?

Largely yes. The highest compliance and retention outcomes come from programs that combine the platform with concierge follow-up. Platform-only deployments produce more modest results.

Can we see the underlying study data?

We publish per-study results in case studies. Underlying patient-level data is the property of each sponsor and isn't redistributed, but the study-level outcomes and protocol context are.

Compare Our Outcomes to Your Study's Context

Delve publishes per-study outcomes, ranges, and operating-model context so sponsors can compare what we've produced to what their own studies require — without industry-average marketing.

Talk Through Our Outcomes

Read Case Studies →