Compliance ≠ one number.
On-time matters, not just "done."
Recovery in hours, not weeks.
Compliance Measurement
Tasks + Wear-time + Device Health + Recovery
Clinical trial compliance is the degree to which participants (and the study team) complete protocol-required tasks within defined time windows—producing usable, longitudinal data with minimal gaps.
| Layer | Question it answers | Signals |
|---|---|---|
| Task completion | Did it happen? | ePRO / eCOA / diaries |
| On-time completion | When? | Window, late threshold |
| Wearable adherence | Was it worn + synced? | Wear-time + device health |
Compliance is usually the first sign of risk—long before dropout. That’s why it’s your most actionable metric.
Think of compliance as a stack. If you measure only the top layer, you miss why the data degraded. A minimal sketch of walking the stack follows this list.

- Task completion: Was the protocol-required task done at all? (ePRO, diary, assessment, visit, device wear event.)
- On-time completion: Was it done within the defined window? "Late but completed" often behaves like missing data.
- Device health: Was the device able to produce usable data? Sync success, battery, pairing, permissions.
- Recovery: When drift happens, how fast does the study detect and recover it? Time-to-recovery is the real differentiator.
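The stack can be checked top-down for every protocol task. Here's a minimal sketch in Python; the `TaskEvent` shape and its field names are illustrative assumptions, not any specific platform's data model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class TaskEvent:
    assigned_at: datetime             # when the task window opened
    window: timedelta                 # allowed completion window
    completed_at: Optional[datetime]  # None if never completed
    data_usable: bool                 # synced successfully and parseable

def classify(task: TaskEvent) -> str:
    """Walk the compliance stack top-down for a single task."""
    if task.completed_at is None:
        return "missed"
    if task.completed_at > task.assigned_at + task.window:
        return "late"            # often behaves like missing data
    if not task.data_usable:
        return "silent_failure"  # completed, but no usable data arrived
    return "on_time"
```

Classifying at the task level first is what lets the aggregate KPIs below distinguish "missed" from "late" from "worn but never synced."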
These KPIs are intentionally practical—built to drive actions. Use them per patient, per site, and at the study level.
| KPI | What it tells you | Simple formula | Best used for |
|---|---|---|---|
| ePRO Completion Rate | % of assigned ePROs completed | (Completed ePROs ÷ Assigned ePROs) × 100 | Overall adherence, risk trending |
| On-Time Completion Rate | % completed within the allowed window | (On-time completions ÷ Assigned) × 100 | Endpoint integrity, operational response needs |
| Late Completion Rate | How often tasks happen after the window | (Late completions ÷ Assigned) × 100 | Friction detection (UI, reminders, burden) |
| Consecutive Missed Days | Early drift signal (behavior breakdown) | Count of consecutive days missed | Triggering outreach + escalation rules |
| Wear-Time Adherence | Whether the device was worn enough | (Hours worn ÷ Expected hours) × 100 | Digital endpoint usability, wear-time decay |
| Valid Data Days | Days meeting minimum “valid” criteria | Count of days meeting endpoint rules | Endpoint readiness, statistical power protection |
| Sync Success Rate | Whether data actually arrived | (Successful syncs ÷ Expected syncs) × 100 | Silent failure detection (Bluetooth, permissions) |
| Device Health Score | Composite: battery + pairing + sync + app status | Weighted score (study-defined) | Proactive troubleshooting + triage |
| Time-to-Recovery (TTR) | How fast you restore compliance after drift | Time from issue detection → resolution | Operational quality, prevention of endpoint gaps |
| Escalation Rate | How often patient issues reach the site | (Site escalations ÷ Active participants) × 100 | Site burden control, workflow effectiveness |
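To make the formulas concrete, here is a hedged sketch that rolls the table's core formulas up from `TaskEvent` records (as defined in the earlier sketch). The helper names are assumptions, and the equal device-health weights are a placeholder for study-defined ones:

```python
from datetime import datetime, timedelta

def _pct(numerator: int, denominator: int) -> float:
    # Shared percentage helper; returns 0 when nothing was assigned/expected.
    return 100.0 * numerator / denominator if denominator else 0.0

def epro_completion_rate(tasks) -> float:
    # (Completed ePROs / Assigned ePROs) x 100
    return _pct(sum(1 for t in tasks if t.completed_at is not None), len(tasks))

def on_time_completion_rate(tasks) -> float:
    # (On-time completions / Assigned) x 100
    on_time = sum(
        1 for t in tasks
        if t.completed_at is not None
        and t.completed_at <= t.assigned_at + t.window
    )
    return _pct(on_time, len(tasks))

def wear_time_adherence(hours_worn: float, expected_hours: float) -> float:
    # (Hours worn / Expected hours) x 100
    return 100.0 * hours_worn / expected_hours if expected_hours else 0.0

def sync_success_rate(successful_syncs: int, expected_syncs: int) -> float:
    # (Successful syncs / Expected syncs) x 100
    return _pct(successful_syncs, expected_syncs)

def device_health_score(battery: float, pairing: float, sync: float, app: float,
                        weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    # Composite per the table; inputs normalized to 0..1, weights study-defined.
    return sum(w * c for w, c in zip(weights, (battery, pairing, sync, app)))

def time_to_recovery(detected_at: datetime, resolved_at: datetime) -> timedelta:
    # Time from issue detection to resolution.
    return resolved_at - detected_at
```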
Want a faster way to explain this internally? Use a single sentence: Compliance = Completion + Timeliness + Device Health + Recovery speed.
Metrics only matter if they trigger action. Below is a practical set of starting thresholds that many studies adapt; a sketch of these rules as code follows the list.

**ePRO completion**
- Target: ≥ 90–95% completion (study-dependent).
- Trigger: 2 missed tasks in a row → outreach within 24 hours.

**On-time completion**
- Target: ≥ 85–90% on-time.
- Trigger: late completions for 3+ tasks in a row → friction investigation (UI, reminders, burden).

**Wear-time / valid data days**
- Target: endpoint-defined (e.g., ≥ 18 hrs/day, or ≥ 5 valid days/week).
- Trigger: drop below threshold for 2 days → proactive outreach + troubleshooting.

**Sync success**
- Target: expected data receipt daily or near-real-time (study-defined).
- Trigger: no sync for 24 hours → check battery, pairing, app permissions, OS settings.

**Time-to-Recovery (TTR)**
- Target: recover most issues within 24–48 hours.
- Trigger: unresolved beyond 48 hours → escalate with context and steps attempted.

**Escalation**
- Target: fewer alerts, higher context.
- Rule: don't escalate until the troubleshooting playbook has been executed (unless a safety or clinical threshold applies).
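One way to operationalize these rules is a single evaluation pass per participant per day. A sketch, assuming a hypothetical `ParticipantState` summary (all fields are illustrative):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ParticipantState:
    consecutive_missed_tasks: int = 0
    late_completion_streak: int = 0    # tasks completed after the window, in a row
    days_below_wear_threshold: int = 0
    hours_since_last_sync: float = 0.0
    open_issue_age_hours: Optional[float] = None  # None if no open issue

def evaluate_triggers(s: ParticipantState) -> List[str]:
    """Return the actions the thresholds above would fire today."""
    actions = []
    if s.consecutive_missed_tasks >= 2:
        actions.append("outreach within 24 hours")
    if s.late_completion_streak >= 3:
        actions.append("friction investigation (UI, reminders, burden)")
    if s.days_below_wear_threshold >= 2:
        actions.append("proactive outreach + troubleshooting")
    if s.hours_since_last_sync >= 24:
        actions.append("check battery, pairing, app permissions, OS settings")
    if s.open_issue_age_hours is not None and s.open_issue_age_hours >= 48:
        actions.append("escalate with context and steps attempted")
    return actions
```

Keeping the rules in one pure function makes the thresholds easy to calibrate during pilot/feasibility without touching the rest of the workflow.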
The goal isn’t pretty charts. It’s early detection + fast response. A compliance dashboard should answer: Who is drifting? Why? What was done? Did it work?
If your dashboard cannot show time-to-recovery and intervention outcomes, it’s reporting compliance—not operating it.
Software captures data. It rarely enforces behavior. The highest-performing studies treat compliance as an operating model: daily detection, rapid outreach, guided recovery, and escalation only when recovery fails.
If you're building a compliance plan, these topics usually come next:

- Compliance drift, wear-time decay, and silent failures between visits.
- Treating wear-time and data validity as compliance metrics, not "device issues."
- UI consistency and well-designed reminders, which make completion easier.
- Dashboards that surface recovery windows, intervention outcomes, and risk, fast.
**What's the most common compliance mistake?**
Treating compliance as a single "% completed" number. You also need on-time behavior, device health, and time-to-recovery.

**How do you measure wearable compliance?**
Use wear-time adherence (hours worn vs. expected), valid data days (days meeting endpoint criteria), and sync success (whether data actually arrived). A device can be "worn" and still produce no usable data if sync fails. A rough sketch follows.
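Assuming simple daily wear logs, and using the 18-hour example threshold from above (function names are illustrative):

```python
def daily_wear_adherence(daily_hours, expected_per_day: float = 18.0) -> float:
    # Hours worn vs. expected across the observed days, as a percentage.
    expected = expected_per_day * len(daily_hours)
    return 100.0 * sum(daily_hours) / expected if expected else 0.0

def valid_data_days(daily_hours, synced, min_hours: float = 18.0) -> int:
    # A day counts only if it was worn enough AND the data actually arrived.
    return sum(1 for h, ok in zip(daily_hours, synced) if h >= min_hours and ok)
```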
**How do you set compliance thresholds?**
Define thresholds per endpoint: minimum valid days, minimum wear-time, completion windows, and escalation rules. Then test those thresholds during pilot/feasibility to calibrate them.

**Is compliance the same as retention?**
No. Retention is staying enrolled. Compliance is reliably producing protocol-required data over time. Compliance often degrades before dropout occurs.
If your endpoints depend on longitudinal data, measure compliance as a system: completion, timeliness, device health, and time-to-recovery—then operationalize response windows.
Book a Compliance Walkthrough