Digital Recruitment for Clinical Trials: What Works, What Doesn't, and Why
Published May 2026 · 10 min read · By Clinical Enroll
Every recruitment vendor pitches digital as the answer. The promises are consistent: more patients, faster enrollment, precise condition-level targeting. What they don't explain is what the numbers should actually look like, or how to tell whether a campaign is working.
This guide is written for research sites evaluating or running a digital recruitment campaign. Not how to buy ads (that's the vendor's job). How to read the funnel, benchmark the numbers, and know whether the math is moving in the right direction.
What Digital Recruitment Can and Cannot Do
Digital recruitment has one genuine advantage over passive channels: reach. Your internal database degrades at roughly 25% per year. Physician referrals depend on how much time the referring physician has that week. Word of mouth is noise. Digital campaigns reach patients in condition-specific environments (search results, social feeds, health content sites) where they're already thinking about their condition.
That reach is real. Studies using digital-first recruitment strategies have reported over 150% more prequalified inquiries compared to traditional approaches, with enrollment timelines shortened by four or more months in favorable cases.
What digital cannot do is override a bad patient-population match, an eligibility profile that disqualifies most patients who express interest, or slow site-side follow-up. Most campaigns that underperform fail at one of these three constraints, none of which the vendor controls.
The distinction matters because sites often evaluate digital recruitment by the vendor's output (leads, impressions, form completions) rather than by what the site delivers on the back end. The campaign can be running exactly as designed while enrollment stalls, because the bottleneck is at the site, not the ad. Understanding which part of the funnel is leaking is where the useful work happens.
How a Digital Recruitment Funnel Actually Works
A digital recruitment funnel for clinical trials moves through five stages, each with its own conversion rate. Multiply the five rates together and you get the fraction of ad impressions that end in a randomized patient; divide total spend by the resulting patient count and you have your cost per randomized patient. (A worked example follows the stage breakdown.)
Stage 1: Ad impression to click.
A well-targeted clinical trial ad generates a click-through rate of roughly 1–3%. This varies by platform, condition, and creative. Lower than that suggests targeting or creative issues. Higher usually means you're fishing in a small, highly specific pool.
Stage 2: Click to pre-screen form completion.
Of the people who click, roughly 20–40% complete the pre-screening form. Drop-off here is normal. Many clickers are browsing, not ready to act. Landing page copy, form length, and load speed all affect this number.
Stage 3: Pre-screen pass.
Of those who complete the form, some percentage will pass the eligibility filter. This varies dramatically by protocol. A study with narrow inclusion criteria and a long exclusion list will screen out most respondents. There is no universal benchmark here. The protocol sets the ceiling.
Stage 4: Coordinator contact to screening visit.
Of the patients who pass pre-screening and are contacted by the site, roughly 30–60% will book and attend a screening visit. This conversion rate is highly sensitive to response speed. The industry average for site follow-up after a digital lead is six days. At six days, a meaningful share of interested patients have reconsidered, lost urgency, or moved on.
Stage 5: Screening visit to randomization.
Of patients who attend a screening visit, somewhere between 40% and 80% randomize, depending on protocol complexity and how rigorously the pre-screen captured the key eligibility criteria.
Two benchmarks worth memorizing:
1 in 30–50 ad responses becomes a randomized patient in a typical digital recruitment funnel.
6 days is the industry average site follow-up time after a digital lead. The single biggest conversion killer.
The two places funnels leak most often: stage 2 (pre-screen form drop-off, often a landing page problem) and stage 4 (slow site response). Both are fixable. Stage 4 is always the site's responsibility.
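To make the arithmetic concrete, here is a minimal sketch of the funnel math in Python. Every input is illustrative: the stage rates are midpoints of the benchmark ranges above, and the impression and spend figures are hypothetical, not drawn from any real campaign. Substitute your own numbers.

```python
# Minimal funnel-math sketch. All inputs are illustrative: stage rates
# are midpoints of the benchmark ranges above, and the impression and
# spend figures are hypothetical. Substitute your own campaign data.

impressions = 100_000
spend = 12_000  # hypothetical total ad spend, in dollars

stage_rates = {
    "impression -> click":           0.02,  # ~1-3% CTR
    "click -> pre-screen complete":  0.30,  # ~20-40%
    "pre-screen pass":               0.30,  # protocol-dependent; no universal benchmark
    "contact -> screening visit":    0.45,  # ~30-60%; sensitive to response speed
    "screening visit -> randomized": 0.60,  # ~40-80%
}

count = float(impressions)
for stage, rate in stage_rates.items():
    count *= rate
    print(f"{stage}: {count:,.0f}")

randomized = count
clicks = impressions * stage_rates["impression -> click"]
print(f"ad responses per randomized patient: {clicks / randomized:.0f}")
print(f"cost per randomized patient (CPP): ${spend / randomized:,.0f}")
```

With midpoint rates, the sketch lands at roughly 41 ad responses per randomized patient, inside the 1-in-30-to-50 range above. The CPP line is only as realistic as the hypothetical spend figure; the point is the structure of the calculation, not the output.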
What Cost-Per-Patient Math Actually Looks Like
The number that matters is cost per randomized patient (CPP). Not cost per lead, not cost per pre-screen completion: CPP. Everything else is a step toward it.
Here is what CPP looks like from Clinical Enroll's published case study data:
Phase III, vTv Therapeutics T1D: $2,500 CPP (5 randomized patients on a $12,500 investment).
Pediatric vaccine, Blue Lake Biotechnology: $1,714 CPP (7 randomized patients on a $12,000 investment).
The lowest recorded CPP across Clinical Enroll campaigns is $577. The published average across the documented case study set is $1,167.
What drives CPP in either direction: indication prevalence (rare disease costs more per patient), eligibility complexity (more restrictive criteria means more spend per conversion), geography (competitive markets and rural areas both raise CPP), and site response speed (slow follow-up directly raises CPP by reducing stage 4 conversion).
Put these numbers against site revenue to understand the ROI math. Phase III trials pay research sites between $5,000 and $20,000 per randomized patient in grant payments, with a Journal of Clinical Oncology analysis finding an average of approximately $6,100 per patient. Phase II trials run higher: $15,000 to $40,000 per patient (ProRelix Research, Sofpromed).
At $2,500 CPP against a $6,100 per-patient site fee, recruitment cost is roughly 40% of the per-patient grant. At $1,167 CPP, it's about 19%. At $577, it's under 10%.
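A two-line check of that arithmetic, using only the figures already quoted in this section:

```python
# Recruitment cost as a share of the per-patient site grant, using the
# published figures above (~$6,100 average Phase III grant per patient).
grant = 6_100
for cpp in (2_500, 1_167, 577):
    print(f"CPP ${cpp:,}: {cpp / grant:.0%} of the per-patient grant")
# CPP $2,500: 41% · CPP $1,167: 19% · CPP $577: 9%
```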
Sites that track CPP as their primary recruitment metric can evaluate any vendor or channel against the same standard. Sites that track leads or impressions cannot.
Free Resource
See what CPP should look like for your study
The free enrollment feasibility report includes a cost-per-patient projection specific to your NCT ID, indication, and site geography. Free during early access. Delivered within 24 hours.
Request a Free Report

What Each Digital Channel Actually Does
Not all digital channels produce the same type of patient, and not all indications perform the same way across channels.
Facebook and Instagram
These platforms reach the largest audience but require the most pre-screening. They don't target by diagnosis. They target by interests, behaviors, demographics, and engagement patterns. A patient with Type 2 diabetes might follow diabetes-adjacent content, respond to ads about blood sugar management, or belong to condition-specific groups. Facebook finds them through those signals.
The resulting leads are less pre-qualified than search leads and require a stronger pre-screen to filter efficiently. For high-prevalence conditions where the eligible pool is large, this is the right tradeoff: volume justifies the attrition. For rare disease or highly restrictive protocols, the attrition becomes expensive.
Google Search
Search captures intent. Someone searching "clinical trials for psoriasis near me" is already diagnosed, already considering participation, and actively looking. These leads convert at higher rates and require less pre-screening effort.
The tradeoff: the eligible search pool is smaller and more competitive. Cost-per-lead is typically higher on search than social, but CPP can be lower because conversion rates are better. Search works best when the target patient is motivated and already researching options.
Display and programmatic
Display ads (banners, contextual placements on health content sites and condition-specific forums) generate the lowest cost-per-click but the highest attrition. Useful at scale for conditions with very large eligible populations. Less useful when eligibility criteria are narrow, because the volume advantage disappears once most of the leads pre-screen out.
What sites should stop tracking across all channels: impressions, reach, and click-through rate. These are vendor performance metrics. They tell you whether the campaign is running, not whether it's working. The only metric that connects to your bottom line is CPP.
The Two Levers Sites Control
Digital recruitment vendors control the campaign. Sites control two things that determine whether the campaign produces results: response speed and pre-screening quality.
Response speed.
The industry average for site follow-up after a digital lead is six days. In lead conversion research across industries, the probability of successful contact drops sharply within the first hour of inquiry and continues declining over the following 24 hours. Clinical trial referrals behave the same way.
A patient who fills out a screening form Monday afternoon and doesn't hear back until Thursday has had three days to reconsider, forget, or find another option.
Under four hours for first contact is achievable at most sites without additional headcount, if the triage process is built for it. Most sites have not built it.
Pre-screening quality.
A pre-screen that doesn't filter on the criteria that disqualify most patients sends unqualified leads directly into the coordinator's queue. The coordinator then spends hours on calls that produce no enrollment. This raises CPP, burns coordinator time, and creates the impression that digital recruitment doesn't work. The problem is the pre-screen, not the channel.
Both of these levers are fully within the site's control. No vendor can fix them from the outside. Sites that address both consistently outperform sites that don't, on the same vendor, in the same campaign, running the same ads.
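Because randomizations scale linearly with the stage-4 conversion rate while spend stays fixed, CPP scales inversely with it. A minimal sketch of that sensitivity, reusing the illustrative numbers from the funnel example earlier:

```python
# CPP sensitivity to the stage-4 (contact-to-screening) conversion rate.
# Spend and the upstream funnel are held fixed; all numbers are
# illustrative, carried over from the funnel sketch above.
spend = 12_000
contacted = 180                  # leads that passed pre-screen and were contacted
screening_to_randomization = 0.60

for stage4 in (0.60, 0.45, 0.30):  # fast, average, and slow follow-up
    randomized = contacted * stage4 * screening_to_randomization
    print(f"stage-4 rate {stage4:.0%}: CPP ${spend / randomized:,.0f}")
```

Halving the stage-4 rate doubles CPP without any change to the campaign itself, which is why response speed is the highest-leverage fix a site controls.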
The guide on clinical trial enrollment challenges covers the broader structural picture, including why pre-screening and response speed matter across all recruitment channels. The guide on feasibility assessment covers the upstream step: knowing whether the patient population supports the campaign before it starts.
Three Questions to Ask Before Signing with Any Recruitment Vendor
Most recruitment proposals describe a campaign. What they don't describe is an outcome. Before signing anything, a site should have clear answers to three questions.
1. What specific number of randomized patients are you committing to in this agreement?
If the answer is a range, a projection, or a "we'll do our best," the vendor is not committing to an outcome. They're committing to activity. The contract should specify a randomization number. That number is what accountability is built around.
2. What is your projected CPP for this indication and geography?
A vendor that has run campaigns in comparable indications should be able to give you a CPP range based on actual historical performance. If they project leads or clicks but not CPP, they may not be tracking the metric that matters, or they don't want to be held to it.
3. What happens if you don't hit the number?
The answer reveals the structure of the relationship. If the vendor has no position on what happens when targets are missed, the financial risk sits entirely with the site. A vendor with a genuine commitment to outcomes has an answer: a refund, an extended campaign, additional spend, or a contractual make-good provision.
These three questions filter for accountability before a campaign starts. Vendors that answer all three clearly are structurally different from vendors that don't. The difference shows up in enrollment results.
What the Numbers Should Tell You
Digital recruitment works when the patient population supports it, the pre-screen filters correctly, the site responds fast, and the vendor is accountable to an outcome rather than an activity report.
It doesn't work when any one of those conditions is missing. The missing piece is usually not the ads.
A site that tracks CPP, maintains a sub-four-hour contact target, runs a tight pre-screen, and demands a contractual randomization commitment from its recruitment partners is not a site that blames digital recruitment when enrollment falls short. It's a site that knows exactly where the gap is and what to fix.
Sources: Imperial Clinical Research Services (average site response time, six days); Journal of Clinical Oncology (per-patient site fee benchmarks for Phase III trials); ProRelix Research, Sofpromed (phase-by-phase grant payment ranges); Clinical Enroll (first-party CPP data from published case studies: $577 lowest, $1,714 Blue Lake Biotechnology, $2,500 vTv Therapeutics, $1,167 published average).
Find out if your study qualifies for a contractual enrollment commitment.
Not every study is a fit. Clinical Enroll runs a feasibility evaluation before extending any commitment. If your protocol, patient population, and geography support it, you receive a campaign proposal with a contractual outcome guarantee.
Check if your study qualifies

All campaigns are developed for IRB review and deployed in accordance with FDA guidance on clinical trial advertising.