10 factors that make survey-takers trust, join, and finish studies
By Mindy Woodall · 7 min. read · May 12, 2026

Before a participant clicks into your survey, they're running a fast, skeptical scan: assessing the pay, the time estimate, the description, and the email domain, and looking for anything that seems off. Most of this assessment happens in seconds and is invisible to researchers. And the reality is that some of the factors you think matter to potential participants may actually have minimal impact.
To help you structure surveys that attract a high-quality audience, we surveyed active research participants to find out what they look for, what they trust, and what makes them bail.
Key takeaways
Pay rate is the deciding factor for most participants considering joining a study: 69% of respondents ranked it as the most important factor.
Willingness to join drops off significantly for studies that take over 30 minutes.
The strongest single trust signal is an institutional email address, cited by 80% of participants.
Participants are surprisingly relaxed about backend data handling, but strict about being asked to identify themselves.
Technical glitches are the top reason participants abandon studies, and most leave immediately rather than troubleshoot.
Regret almost always ties back to mismatched expectations, especially when it comes to time and pay.
1. Pay rate is the deciding participation factor
Nothing else comes close. 69% of participants rank pay as the single most important factor in their decision to join a study, and another 24% rank it as the second most important.

This isn't surprising when you consider how people are using survey platforms. 69% of our respondents had completed more than 50 studies in the previous 30 days. For people participating at that volume, surveys can feel like a part-time job.
Our survey respondents reinforce this directly. They praise researchers who pay "consistent" and "predictable" rates, and several call out the value of studies that auto-pay without manual approval delays.
For researchers, this means pay is the variable participants scan for first. If your pay rate is below market, your study description has to work much harder to compensate. Even that's tricky, considering only 28% of respondents said the study description "very much" influences their decision to join a study.
2. Researcher reputation is the least important factor
This one might surprise you. Reputation tends to be something researchers feel they earn over time, but 51% of participants rank it as the least important factor in their decision to join a study, and only 2% rank it as their most important.
Part of what's happening here may be specific to survey platforms. By the time a participant has decided to engage, they've already weighed pay, time, and the study description before they encounter the researcher’s reputation. Reputation isn't always part of their initial scan.
Pay and time are what win engagement. Reputation doesn’t seem to be what gets participants in the door.
3. Willingness to participate drops off after 30 minutes
Up to about half an hour, participants are easy to recruit; after that, willingness to join drops off dramatically. 97% are likely to join a 10-minute study, and 73% are likely to join a 30-minute study. By 45 minutes, the share who say they're likely to join falls to 40%.
Longer studies ask participants to commit, and commitment may require more confidence in the researcher, the study description, and the pay.

If your study runs longer than 30 minutes, you're working against participant willingness from the start. You’ll need to compensate for the increased time with higher pay.
Another option is tiered incentives: structuring longer studies so participants receive partial payment at checkpoints along the way, rather than a single payout at the end. The intermediate rewards give participants a reason to keep going through the parts of the study where willingness would otherwise drop.
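To make that concrete, here's a minimal sketch of how checkpoint payouts could be divided. The study length, total pay, and split below are hypothetical illustrations, not platform defaults.

```python
# Minimal sketch: split a long study's total incentive into checkpoint
# payouts proportional to the work between checkpoints.
# All figures are hypothetical, not platform defaults.

def checkpoint_payouts(total_pay: float, segment_minutes: list[float]) -> list[float]:
    """Divide total_pay across checkpoints in proportion to the
    minutes of work in each segment."""
    total = sum(segment_minutes)
    return [round(total_pay * m / total, 2) for m in segment_minutes]

# A 60-minute study paying $12.00, with checkpoints after 15, 35, and 60 minutes:
segments = [15, 20, 25]  # minutes of work between consecutive checkpoints
print(checkpoint_payouts(12.00, segments))  # [3.0, 4.0, 5.0]
```

Weighting payouts by segment length also means the largest reward lands at the end, right where willingness would otherwise be lowest.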
4. Institutional email is the strongest single trust signal
Trust is built before the study starts with your description and metadata. The strongest signal is also the easiest to get right: the email domain you use.
80% of participants say a researcher using a university or institutional email address increases their trust in a survey more than any other signal we asked about. Other signals, such as data-use transparency and a named principal investigator (PI), matter to a meaningful share as well, but institutional email is the one factor that nearly everyone looks for.

What's interesting here is that the strongest signal is also the easiest one for participants to verify in seconds. An institutional email domain is visible at a glance and hard to fake. Institutional Review Board (IRB) numbers, by contrast, require participants to know what an IRB is, what a valid number looks like, and where to check it. Most people probably won't go to that effort.
The signals that scale best are the ones participants can gauge instantly, before they decide whether to invest any cognitive effort in the study.
5. Sloppiness is a bigger red flag than missing credentials
When we asked what would make participants decline a study, the top answers were not about credentials or affiliations but about clarity. 71% flagged confusing or contradictory instructions, 61% flagged spelling errors, and another 61% flagged sloppy formatting.
Participants are reading study descriptions the way they'd read a phishing email: scanning for anything that feels off. Specific responses called out "threatening" language about attention checks, study titles containing the word "COPY," and required equipment that wasn't disclosed upfront.
The bar for looking professional is low, but missing it is costly.
6. Participants are relaxed about data storage but strict about identity
Participants are split on how they think about privacy. They're notably relaxed about backend data handling: 60% are not at all or only slightly concerned about how their data is stored and used after a study.
But ask them to identify themselves, and the picture flips. 58% say a request for personally identifying information like a full name or contact info significantly reduces their willingness to participate, or even causes them to refuse outright.

The implication: Participants trust the system to handle anonymous data responsibly, but they treat any request to identify themselves as a break from the implicit contract. Avoid asking for personal details unless you really need them.
7. Screeners are a major source of friction
Participants want to opt out before investing effort, not after. 79% cite getting screened out after answering several questions as their top screener frustration, and 62% cite long screeners that end with no compensation when they're screened out.
Only 4% of respondents said screeners had never frustrated them.
The friction isn't just wasted time. It's the asymmetry of the exchange. A participant who spends 10 minutes answering screening questions feels like they’ve done unpaid work. One survey respondent vents:
Share your criteria in the description so I don't waste my time.
A related complaint flagged by 48% of respondents is screeners that feel like an actual study. These are screeners that go beyond simple eligibility checks and start collecting substantive data before the researcher has decided whether the participant qualifies.
From the participant's perspective, they're being studied without being paid. From the researcher's perspective, they may be getting valuable pilot data, but the cost is participant goodwill.
The fix is mostly about sequencing. Move disqualifying questions to the top of the screener, share specific eligibility criteria in the description, and consider compensation for screeners with longer flows.
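One way to think about that sequencing, sketched below with made-up pass rates: ask the questions most likely to disqualify first, so an ineligible participant exits after answering as little as possible.

```python
# Minimal sketch: order screener questions by estimated pass rate,
# lowest first, so ineligible participants are screened out after
# as few questions as possible. Pass rates are made-up pilot estimates.

screener = [
    ("Are you a native English speaker?", 0.85),  # (question, est. pass rate)
    ("Are you between 18 and 35?", 0.60),
    ("Do you own the required VR headset?", 0.30),
]

ordered = sorted(screener, key=lambda item: item[1])  # most disqualifying first
for question, pass_rate in ordered:
    print(f"{question}  (est. pass rate: {pass_rate:.0%})")
```

Even rough pass-rate guesses are enough here; the ordering only has to put the big disqualifiers ahead of the long tail of eligibility details.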
8. Attention checks erode trust when they read as gotchas
Attention checks exist to protect data quality, but they can do real damage when they seem designed to trick people rather than verify attention. 61% of survey respondents say they’ve failed an attention check they believed was unfairly worded, and only 9% successfully appealed.

Tone matters as much as the checks themselves. Respondents repeatedly mention researchers who "start off with messages that sound threatening, like it will be impossible to pass their attention checks." Negative framing alone may drive participants away.
9. Most participants who hit a technical glitch leave
Technical problems are the single biggest reason participants abandon studies midway: 79% said they’ve experienced a glitch they couldn't resolve, and most don’t wait around for a resolution. 52% return the study immediately, which means they formally exit the survey and notify the survey platform or researcher that they couldn’t complete it.
While returning a study isn’t great, it’s still better than abandoning with no notice, which only 3% of participants admitted to doing.
Only 16% of participants said they try to work around the glitch, and only 14% message the researcher and wait for a response.

The window for recovery is short, and most participants will be gone before you can respond. A clearly labeled return option and an easy way to contact you are more valuable than they might seem. By the time you notice a steep dropoff rate, you've already lost legitimate respondents and slowed the study's fielding.
Direct participant feedback lets researchers identify and resolve fielding issues quickly, before the cost of the glitch compounds.
10. Regret almost always traces back to mismatched expectations
When participants regret taking part in a study, the reasons are concrete and predictable. 63% said the study took much longer than estimated, and 52% said the incentive was lower than expected.

In other words, regret usually comes down to the gap between what the description promised and what actually happened. Time and pay miscalibrations tie back to the top two factors participants weigh when deciding to join in the first place.
The good news is that these top regret triggers are entirely within researchers' control. Time estimates can be calibrated by piloting the study with a few participants. Pay should be set against those actual times rather than the optimistic estimates, and benchmarked against fair market rates on the platform.
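As a back-of-the-envelope example, assuming a hypothetical $12/hour benchmark and a handful of invented pilot completion times, that calibration might look like this:

```python
# Minimal sketch: derive the advertised time and pay from pilot data
# rather than a hopeful estimate. The pilot times and hourly benchmark
# below are hypothetical.
import math
from statistics import median

pilot_minutes = [22, 25, 19, 31, 27, 24]  # observed pilot completion times
hourly_benchmark = 12.00                   # assumed fair-rate benchmark

advertised = math.ceil(median(pilot_minutes))       # 25 min, rounded up
pay = round(advertised / 60 * hourly_benchmark, 2)  # $5.00
print(f"Advertise ~{advertised} min and pay ${pay:.2f}")
```

Using the median (and rounding up) means at least half of participants finish within the advertised time, which is exactly the expectation gap this section is about.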
What participants say earns their trust
When we asked participants to share the single most important thing a researcher could do to earn their trust, the word that came up most often was transparency. Respondents want a clear idea of what the study is about, what data will be used for, and how long it will take to complete.
Be transparent and open about the purpose of the study, how it uses your data, and your pay.
Honest, accurate time estimates were the next most common theme. Several participants say they're happy to do a 45-minute study if it's labeled as one, but they resent a 20-minute study that turns into 45.
Be honest about how long the study will take. Nothing is worse than getting into a study and it takes a LOT longer than listed.
Other recurring themes: fair and timely pay, recognizable institutional affiliation, and small acts of respect, like a personal thank-you message after the study.
Send a follow-up message thanking me for my time and effort.
Implications for study design
Here are a few practical steps researchers can apply to their next study:
Lead with pay and time. They are the two factors participants weigh most heavily, and the two easiest to get wrong. Calibrate your time estimates with soft-launch data, and price accordingly.
Keep studies under 30 minutes when possible. Willingness to join drops sharply after that. If you need more time, plan for higher pay.
Treat the description as a trust-building moment. Participants read it for clarity, institutional signals, and data-use transparency. Avoid threatening language about attention checks, and disclose any equipment requirements upfront.
Don't ask for identifying information unless you genuinely need it. Most respondents don’t want to share it, and most studies don’t need it.
Audit your screener flow. Move disqualifying questions to the top, and consider offering compensation for longer screen-outs.
Make the participant's exit easy. Most people who hit a technical glitch leave right away. A clearly labeled return option and a fast researcher response can save a study from churning qualified respondents.
Methodology
We surveyed 100 active research participants recruited through Prolific in April 2026. All respondents had completed at least 11 studies in the previous 30 days, with 69% completing more than 50. The survey included 20 questions covering decision-making, trust signals, friction points, and reasons for abandoning or regretting studies. Open-ended questions were analyzed for thematic patterns.