
From Paper to Practice: Predictors of response compliance in ESM / EMA research.




Imagine asking people to pause their daily lives multiple times a day to reflect on their emotions or behavior. That’s essentially what experience sampling methodology (ESM) / ecological momentary assessment (EMA) does, and it’s what makes it both powerful and demanding.

ESM / EMA offers unmatched ecological validity, but this comes at a cost: participants may get fatigued, skip prompts, or drop out altogether. This isn't just inconvenient: it threatens the validity and statistical power of your data.

So, what actually predicts whether people will keep responding? 

In a recent publication, our m-Path team delved into the literature to provide a state-of-the-art overview.   

Design matters: When studies become too burdensome

Across dozens of studies, one theme cautiously emerges: the more burdensome the ESM / EMA protocol, the lower the compliance. That sounds straightforward, but the evidence is surprisingly mixed.

  • Survey length: Some experimental studies find longer surveys reduce response rates, while others don’t.
  • Sampling scheme: Fixed schedules sometimes boost compliance compared to (semi-)random prompts, but this is not seen in all meta-analyses.
  • Duration and frequency: Longer or denser ESM / EMA protocols can exhaust participants… but many studies carefully balance these parameters, making isolated effects hard to detect in pooled data analyses.

In short: there's no clear-cut design rule, but if your study feels too heavy or unpredictable to participants, expect them to disengage.


Who responds (and who doesn’t)?

It's tempting to think compliance is purely about study design, but research shows that participant characteristics matter too. Here, too, however, the evidence is mixed:

  • Age and gender: Some studies find lower response rates in younger or male participants, while others find no differences.
  • Clinical status: Having a psychiatric diagnosis doesn’t consistently predict lower compliance overall, but certain groups (e.g., people with psychosis or substance use problems) may show more missing data.

Interestingly, there’s no evidence that older or female participants are less compliant.


The clock is ticking: Temporal patterns in response compliance

Even when participants start strong, compliance often fades over time.

  • Over a typical 7-10 day ESM study, some studies find that participants gradually skip more prompts, likely due to fatigue or waning motivation.
  • Within each day, the first prompts (especially in the morning) are more likely to be missed.

Occasionally, studies even find a small end-of-study rebound, where engagement increases as participants approach the finish line. But the general pattern is clear: motivation erodes over time.
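These temporal patterns are straightforward to check in your own data. As a minimal sketch in plain Python (the beep-level records and the `compliance_by_day` helper below are made up for illustration, not taken from the paper), per-day compliance can be computed from a prompt log like this:

```python
from collections import defaultdict

# Hypothetical beep-level records: (participant_id, study_day, answered).
# In a real ESM / EMA export, each row would be one delivered prompt.
records = [
    ("p01", 1, True), ("p01", 1, True), ("p01", 5, False),
    ("p02", 1, True), ("p02", 5, True), ("p02", 5, False),
]

def compliance_by_day(rows):
    """Share of answered prompts per study day, pooled across participants."""
    answered = defaultdict(int)
    delivered = defaultdict(int)
    for _, day, ok in rows:
        delivered[day] += 1
        answered[day] += ok  # True counts as 1, False as 0
    return {day: answered[day] / delivered[day] for day in sorted(delivered)}

print(compliance_by_day(records))  # → {1: 1.0, 5: 0.3333333333333333}
```

Plotting these per-day rates (or the analogous per-beep rates within a day) quickly reveals whether compliance erodes toward the end of the study or rebounds near the finish line.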


Money talks: The role of financial incentives

If there’s one factor that consistently boosts response compliance, it’s this: paying people works 💸.

Across multiple systematic reviews, studies that offered financial compensation achieved significantly higher response rates than those that didn’t 🤑. Experimental evidence confirms this pattern. For example, offering participants a modest voucher or small cash payment often increases how many surveys they complete. 


The meta-analysis by Ottenstein & Werner (2022) found that ESM / EMA studies paying participants had 7% higher compliance rates on average than those that didn’t.


Interestingly, how much you pay seems less important than whether you pay at all. Higher incentives don’t always lead to better compliance, probably because researchers naturally match payment to the expected study burden. And sometimes, very large payments can backfire, attracting participants who sign up just for the money but then disengage quickly.

What about pay-per-response? Some studies use incremental reward schemes (more payment for more surveys completed), while others pay a fixed amount regardless of compliance. Surprisingly, no studies appear to have tested these approaches against each other experimentally, so there is no clear evidence that one works better than the other.


Why this matters for your research

When response rates drop, or when missingness becomes systematic, your ESM / EMA data may stop reflecting real-life patterns.

This can lead to:

  • Lower statistical power (too little data)
  • Biased estimates (if only certain contexts or moods get sampled)

Understanding the predictors of compliance helps you design studies that stay valid from start to finish.


Practical take-aways

Before you launch your next ESM / EMA study, ask yourself:

  • Could my design feel too burdensome for participants? A small pilot study with qualitative and quantitative feedback from participants may be helpful.
  • Are there specific groups who might need extra motivation?
  • What can I do to sustain engagement over time? As noted above, paying participants can boost response rates, but other factors such as a good researcher-participant alliance, frequent check-ins and reminders, and visual feedback may also help.


After you have collected your ESM / EMA data, scan it for systematic patterns in missingness!
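One way to do such a scan, sketched here in plain Python with made-up data (the slot labels, the `missingness_by_slot` helper, and the `flag_gap` threshold are illustrative assumptions, not part of the paper), is to compare compliance across prompt slots and flag slots that deviate from the overall rate:

```python
from collections import defaultdict

# Hypothetical prompt log: (participant_id, beep_slot, answered).
log = [
    ("p01", "morning", False), ("p01", "afternoon", True), ("p01", "evening", True),
    ("p02", "morning", False), ("p02", "afternoon", True), ("p02", "evening", True),
    ("p03", "morning", True),  ("p03", "afternoon", True), ("p03", "evening", False),
]

def missingness_by_slot(rows, flag_gap=0.15):
    """Per-slot compliance rate, flagging slots far from the overall rate."""
    answered = defaultdict(int)
    delivered = defaultdict(int)
    for _, slot, ok in rows:
        delivered[slot] += 1
        answered[slot] += ok
    overall = sum(answered.values()) / sum(delivered.values())
    report = {}
    for slot in delivered:
        rate = answered[slot] / delivered[slot]
        report[slot] = (rate, abs(rate - overall) > flag_gap)  # (rate, flagged?)
    return overall, report

overall, report = missingness_by_slot(log)
print(overall, report)  # morning beeps stand out as disproportionately missed
```

A flagged slot (here, the mostly-missed morning beeps) is a hint that missingness is systematic rather than random, which is exactly the situation that biases estimates; the same logic extends to scanning by participant, study day, or context.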


👉 Explore the full paper here: Computers in Human Behavior, 2024.

 
Egon Dejonckheere September 17, 2025