  • Dillon Murphy

Do Undergraduate Participant Pools Introduce Sources of Unaccounted Variance?

Updated: Nov 27, 2020


In experimental design, researchers go to great lengths to avoid inserting biases or systematic differences while maintaining internal and external validity. Issues such as confirmation bias, hindsight bias, and demand characteristics, to name a few, could all affect the variance in participants’ data and impact whether a hypothesized effect is found. While researchers are careful to avoid these issues by using random sampling, random assignment, and counterbalancing, other sources of bias may be overlooked. 


The vast majority of psychological research uses undergraduate participant pools as a convenience sample. At many schools, including UCLA, this pool is run by an online research management system (e.g., Sona) where undergraduate students participate in research for course credit. While these systems are a vital resource for data collection and also expose participating students to research methodology, a major concern in drawing from these samples is that individual differences in cognitive abilities, levels of motivation, or personality characteristics could covary with the time of participation and have unmeasured effects on researchers’ dependent variables of interest. 


If students become fatigued throughout the academic term or have important assignments due in the final weeks, this could affect research participants’ cognitive abilities such that those who participate at the end of the academic term perform worse than those who participate earlier. In addition, students who participate at the beginning of the academic term may do so because they are motivated to excel in their classes and this motivation could result in better performance on laboratory tasks. Conversely, procrastinators may be less motivated and interested in research such that they may view the research participation requirement as a nuisance and be primarily focused on getting it out of the way, resulting in a lack of engagement on the tasks. It is also possible that proactive and procrastinating participants differ in personality characteristics such that their personalities reflect these tendencies. 


To investigate these issues, Robison and Unsworth (2016) evaluated whether participants systematically differ in their cognitive abilities, level of motivation, or personality characteristics as a function of the time at which students participate in research during an academic term. Across a wide variety of cognitive tasks, two different universities with different academic schedules, and several experiments with large samples, Robison and Unsworth (2016) did not find evidence that individuals differ in cognitive abilities (working memory capacity, fluid intelligence, crystallized intelligence, long-term memory, or attention control), task motivation, or any of the Big Five personality traits as a function of time of participation throughout an academic term. 


These results indicate that researchers shouldn't be overly concerned with when students participate in studies. However, if participants systematically fail to attend studies on certain days or at certain times, this non-response bias could reduce the external validity of a sample. For example, if a researcher primarily schedules participants on Fridays or early in the mornings but participants rarely show up at these times, the participants who do show up may differ from those who do not. To investigate this potential source of bias in the UCLA subject pool, I evaluated whether participants systematically no-show on certain days or at certain times. Specifically, I analyzed participants' no-show rates, both excused and unexcused, to elucidate differences in research participation attendance based on the day of the week (Monday through Friday) and the time of day (morning, afternoon, or evening). Similar to the results of Robison and Unsworth (2016), my findings indicate that researchers shouldn't be unduly concerned with a non-response bias resulting from a subset of participants failing to show up. Although about 17 percent of registered students did not show up for my study, attendance did not vary as a function of the day of the week or the time of day. While this does not eliminate non-response bias entirely, it does indicate that there are no systematic differences between when participants are scheduled and when they actually show up.
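The no-show analysis described above amounts to testing whether attendance is independent of the scheduling category. As a rough sketch of that kind of test (the counts below are made-up placeholders, not my actual data), a Pearson chi-square statistic for a weekday-by-attendance contingency table can be computed with plain Python:

```python
# Sketch: does the show/no-show split vary by weekday?
# The counts below are hypothetical, for illustration only.

# (shows, no-shows) per weekday
counts = {
    "Mon": (50, 10),
    "Tue": (48, 12),
    "Wed": (52, 8),
    "Thu": (47, 11),
    "Fri": (45, 9),
}

def chi_square(table):
    """Pearson chi-square statistic for a k x 2 contingency table."""
    rows = list(table.values())
    total = sum(shows + noshows for shows, noshows in rows)
    col_shows = sum(shows for shows, _ in rows)
    col_noshows = sum(noshows for _, noshows in rows)
    stat = 0.0
    for shows, noshows in rows:
        row_total = shows + noshows
        for observed, col_total in ((shows, col_shows), (noshows, col_noshows)):
            expected = row_total * col_total / total  # independence model
            stat += (observed - expected) ** 2 / expected
    return stat

print(chi_square(counts))
```

With five weekdays and two outcomes, the statistic has (5 − 1)(2 − 1) = 4 degrees of freedom; a value below the .05 critical value of 9.49 would, as in my data, provide no evidence that attendance varies by day.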


Together, these findings suggest that undergraduate participant pools do not introduce sources of unaccounted variance, and that researchers need not be particularly concerned with differences in cognitive abilities, motivation levels, personality characteristics, time of year, or attendance rates as potential confounds in experimental psychology. While we can be confident collecting data throughout each academic term, researchers should remain diligent by counterbalancing experimental conditions, using random assignment, and considering potential confounding variables.



References 

Robison, M. K., & Unsworth, N. (2016). Do participants differ in their cognitive abilities, task motivation, or personality characteristics as a function of time of participation? Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 897–913.


