If you’ve ever been involved in getting an academic research project off the ground, you likely share some form of frustration with the Institutional Review Boards (or IRBs) that you had to go through before you could begin. For those of you not in the know, the IRB is an independent council set up by universities and tasked with assessing and monitoring research proposals associated with the university for possible ethical violations. Their main goal is to protect subjects – usually humans, but also nonhumans – from researchers who might otherwise cause them harm during the course of research. For instance, let’s say a researcher is testing an experimental drug for effectiveness in treating a harmful illness. The research begins by creating two groups of participants: one that receives the real drug and one that receives a placebo. Over the course of the study, if it becomes apparent that the experimental drug is working, it would be considered unethical for the researcher to withhold the effective treatment from the placebo group. Unfortunately, ethical breaches like that have happened historically and (probably) continue to happen today. It’s the IRB’s job to help reduce the prevalence of such issues.
Because the research ethics penguin just wasn’t cutting it
Well-intentioned as the idea is, the introduction of required IRB approval for any research involving humans – including giving them simple surveys to fill out – places some important roadblocks in the way of researcher efficiency, in much the same way that airport security became much more of a headache to get through after the 9/11 attacks. First and foremost, the IRB usually requires a lot of paperwork and time for a proposal to be processed and examined. It’s not all that unusual for what should be a straightforward and perfectly ethical research project to sit in the IRB’s waiting room for six to eight weeks just to get the green light. That approval is not always forthcoming, though, with the IRB regularly sending back revisions or concerns about projects; revisions which, in turn, can hold the process up for additional days or weeks. For any motivated researcher, these kinds of delays can be productivity poison, as one’s motivation to conduct a project might have waned somewhat over the course of the two or three months since its inception. If you’re on a tight deadline, things can get even worse.
On the subject of concerns the IRB might express over research, today I wanted to talk about a matter referred to as sensitive topics research. Specifically, there are some topics – such as those related to sexual behavior, trauma, and victimization – that are deemed to pose greater than minimal risk to the participants being asked about them. The fear in this case stems from the assumption that merely asking people (usually undergraduates) about these topics could be enough to re-traumatize them and cause them psychological distress above and beyond what they would experience in daily life. In that sense, then, research on certain topics can be deemed above minimal risk, resulting in such projects being put under greater scrutiny and ultimately subjected to additional delays or modifications (relative to more “low-risk” topics like visual search tasks or personality measures, anyway).
That said, IRBs are not necessarily composed of experts on the matter of ethics, nor do their concerns need empirical grounding to be raised; the mere possibility that harm might be caused can be considered grounds enough for not taking any chances and risking reputational or financial damage to the institution (or the participants, of course). That these concerns were raised frequently (but not supported) led Yeater et al. (2012) to examine the matter empirically. The authors sought to subject their participants to a battery of questions and measures designated as either (a) minimal risk, which were predominantly cognitive tasks, or (b) above minimal risk, which were measures that asked about matters like sexual behavior and trauma. Before and after each set of measures, the participants would have their emotional states measured to see whether any negative or positive changes resulted from taking part in the research.
The usual emotional response to lengthy surveys is always positive
The sample for this research involved approximately 500 undergraduates assigned to either the trauma-sex condition (n = 263) or the cognitive condition (n = 241). All of the participants first completed some demographic and affect measures designed to assess their positive and negative emotions. After that, those in the trauma-sex condition filled out surveys concerning their dating behavior, sexual histories, the rape myth acceptance scale, questions concerning their interest in short-term sex, sexual confidence, trauma and post-traumatic checklists, and childhood sexual and trauma histories. Additionally, females answered questions about their body, menstrual cycle, and sexual victimization histories; males completed similar surveys asking about their bodies, masturbation schedules, and whether they had sexually victimized women. Those in the cognitive condition filled out a similarly long battery of tests measuring things like their verbal and abstract reasoning abilities.
Once these measures were completed, the emotional state of all the participants was again assessed, along with other post-test reaction questions, including matters like whether they perceived any costs and benefits from engaging in the study, how mentally taxing their participation felt, and how their participation measured up to other life stressors like losing $20, getting a paper cut, receiving a bad grade on a test, or waiting in line at the bank for 20 minutes.
The results from the study cut against the idea that undergraduate participants were particularly psychologically vulnerable to these sensitive topics. In both conditions, participants reported a decrease in negative affect over the course of the study. There was even an increase in positive affect, but only for the trauma-sex group. While those in the trauma-sex condition did report greater post-test negative emotions, the absolute levels of those negative emotions were close to the floor for both groups (both means were below a 2 on a scale of 1-7). That said, those in the trauma-sex condition also reported lower mental costs to taking part in the research and perceived greater benefits overall. Both groups reported equivalent positive emotions.
Some outliers were then considered. In terms of those reporting negative emotions, 2.1% of those in the cognitive condition (5 participants) and 3.4% of those in the trauma-sex condition (9 participants) reported negative emotions above the midpoint of the scale. However, the maximum values for that handful of participants were 4.15 and 5.52 (respectively) out of 7, falling well short of the ceiling. Looking specifically at women who had reported histories of victimization, there was no apparent difference between conditions with regard to affect on almost any of the post-test measures; the one exception was that women with a history of victimization reported the trauma-sex measures to be slightly more mentally taxing, but that could be a function of their having to spend additional time filling out the large number of extensive questionnaires rather than any kind of serious emotional harm. Even those who had been harmed in the past didn’t seem terribly bothered by answering some questions.
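For readers who like to check the arithmetic, those outlier percentages can be verified against the condition sizes reported earlier (n = 241 cognitive, n = 263 trauma-sex); this is just a sanity check on the numbers as summarized here, not anything from the paper itself:

```python
# Sanity check: do the reported outlier counts match the reported percentages?
# Condition sizes and outlier counts are taken from the summary above.
cognitive_n, trauma_sex_n = 241, 263
cognitive_outliers, trauma_sex_outliers = 5, 9

cognitive_pct = round(100 * cognitive_outliers / cognitive_n, 1)    # 5/241 -> 2.1%
trauma_sex_pct = round(100 * trauma_sex_outliers / trauma_sex_n, 1)  # 9/263 -> 3.4%

print(cognitive_pct, trauma_sex_pct)  # prints: 2.1 3.4
```

Both figures line up with the percentages reported in the study.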
“While we have you here, would you like to answer a quick survey about your experience?”
The good news is that it would seem undergraduates are more resilient than they are often given credit for and not so easily triggered by topics like sex or abuse (which are frequently discussed on social platforms like Facebook and in news sources). The sensitive topics didn’t seem to be all that sensitive; certainly not substantially more so than the standard types of minimal risk questions asked on other psychological measures, even for those with histories of victimization. The question remains as to whether such a finding would be enough to convince those making the decisions about the risks inherent in this kind of research. I’d like to be optimistic on that front, but it would rely on the researchers being aware of the present paper (as you can’t rely on the IRB to follow the literature on that front, or indeed any front) and the IRB being open to hearing evidence to the contrary. As I have encountered reviewers who seem uninterested in hearing contrary evidence concerning deception, it’s a distinct possibility that the present research might not have the intended effect of mollifying IRB concerns. I certainly wouldn’t rule out its potential effectiveness, though, and this is definitely a good resource for researchers to have in their pocket if they encounter such issues.
References: Yeater, E., Miller, G., Rinehart, J., & Nason, E. (2012). Trauma and sex surveys meet minimal risk standards: Implications for institutional review boards. Psychological Science, 23, 780–787.