As more and more development economists conduct randomized controlled trials (RCTs), rather than inferring the answer to an economic question from observational data, some researchers are drawing attention to unstated assumptions underlying interpretations of RCT results. One such implicit assumption concerns participants' response to the expectation of treatment, rather than to the actual treatment. This response matters in "blind" RCTs, where neither the treatment nor the control group knows which group it is in. Believing one has some probability of receiving the treatment may itself change behavior, producing a different outcome (a different gap between treatment and control groups) than an open experiment, where the control group knows it is not getting the treatment. Likewise, estimates of the treatment effect might differ across experiments where the probability of receiving the treatment is higher or lower. Chassang et al., in a paper "Accounting for Behavior in Treatment Effects: New Applications for Blind Trials," show that randomizing the probability of treatment assignment in a blind trial (and letting participants know the different probabilities) enables inference about how important the behavioral response interacted with treatment is.
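To see the logic, here is a minimal sketch in my own notation (not necessarily Chassang et al.'s). Let a participant's outcome be \(y(t, e(p))\), depending on the actual treatment \(t \in \{0,1\}\) and on the behavior \(e(p)\) chosen in response to the announced treatment probability \(p\). A blind trial run at probability \(p\) then estimates

\[
\Delta(p) = \mathbb{E}\big[y(1, e(p))\big] - \mathbb{E}\big[y(0, e(p))\big],
\]

in which both arms share the same behavior \(e(p)\), so \(\Delta(p)\) holds the behavioral response fixed. Comparing \(\Delta(p)\) across blind trials run at different announced probabilities (or against an open trial, where beliefs equal assignments) reveals how strongly the behavioral response interacts with the treatment.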
A paper I have assigned for my African Economic Development class made the headlines (OK, maybe not headlines, but a couple of blog and Twitter commentaries) back in 2012 and 2014 for using the Chassang et al. result to measure this kind of respondent behavior in a typical development and agricultural economics setting. The paper is "Behavioural Responses and the Impact of New Agricultural Technologies: Evidence from a Double-Blind Field Experiment in Tanzania" by Erwin Bulte, Gonne Beekman, Salvatore Di Falco, Joseph Hella, and Pan Lei. They ran the same seed intervention twice: once double-blind, with Tanzanian farmers not knowing whether they had received modern or traditional cowpea seeds, and once open. The measured seed effect was large in the open trial but largely disappeared in the double-blind trial.
This result is very reasonable. It suggests that the effects estimated by open RCTs are likely a combination of behavioral and mechanical (i.e., laboratory) effects. One development wag suggested the paper was "a torpedo aimed straight at H.M.S. Randomista." But I think their paper is more constructive than destructive. What Bulte et al. show is that running two trials (one open and one blind) enables a researcher to infer how much of an effect is behavioral and how much is mechanical. In their case, the results suggest that most of the new seed effect is behavioral. (Berk Ozler pointed out in comments on the 2012 draft that the Bulte et al. paper has lots of other problems, but here I set all those aside.)
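A small simulation may help fix ideas. Everything numerical below is made up for illustration (the effect sizes, the effort channel); only the estimator itself, the open-trial difference minus the blind-trial difference, follows the two-trial logic just described.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Made-up structural parameters, purely illustrative.
MECHANICAL = 0.2   # pure agronomic effect of the modern seed
BEHAVIORAL = 0.8   # extra yield from effort exerted by farmers who
                   # know (or believe) they planted the modern seed

def plot_yield(treated, believed, noise):
    """Outcome = mechanical seed effect + effort response + noise."""
    return MECHANICAL * treated + BEHAVIORAL * believed + noise

# Open trial: every farmer knows the assignment, so belief = assignment.
t_open = rng.integers(0, 2, n)
y_open = plot_yield(t_open, believed=t_open, noise=rng.normal(0, 1, n))
delta_open = y_open[t_open == 1].mean() - y_open[t_open == 0].mean()

# Double-blind trial at announced probability p = 0.5: nobody knows the
# assignment, so both arms act on the same expectation p.
p = 0.5
t_blind = rng.integers(0, 2, n)
y_blind = plot_yield(t_blind, believed=p, noise=rng.normal(0, 1, n))
delta_blind = y_blind[t_blind == 1].mean() - y_blind[t_blind == 0].mean()

print(f"open-trial effect:  {delta_open:.2f}")                # ~ mechanical + behavioral
print(f"blind-trial effect: {delta_blind:.2f}")               # ~ mechanical only
print(f"implied behavioral: {delta_open - delta_blind:.2f}")  # ~ behavioral
```

In this toy setup the blind trial recovers the mechanical effect alone, and the gap between the two trials recovers the behavioral component, which is exactly the kind of decomposition Bulte et al. are after.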
Many economists will probably respond with something like, "But none of our interventions could ever be done blind; the subject always knows whether they are in treatment or control." But the more you think about it, the more examples you can find where that is not true. Two come to mind. Any parent of schoolkids knows that school testing relies precisely on this not knowing. Teachers strongly suggest to students that school tests are high stakes and matter. Parents obfuscate when their middle schooler asks. So many students think the tests matter. Once students mature, figure out that many tests do not matter, and share that with others, performance changes. The perceived probability of being in a high-stakes test affects the outcome, so you are measuring an interaction of knowledge and the belief that demonstrating knowledge matters. The same goes for many experimental games. One could imagine an experiment designed to measure competitiveness (by placing respondents into absolute performance pay or relative performance pay arms) that varies the probability of being in one of the two arms, as in the sketch below. The behavioral response ("Oh, I am in the competitive pay situation, so I am supposed to behave this way") could then perhaps be separated from the (semi-unconscious?) psychological response.
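Here is one hypothetical version of that design as a simulation. The variable names and effect sizes are mine, invented for illustration, and the data-generating process simply assumes both channels operate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical effect sizes, invented for illustration.
INCENTIVE = 0.3   # response to actually facing relative-performance pay
ROLE_PLAY = 0.5   # "I'm probably in the competitive arm, so I compete"

# Randomize the announced probability of the competitive arm, then
# assign arms at that probability; participants are told p.
p_announced = rng.choice([0.2, 0.8], size=n)
competitive = (rng.random(n) < p_announced).astype(float)

effort = (ROLE_PLAY * p_announced
          + INCENTIVE * competitive
          + rng.normal(0, 1, n))

# OLS of effort on the actual arm and the announced probability: the
# coefficient on p_announced picks up the expectation-driven response,
# separately from the response to the arm actually faced.
X = np.column_stack([np.ones(n), competitive, p_announced])
beta, *_ = np.linalg.lstsq(X, effort, rcond=None)
print(f"arm (incentive) response:   {beta[1]:.2f}")
print(f"expectation response per p: {beta[2]:.2f}")
```

Nothing here is a claim about how such an experiment would actually come out; it just shows that, with the announced probability randomized, the two responses are in principle separately estimable.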