An excellent piece by Daniel Willingham
To what extent can we trust that experimental results from a psychology laboratory will be observed outside of the laboratory?
This question is especially pertinent in education. It's difficult to conduct research in classrooms. It's hard to get the permission of the administration, it's hard to persuade teachers to change their practice to an experimental practice which may or may not help students--and the ethics of the request ought to be carefully considered. The researcher must make sure that the intervention is being implemented equivalently across classrooms, and across schools. And so on. Research in the laboratory is, by comparison, easy.
But it's usually assumed that you give something up for this ease, namely, that the research lacks what is usually called ecological validity. Simply put, students may not behave in the laboratory as they would in a more natural setting (often called "the field").
A recent study sought to test the severity of this problem.
The researchers combed through the psychological literature, seeking meta-analytic studies that included a comparison of findings from the laboratory and findings from the field. For example, studies have examined the spacing effect--the boost to memory from distributing practice in time--both in the laboratory and in classrooms. Do you observe the advantage in both settings? Is it equally large in both settings?
The authors identified 217 such comparisons.
[Figure: scatterplot of lab effect sizes against field effect sizes. Each dot represents one meta-analytic comparison (so each dot really summarizes a number of studies).]
What this graph shows is a fairly high correlation between lab and field experiments: .639.
If it worked in the lab, it generally worked in the field: only 30 times out of 215 did an effect reverse--for example, a procedure that reliably helped in the lab turned out to reliably hurt in the field (or vice versa).
The correlation did vary by field. It was strongest in Industrial/Organizational Psychology: there, the correlation was a whopping .89. In social psychology it was a more modest, but far from trivial, .53.
And what of education? There were only seven meta-analyses that went into the correlation, so it should be interpreted with some restraint, but the figure was quite close to that observed in the overall dataset: the correlation was .71.
So what's the upshot? Certainly, it's wise to be cautious about interpreting laboratory effects as applicable to the classroom. And I suspect the effects for which data were available were not a random sample: in other words, data tend to exist for effects that researchers already expected to work well in the field and in the classroom.
Still, this paper is a good reminder that we should not dismiss lab findings out of hand because the lab is "not the real world." These results can replicate well in the classroom.
Mitchell, G. (2012). Revisiting truth or triviality: The external validity of research in the psychological laboratory. Perspectives on Psychological Science, 7, 109-117.