Research

An experiment can be thought of, in its very simplest form, as an attempt to answer the question “What is the effect of X on Y?”. While we may not think of ourselves as routinely doing experiments, every time we take some action within our professional practice and consciously evaluate the outcome of that action, we are, in effect, doing just that. We can recast that simple definition, then, as “I wonder what will happen if I do that?”.

Each time we find ourselves in a conversation that goes “I tried that, but it didn’t work”, or “I find that when I present it in this way …”, we are firmly in the domain of the experimental method.

A concrete example of an intervention in a physics classroom may give you an idea to work with while you think about these things.

Of course, we do not go about in our personal or professional lives randomly trying things out to see what happens. Our interventions are grounded in some good (and ideally theoretical) reason to believe that they will be successful, and at worst certainly not harmful.

Further, if we truly believe that a certain practice or approach will constitute a change for the better, then we would be reluctant to deny that benefit to anyone – we would not include a “control group” to whom the hypothesised good would be denied. Without such “control”, however, the causal link that we are claiming cannot logically be established. The challenges of designing a robust, though ethically sound, experimental study are significant.

However, the vicissitudes of nature (and of political and educational policy) may frequently provide us with a control group. Not everything changes in the same way, and with the same time course, everywhere, and so we may be in a position to compare a setting in which there is change, with an equivalent setting in which no change has occurred.

We might describe this as a “natural experiment” (or sometimes “correlational experiment”); one in which we use naturally occurring variance in the world, rather than a manipulation that we have explicitly introduced. In this case we may feel that the ethical problems are diminished, or eliminated, while the logic of the experiment, and of direct causal influence, can be retained.

Related to this would be the “quasi-experiment” in which there is a degree of manipulation of circumstance, but without the possibility of wholly random allocation of participants to conditions.

Perhaps, for example, you have two seminar classes in a week, each for one half of the students on a particular course, and you decide to conduct the seminars in systematically different ways, and look for evidence of differences in student learning. This would be a quasi-experimental design.
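As a minimal sketch of how such a quasi-experiment might be analysed (the marks, group names and the choice of a simple two-group t-test are all illustrative assumptions, not a prescription), the comparison could look something like this:

```python
# A minimal sketch of analysing a quasi-experimental seminar comparison.
# The marks below are invented for illustration only.
from scipy import stats

# Hypothetical end-of-module marks for the two seminar groups
format_a = [62, 58, 71, 65, 60, 68, 74, 59, 66, 63]
format_b = [70, 64, 75, 69, 72, 61, 77, 68, 73, 66]

# Welch's t-test: is the difference in mean marks larger than chance
# alone would comfortably explain?
t_stat, p_value = stats.ttest_ind(format_a, format_b, equal_var=False)
print(f"mean A = {sum(format_a)/len(format_a):.1f}, "
      f"mean B = {sum(format_b)/len(format_b):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Caveat: because students were not randomly allocated to the two seminar
# groups, any difference found here could reflect pre-existing differences
# between the two halves of the cohort rather than the seminar format itself.
```

The statistical machinery is the same as in a true experiment; what the quasi-experimental design lacks is the random allocation that would license reading any difference as causal.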

We should also keep in mind that highly controlled studies, although they are confined to the laboratory (whether literally or figuratively), can provide insights into the nature of human cognition and behaviour which are relevant, and can generalise, to educational practice.

It is worth introducing here a few pieces of terminology that you are likely to encounter as you read the literature on educationally relevant “controlled trials”.  In our crude definition above, the “X” is referred to as the “independent variable” (that which the researcher manipulates) and the “Y” is referred to as the “dependent variable” (that which is being influenced, within the logic of the experiment, by the independent variable).

Notions of randomisation and bias are important in experimental studies, as is the notion of the “artefact” which can arise from a “confounding variable”: something which changes along with the independent variable, but is not itself the variable we intend to manipulate.

In experimental research, particularly in the natural experiment, where we depend on naturally occurring variation in the world rather than on the random allocation of participants to groups under the control of the researcher, we can often see that our groups differ in ways that are unconnected with the intervention, but which may very well have an effect upon the intended outcome of the study.

For example, we might be tempted to investigate the impact of an “iPads for all” intervention by comparing the intervention group with a comparable group in another school or institution. The crucial question for the reader of such a study (and, of course, for the researchers carrying it out) will be to determine what constitutes comparability.  It is quite likely that there are important, though difficult to define, differences between the settings which have led to the adoption of the iPads in the first place.
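A small simulation (with entirely made-up numbers, modelling no real school or study) can show how such a confound behaves: here the “iPads” school is assumed to start with higher prior attainment, the tablets themselves are given no effect at all, and yet a naive comparison of outcomes still favours the intervention group.

```python
# A minimal sketch of a confounding variable masquerading as an effect.
# All numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 200  # pupils per school

# Confound: the "iPads for all" school already had higher prior attainment.
prior_ipads = rng.normal(loc=55, scale=10, size=n)
prior_other = rng.normal(loc=50, scale=10, size=n)

# Assume the iPads have NO effect: later scores simply track prior
# attainment plus noise.
later_ipads = prior_ipads + rng.normal(0, 5, size=n)
later_other = prior_other + rng.normal(0, 5, size=n)

# The naive between-school comparison still shows the iPad group ahead,
# because the confound changes along with the "intervention".
print("Difference in mean outcome:",
      round(later_ipads.mean() - later_other.mean(), 1))
```

Controlling for prior attainment, or better still randomly allocating pupils, would remove the apparent benefit entirely; this is the reader’s “where is the confounding variable?” question made concrete.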

We should always read the account of the findings with an important question in mind: Where is the confounding variable? Or: How can we interpret the findings of the study in a way that is different from that offered by the researchers, and which would invalidate their conclusions? The default position should be scepticism, even (perhaps particularly) when we would like to believe the interpretation that is being offered. This is our constant guard against confirmation bias.

Finally – something to watch out for in your own, and others’, educational interventions. You may very well have heard mention of the Hawthorne Effect, which is that almost any experimental intervention will change things, but the impact may be more to do with the change per se than with the specifics of just what has changed, and in what way. So when someone tells you that they have given iPads to every member of a particular class or programme, and that their academic performance has gone through the roof, then (politely) think “Hawthorne Effect”.

What researchers should be seeking to understand is the extent to which the improvements perceived (and remember, it may be just a perception – the teachers will be subject to the Hawthorne Effect in the same way as the students) are due to the affordances of the technology itself, rather than to the general positive “aura” of being treated as special and being given a new toy.

