Action research currently makes up the bulk of the published research evaluating instructional strategies and classroom performance in education. Action research is research conducted within a classroom, followed by an analysis comparing the strategies used. It forms the foundation of evidence-based practice in the education world today.
Action research typically involves a comparison within a single classroom. A teacher evaluates the students’ performance under the instructional strategy that has been in place for some time (the pre-test, or control measure), then changes the instructional strategy (the intervention) and carries out a similar evaluation (the post-test, or experimental measure). The comparison is between the two evaluations: does student performance increase following the intervention?
This, the most common type of action research, is quite straightforward to carry out and involves the frontline teacher, allowing a teacher to evaluate the effectiveness of different teaching strategies on the same group of students. Because the students’ performance is compared against their own at two different points in time, the statistical tests used to compare the two evaluations are very sensitive and will detect differences between strategies when they occur.
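As an illustration, here is a minimal sketch of the kind of pre/post comparison such a study rests on. The scores are made up, and the paired t-test is hand-rolled from its definition rather than taken from any particular statistics package:

```python
import math
from statistics import mean, stdev

# Hypothetical test scores for the same eight students, before and
# after the teacher changes instructional strategy.
pre = [62, 70, 55, 68, 74, 61, 59, 73]
post = [68, 75, 60, 72, 80, 66, 63, 79]

# Paired t-test: work with the per-student differences.
diffs = [b - a for a, b in zip(pre, post)]
t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Two-tailed critical value for df = 7 at alpha = .05 is about 2.365.
print(f"mean gain = {mean(diffs):.2f}, t = {t:.2f}")
```

Because each student serves as their own control, the per-student differences strip out the variability between students, which is exactly why this design is so statistically sensitive.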
Action research arose as an alternative to the theoretical research that was being conducted in the 1940s based on what was known about teaching and learning. The research carried out at the time was based largely on the behaviorist school of learning and was carried out in carefully controlled laboratories, far removed from the challenges and chaos of everyday teaching in the classroom. Moving research into real classrooms using real teachers and real students was an obvious improvement.
Or was it?
Carrying out action research, as described above, involves some methodological problems that are extremely difficult to address. The research is done with the best of intentions, and the researchers are honestly doing their very best to find the most effective teaching strategies for students: strategies that demonstrate, through empirical methods, their effectiveness in helping students learn.
However, there are methodological flaws that are very real and are almost impossible to address.
The first methodological flaw is the quasi-experimental (methodological jargon) nature of the studies. A quasi-experimental design is one in which the participants (the students) come to the study as a pre-existing group. Because of the nature of classroom-based research, virtually every study carried out in naturalistic settings, like classrooms, will be a quasi-experimental study. This is just the way it is. There are quasi-experimental studies carried out in many areas of behavioral research. This is not a fatal flaw, but it becomes problematic when a study is carried out without a sound theoretical foundation.
The reason quasi-experimental studies are not as sound as true experiments is the problem of random assignment. In order to infer causality in a study (e.g. that the change in teaching strategy causes a change in the students’ performance), the participants need to be randomly assigned to one of the two comparison groups (control and experimental). In this type of action research, that cannot be done. This means that the students come into the study with characteristics that might be the cause of any differences found, rather than the change in instructional strategy.
Here is an example. A quasi-experimental study is to be carried out on the effect of adolescent eating behavior on fat distribution and gain over a three-year period (ages 13 to 16). One group of adolescents is going to snack on high-fat, high-sugar, high-salt junk food while the other group is to snack on fruits and vegetables. In this particular study, random assignment is difficult for some unknown reason, and the groups are allowed to form themselves. As a result, the study is carried out on 22 boys, who end up in the junk food group, and 19 girls, who end up in the fruit and vegetable group. The result is a significant difference between the two groups in fat gain and redistribution.
Surprisingly, at the end of the study, the findings indicate that the diet of fruits and vegetables has resulted in larger weight gains and larger fat deposits than the junk food diet. However, the cause is unlikely to be the type of food eaten; it is the natural difference in fat accumulation and distribution between boys and girls during adolescence, a problem that arises directly from the quasi-experimental nature of the study.
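The confound can be shown with a few lines of entirely made-up numbers. Because every boy is in one group and every girl in the other, sex and diet vary together perfectly, and no comparison of group means can separate them:

```python
from statistics import mean

# Hypothetical three-year fat gain in kg. All boys ended up in the
# junk-food group and all girls in the produce group, so sex and diet
# are perfectly confounded.
junk_food_boys = [2.1, 1.9, 2.3, 2.0, 2.2]
produce_girls = [4.2, 3.8, 4.1, 3.9, 4.0]

# The naive comparison "diet A vs diet B" finds a large difference...
naive_diet_effect = mean(produce_girls) - mean(junk_food_boys)
print(f"produce group gained {naive_diet_effect:.1f} kg more fat")

# ...but with no boys in the produce group and no girls in the
# junk-food group, there is no way to tell whether diet or sex
# drives the gap.
```

Random assignment exists precisely to break this kind of coupling: with boys and girls scattered across both diets, a sex difference would wash out of the group comparison.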
I doubt that any problems in the action research studies carried out in a classroom would be this extreme, but the illustration demonstrates the problems that can occur in a quasi-experimental design.
A far greater concern is experimenter bias. Experimenter bias occurs when the experimenter is aware of which participants are in the experimental and control conditions of a study. In a carefully controlled laboratory study, the interaction between the experimenter and the participant is easy to keep constant. Usually, anything said beyond making the participant feel welcome and relaxed is carefully scripted in order to reduce experimenter bias. In theory, this kind of tight control means that as many as possible of the variables that could influence the outcome of interest (student performance) are kept constant across groups. This allows the experimenter to infer causality: if there is a difference in student performance and the only thing allowed to vary is the teaching strategy, the change in performance can be attributed to the strategy used. For obvious reasons, this kind of control cannot be exerted in a naturally occurring classroom study.
So, why is experimenter bias a problem? It is not because of any intent on the part of the teacher; there is no question about the integrity of the teacher carrying out the study. It arises from natural human behavior, for a number of reasons.
In the famous Oak School study, teachers were told the intelligence scores of their new, incoming students at the beginning of the school year. At the end of the school year, the students who had the higher intelligence scores at the beginning of the year had higher grades than the students who had lower scores. This makes perfect sense until you realize that the intelligence scores had been randomly assigned to the students at the beginning of the year. Being led to believe that the students had particular characteristics and abilities led the teachers, unconsciously, to treat the students differently, and the students’ performance became aligned with the teachers’ expectations.
Experimenter bias is why, in ideal clinical trials of new drugs (double-blind studies), neither the experimenter conducting the study nor the participants know which treatment each participant is receiving. This way, the experimenter’s knowledge cannot influence the effect of the drug. This is why I find it amusing that so many alternative healing companies tell their gullible buyers that current (double-blind) research methodology for determining the effectiveness of miracle cures is unsuited to evaluating their particular cure. I am aware of one company that, after talking to an experimental unit of a university about testing their product, shook their collective heads when the methodology was explained to them: the patients receiving their particular remedy had to believe it would work or it wouldn’t work.
When you think about how most classroom action research is carried out, the teacher carrying out the research is not blind to any aspect of the study. The teacher is aware of the strategies being used (control and experimental), the participants’ (students’) varying abilities, and the desired outcome. In addition, the teacher prepares and carries out the intervention. Finally, the teacher usually prepares and marks whatever instrument is used to evaluate the students’ performance.
We are all aware of the power confirmation biases have in our society. We gravitate toward people and ideas that align with our own. But confirmation bias is bigger than that: we also look for anything in what we do that confirms our expectations. Confirmation bias resides in each of us; it is part of what we are. How can we expect a teacher, even if that teacher is us, to ignore the power of built-in confirmation biases?
Think about it. The teacher carrying out the action research knows what to expect, believes that the intervention will benefit the students (or they wouldn’t be trying it), and then marks the very assessments used to measure the intervention’s effectiveness. How can anyone who relies on rational thinking and logic believe that traditional action research is an effective method of evaluating teaching strategies? The teachers know too much!
I had a look at some action research carried out under the direction of a respected action research company. I looked at a meta-analysis of fifteen different teaching strategies carried out in 329 classrooms involving 6,415 students. A meta-analysis is a study that looks at a number of different studies in an effort to evaluate the effectiveness of an intervention(s) across a variety of conditions (teachers, classrooms etc.).
The meta-analysis used an impressive array of statistical procedures that would reduce the average reader to a dazed biological mass. I am not an average reader, having taught advanced statistics and research methods in a research-based institution for 20 years. There was one test that the meta-analytic researcher ran on the numbers, and one that I ran independently, that caused me to shake my head and purse my lips. Across the 329 independent studies, there was NO difference in either the percent gain in the students’ performance or the size of that gain (the effect size). In other words, across 329 studies testing 15 different teaching strategies, the increase in performance for the 6,415 student participants was exactly the same, after accounting for the natural variability you would expect to find by chance, and so was the size of that increase.
Think about it. In 329 studies testing 15 different teaching strategies, the amount of increase in the students’ (6,415 of them) performance was virtually identical. This is either the most incredible finding ever or there is something else happening.
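One standard way to check whether a set of studies is this consistent is a homogeneity test such as Cochran’s Q. Here is a small sketch with illustrative numbers of my own invention, not the actual data from the meta-analysis above:

```python
# Cochran's Q: the weighted sum of squared deviations of each study's
# effect size from the pooled effect. Under homogeneity, Q follows a
# chi-square distribution with k - 1 degrees of freedom.

# Hypothetical effect sizes and sampling variances for five studies.
effects = [0.41, 0.40, 0.42, 0.41, 0.39]
variances = [0.01, 0.01, 0.01, 0.01, 0.01]

weights = [1 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
Q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))

# Chi-square critical value for df = 4 at alpha = .05 is about 9.49.
# Here Q is near zero: the five effects are statistically
# indistinguishable from one another.
print(f"pooled effect = {pooled:.3f}, Q = {Q:.3f}")
```

Fifteen genuinely different teaching strategies producing a Q near zero across 329 studies is exactly this situation, scaled up, and that is what should make anyone pause.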
As a teacher, if I were to introduce a more effective teaching strategy, I would expect some improvement in the students’ scores. I think a reasonable increase would be in the 10 to 12% range: enough to smile about without being over the top. Funnily enough, that was the average increase observed across these hundreds of studies.
Do I think that any of the teachers involved in the research were being dishonest? Absolutely not! I don’t think the teachers were doing anything other than what is considered best practice in the field. Because they are not exposed to rigorous methodology training, how can they be expected to know any different? To a non-trained researcher, the methodology makes complete sense: test, intervention, test, followed by a comparison between the two tests. From that perspective, these methodological concerns are just experimental purists picking at perfectly good research because they want to be seen as the experts.
Maybe that is what I am, an experimental purist picking at meaningless issues.
This is not to say that some of the teaching interventions are not great interventions. All I am saying is that the way they are being tested, so that they can be called evidence, is poor. What this means to me is that I have very little confidence in the research methods that form the basis upon which evidence-based practice is founded.
The Science of Learning arises from theoretical principles that have been found in tightly run experiments conducted in laboratories all over the world – some of them even mimicking real classroom settings.
If action research is going to be carried out, at least base the interventions on the experimental evidence that tells us how people learn.