Education Research

Responding to Arnold Kling, A Preference for Ignorance

This is in general a pretty good article, though I am not sure about the truth of (or the relevance of) the statement that "we prefer to be ignorant about what succeeds and what fails" in the fields of health care, education and foreign aid.

To be sure, I have spent my fair share of time criticizing politicians and others who recommend measures based on their intuitions about how people learn rather than on empirical data and reasoning. But the problem isn't that people prefer to remain ignorant, it is, rather, that they are not good at obtaining and evaluating empirical data.

To take education as an example, it is clear that a great deal of empirical research has been conducted, both observational studies and experiments. Beyond government testing programs, agencies such as the OECD have run large-scale studies (notably PISA) to measure the impact on learning of various factors, such as socio-economic status, computers in the home, and the use of books.

Various other measurement initiatives exist as well. The Campbell Collaboration, for example, is an effort to standardize experimental methodology in education in order to enable the replication of research results (one problem being that different studies measure different variables, or define the same variables differently).

In conjunction with other disciplines (such as psychology and neuroscience), quite a bit has been learned about the nature of learning in recent years. This is why schools have started nutrition programs, why rote memorization has given way to an emphasis on methodology, and why constructivist teaching methodologies are supplanting existing behaviourist approaches.

So while the points about the efficacy of research described in this article are well known, the other suggestion, that people don't care, does not appear to be true.

It's hard to know where the author intends to go with this article, or even whom he is addressing. But I suspect that were the conversation to continue, we would end up discussing the nature of the complex phenomena being studied.

This is because the sorts of studies described by the author are suitable for measuring stable systems with known independent variables. They are good at discovering natural laws of physics, for example, because the laws of nature remain unchanged over time and the states being measured, such as temperatures, do not vary randomly according to uncontrollable factors.

Education is not like that. Neither, for that matter, are some aspects of medicine and foreign aid. We cannot count on a consistency of environment, not even when we conduct experimental studies, much less observational studies. This is because the environment is densely connected to a variety of other phenomena that cannot be taken into account in the experimental setting.

Take, for example, the study that concludes, according to Kling, "What these studies suggest is that we are sending patients to specialists, to hospitals, and for expensive diagnostic tests without knowing when this is cost-effective and when it is not." The presumption of such a study is that the object of the health care system is to heal the sick. In a private system, however, it is also to make money. Therefore, it is equally likely that what the study shows is that we (doctors?) don't care whether the diagnostic test is cost-effective, since the purpose is to generate customers for diagnostic services.

The issue here is not whether doctors are really so cold-hearted. Other things being equal, some would probably care more about money, while others would care more about health. But this is not something that can be abstracted and controlled in an experimental setting, since the doctors' attitudes may differ from what they report, and these attitudes may vary according to a wide variety of external factors (time allocated, the disposition of supervisors, the hour of the day) well beyond the experimental range.

Because of this, when these experiments are conducted in a health (or educational, or foreign aid) setting, what happens of necessity is that the experiment is deliberately isolated and abstracted from the environment as a whole.

For example, to test whether flip charts improve education, all other factors need to remain the same, and since these factors never remain the same on their own, the experiment must be *designed* to hold them constant. In other cases (such as the PISA studies) the impact of the other factors that can be measured must be eliminated mathematically in an analysis of the data.
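To make "eliminated mathematically" concrete, here is a minimal sketch in Python, using entirely synthetic data, of the kind of covariate adjustment large-scale analyses such as PISA rely on: a regression that estimates the association between one factor (books in the home) and a test score while holding another measured factor (socio-economic status) constant. The variable names and numbers below are illustrative assumptions, not values from any real dataset.

```python
# Minimal sketch of covariate adjustment via ordinary least squares.
# All data is synthetic and illustrative; none of it comes from PISA.
import numpy as np

rng = np.random.default_rng(0)
n = 500

ses = rng.normal(0, 1, n)                            # socio-economic status (standardized)
books = 0.6 * ses + rng.normal(0, 1, n)              # books in the home, correlated with SES
score = 20 * ses + 5 * books + rng.normal(0, 10, n)  # test score

# Naive estimate: regress score on books alone, ignoring SES.
X_naive = np.column_stack([np.ones(n), books])
naive_coef = np.linalg.lstsq(X_naive, score, rcond=None)[0][1]

# Adjusted estimate: include SES as a covariate, "holding it constant".
X_adj = np.column_stack([np.ones(n), books, ses])
adj_coef = np.linalg.lstsq(X_adj, score, rcond=None)[0][1]

print(f"effect of books, ignoring SES:      {naive_coef:.1f}")
print(f"effect of books, adjusting for SES: {adj_coef:.1f}")
```

The catch, which is the point of the argument that follows, is that the adjustment works only for the factors that were measured; anything unmeasured, or measured differently across studies, stays in the error term.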

What this produces, however, is a result that applies only in the experimental setting. The inference from the experimental setting to the wider practice is not warranted. Moreover, because there is, a priori, no standard for the design of experimental settings (as contrasted, say, with the standards for obtaining random samples in polls), the experimental setting can, and often does, presume the conclusion the author is attempting to establish.

In the case of flip charts, for example, the educational setting consists (presumably) of a teacher presenting educational information to a group of children. The students can be sufficiently randomized. The experimental design, however, presupposes a methodology in which education is attempted via the transfer of information from teacher to student, a setting in which a flip chart would be useful, as opposed to, say, a program of self-study by the student, in which a flip chart would be useless.
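The same worry can be illustrated with a toy simulation. The sketch below, again using made-up numbers, assumes that flip charts add a few points when the instructional model is teacher presentation and nothing under self-study. A randomized trial run entirely within the presentation setting will correctly detect an effect there, but the estimate says nothing about settings the design never included.

```python
# Toy simulation: the measured "flip chart effect" depends on the
# instructional model built into the experimental design.
# All effect sizes and scores are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 400  # students per arm

def run_trial(setting):
    """Randomized flip-chart vs. no-flip-chart comparison within one setting."""
    # Assumed effects: +8 points under teacher presentation, 0 under self-study.
    effect = 8.0 if setting == "presentation" else 0.0
    control = rng.normal(70, 12, n)           # scores without flip charts
    treated = rng.normal(70 + effect, 12, n)  # scores with flip charts
    return treated.mean() - control.mean()

print("estimated effect, presentation setting:", round(run_trial("presentation"), 1))
print("estimated effect, self-study setting:  ", round(run_trial("self-study"), 1))
```

Both estimates are "correct" within their own setting; the error lies in reading the first as a fact about education in general.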

More cynical examples can be found in the field of medicine. A controlled study looking at the best way to treat war wounds, for example, will by its design exclude the most effective way to eliminate these wounds: don't have wars.

That education is a complex phenomenon, and therefore resistant to static-variable experimental studies, does not mean that it is beyond the realm of scientific study. It does mean, however, that the desire for simple empirically supported conclusions (such as, say, "experiments show phonics is more effective") is misplaced. No such conclusions are forthcoming, or more accurately, any such conclusion is the result of experimental design, and not descriptive of the state of nature.

Some items that have popped up recently and are relevant to this issue:

Creating A Motivational Classroom

The Art of Complex Problem Solving

The Megacommunity Manifesto

A Tectonic Shift in Global Higher Education
