Failing is embarrassing and inevitable.
That is why it is so refreshing to find organizations that want to learn from failure, where failure is recognized and built upon, like the Canadian NGO Engineers Without Borders, led by David Damberger, which publishes an Annual Failure Report.
One good reason for the flourishing of impact evaluations is the perception that development economics has failed to produce concrete evidence on what works and what does not: generic development questions can only have ideological answers.
No matter what the evidence, Sachs will probably never agree with Easterly or Moyo. After all, in the words of Bertrand Russell: “the most savage controversies are those about matters as to which there is no good evidence either way”.
Knowledge of what works and what does not can only be drawn from intervention-specific evidence, where attribution is uncontested. Failure is only possible when attribution is clear.
But not all failures are created equal.
An evaluation might identify a causal link between a treatment and an outcome. When it fails to do so, that failure can be interpreted in three ways: as a theoretical failure, an implementation failure, or a methodological failure.
A theoretical failure can happen when the wrong theory of change underpins the null hypothesis, that is, when the theory linking the causal variable to the outcome is flawed, or when complexity hides causality and rival explanations are possible.
An implementation failure, on the other hand, reflects the fact that evaluations are put in place by real people in constantly changing environments, and protocols are rarely followed in full.
A methodological failure, finally, is a statistical failure. It takes many shapes: threats to internal or external validity, insufficient statistical power, Type I errors (finding an effect that is not there), or Type II errors (missing an effect that is).
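To make that last point concrete, here is a minimal simulation sketch (not part of the original argument; the effect size, sample sizes, and significance level are illustrative assumptions) showing how an underpowered evaluation repeatedly commits Type II errors: a real treatment effect exists, yet small samples often fail to detect it.

```python
# Minimal sketch: Type II errors in an underpowered evaluation.
# All numbers below are illustrative assumptions, not empirical values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.2      # assumed standardized treatment effect
ALPHA = 0.05           # significance threshold (Type I error rate)
N_SIMULATIONS = 2000   # number of simulated evaluations per sample size


def detection_rate(n_per_arm: int) -> float:
    """Share of simulated trials that detect the true effect at ALPHA."""
    rejections = 0
    for _ in range(N_SIMULATIONS):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(TRUE_EFFECT, 1.0, n_per_arm)
        _, p_value = stats.ttest_ind(treated, control)
        rejections += p_value < ALPHA
    return rejections / N_SIMULATIONS


for n in (30, 100, 400):
    power = detection_rate(n)
    print(f"n per arm = {n:4d}: power ≈ {power:.2f} "
          f"(Type II error ≈ {1 - power:.2f})")
```

With a standardized effect of 0.2, roughly 400 observations per arm are needed before the detection rate approaches the conventional 80 percent; below that, a "failure" to find an effect may say more about the statistics than about the theory or the implementation.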
Failing is embarrassing and inevitable, but necessary.