A few days ago I attended a book presentation on experimental evaluations where one participant stated, “This is all well and good, but it does not apply to institutional reforms; one cannot ‘randomize’ the reform of a ministry.”
Similarly, a document from a bilateral aid agency stated that it can experimentally evaluate only 5 percent of the total development assistance under its management, noting in particular its inability to perform evaluations of institutional reforms.
I disagree. Institutional structures and reforms ultimately affect the behavior of individual citizens and public officials.
And we can generate rigorous evidence on that behavior through randomized evaluations.
And we can generalize and validate these evaluations in order to inform broader institutional frameworks with hard evidence.
The fact is, I believe, that conclusions of this kind follow from the skewed approach we take to institutional reforms, which helps explain our limited success in implementing them. Historically, institutional reforms have focused more on the instrument (the formal or informal rules we want to change) than on the behavior (of citizens or public officials) that the change in rules seeks to influence in order to achieve a specific result (for example, increasing tax receipts, formalizing employment, or widening access to the judicial system).
When we focus our attention on rules or institutional arrangements, the path to change tends to become mired in ideologies that claim the superiority of one institutional engineering approach over another (usually because it is practiced in a more advanced or more admired country).
We operate through imitation (even in his time, de Tocqueville observed that institutional museums contained very few originals and many copies).
The result is that many institutional transplants are rejected: the new rules are approved on paper but never translated into changes in behavior. Even worse, they are twisted by participants in ways that subvert their original intention.
This is what a colleague of mine calls the corruption of anti-corruption: anti-corruption measures are loudly proclaimed while attention is diverted from the substantive issues that remain unaddressed.
We frequently invest in institutional changes that are not grounded in empirical evidence demonstrating causal connections between institutions, effects on behavior, and ultimate objectives. Very often the excuse is that these changes have no counterfactual. The truth is that few assessments provide even a rigorous before-and-after narrative describing the changes that take place.
If we focus on the behavior of officials and citizens that we want to change, and apply experimental logic to the design and evaluation of reforms, we will uncover more effective and efficient approaches.
“Randomization” is one of the methods at our disposal for testing the effects of different approaches (for example, randomly assigning groups of taxpayers to different kinds of relationships with tax inspectors and measuring the effects on compliance and tax revenue). Other empirical methods, such as matching on available administrative data, can also be applied to collect evidence.
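To make the logic concrete, here is a minimal sketch of a two-arm randomized evaluation in Python. The taxpayer counts, the compliance measure, and the +0.05 effect size are all hypothetical stand-ins for real administrative data.

```python
import random
import statistics

# Hypothetical example: randomly assign taxpayers to a new
# inspector-contact protocol ("treatment") or the status quo
# ("control"), then compare average compliance across arms.

random.seed(42)  # make the assignment reproducible

taxpayer_ids = list(range(1000))
random.shuffle(taxpayer_ids)
treatment = set(taxpayer_ids[:500])  # first half gets the new protocol

def observed_compliance(taxpayer_id):
    """Stand-in for administrative data on compliance (0 to 1).

    In a real evaluation this would come from tax records; here we
    simulate a modest hypothetical treatment effect of +0.05.
    """
    base = random.random() * 0.5 + 0.3
    return min(1.0, base + (0.05 if taxpayer_id in treatment else 0.0))

outcomes = {tid: observed_compliance(tid) for tid in range(1000)}

treated_mean = statistics.mean(outcomes[t] for t in treatment)
control_mean = statistics.mean(
    outcomes[t] for t in outcomes if t not in treatment
)

# Because assignment was random, the two groups are comparable on
# average, so the difference in means estimates the protocol's effect.
print(f"Estimated effect on compliance: {treated_mean - control_mean:+.3f}")
```

The point of the sketch is the design, not the arithmetic: random assignment is what lets a simple comparison of means carry causal weight.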
An example of this effort is contained in an article on the effects of different levels of administrative effort in implementing labor standards in Brazil.
Some will say that this cannot be done, that all citizens should be treated equally. But random selection of a sample does, in fact, give all potentially affected actors an equal chance of receiving each option. In addition, the different approaches would be carried out within reasonable policy spaces where citizens and public officials interact.
Sometimes, simply framing differently the options through which citizens relate to government officials can produce substantial change.
This is also the point of Nudge, the provocative book by Thaler and Sunstein, which offers many examples of the potential of this approach for public policy.