Program Scale-Up and Sustainability
Conventional wisdom suggests a tradeoff between program effectiveness and scale: as a program scales up, it gets “watered down”, relying on lower-quality inputs and less-productive workers and achieving smaller impacts. This paper examines this tradeoff by disentangling the effects of changes in input quality and of increased scale on the effectiveness of a mother-tongue literacy program. We also examine program sustainability by comparing the medium-run effects of one year of program exposure to those of multiple years of exposure.
We randomly assign government primary schools in northern Uganda to receive either (a) a program delivered by the implementing organization with high-quality inputs, (b) a reduced-cost model delivered by the government, or (c) no intervention (the status quo). The reduced-cost version achieves savings by cutting back on the quality and quantity of teacher training, one of the most important inputs the program provides. We follow students over five years and compare program effectiveness before and after a scale-up that increases the number of participating schools by 230%. We also measure program effectiveness when students are exposed to the program for one year versus all four years.
First, we find that cutting back on the quality and quantity of training causes large declines in the effectiveness of the program. Second, scaling up the program has close to zero impact on its effectiveness. Third, the effects of the program on students fade out at an average rate of approximately 0.075 standard deviations per year after the treatment ends. If the treatment is continued, the effects increase at a rate of 0.44 standard deviations per year for the implementer-run version of the program and 0.26 standard deviations per year for the government-run version. Gains for teachers persist strongly one year after they are treated, then drop off substantially.