Abstract

This paper applies novel machine-learning techniques to long-standing questions of aid effectiveness. It constructs a new data set by using these methods to encode aspects of development project documents that would be infeasible to capture manually. Using this data set, it shows that the strongest predictor of projects’ contributions to development outcomes is not the self-evaluation ratings assigned by donors but the projects’ degree of adaptation to country context, and that the largest gaps between ratings and actual impact occur in large projects in institutionally weak settings. It also finds suggestive evidence that the content of ex post reviews of project effectiveness may predict sector outcomes even when the ratings themselves do not.
