Experimental and Quasi-Experimental Methods

Applied to the Study of Governance in the Developing World


September 28 (dinner), September 29 (conference and dinner)

At Harvard University




          · The Harvard Academy for International and Area Studies

          · Kirk Radke, principal funder of the Clinton Global Initiative at Boston University

          · The Pardee Center for the Study of the Longer-Range Future at Boston University



Methods that exploit randomization or “as if” randomization as a means of identifying causal effects – including laboratory experiments, field experiments, and natural experiments – are increasingly used in many subfields of political science, but with two notable limitations.  Heretofore, they have been applied primarily to interventions with individual- and proximal-level outcomes, e.g., short-term changes in voting, opinions, and public health.  Rarely have they been applied to institutional outcomes or to longer-term attitudinal or behavioral change.

This conference will consider recent and potential applications of randomized interventions to understanding institutional outcomes, with a special emphasis on institutions of relevance to governance in the developing world.  What studies of this nature have been conducted to date?  How successful have they been?  What are the prospects for future or ongoing studies?  More generally, what is the growth potential of this mode of analysis?

Within this broad rubric, five classes of questions will be given special consideration.

First, how can the subject of governance, which tends to be holistic and all-encompassing, be “scoped down” in such a way that it becomes amenable to scientific study?  Can local governments serve as useful units of analysis (across a broad range of questions)?  Can variation within national-level units (e.g., across legislative committees, across agencies, across policy areas) be adopted for use in this context?

Second, what is an appropriate “treatment”?  What sorts of interventions should we be studying?  We are conscious of the ambiguity that arises when we try to assess the causal impact of large, institutional factors such as democracy/authoritarianism.  Yet, we are also cognizant of the theoretical triviality that may arise from highly focused treatments (e.g., bed nets).  This is a particular problem given that field experiments cannot be endlessly iterated with slight variations due to limitations imposed by time, cost, or contamination of the original units of study.  One must choose interventions carefully, with an eye to policy significance and theory development.

Third, where randomization is not possible, is there a good, or even semi-decent, alternative?  How useful are control groups where the treatment is not randomized (either by the experimenter or by nature)?  How much leverage can be gained from pre- and post-tests (without a control group)?  How “experimental” does a research design need to be in order to tell us something useful about questions of governance?

Fourth, we want to discuss some of the practical obstacles to this sort of work in the developing world.  Are experimental methods too expensive, or too time-consuming?  What are the political and ethical issues raised by experimental work on governance topics?  Are these sorts of projects likely to be accepted by democratically elected governments, by local communities, and by local NGOs?

Finally, what are the prospects for experimental methods in the area of program evaluation?  There is great interest right now in better program evaluation at USAID, the World Bank, the UNDP, and elsewhere.  Might this interest be channeled towards experimental methods?



Friday, September 28


6:30 pm:   Dinner


Saturday, September 29


9:00  Panel I: Overview of existing work

A brief review of experimental methods as applied to problems of governance in the developing world, including projects completed and projects in-progress (see Chattopadhyay & Duflo 2004; Duflo & Hanna 2006; Duflo & Kremer 2004; Humphreys, Masters & Sandbu 2007; Hyde 2006; Olken 2006; Vicente 2007; Wantchekon 2003; and sources cited in Savedoff et al. 2006).  We would like to assess what types of questions are being addressed using experimental or quasi-experimental methods, and to identify the strengths and weaknesses of the approaches that have so far been employed.


10:45  Break


11:00  Panel II: An agenda for experimental and quasi-experimental research on governance

What central questions about the origins and impact of institutions can and cannot be answered using experimental or quasi-experimental approaches?  For a set of research agendas in political science (voting/political participation, accountability/corruption, decision-making rules, decentralization/federalism), we will explore the major questions being asked in each subfield and assess the applicability of experimental methods for providing new insights.


12:30  Lunch


2:00  Panel III: Issues of implementation

To what extent do the institutional “treatments” being provided by donors/NGOs provide a useful vehicle for learning about institutions?  What special ethical, IRB, and implementation issues arise in experimental work of the sort under consideration here?  What sorts of incentives and special arrangements may be necessary in order to secure the participation of governments, NGOs, and subjects in the developing world?  When strict randomization is not possible, what second-best methods should be used?


3:30  Break


3:45  Panel IV: A Network of Program Evaluators, focused on problems of governance 

How might we advance the quality of program evaluation, as well as the opportunities for political scientists to do experimental work in the developing world?  Discussion of a proposal to develop a network linking researchers working on governance issues to implementing agencies.  See further discussion in addendum to this document.


5:30  Reception


7:00  Dinner





The conference brings together academic researchers along with a small number of representatives from major institutions that have some experience with these methods and can serve as resources in discussions of the opportunities and challenges for academic–project partnerships.  (A planned follow-up conference will be focused more explicitly on the policymaking community and will bring together a group of program implementers from government agencies, international institutions, and NGOs.)



John Gerring – Political Science, Boston University

Macartan Humphreys – Political Science, Columbia

Devra Moehler – Government, Cornell and Harvard

Jeremy Weinstein – Political Science, Stanford



Robert Bates – Government, Harvard

Christopher Blattman – Economics, UC Berkeley

Robert Chase – World Bank

Esther Duflo – Economics, MIT

Thad Dunning – Political Science, Yale

Ruben Enikolopov – Economics, Harvard

James Fearon – Political Science, Stanford

Don Green – Political Science, Yale

Susan Hyde – Political Science, Yale

Craig McIntosh – Economics, UCSD

Jodi Nelson -- IRC

Pippa Norris – Kennedy School of Government and UNDP

Ben Olken – Economics, NBER

Betsy Levy Paluck – Psychology, Yale

Rohini Pande – Economics, Kennedy School of Government

Pamela Paxton – Sociology, Ohio State

Daniel Posner – Political Science, UCLA

Matthias Schündeln – Economics, Harvard

Smita Singh -- Hewlett

Pedro Vicente – Economics, Oxford

Leonard Wantchekon – Political Science, NYU


Harvard Academy Fellows




A Network of Program Evaluators on Governance


Proposal:  A network of political scientists engaged in program evaluation using experimental or quasi-experimental methods on issues related to governance.


The Opportunity:  Funding agencies, both public (e.g., UNDP and USAID) and private (e.g., IRC, Oxfam, MercyCorps), want to know how to better evaluate the effectiveness of their programs so that they can make better use of scarce resources and better justify their activities to taxpayers and private donors.  They are aware that they often do a poor job of program evaluation at present and that foreign aid is under threat.  At the same time, political scientists are increasingly interested in the use of experimental research designs for studying political issues.  While there has been some excellent work using experimental methods for program evaluation (notably at MIT’s Poverty Action Lab), there is still a paucity of work on specifically political themes and especially on interventions that attempt to affect institutional-level outcomes, e.g., the quality of democracy or the quality of governance.  (Among economists, Ben Olken’s work is one exception to this general pattern.)


The Idea:  The creation of a network of social scientists, primarily focused on the discipline of political science (but not exclusively), to advance the quality and the quantity of program evaluation in the area of governance in the developing world. 


Specifics:  The network would consist of a committee, drawn from various universities and perhaps representing different disciplines.  This committee would be the governing body.  The initiatives we have in mind are the following: 



Several general issues should be kept in mind.