I’ve just completed my first Systematic Review (SR) – we looked at the impact of tariff reductions on employment and government revenue in developing countries. It’s rare that I get a chance to sit back and reflect on a research project when it’s completed, but given that SRs are only emerging in social sciences, I thought this would be a good opportunity to share my experience of the process.
DFID, in an effort to improve evidence-based policy decisions in development, has funded two rounds of SRs, managed by 3ie, on important development questions. For this project, I worked with colleagues Dirk Willenbockel and Rajith Lakshman here at IDS, supported by the EPPI Centre.
For those of you who are unfamiliar, a Systematic Review (SR) is a methodology that attempts to synthesise and critically appraise existing evidence. While SRs have been common practice in disciplines such as healthcare and medicine, they are rarely used by social scientists, who have tended to rely on literature reviews. For those who would like more background information, 3ie produce a useful introduction to SRs.
In my view, the key advantages of SRs compared to traditional literature reviews are (a) clarity in the methods of synthesis, (b) transparency regarding what is and is not included, and (c) replicability.
Here are a few general impressions from conducting an SR – and it would be good to know what other people think:
- It’s a very labour-intensive exercise. The transparency and clear methods of SRs come at a heavy cost in effort (at times painfully so), especially in the searches and in the inclusion and exclusion of relevant studies.
- There’s a mismatch between the policy question you’re trying to answer and the way existing evaluations address it. This is especially true for macro or meso interventions, where the causal link between the intervention (e.g. a tariff reduction) and outcome (e.g. employment) is distant, complex and affected by a large number of factors (such as sector differences and country characteristics). In these cases, rather than directly answering the big question, existing evaluations focus on narrow aspects of the causal link, which makes the final synthesis difficult.
- There’s a wider range of methodological approaches in the social sciences. In medicine, many SRs synthesise studies that use the same methodology and measurement units. But SRs in the social sciences face the challenge of synthesising evaluations with different methodologies, ranging from purely qualitative studies to experiments and Randomised Controlled Trials (RCTs).
- Although the process is transparent, you still need to make strong assumptions about which studies to include. While the criteria for including and excluding relevant studies are made explicit, SRs still require a significant number of judgements about which studies to include, and why some studies are of higher or lower quality than others.
I think that SRs are an important and worthwhile tool for evidence-based policy. They help to specify conceptual models, theories of change and contextual factors when examining causal relationships, which is vital for social science.
SRs are also a transparent and comprehensive approach to answering major policy questions, and very helpful in scoping the appropriate questions to answer in the first place. More importantly, SRs can be drivers for identifying important methodological limitations in existing evaluations, and at the same time provide information about how to realign policy with empirical evidence.
While the benefits clearly outweigh the costs, it is also true that more work is required to shape methodological approaches to the complex nature of evaluations in social sciences, especially regarding more macro policy interventions.
A final thought on a related issue...
While most evaluations we reviewed seemed to focus on a key question (i.e. whether a policy intervention works), very few considered an additional critical question for policy: whether an intervention is the most cost-effective way of achieving the specific outcome. It seems to me that the issue of cost-effectiveness is sometimes missing from the current heated debate on evaluations and RCTs, but it is critical for policy makers and practitioners who are managing a portfolio of interventions.