Reading and resources

On this page you will find a selection of resources and articles on realist research. If you have suggestions for other links, please get in touch: r.j.l.hardwick@exeter.ac.uk

Anderson, R. and R. Hardwick (2016). “Realism and resources: Towards more explanatory economic evaluation.” Evaluation (Lond) 22(3): 323-341.
For public programmes to be successfully and sustainably adopted, policy-makers, service managers and practitioners want them to be affordable and cost-effective, as well as effective. While the realist evaluation question is often summarised as what works, for whom, under what circumstances, we believe the approach can be just as salient to answering questions about resource use, costs and cost-effectiveness – the traditional domain of economic evaluation methods. This paper first describes the key similarities and differences between economic evaluation and realist evaluation. It summarises what health economists see as the challenges of evaluating complex interventions, and their suggested solutions. We then use examples of programme theory from a recent realist review of shared care for chronic conditions to illustrate two ways in which realist evaluations might better capture the resource requirements and resource consequences of programmes, and thereby produce explanations of how they are linked to outcomes (i.e. explanations of cost-effectiveness).

Dalkin, S. M., et al. (2015). “What’s in a mechanism? Development of a key concept in realist evaluation.” Implement Sci 10: 49.
BACKGROUND: The idea that underlying, generative mechanisms give rise to causal regularities has become a guiding principle across many social and natural science disciplines. A specific form of this enquiry, realist evaluation, is gaining momentum in the evaluation of complex social interventions. It focuses on ‘what works, how, in which conditions and for whom’, using context, mechanism and outcome configurations, as opposed to asking simply whether an intervention ‘works’. Realist evaluation can be difficult to codify and requires considerable researcher reflection and creativity. As such, there is often confusion when operationalising the method in practice. This article aims to clarify and further develop the concept of mechanism in realist evaluation and, in doing so, aid the learning of those operationalising the methodology. DISCUSSION: Using a social science illustration, we argue that disaggregating the concept of mechanism into its constituent parts helps to understand the difference between the resources offered by the intervention and the ways in which these change the reasoning of participants. This in turn helps to distinguish between a context and a mechanism. The notion of mechanisms ‘firing’ in social science research is explored, with discussion of how this may stifle researchers’ realist thinking. We underline the importance of conceptualising mechanisms as operating on a continuum, rather than as an ‘on/off’ switch. We hope the discussions in this article will progress and operationalise realist methods; such development is likely given the infancy of the methodology and its recently increased profile and use in social science research. The arguments we present have been tested and are explained throughout the article using a social science illustration, evidencing their usability and value.

Pawson, R., et al. (2014). “Do reviews of healthcare interventions teach us how to improve healthcare systems?” Soc Sci Med 114: 129-137.
Planners, managers and policy makers in modern health services are not without ingenuity – they will always try, try and try again. They face deep-seated or ‘wicked’ problems, which have complex roots in the labyrinthine structures through which healthcare is delivered. Accordingly, the interventions devised to deal with such stubborn problems usually come in the plural. Many different reforms are devised to deal with a particular stumbling block, and these may be implemented sequentially, simultaneously or whenever policy fashion or funding dictates. This paper examines this predicament from the perspective of evidence-based policy. How might researchers go about reviewing the evidence when they are faced with multiple or indeed competing interventions addressing the same problem? In the face of this plight a rather unheralded form of research synthesis has emerged, namely the ‘typological review’. We critically review the fortunes of this strategy. Separating the putative reforms into a series of subtypes and producing a scorecard of their outcomes has the unintended effect of divorcing them all from an understanding of how organisations change. A more fruitful approach may lie in a ‘theory-driven review’ underpinned by an understanding of the dynamics of social change in complex organisations. We test this thesis by examining the primary and secondary research on the many interventions designed to tackle a particularly wicked problem, namely the inexorable rise in demand for healthcare.

Pawson, R., et al. (2005). “Realist review – a new method of systematic review designed for complex policy interventions.” J Health Serv Res Policy 10 Suppl 1: 21-34.
Evidence-based policy is a dominant theme in contemporary public services, but the practical realities and challenges involved in using evidence in policy-making are formidable. Part of the problem is one of complexity. In health services and other public services, we are dealing with complex social interventions which act on complex social systems – things like league tables, performance measures, regulation and inspection, or funding reforms. These are not ‘magic bullets’ which will always hit their target, but programmes whose effects are crucially dependent on context and implementation. Traditional methods of review focus on measuring and reporting on programme effectiveness, often find that the evidence is mixed or conflicting, and provide little or no clue as to why the intervention worked or did not work when applied in different contexts or circumstances, deployed by different stakeholders, or used for different purposes. This paper offers a model of research synthesis which is designed to work with complex social interventions or programmes, and which is based on the emerging ‘realist’ approach to evaluation. It provides an explanatory analysis aimed at discerning what works for whom, in what circumstances, in what respects and how. The first step is to make explicit the programme theory (or theories) – the underlying assumptions about how an intervention is meant to work and what impacts it is expected to have. The review then looks for empirical evidence to populate this theoretical framework, supporting, contradicting or modifying the programme theories as it goes. The results of the review combine theoretical understanding and empirical evidence, and focus on explaining the relationship between the context in which the intervention is applied, the mechanisms by which it works and the outcomes which are produced. The aim is to enable decision-makers to reach a deeper understanding of the intervention and how it can be made to work most effectively. Realist review does not provide simple answers to complex questions. It will not tell policy-makers or managers whether something works or not, but it will provide the policy and practice community with the kind of rich, detailed and highly practical understanding of complex social interventions which is likely to be of much more use to them when planning and implementing programmes at a national, regional or local level.

Pearson, M., et al. (2015). “Implementing health promotion programmes in schools: a realist systematic review of research and experience in the United Kingdom.” Implement Sci 10: 149.
BACKGROUND: Schools have long been viewed as a good setting in which to encourage healthy lifestyles amongst children, and schools in many countries aspire to more comprehensive, integrated approaches to health promotion. Recent reviews have identified evidence of the effects of school health promotion on children’s and young people’s health. However, understanding of how such programmes can be implemented in schools is more limited. METHODS: We conducted a realist review to identify the conditions and actions which lead to the successful implementation of health promotion programmes in schools. We used the international literature to develop programme theories which were then tested using evaluations of school health promotion programmes conducted in the United Kingdom (UK). Iterative searching and screening were conducted to identify sources, and clear criteria were applied in the appraisal of included sources. A review advisory group comprising educational and public health practitioners, commissioners, and academics was established at the outset. RESULTS: In consultation with the review advisory group, we developed four programme theories (preparing for implementation, initial implementation, embedding into routine practice, adaptation and evolution); these were then refined using the UK evaluations in the review. This enabled us to identify transferable mechanisms and enabling and constraining contexts, and to investigate how the operation of mechanisms differed in different contexts. We also identified steps that should be taken at a senior level in relation to preparing for implementation (which revolved around negotiation about programme delivery) and initial implementation (which centred on facilitation, support, and reciprocity – the latter for both programme deliverers and pupils). However, the depth and rigour of the evidence concerning embedding into routine practice and adaptation and evolution were limited. CONCLUSIONS: Our findings provide guidance for the design, implementation, and evaluation of health promotion in schools and identify the areas where further research is needed.

Salter, K. L. and A. Kothari (2014). “Using realist evaluation to open the black box of knowledge translation: a state-of-the-art review.” Implement Sci 9(1): 115.
BACKGROUND: In knowledge translation, complex interventions may be implemented in the attempt to improve uptake of research-based knowledge in practice. Traditional evaluation efforts that focus on aggregate effectiveness represent an oversimplification of both the environment and the interventions themselves. However, theory-based approaches to evaluation, such as realist evaluation (RE), may be better suited to the examination of complex knowledge translation interventions with a view to understanding what works, for whom, and under what conditions. The aim of the present state-of-the-art review is to examine current literature on the use of RE in the assessment of knowledge translation interventions implemented within healthcare environments. METHODS: Multiple online databases were searched from 1997 through June 2013. Primary studies examining the application or implementation of knowledge translation interventions within healthcare settings and using RE were selected for inclusion. Varying applications of RE across studies were examined in terms of (a) reporting of core elements of RE, and (b) potential feasibility of this evaluation method. RESULTS: A total of 14 studies (including 6 study protocols), published between 2007 and 2013, were identified for inclusion. Projects were initiated in a variety of healthcare settings and represented a range of interventions. While a majority of authors mentioned context (C), mechanism (M) and outcome (O), a minority reported the development of C-M-O configurations or testable hypotheses based on these configurations. Four completed studies reported results that included refinement of proposed C-M-O configurations and offered explanations within the RE framework. In the few studies offering insight into the challenges associated with the use of RE, difficulties were expressed regarding the definition of both mechanisms and contextual factors. Overall, RE was perceived as time-consuming and resource-intensive. CONCLUSIONS: The use of RE in knowledge translation is relatively new; however, theory-building approaches to the examination of complex interventions in this area may be increasing as researchers attempt to identify what works, for whom and under what circumstances. Completion of the RE cycle may be challenging, particularly in the development of C-M-O configurations; however, as researchers approach challenges and explore innovations in its application, rich and detailed accounts may improve feasibility.

Van Belle, S., et al. (2016). “Can ‘realist’ randomised controlled trials be genuinely realist?” Trials 17(1): 313.
In this paper, we respond to a paper by Jamal and colleagues published in Trials in October 2015 and take the opportunity to continue the much-needed debate about what applied scientific realism is. The paper by Jamal et al. is useful because it exposes the challenges of combining a realist evaluation approach (as developed by Pawson and Tilley) with the randomised controlled trial (RCT) design. We identified three fundamental differences that are related to paradigmatic differences in the treatment of causation between post-positivist and realist logic: (1) the construct of mechanism, (2) the relation between mediators and moderators on the one hand and mechanisms and contexts on the other, and (3) the variable-oriented approach to the analysis of causation versus the configurational approach. We show how Jamal et al. consider mechanisms as observable, external treatments and how their approach reduces complex causal processes to variables. We argue that their proposed RCT design cannot provide a truly realist understanding. Not only does the proposed realist RCT design not deal with the RCT’s inherent inability to ‘unpack’ complex interventions, it also does not enable the identification of the dynamic interplay among the intervention, actors, context, mechanisms and outcomes, which is at the core of realist research. As a result, the proposed realist RCT design is not, as we understand it, genuinely realist in nature.

Wong, G., et al. (2013). “RAMESES publication standards: realist syntheses.” J Adv Nurs 69(5): 1005-1022.
BACKGROUND: There is growing interest in realist synthesis as an alternative systematic review method. This approach offers the potential to expand the knowledge base in policy-relevant areas – for example, by explaining the success, failure or mixed fortunes of complex interventions. No previous publication standards exist for reporting realist syntheses. This standard was developed as part of the RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) project. The project’s aim is to produce preliminary publication standards for realist systematic reviews. DESIGN: A mixed-method study synthesising data between 2011 and 2012 from a literature review, an online Delphi panel, and feedback from training, workshops and an email list. METHODS: We: (a) collated and summarised existing literature on the principles of good practice in realist syntheses; (b) considered the extent to which these principles had been followed by published syntheses, thereby identifying how rigour may be lost and how existing methods could be improved; (c) used a three-round online Delphi method with an interdisciplinary panel of national and international experts in evidence synthesis, realist research, policy and/or publishing to produce and iteratively refine a draft set of methodological steps and publication standards; (d) provided real-time support to ongoing realist syntheses and ran the open-access RAMESES online discussion list to capture problems and questions as they arose; and (e) synthesised expert input, evidence syntheses and real-time problem analysis into a definitive set of standards. RESULTS: We identified 35 published realist syntheses, provided real-time support to 9 ongoing syntheses and captured questions raised in the RAMESES discussion list. Through analysis and discussion within the project team, we summarised the published literature and common questions and challenges into briefing materials for the Delphi panel, comprising 37 members. Within three rounds this panel had reached consensus on 19 key publication standards, with an overall response rate of 91%. CONCLUSIONS: This project used multiple sources to develop and draw together evidence and expertise in realist synthesis. For each item we have included an explanation of why it is important and guidance on how it might be reported. Realist synthesis is a relatively new method for evidence synthesis and, as experience accumulates, we anticipate that these standards will evolve to reflect further methodological developments. We hope that these standards will act as a resource that contributes to improving the reporting of realist syntheses.

Wong, G., et al. (2016). “RAMESES II reporting standards for realist evaluations.” BMC Med 14(1): 96.
BACKGROUND: Realist evaluation is increasingly used in health services and other fields of research and evaluation. No previous standards exist for reporting realist evaluations. This standard was developed as part of the RAMESES II project. The project’s aim is to produce initial reporting standards for realist evaluations. METHODS: We purposively recruited a maximum variety sample of an international group of experts in realist evaluation to our online Delphi panel. Panel members came from a variety of disciplines, sectors and policy fields. We prepared the briefing materials for our Delphi panel by summarising the most recent literature on realist evaluations to identify how and why rigour had been demonstrated and where gaps in expertise and rigour were evident. We also drew on our collective experience as realist evaluators, in training and supporting realist evaluations, and on the RAMESES email list to help us develop the briefing materials. Through discussion within the project team, we developed a list of issues related to quality that needed to be addressed when carrying out realist evaluations. These were then shared with the panel members and their feedback was sought. Once the panel members had provided their feedback on our briefing materials, we constructed a set of items for potential inclusion in the reporting standards and circulated these online to panel members. Panel members were asked to rank each potential item twice on a 7-point Likert scale, once for relevance and once for validity. They were also encouraged to provide free text comments. RESULTS: We recruited 35 panel members from 27 organisations across six countries from nine different disciplines. Within three rounds our Delphi panel was able to reach consensus on 20 items that should be included in the reporting standards for realist evaluations. The overall response rates for all items for rounds 1, 2 and 3 were 94%, 76% and 80%, respectively. CONCLUSION: These reporting standards for realist evaluations have been developed by drawing on a range of sources. We hope that these standards will lead to greater consistency and rigour of reporting and make realist evaluation reports more accessible, usable and helpful to different stakeholders.

Mark and I have drawn together the following for those wanting to immerse themselves more in the realist approach. We’ll be adding to it periodically, but for now, help yourself… and if you have any suggestions, please let us know in the comment box. Thanks.

Methods – books, articles, publication guidelines (RAMESES)

ARCHER, M. 2010. Routine, reflexivity and realism. Sociological Theory, 28(3), 272-303.

ASTBURY, B. & LEEUW, F. L. 2010. Unpacking black boxes: mechanisms and theory building in evaluation. American Journal of Evaluation, 31, 363-381.

BERWICK, D. M. 2008. The science of improvement. Journal of the American Medical Association, 299, 1182-1184.

CAMPBELL, D. & RUSSO, M. J. 1999. Social Experimentation, London, Sage Publications.

DAVIDOFF, F. 2009. Heterogeneity is not always noise: lessons from improvement. Journal of the American Medical Association, 302, 2580-2586.

ELSTER, J. 2007. Explaining Social Behaviour, Cambridge, Cambridge University Press.

PAWSON, R. & TILLEY, N. 1997. Realistic Evaluation, London, Sage Publications.

PAWSON, R. 2002. Evidence-based policy: the promise of ‘realist synthesis’. Evaluation, 8, 340-358.

PAWSON, R. 2006. Evidence-Based Policy: A Realist Perspective, London, Sage Publications.

PAWSON, R. 2013. The Science of Evaluation: A Realist Manifesto, London, Sage Publications.

PAWSON, R., GREENHALGH, T., HARVEY, G. & WALSHE, K. 2005. Realist review – a new method of systematic review designed for complex policy interventions. Journal of Health Services Research and Policy, 10 (Suppl 1), 21-34.

PAWSON, R., OWEN, L. & WONG, G. 2010. The Today Programme’s contribution to Evidence-Based Policy. Evaluation, 16, 211-213.

SAYER, A. 1992. Method in Social Science: A Realist Approach, New York, NY, Routledge.

SAYER, A. 2000. Realism and Social Science, London, Sage Publications.

WESTHORP, G. 2012. Using complexity-consistent theory for evaluating complex systems. Evaluation, 18(4), 405-420.

WONG, G., GREENHALGH, T., WESTHORP, G., BUCKINGHAM, J. & PAWSON, R. 2013. RAMESES publication standards: realist syntheses. BMC Medicine, 11, 21.

Reviews

GREENHALGH, T., KRISTJANSSON, E. & ROBINSON, V. 2007. Realist review to understand the efficacy of school feeding programmes. British Medical Journal, 335, 858-861.

PEARSON, M., HUNT, H., COOPER, C., SHEPPERD, S., PAWSON, R. & ANDERSON, R. 2013. Intermediate care: a realist review and conceptual framework. Final report. Southampton: NIHR Service Delivery and Organisation programme.

WONG, G., GREENHALGH, T. & PAWSON, R. 2010. Internet-based medical education: a realist review of what works, for whom and in what circumstances. BMC Medical Education, 10, 12.
