In defence of realist approaches… albeit a modest, middle-range, empirically-rich kind of defence*.

(This is not Apollo my cat.)

(I’d like to start this post by getting something off my chest. I love systematic reviews. I also love RCTs. I like stats (when someone reminds me how to understand them). And I appreciate a good forest plot as much as the next gal. I like being a researcher, that I get paid to think about stuff and write about it. I’d probably do it for free if there wasn’t a multi-morbid cat in my life who demands I spend the equivalent of a modest All Saints spree at the vet every month instead. All this to say, what follows is not an incitement to rise up against any particular methodology… far from it, in fact the opposite: there is more that unites us than divides us, and I seek to continue to bridge those divides.)

Earlier today, I went to lunch with esteemed colleagues and, during our discussions, I responded to several points raised by people who are not overly fond of, or convinced by, realist approaches. I thought it would be helpful to do a blog post on this, as these are recurrent questions that come up, and these are the kinds of responses I give.

My starting point is that, as a realist who thinks there is a mind-independent reality that is knowable, but only partially, I might know a lot about realist methods, and I might have some defences, but I am also only partially knowledgeable – there is never a point I will get to when I think “that’s it, I’m done, I know it all”. So what follows is my partial knowledge, ready for others to refute and refine, so that our collective knowledge can proceed. Each of these headings could easily have a blog post of their own, but seeing as I’m up to 250+ words already, I’ll be brief and maybe come back to them again at a later date.

Realist research promises so much, but does it deliver? / Realist approaches don’t give us clear knowledge of what works and so are not helpful to decision makers.

In common with other methodologies, it depends on the quality of the actual research: from stakeholder engagement through design to data collection, writing and dissemination. What realism suggests is that there are ways of understanding how social programmes, policies or interventions work which are very useful to decision makers because, in one sense, they under-promise: they can help explain, in depth and with reasons, what is going on here, which in turn can help you think about what might happen there. Generalisable middle-range theory is the goal of realist research: portable theory which can be moved to different circumstances and tried out again, and in those attempts to disprove it, to find out where the theory doesn’t hold, we learn, refine the theory and try again. This is the scientific method.

Where I think realist research is different is that there is no claim of finality to the knowledge it produces: all theory is open to further development. Again, this is the scientific method… Brian Cox, explaining something incredible about the Universe, often starts by saying “Our best theory of this at the moment is…” This is good science; I just think realists are more upfront about it. And so when more research is needed, it is to refine what we already know, not to re-establish whether what we already know is what we already know… An example given in a lecture was of nearly 30 trials which happened after it had been established that audit and feedback tend to work in implementing behaviour change. Our lecturer asked what the point of the 30 trials was, and went on to talk about how what they are doing now is comparing intervention with refined intervention to get to the nub of ‘what works’… I couldn’t agree more: this careful, conscious process of starting with what mostly works and refining it is realist research. See here for an interesting essay on the topic: The Realist Foundations of Evidence-Based Medicine: A Review Essay

(Following a useful conversation on Twitter about this last night and this morning (@rjlhardwick), it seems that there are several other issues to take into account when asking whether or not realist research has achieved its promise and is useful for decision makers. (My first reflection on this, had I the time, would be to return to Ray’s original paper on realist synthesis, which started the ‘promise’ thing, and check out what he actually did promise, but I just don’t have time to do that right now… another post for a later date.) My next reaction was a face palm as I recalled that last year’s International Conference on Realist Research and Evaluation was called “From Promise to Practice”, and I was on the organising committee and also ran a workshop with the fabulous Lisa Burrows! How could I forget?! At the conference, an explicit requirement to present was to address how the realist work had made a difference in the real world. I don’t think there was a conference report, but I will talk to Gill Westhorp and Emma Williams from Charles Darwin Uni to see if they are going to produce one.

Other points raised by colleagues were: it’s too soon to call; there hasn’t been research done on this yet, although there has been work looking at the growth and development of realist research (see here and here for two examples); and it depends on what we mean by ‘useful’ and ‘used’, how that would be measured, and against what it would be compared – recognising that knowledge mobilisation is a relational process more than a linear one, and that the processes of decision making, and how evidence influences them, are complex and emergent, changing over time and space. Nevertheless, this would make a really interesting project.)

Realist research is qualitative

No it isn’t. Realism is a methodology and all methods are in, from the highest to the lowliest: it is a broad church that takes all comers. Quant data tends to tell us where there are patterns of effectiveness, which lead us to ask ‘why’, for which we need more qual research to understand mechanisms and context. Next.

Realist researchers were (are) patronising and a bit arrogant

This was a really interesting point, and something I had previously wondered about, but being new to academia (circa 6 years), I didn’t have first-hand knowledge of this. It seems that when realist research was first talked of, those who were doing the talking were felt to be rather patronising and probably arrogant about what they were doing, and (by inference or directly, I don’t know) derogatory about those who weren’t doing what they were doing.

My question in response was “What can I do about that? I’m interested in how we move the field of research methods and methodology forward… must I forever account for the sins of my forefathers?” But it is something to bear in mind and is, I reckon, one of the primary reasons why much more experienced and cleverer researchers than me reject realist approaches: the people that do realist research are, on some level, a bit up themselves or obnoxious. I could have responded by retorting about the giant in-group mentality of organisations like Cochrane or Campbell, but I would be pointing the finger back at ourselves: for whatever reason, when people get behind something, they really want an out-group to identify themselves as distinctive from. Same in life, same in politics, same in research. Sigh.

My hope is for a future where we recognise and mutually value the benefits of many different methodologies, and have moved beyond academic methodological arrogance. And to be part of that movement, I try to maintain a degree of humility in my words and an open mind when talking to other researchers who don’t share my philosophical underpinnings: I’m not so arrogant as to think that I have the whole truth, and I regard critical reflection on methods, methodology and my practice as an inherent part of being a good academic.

What is the use of two different realist reviews if they find two different answers to the same question? (Based on an example given over lunch.)

Ah, now I didn’t come up with a response to this at the time, but in the car coming home I thought to myself that if this were the case, then you’d look at both to see how the arguments are put together and what the theory was, and seek to adjudicate between or refine the programme theories developed. All knowledge of generative mechanisms is partial, so it would not be surprising to find different results from different folks using the same, or indeed different, data: what you would have here is a golden opportunity to test out the programme theory in each review against the other and, in doing so, understand better what is going on. This is my thinking on the matter, but I would love the opportunity to try this out.

Plus, to paraphrase Andrew Sayer, the number of times something happens has nothing to do with why that something is happening: the focus of realist research on explanation and causation leads the search to find out why something is happening, not merely to count the instances when it did. If two realist reviews identified the same mechanisms, then that would not be surprising, because as we know, mechanisms exist as powers and liabilities, forces, simple rules or reasoning and resources, whether or not we can ‘see’ them… we observe their effects, and if two reviews observed similar things, and came to similar conclusions about mechanisms, then hurrah. And if they didn’t, then by Jove, get that deerstalker and cape, my pipe and carriage, and the game’s afoot!

Some of them are crap.

Yes. True. So are some RCTs, systematic reviews, ethnographies and so on. And reporting guidelines/publication standards don’t necessarily prevent this, although they are a start. Next.

It’s a bandwagon.

Also yes, true, probably. I came to realist research when doing my public health MSc in 2010. It was not a bandwagon then and no one had really heard about it**. In fact, if you said you were doing realist research, people generally looked a bit baffled. The RAMESES project was just starting. Exeter Uni was one of the first in the country to undertake realist research. Our own Richard Byng was one of the first people to do a realist evaluation for his PhD. Etc.

Since then, interest in realist approaches has grown steadily: the NIHR commissions realist research now, and it seems everyone has at least heard of it, even if they’re not doing it. And a lot of places are doing it. And people are interested in it. The idea of it being a bandwagon sounds derogatory, as if realist research is in fashion now but will soon be replaced by the new kid on the block. This could be true, but a correct understanding of realism, how it applies to methods, and in particular how Pawson and Tilley write and talk about it, makes me think that it won’t: the foundations of the methodology are embedded in the writing and thinking of great social scientists whose work influences our work to this day (Sayer, Archer, Bhaskar, Campbell, Merton, and so on). And that bunch are still cited and relevant. So I don’t think it’s going to go out of fashion.

I also wonder why so many PhD students seem to choose it, and I have a few reflections on that: I wonder sometimes if this is because of how friendly and accessible the realist community is? The online JISCMail group for RAMESES is a haven of friendly and kind advice – sometimes from the authors of the RAMESES publication standards themselves! This accessibility is priceless, I think, and gives realist approaches an egalitarian feel. And within the realist community there are differences and debates about definitions, practices and so on: but these are entered into, in my experience so far, with an open mind and heart, recognising the partial knowledge we have and how that knowledge grows through disputation and disagreement. Plus, I don’t think I’ve met a realist that I didn’t like: they’re a fun and modest group in my experience. And finally, realist research is really hard work and makes you think; sometimes I feel like my brain is turning inside out. But I like that. Weird, I know.

And so if it is a bandwagon, then I don’t really care. The more the merrier I say. And it’s more fun than walking! Climb aboard!

Nevertheless, I don’t think or claim realist approaches are perfect, I know the way I practise them is in need of refinement, and I don’t necessarily think that the answer to what works, for whom, in what circumstances and why can only be found through realist research: plenty of other methodologies have other ways of approaching these things which can be useful too (like, obvs), and which can be incorporated into a realist frame of reference too… and often are (I’m thinking of Engager here). And I think there’s room for us as realists to continue to offer constructive, collegiate criticism of each other’s work, and in doing so to refine and improve our understanding, our methods and ultimately, in my field at least, the experiences of people running and using our healthcare services.

I doubt these answers changed the minds of my colleagues today, but I hope they offered a friendly, reasoned defence of the methodology. There really is no need to get hot under the collar when discussing methodology: we’re all in the business of trying to develop usable knowledge, and there are many ways of achieving that. So in closing, I’d just like to say that these are the kinds of responses I give when faced with these kinds of questions or comments: but I’d be interested to hear yours – so do get in touch: r.j.l.hardwick@exeter.ac.uk

(Some of these musings are based on earlier reflections from a post from 2016 following the London CARES conference, see points 3 and 7 in particular.)

* Ray’s description of the kind of realism he’s interested in. A Pawson Profile
** This is me shamefully making it clear that I have been around a bit, and therefore implying that what I think or say is credible… it is virtue signalling, and I’m sorry… not so humble or modest at that point, eh? My apologies.