“Recovery: great in theory, but where’s your evidence?”

PeaPod has a great post addressing the rejection of recovery on the basis of “evidence”:

…existing research is focussed on professionally dominated, time limited treatment and uses “out of date” methods, seeing treatment as technology. In other words, you come along, get treatment “done” to you and then outcomes are measured. This misses so much, such as the importance of the relationship between the client and the professional which is key to success.

Different sorts of treatment have similar results, timescales for measuring outcomes are way too short and there is a wealth of other influences which we don’t measure. This includes wider social networks, which are likely to be more influential than treatment in the longer term. The slide that struck home most powerfully to me was this one where the prof argues that we have:

A Restricted Definition of Science and Knowledge Production
• Which ignores culture and assumes findings are universal
• Which assumes research is value-free and that researchers are neutral
• Which privileges top-down, expert theory over tacit, implicit knowledge
• Which ignores the patient’s view

In worshipping the god of randomised controlled trials above all others we end up with a distorted view where something “easy” to measure, like a prescribing intervention, gains dominance. This limits our knowledge (absence of evidence is not evidence of absence) and limits the choices clients have when coming to treatment.

The point that the emphasis on randomised controlled trials generates a bias toward easy-to-measure interventions could easily turn into a cop-out, but it seems like a very important one.

Read the rest here.

2 thoughts on "“Recovery: great in theory, but where’s your evidence?”"

  1. Thanks for covering this and also for making the point about RCTs. I don’t think I’ve ever heard it put as clearly as that: RCTs created biased evidence. Kinda blunt and kinda true.
