Why I’m At Odds With Evidence-Based Practice in 2021

Over the last few years, I’ve been questioning what the term ‘evidence-based practice’ means to me personally and professionally as a Physiotherapist (Physical Therapist).

I understand what it is and why it’s important, but I can’t help but feel we’re collectively losing a little perspective.

Research has been the foundation of medicine, health, and fitness for decades. It drives decisions, gives us direction and helps shield us from potentially harmful or ineffective treatments. A guiding light if you will.

But clinically, it feels like our interpretation of research is shifting. And as a result, I think its role within the evidence-based practice model now feels dangerously unbalanced.

Knowing the internet, I’m a little apprehensive to expose my thoughts on this. However, I’ve recently come across a few like-minded people who’ve inspired me to challenge the status quo. I feel this is important because the one thing we’re all here to do – help others – might be eroding before our eyes. So here we are.

With that being said, here is what I’ve come to understand about the current state of evidence-based practice, the way we currently use research and why I think we can do better.

What is Evidence-Based Practice?

Evidence-based practice (or evidence-based medicine) is a term used to describe the use of evidence to support clinical decision-making.

Its ultimate goal in healthcare and fitness is to eliminate the use of ineffective and potentially dangerous practices and improve patient/athlete care.

The evidence-based approach was adopted in the 1990s to “de-emphasize” intuition in clinical decision-making. And as a Physio, it’s staggering to think it’s only 25-30 years old, particularly when you consider the high esteem in which it’s held.

The evidence-based practice model consists of three important pillars:

  • Research Evidence
  • Clinical Experience
  • Patient Feedback
[Infographic: the three pillars of evidence-based practice]

For the purposes of this article, it’s important to emphasize “evidence-based” means more than just research. It’s a mixture of scientific research, professional expertise and patient experience.

The Role of Research in Evidence-Based Practice

We use research to understand the relationship between variables. In healthcare and fitness, we want to know if what we’re doing is worthwhile and safe.

Interestingly, the volume of published research has enjoyed a meteoric rise over the years.

A report published by the International Association of Scientific, Technical and Medical Publishers in 2018 suggests around 3 million studies are published worldwide each year. They reference 4% annual growth, with 40% of the world’s articles generated by China and the US. Furthermore, they estimate global research publishing generated $25.7 billion in revenue in 2017. It’s big business.

Yet despite the power research wields, we often gloss over its cracks and freely succumb to its findings. So much so we seem to value research above all else, including the relative ‘art’ of being a therapist or trainer – clinical experience and patient feedback.

So despite such a clear framework, here’s why I’m at odds with the current state of evidence-based practice.

Why I’m at Odds with Evidence-Based Practice

From what I’ve come to understand as an Australian Physiotherapist, there are a few specific areas of evidence-based practice I feel are letting us down.

1. Our Priorities Seem Unbalanced

When tossing up whether to write something like this, I came across a great article written by Brett Bartholomew and Stuart McMillan – two titans of the coaching community. And what they wrote summarized my thoughts exactly:

“Regrettably, evidence-based practice has evolved to ignore the experience of the practitioner (or coach) and the perspective of the patient (or athlete).”

It seems that somewhere along the line, the evidence-based practice model has morphed from a way to support clinical decision-making into something at the mercy of whatever research suggests. We now rely on research to tell us what is and is not important, at the expense of thinking and exploring for ourselves.

This may be a product of the sheer amount of research available for consumption by practitioners and the general public. Research results need context to flourish, and not everyone has the clinical experience needed to gain this important perspective. Without context, research results appear more ‘definitive’ than they are. You don’t need to be an expert to interpret results, but experience helps.

As a therapist, I’m fortunate to be able to test the robustness of research in real-time with my patients. I can continue to use what works and discard what doesn’t. Not everyone has that luxury.

Part of me feels this growing dependence on research may hide a subtle lack of general self-confidence and critical thinking. We’re only human (and doing our best) of course, but research can encourage us to avoid thinking and exploring for ourselves if we’re not careful.

And I think a good example of this is the current debate raging around the role of posture in back pain.

The Role of Posture in Pain

As it stands, there’s no conclusive research linking poor posture to back pain. Many are therefore quick to devalue any connection. But they shouldn’t.

Clinically, poor postures, shapes, and positions are potentially the most important features. There’s a tangible link between a patient’s default shapes and their accumulated dysfunction. Pain is a complex process that’s hard to simplify via research, but we do know poor posture leads to poor function. We also know that poor function sets tissue up for pain. And it’s hard to appreciate this link unless you’re attending to it and seeing the results.

This is an ongoing example of something research is yet to “support” despite growing clinical certainty. And it touches on another issue with the current state of evidence-based practice.

2. Research is Inherently Flawed but We Don’t Seem to Care

Despite research dominating evidence-based thinking, it’s easy to forget its inherent limitations.

The Randomized Controlled Trial (RCT) is considered the ‘gold standard’ design for testing health-related theories. However, according to an Australian-based rating system – the PEDro scale – an overwhelming number of RCTs display strong potential for bias and rate low for quality. Yet we so easily take what they say as gospel.

[Figure: the PEDro scale – a 10-point checklist for assessing the quality of research trials]

Staggeringly, the average PEDro score is 5.1 out of 10. That’s terrible. What’s worse, only 37% of studies rate ≥ 6/10, or “moderate to high quality”. And judging by the graph below, the majority of these are not 9s and 10s. In fact, 20,800 of the 32,300 individually indexed trials rated by PEDro are considered subpar.

[Figure: distribution of PEDro scores in 2019]

In a perfect world, every single study would rate 10/10, or very close to it, but by PEDro’s standards, roughly two in three studies are potentially compromised. This is a huge concern for those who don’t explore beyond a study’s conclusion. And it highlights another important issue facing the current research culture.
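
For anyone who wants to sanity-check those headline numbers, here’s a minimal sketch (my own, built only from the figures quoted above – not any official PEDro tooling) showing how they hang together:

```python
# A minimal sketch using only the figures quoted above (not official PEDro tooling).
indexed_trials = 32_300   # trials individually rated on the PEDro database
subpar_trials = 20_800    # trials scoring below 6/10 ("subpar")

moderate_or_better = indexed_trials - subpar_trials
share_subpar = subpar_trials / indexed_trials
share_moderate = moderate_or_better / indexed_trials

print(f"Subpar (<6/10): {share_subpar:.0%}")               # ~64%, i.e. roughly two in three
print(f"Moderate to high (>=6/10): {share_moderate:.0%}")  # ~36%, in line with the ~37% quoted
```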

We Don’t Apply Context

Accounting for design bias is one thing, but how many take the time to apply context to the results? If a study reports a statistically significant result – one unlikely to be explained by chance alone – we still need to know whether the findings can be generalized to people outside the study.

Simple features like the number of participants and their individual characteristics (age, gender, level of health and fitness, etc.) can cast serious doubt on our ability to generalize findings if they’re not broad enough.

Take a recent study that questions whether training to failure is the best way to build muscle size. It used a small number of poorly trained men and exposed them to a specific resistance training plan. Its results were interesting but tricky to generalize with confidence – especially to well-trained women. These features don’t automatically render the results good or bad, but they do add some color to a conclusion often viewed as black and white.

To further complicate things, there’s a growing push to retire the term ‘statistical significance’. Nature.com suggests many researchers are too quick to categorize their results as either ‘statistically significant’ (p-value ≤ 0.05) or ‘not statistically significant’ (p-value > 0.05). This is important as many interpret not statistically significant to mean ‘no effect’, when this may not be the case.

[Figure: the common misinterpretation of a non-significant result as ‘no effect’]

They assessed 791 published studies and found approximately half wrongly interpreted a non-significant result as meaning ‘no effect’.
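
To make that pitfall concrete, here’s a small simulated sketch (my own illustration, not taken from the Nature analysis): two hypothetical trials study the same true effect, but only the larger one is likely to clear the p ≤ 0.05 bar.

```python
# A simulated illustration (my own, not from the Nature analysis) of why
# "not statistically significant" is not the same as "no effect".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 5.0   # the same underlying improvement in both hypothetical trials
noise_sd = 15.0     # person-to-person variability

for n in (20, 200):  # a small trial and a larger one
    control = rng.normal(0.0, noise_sd, n)
    treated = rng.normal(true_effect, noise_sd, n)
    t_stat, p_value = stats.ttest_ind(treated, control)
    effect = treated.mean() - control.mean()
    verdict = "significant" if p_value <= 0.05 else "not significant"
    print(f"n={n:>3}: estimated effect = {effect:4.1f}, p = {p_value:.3f} ({verdict})")

# Both trials estimate a similar effect; only the larger one is likely to reach
# p <= 0.05. Reading the smaller trial as proof of "no effect" is the misinterpretation.
```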

So it’s no surprise publishers like the American Physiological Society now support divorcing the p-value “from any word or phrase that reflects statistical significance.”

Similarly, another must-read article on statistical significance warns we should not “conclude anything about scientific or practical importance based on statistical significance (or lack thereof).”

So not only are many research designs potentially biased and their conclusions hard to generalize, but the interpretation of the results themselves can rest on a false premise.

At the end of the day, we just can’t blindly rely on research results without thinking for ourselves and applying clinical or personal judgement.

With this in mind, I’ve always thought you can argue with research – but you can’t argue with clinical results. Well, you can technically argue with both, but it’ll often say more about the arguer than anything else. And this brings me to my next point.

3. Research Culture Has Become Toxic

The third reason I’m at odds with evidence-based practice is the culture around its use seems toxic.

This may have more to do with current internet culture and social media ‘decorum’, but far too many of us are closed-minded, aggressive and dismissive.

Our unbalanced view of evidence-based practice gives some the power to reject an idea without having to think deeply about it. I struggle with those who staunchly live and die by research results. The ‘prove me wrong’ or ‘show me the research’ types genuinely get in the way of progress.

As someone actively trying to push the boundaries of what we know about human function, it’s tiresome to have my clinical results dismissed because research hasn’t caught up yet. And this toxicity gets in the way of the sole reason my industry exists – to help people.

In my opinion, research can give authority to those without the relevant experience to back it up. And I’ve seen this firsthand when promoting a recent article on why we should no longer ice an injury. For some reason, challenging a long-held industry belief is more likely to be met with derision and arrogance than support – irrespective of any logic, facts or robust clinical evidence. And I’m unsure why.

Surely an open mind can only benefit us as a community? Are we so insecure and easily threatened by new concepts that we can’t have a civil conversation or debate? Again, we’re not here for ourselves, we’re here for others. And they’re ultimately the ones who suffer from a toxic culture.

The Future of Evidence-Based Practice

All things being considered, I can see a huge gap in the current evidence-based practice model. It’s clearly an important part of the decision-making process, but something needs to change.

I think we can start by better implementing the following:

  • Use something like the PEDro scale to look for design bias.
  • Consider the ability to generalize results.
  • Place greater value on clinical treatment results (where possible).
  • Increase the value of patient experience and feedback.
  • Maintain an open mind when presented with new information.
  • Ultimately think for ourselves.

Let’s move away from using research to tell someone what they should be doing and instead use this information to figure it out in real-time for ourselves.

With this in mind, I think it’s important to clarify that I’m not anti-research. Nor am I suggesting we tip the scales back too far in favor of clinical experience and patient values. I’m simply preaching balance (and mature conversation). Clinical experience and patient input come with their own risks of bias – but the skilled will find an optimal balance to achieve the best outcome. Let’s be more skilled.

Conclusion

The current state of evidence-based practice is not broken by any means, but I think we need a healthy dose of perspective.

We should look at research as it was originally intended – to help support our clinical decision-making, not replace it.

Once we strike a better balance, I can see it fostering better, more highly skilled practitioners and coaches, a more supportive community and, ultimately, improving what we’re all here to do – helping those in need.

How do you view the current state of evidence-based practice? Are you seeing the same issues? Let me know below!



3 thoughts on “Why I’m At Odds With Evidence-Based Practice in 2021”

  • Not sure this qualifies as an example, but here goes: I’m a biodynamic craniosacral therapist, and that’s apparently not evidence-based. Makes no difference to me, however, as I’ve had some remarkable results with it, both in my own experience and with some of my clients. We don’t know why it works, per se, but we know it does, and perhaps someday technology will be able to explain. In the meantime, excluding BCST as an effective form of therapy because it’s not evidence-based seems ridiculous. Witness that, until very recently, medical schools taught that cranial bones eventually fuse, but this was disproven when NASA scientists (for one) were able to “prove” cranial bone movement. Strangely, the claim that cranial bones fuse was accepted enough to be taught in medical schools without any evidence to support it – an assumption claimed as truth or fact!

  • Thank you for this article. This is so apropos. As an energy practitioner, I find modern evidence-based research basically a waste of time. As long as the concept of our world being total energy isn’t accounted for, no research whatsoever will ever be definitive. The words wholistic/holistic, energetic, mindful, intuitive, emotional, etc. must all be included in any evidence-based research. As long as “treatment”, whatever it may be, is not understood as coming from all aspects of the practitioner, as well as being offered to the whole of the client, there is no evidence to rely on.
    Most research is “disconnected” from the human factor. It is mostly one-dimensional. We may be considered machines in some way, but we are not ‘machines’ in the conventional sense. One part may not be removed from the whole, be it a specific aspect or literally a physical part of the whole for study. So in my understanding after many years of practice, research is always flawed, be it from the researcher’s viewpoint visually, orally, tangibly, etc … or emotionally or spiritually or any other aspect of the human part of the researcher. There is always a bias of SOME kind. And the energy of that bias will, inevitably, always enter into play in the findings.
