
Why I'm At Odds With Evidence-Based Practice

Over the last few years, I've been questioning what the term 'evidence-based practice' means to me personally and professionally as a Physiotherapist.

I understand what it is and why it’s important, but I can’t help but feel we're collectively losing a little perspective.

Research has been the foundation of medicine, health, and fitness for decades. It drives decisions, gives us direction and helps shield us from potentially harmful or ineffective treatments. A guiding light if you will.

But clinically, it feels like our interpretation of research is shifting. And as a result, its role within the evidence-based practice model now feels dangerously unbalanced to me.

Pleasingly, I’ve recently come across a few like-minded people who’ve inspired me to challenge the status quo. I feel this is an important conversation, as the one thing we’re all here to do - help others - may be becoming unnecessarily compromised before our eyes.

Knowing the internet, I’m a little apprehensive to expose my thoughts on this, and I ask that you please keep an open mind and consider what my experience has highlighted to me.

So here we are.

With that being said, here is what I’ve come to understand about the current state of evidence-based practice, the way we currently use research and why I think we can do better.

 

What is Evidence-Based Practice?

Evidence-based practice (or evidence-based medicine) is a term used to describe the use of evidence to support clinical decision-making.

Its ultimate goal in healthcare and fitness is to eliminate the use of ineffective and potentially dangerous practices and improve patient/athlete care.

The evidence-based approach was adopted in the 1990s to “de-emphasize” intuition in clinical decision-making. And as a Physio, it’s staggering to think it’s only 30-35 years old - particularly when you consider the high esteem in which it’s now held.

The evidence-based practice model consists of three important pillars:

  • Research Evidence
  • Clinical Experience
  • Patient Feedback

For the purposes of this article, it’s important to emphasize “evidence-based” means more than just research. It’s a mixture of scientific research, professional expertise and patient experience.

 

The Role of Research in Evidence-Based Practice

We use research to understand the relationship between two things. In healthcare and fitness, we want to know if what we're doing is worthwhile and safe.

Interestingly, research output has enjoyed a meteoric rise over the years.

A report published by the International Association of Scientific, Technical and Medical Publishers in 2018 suggests 3 million studies are published worldwide each year. They reference 4% annual growth, with 40% of the world's articles generated by China and the US. Furthermore, they estimate the global journal publishing industry generated $25.7 billion in revenue in 2017. It’s big business.

Yet despite the power research wields, we often gloss over its cracks and freely succumb to its findings. So much so we seem to value research above all else, including the relative 'art’ of being a therapist or trainer - clinical experience and patient feedback.

So despite such a clear framework, here's why I'm at odds with the current state of evidence-based practice.

 

Why I’m at Odds with Evidence-Based Practice

From what I've come to understand as an Australian Physiotherapist over the last 20 years, there are a few specific areas of evidence-based practice I feel are letting us down.

 

1. Our Priorities Seem Unbalanced

When tossing up whether to write something like this, I came across a great article written by Brett Bartholomew and Stuart McMillan - two titans of the athlete coaching community. And what they wrote summarized my thoughts exactly:

“Regrettably, evidence-based practice has evolved to ignore the experience of the practitioner (or coach) and the perspective of the patient (or athlete).”

It seems that somewhere along the line, the evidence-based practice model has morphed from a way to support clinical decision-making into something at the mercy of whatever the research suggests. We now rely on research to tell us what is and is not important, at the expense of thinking and exploring for ourselves.

https://twitter.com/BradSchoenfeld/status/994219824720080897?s=20

This may be a product of the sheer amount of research available for consumption by practitioners and the general public. Research results need context to flourish, and not everyone has the clinical experience needed to apply that important perspective.

Without context, research results can easily become more ‘definitive’. It's much easier to say to a patient "research says this works", than it is to suggest "my clinical experience tells me this works." It may seem like semantics, but the difference is huge.

You don’t need to be an expert to interpret results, but experience clearly helps.

As a therapist, I’m fortunate to be able to test the robustness of research in real-time with my patients. I can continue to use what works and discard what doesn’t.

Not everyone has that luxury.

Part of me feels this growing dependence on research may hide a subtle lack of general self-confidence and critical thinking. We’re only human (and doing our best of course), but research can encourage us to avoid thinking and exploring for ourselves if we’re not careful.

And I think a good example of this is the ongoing debate raging around the role of posture in pain.

 

The Role of Posture in Pain

As it stands, it's hard to connect "poor posture" to pain via research, despite it being quite easy clinically - once you have all the necessary information.

Interestingly, I've actually come to understand that poor postures, shapes, and positions are potentially the most important features to consider when trying to understand why someone has developed back or neck pain in the first place - even just to help rule out other things.

For many, there is a tangible, repeatable, and reliable clinical link between the default stationary shapes they get into most often throughout their day and the location of their accumulated dysfunction.

When you eventually locate the exact underlying cause of someone's neck or back pain - where treating this area consistently improves their symptoms - the next question is always: "Why is this exact part of the spine dysfunctional?" Especially when other areas directly above, below, or on the other side are not.

And when you layer a person's most common sitting, standing, or bending shapes over top, we can often see a sustained change in loading taking place through that exact area. More importantly, we can see improvements in the stiffness, tightness and tenderness associated with this area just by putting people back into a more anatomically optimal shape over time.

However, as I mentioned above, there is still a disconnect here between what research can validate and what clinical experience can highlight.

One reason is that pain is a highly complex process, which alone makes it hard to simplify via research. For many, nothing hurts until it hurts, which often gives people the sense that the beginning of their pain was the beginning of their dysfunction. You may have bent over that one time and your back got sore. Perhaps you just woke up with a sore neck one day. Maybe you just sneezed and now your back hurts.

The key point to consider here is that none of these events are abnormal. The body is far too robust for simple things to create dysfunction on their own. Clinically, it's become abundantly clear to me that the onset of people's pain is rarely the start of something new, but rather the final straw.

As a result, understanding how to resolve someone's pain is more an exercise in working out why an area that should not be sore has become so, rather than just trying to get that soreness to go away in isolation.

And it's this context that is often beyond the capacity of structured research. It's hard for research to support something so multifaceted, despite the greater certainty we can reach clinically.

And this touches on another issue with the current state of evidence-based practice.

 

2. Research is Inherently Flawed But We Don't Seem to Care

Despite research dominating evidence-based thinking, it's easy to forget its inherent limitations.

The Randomized Controlled Trial (RCT) is considered the 'gold standard' design for testing health-related theories. However, according to an Australian-based rating system - the PEDro scale - an overwhelming number of RCTs display a strong potential for bias and rate low for quality. Yet we so easily take what they say as gospel.


Staggeringly, the average PEDro score is 5.1 out of 10. That's terrible. What's worse, only 37% of studies rate ≥ 6/10, or "moderate to high quality". And judging by the graph below, the majority of these are not 9s and 10s. In fact, 20,800 of the 32,300 individually indexed trials rated by PEDro are considered subpar.

Distribution of PEDro scores in 2019.

In a perfect world, every single study would rate 10/10, or very close to it, but by PEDro's standards, at least two in three studies are potentially compromised. This is a huge concern for those who don't explore beyond a study's conclusion.
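
To put those numbers in perspective, here's a quick back-of-the-envelope check - a simple Python sketch of the arithmetic, using only the figures quoted above:

```python
# A quick, illustrative check of the PEDro figures quoted above.
total_trials  = 32_300   # individually indexed trials rated by PEDro
subpar_trials = 20_800   # trials falling short of "moderate to high quality"

subpar_share = subpar_trials / total_trials
print(f"Subpar share: {subpar_share:.0%}")                 # ~64% - roughly two in three
print(f"Moderate-to-high share: {1 - subpar_share:.0%}")   # ~36% - close to the 37% cited
```

In other words, the 'two in three' figure isn't hyperbole - it falls straight out of PEDro's own numbers.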

Furthermore, this study looked at sixty-two studies from PEDro that scored greater than 6/10. They found that "many of the positive statistically significant conclusions from high-quality randomized controlled trials in sports physical therapy are probably no more than suggestive." This means that while a study did potentially find something of note, we can't be sure how noteworthy it actually is.

Especially when approximately 1 in 10 studies provided a 'false positive' - that is, found something that statistically may not have actually been there.

And this highlights another important issue facing the current research culture.

 

We Don't Apply Context

Accounting for design bias is one thing, but how many take the time to apply context to the results? If a study reports a statistically significant result - one unlikely to be explained by chance alone - we still need to know whether these findings can be generalized to people outside the study.

Simple features like the number of participants and their individual characteristics (age, gender, level of health and fitness, etc.) can cast serious doubt on our ability to generalize findings if not broad enough.

Take this study that questions whether training to failure is the best way to build muscle size. It used a small number of poorly trained men and exposed them to a specific resistance training plan. Its results were interesting but tricky to generalize with confidence - especially to well-trained women. These features don't automatically render the results good or bad, but they do add some color to a conclusion often viewed as black and white.

To further complicate things, there's a growing push to retire the term 'statistical significance'. Nature.com suggests many researchers are too quick to categorize their results as either 'statistically significant' (p-value ≤ 0.05) or 'not statistically significant' (p-value > 0.05). This is important, as many interpret 'not statistically significant' to mean 'no effect' or 'not important', when this may not be the case.


They assessed 791 studies and found approximately half misinterpreted the significance of their own results.
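
To make that misinterpretation concrete, here's a minimal, hypothetical sketch in Python (the effect size, sample sizes and numbers are mine, purely for illustration - they're not drawn from any of the studies above). It shows how a genuine effect can still return p > 0.05 when a trial is small, which is exactly why 'not statistically significant' can't be read as 'no effect':

```python
# Hypothetical sketch: a real effect can still fail to reach p <= 0.05 in a small
# sample. "Not statistically significant" does not mean "no effect".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.4                  # a genuine, moderate standardized effect (assumed)

for n_per_group in (15, 200):      # hypothetical small vs large trials
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"n per group = {n_per_group:3d} -> p = {p_value:.3f}")

# The small trial will often land above 0.05 despite the simulated effect being real;
# the large trial will rarely do so. The p-value here says more about sample size
# than about whether the effect exists.
```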

So it's no surprise the journals of the American Physiological Society now support divorcing the p-value "from any word or phrase that reflects statistical significance."

Similarly, another must-read article on statistical significance warns we should not "conclude anything about scientific or practical importance based on statistical significance (or lack thereof)."

So not only are many research designs potentially biased and their conclusions hard to generalize, but the results themselves can be built on a false premise.

Further to this, a thought-provoking article by the Physio Network highlights that even the abstract (the summary at the top of a research paper) can inadvertently (or advertently) mislead us, and that abstracts "commonly misrepresent the findings of their study."

This wouldn't be an issue if we just took the time to read the paper ourselves, but how often do people actually do that? Especially when a large proportion of full articles are hidden behind a paywall or subscription service.

At the end of the day, this isn't to say that research as a whole is bad or untrustworthy - just that we can't blindly rely on research results without thinking for ourselves and applying clinical or personal context.

With this in mind, I've always thought you can argue with research - but you can’t argue with clinical results. Well, you can technically argue with both, but doing so will often say more about the arguer than anything else. And this brings me to my next point.

 

3. Research Culture Has Become Toxic

The third reason I'm at odds with evidence-based practice is the culture around its use seems toxic.

This may have more to do with current internet culture and social media 'decorum', but far too many of us seem closed-minded, unnecessarily aggressive and dismissive.

An unbalanced view of evidence-based practice gives some the power to reject an idea without having to think too deeply about it. I struggle with those who staunchly live and die by research results. The 'prove me wrong' or 'show me the research' types genuinely get in the way of progress - and helping people. This isn't to say we shouldn't hold people accountable and question things that seem questionable, but again, we need the context and open-mindedness to do it properly.

As someone actively trying to push the boundaries of what we know about how the human body functions best and falters, it's tiresome to have my clinical results dismissed because research hasn't caught up yet.

This is especially so when people in my industry tend to dismiss things without taking the necessary time and energy to test them out clinically for themselves.

And this toxicity gets in the way of the sole reason my industry exists - again, to help people.

In my opinion, research can give authority to those without the relevant experience to back it up. I saw this first-hand a few years ago when promoting an article I put together on why we should no longer ice an injury. For some reason, challenging a long-held industry belief is more likely to be met with derision and arrogance than support - irrespective of any logic, facts or robust clinical evidence. And I'm not sure why.

Surely an open mind can only benefit us as a community? Are we so insecure and easily threatened by new concepts that we can't have a civil conversation or debate? Again, we're not here for ourselves, we're here for others. And they're ultimately the ones who suffer from a toxic culture.

 

4. Correlation vs Causation

The final reason I find the evidence-based practice space challenging revolves around the idea of correlation versus causation.

For those unaware, in my industry a correlation suggests some degree of a link between an intervention or treatment and the desired outcome - but, on its own, it can't tell us that one actually produced the other. Causation, on the other hand, suggests that one thing most likely influences the other.

And, depending on how argumentative someone wants to be, it's easy to downplay potentially important results by suggesting there's only a correlation involved. In short, correlations are often looked upon more negatively and causation more favourably.
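
As a rough illustration of why a correlation on its own gets treated so cautiously, here's a small hypothetical simulation (again in Python, with variable names and numbers invented purely for this example): a hidden third factor can make two things correlate even when neither directly causes the other.

```python
# Hypothetical sketch: a hidden confounder produces a sizeable correlation between
# two variables that have no direct causal link to each other.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

activity_level = rng.normal(0, 1, n)                         # unmeasured confounder
treatment_use  = 0.8 * activity_level + rng.normal(0, 0.6, n)
pain_reduction = 0.8 * activity_level + rng.normal(0, 0.6, n)

r = np.corrcoef(treatment_use, pain_reduction)[0, 1]
print(f"correlation = {r:.2f}")    # roughly 0.6-0.7, despite no direct causal link
```

That's the legitimate reason correlations are read with care - not a reason to dismiss them outright.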

BUT, what I find is almost always lost in this conversation is that the label of causation can only really be achieved once enough consistent correlation has been found.

So demonizing one partially subtracts from the other.

Again, instead of bringing our emotions into an analytical conversation, we just need to take a breath, shift our perspective a little, and continue to keep an open mind about what's actually working, and what isn't.

 

The Future of Evidence-Based Practice

All things being considered, I can see a huge gap in the current evidence-based practice model. It’s clearly an important part of the decision-making process, but something needs to change.

I think we can start by better implementing the following:

  • Use something like the PEDro scale to look for design bias.
  • Consider the ability to generalize results.
  • Place greater value on clinical treatment results (where appropriate).
  • Increase the value of patient experience and feedback.
  • Maintain an open mind when presented with new information.
  • Ultimately, think for ourselves and test things out first-hand.

Let’s move away from using research as a definitive crutch that tells us what we should and should not be doing, and instead use this information as a platform to figure things out in real-time for ourselves.

With this in mind, I think it's important to clarify that I'm not anti-research. Nor am I suggesting we tip the scales back too far in favour of clinical experience and patient values.

I'm simply preaching balance - and mature conversation. Clinical experience and patient input come with their own risks of bias - but the skilled and reasonable will find an optimal balance to achieve the best outcome for the people we are here to help.

Let's be more skilled and reasonable.

 

Conclusion

The current state of evidence-based practice is not broken by any means, but I think we need a healthy dose of perspective.

We should look at research as it was originally intended - to help support our clinical decision-making, not replace it.

Once we strike a better balance, I can see it fostering better, more highly-skilled practitioners and coaches, building a more supportive community and ultimately improving what we are all here to do - help those in need.

How do you view the current state of Evidence-based Practice? Are you seeing the same issues? Let me know below!

 
