Anthony Delitto, PT, PhD, FAPTA, Research in Low Back Pain: Time to Stop Seeking the Elusive “Magic Bullet”, Physical Therapy, Volume 85, Issue 3, 1 March 2005, Pages 206–208, https://doi.org/10.1093/ptj/85.3.206
For physical therapists who practice in the area of low back pain (LBP) and pride themselves on being evidence-based practitioners, the period 1995 to 2005 would seem to be the decade of our dreams:
An unprecedented number of randomized controlled clinical trials (RCTs) investigating interventions for LBP were published.
Virtually every intervention for LBP that a physical therapist would use was subjected to an RCT.
In addition to the substantial number of RCTs published since 1996, 37 systematic reviews gauged the evidence for interventions for LBP.
Finally, evidence-based practice guidelines were published in the United States and other countries, including the United Kingdom, New Zealand, Australia, and Sweden.
In this issue of Physical Therapy, Koumantakis and colleagues contribute the results of yet another RCT, one that investigated the adjunct effect of trunk stabilization exercises when added to “general exercise” in people with recurrent LBP.
Given all of this accumulating evidence, we might conclude that we should be better able to address some of the shortfalls that have plagued us over the years in our efforts to manage LBP in a cost-effective manner. Indeed, with such a wealth of new evidence, we might assume that clinicians should now be capable of integrating the best evidence with each individual clinical encounter to eventually arrive at an optimal level of practice proficiency. Instead, we are faced with realities that do not bode well for such an outcome:
Indefensible variation in clinical practice that often is at complete odds with any standard of practice, whether that standard is evidence based or not.
Suboptimal clinical outcomes that generally have not improved over the years and might actually be getting worse.
Costs that continue to rise and take up an inordinate amount of resources at a time when we can least afford it.
One possible explanation for the persistent shortfalls is the well-documented failure to implement evidence in practice, a problem that certainly is not unique to physical therapy. As with other health care professions, one of the reasons for this “disconnect” may be that many clinicians are rigid and unwilling to change with the times and to adopt more effective approaches to patient care. But another explanation might be that clinicians see deficiencies in current methods used to study LBP—and therefore deficiencies in the evidence.
The deficiencies include designs where all subjects are given stereotypical treatment regimens without regard to clinical presentation (other than loosely defined entrance criteria such as “recurrent”). The experimental intervention is compared to the intervention used with the comparison group (eg, “usual care”), frequently resulting in the conclusion that the outcomes of both approaches are equivalent. Even when there is a difference between the experimental and comparison groups, the effect sizes often are too small to warrant very much enthusiasm.1 In consolidating RCTs using these kinds of approaches, systematic reviews have led to conclusions that many interventions (eg, “exercise”) do not have a role in management of patients with acute LBP at all!2
The use of these approaches, which can best be described as the “search for the magic bullet,” results in what I believe are legitimate barriers to clinical implementation—especially given that we continue to see “magic bullet” studies even though consensus opinion has identified the classification of patients into relevant subgroups as an immediate research priority.3 The calls for classification rest on the underlying belief that LBP is a heterogeneous condition with regard to its etiology and responsiveness to interventions—thus precluding the viability of a magic bullet treatment that would be effective for all patients.
Although it may be relatively easy to grasp the fact that different interventions may be indicated for different subgroups of patients, the idea that classification is ongoing within an episode of care adds complexity. Good practitioners can be expected to vary their approach to a single patient within an episode of care; in fact, it has been demonstrated that physical therapists change interventions as the patient's symptoms become less acute and as the patient can tolerate increased activity levels.4 For instance, physical therapists may use manual physical therapy techniques early in an episode of care for acute LBP and follow up with general strengthening and aerobic exercise later in the episode when the patient is able to better tolerate increased activity levels. This approach also is consistent with practice guidelines that call for judicious use of manipulation in acute stages and introduction of aerobic activity and exercise when tolerated.5 Designing a high-quality RCT that captures the nuances of this level of decision making presents a challenge, however. More important, even if investigators are able to implement this type of design, we do not know how highly systematic reviews would value them.
Because magic bullet approaches are far easier to design and implement in a classic RCT, they are more likely to lead to “methodologically sound” studies. In fact, I would argue that systematic reviews value rigor to a point where the most “highly valued” studies (those that represent the “best evidence”) must include—almost by definition—magic bullet approaches. But these approaches are rarely representative of real-world practice, where physical therapists spend considerable time examining patients and evaluating patient data with the presumed goals of matching multifaceted treatments based on their findings and, at subsequent visits, changing the interventions based on new findings.
In many systematic reviews, studies with “multiple co-interventions,” which also may better reflect real-world practice, are eliminated from consideration due to legitimate methodological issues,6–10 so the clinician reading the systematic reviews is forced to evaluate the possible interventions in a piecemeal fashion. It is extremely rare, of course, that any one intervention would comprise the sole treatment for a disorder such as LBP. In addition, it is extremely unlikely that the effectiveness of multiple interventions can be measured by simply adding the effects of individual interventions together. Astute clinicians (eg, those who read the literature and who most likely practice using evidence-based principles) quickly realize that highly relevant studies may have been eliminated from the systematic review as a result of author choice or exclusion criteria related to multiple co-interventions. If systematic reviews value methods where interventions are not representative of real-world clinical care, then most clinicians are not going to blindly follow the recommendations of those systematic reviews.
One way to add credibility to systematic reviews is to conduct studies that not only are methodologically sound but are clinically relevant—which requires both a different approach to study design and the use of systematic review criteria that value such an approach. With regard to the former, consider 2 of the more commonly used interventions for LBP: (a) specific exercises based on directional preference and (b) spinal manipulation. In previous systematic reviews, these widely prescribed interventions were judged either to be ineffective or to have effect sizes that were too small to warrant a highly enthusiastic recommendation. Subsequent randomized trials have demonstrated that effect sizes increase dramatically when classification and matching are taken into account.11,12 Those trials took into account the clinician's examination of the patient, the integration of the examination findings, and the clinical skill involved in the intervention—all of which allow for greater clinical relevance. Koumantakis and colleagues used the magic bullet approach, which might explain their results. If they had been able to target stabilization exercises to the subgroup of patients with signs of instability (ie, matching the intervention to the examination findings), their results might have been quite different. None of the subjects had any signs of instability, which would seem to be a prerequisite finding before making a decision to use stabilization exercises.
We have come a long way in constructing an evidence-based approach in managing patients with LBP. But even if we begin to produce and value studies that take into account the heterogeneity of LBP and use methods that are clinically relevant, voluntary behavioral change among clinicians will lag, as the problem is more complex than I've presented here. There will always be evidence-based practice advocates who feel their role is to dictate practice, just as there will always be clinicians who refuse to change despite incontrovertible evidence that certain interventions improve patient outcomes—and other interventions do not.
Today, we know enough about LBP to state that there will be a limited return on any investment we make in studies that search for the “one intervention” that will be effective for all patients. At the same time, it might be premature to renounce interventions that have been debunked by studies that have used magic bullet approaches. Isn't it time that our approaches to studying LBP evolve to include not only interventions, but the examination and evaluation rules that are used to select and progress such interventions?
References
Philadelphia Panel.