October 2019 Journal Club
October 8, 2019 at 3:11 pm · #7959 · Dhinu Jayaseelan (Moderator)
Hi all,
Attached is the Farooq et al. 2018 article. I’ll be presenting a patient case next week for which this paper seemed relevant. Please read through the article and consider the following questions (please share your thoughts with the group; the purpose is to critically appraise this paper and facilitate discussion from a clinical perspective):
1) Given the authors’ purpose, did the conclusions make appropriate sense? (Similar to Kevin’s discussion last month, are they reporting on something they actually looked at with their study?)
2) Which components of this research study reflect realistic / contemporary clinical practice? Which components seem less relevant to what we do in the clinic? Be specific.
3) Are there additional limitations of their study methods or results beyond those mentioned in the article itself? If so, please describe them. (Research by design is not perfect; we should be willing and able to poke holes in studies even if the authors don’t do it themselves.)
4) How much time did you honestly spend looking through this article? Do you feel that it would be realistic to spend the same amount of time on a similar article while in clinic with a challenging patient? What are some perceived barriers (beyond physical time) that make it harder to translate research findings into clinical practice?
Attachments: Farooq et al. (2018)
October 15, 2019 at 11:12 pm · #7977 · helenrshep (Participant)
1. Conclusions make sense?
Short answer: yes, I think so. However, the mean differences were very close to the MCID and MDC. For example, the neck flexor endurance MDC is 17.8 seconds and the mean difference was only 18.45 seconds. So I think they showed that manual therapy mobilization improves outcomes, but I’m not sure by how much. Kevin talked about overlapping confidence intervals discrediting findings, and I’m not positive it’s the same thing, but the means in the table also overlap once you take the standard deviations into account, which may negate the findings of this study. The other issue is that the only follow-up was at 4 weeks, so the study doesn’t look at longer-term outcomes.
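To make that comparison concrete, here is a rough back-of-the-envelope sketch in Python. Only the 17.8 s MDC and the 18.45 s mean difference come from the discussion above; the group means, standard deviations, and sample sizes are made-up placeholders (swap in the values from the paper’s table), and the interval is a simple normal approximation rather than anything reported by Farooq et al.

import math

mdc = 17.8                                  # neck flexor endurance MDC in seconds (quoted above)
mean_mob, sd_mob, n_mob = 40.00, 15.0, 30   # HYPOTHETICAL: mobilization + routine PT group
mean_rpt, sd_rpt, n_rpt = 21.55, 14.0, 30   # HYPOTHETICAL: routine PT only group

diff = mean_mob - mean_rpt                  # between-group mean difference (18.45 s here)
se_diff = math.sqrt(sd_mob**2 / n_mob + sd_rpt**2 / n_rpt)
ci_low, ci_high = diff - 1.96 * se_diff, diff + 1.96 * se_diff

print(f"Mean difference: {diff:.2f} s (95% CI {ci_low:.2f} to {ci_high:.2f})")
print("Point estimate exceeds the MDC?", diff > mdc)
print("Entire CI above the MDC?", ci_low > mdc)   # the stricter check

With spreads anywhere near these hypothetical SDs, the point estimate clears the MDC by a hair while the lower end of the interval does not, which is exactly the “yes, but I’m not sure by how much” situation above.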
2. Relevance to clinical practice?
We are able to use the NDI, a goniometer, and the neck muscle endurance test. We don’t use infrared lamps, ultrasound, or TENS. We vary exercises and interventions throughout care as opposed to doing the exact same treatment each session. We do usually treat with CPAs and UPAs in manual therapy, but we also incorporate other techniques that were not studied.

3. Limitations?
They eliminated patients with neurologic findings and discogenic disorders, as well as any history of cervical spine injury (but isn’t that most people who seek PT for their necks?). Blinding consisted of telling the assessor and patients not to talk about treatment (truly blind?). The study was done in Pakistan – is that different from the US? Routine physiotherapy meant the same exercises every session (not progressed), plus infrared lamp/ultrasound/TENS – not well supported by the literature. Cervical mobilization was different for each patient (patients do not all present the same way, different anatomy, etc.), so there was inflexibility in the routine physiotherapy but symptom-based manual intervention. Patients were using pain medication – the same number in each group, but was it the same amount/type of medication? Compliance with the HEP? Longer-term outcomes? Residual confounding – it’s unknown whether there was a standard number of sessions per week.

4. How long?
I spent about an hour and a half reading through the article and thinking about the presented questions. We definitely don’t have time to do that while in the clinic! I find it difficult to translate research into clinical practice because to be a “good” research study it has to be so specific in terms of who the participants are and what the interventions are, which makes it less likely to be reflective of my actual patient or how I provide treatment. For example, can we use the findings in this article with patients who do have neurologic or discogenic problems? Also, what about manual therapy vs “routine PT” where routine PT is more specific to the patient and uses evidence based interventions rather than ultrasound and infrared lamps? -
October 16, 2019 at 8:12 am · #7978 · Steven Lagasse (Participant)
1) Yes, I feel the authors made conclusions regarding the primary purpose of the article. However, they did begin to make some leaps regarding biomarkers, biomechanical effects, and the rationale behind increased recruitment of the deep neck flexors. These topics went beyond the article’s purpose and began to sound like conjecture.
2) Relevant: ultrasound, stretching, TENS, and mobilizations are all backed by the CPG under the chronic stage.
Less relevant: superficial thermal therapy and isometrics are not specifically backed under the chronic stage in the CPG. I do believe isometrics would still be effective, especially if a patient is highly irritable.

3) The researchers did not specify the duration of treatment received by the control group versus the experimental group. Since the experimental group received an additional intervention, they may also have received a longer duration of treatment. If so, this could skew the results.
4) I spent roughly 60 minutes reviewing this article. I would invest a similar amount of time to find and read an article regarding contemporary treatment for a challenging patient. With my poor ability to scour the literature, I believe most of that time would be spent attempting to find a pertinent article. I need to work on getting better at this skill. However, I have found the PEDro website to be very helpful in finding more meaningful articles quickly.
October 16, 2019 at 8:32 am · #7980 · pbarrettcoleman (Participant)
1) Conclusions:
I say overall yes. The only concern I have is that I tend to compare the difference between groups against the spread within each group to see whether there is enough separation to make a firm conclusion. It seems that the large standard deviations mean there is a lot of overlap between the two groups. To me this speaks to some of the problems of generalizing this information, as there were many people within the study itself to whom the conclusions did not apply.
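To put that overlap in rough numbers, here is a small Python sketch. The 18.45 s mean difference is the one quoted earlier in the thread; the pooled standard deviation is a made-up placeholder, not a value from the paper.

from math import erf, sqrt

mean_diff = 18.45   # between-group mean difference quoted earlier in the thread (s)
pooled_sd = 15.0    # HYPOTHETICAL pooled standard deviation (s)

def phi(z):
    # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

cohens_d = mean_diff / pooled_sd
# Overlapping coefficient of two equal-variance normal curves: OVL = 2 * Phi(-|d| / 2)
overlap = 2 * phi(-abs(cohens_d) / 2)

print(f"Cohen's d = {cohens_d:.2f}, distribution overlap = {overlap:.0%}")

Even with a fairly large standardized effect (d around 1.2 here), the two score distributions still overlap by roughly half, which is why a group-level “better” doesn’t guarantee a better outcome for the individual patient in front of you.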
2) Which components of this research study reflect realistic / contemporary clinical practice? Which components seem less relevant to what we do in the clinic? Be specific.
The definition of “routine physical therapy” is scary to me. I would hope that some sort of clinically reasoned application of mobilization alongside other treatments would be more beneficial than just a sheet of exercises done every day. The more realistic component is the clinically reasoned mobilizations, while the less relevant components are the modalities and isometric holds.
3) Are there additional limitations of their study methods or results beyond those mentioned in the article itself? If so, please describe them. (Research by design is not perfect; we should be willing and able to poke holes in studies even if the authors don’t do it themselves.)
It’s always a battle between external and internal validity. They eliminated all these people with comorbidities and additional findings, but most of the patients we work with have additional findings. It’s rare to find someone with just mechanical neck pain and no neuro, disc, or previous treatment history. This is not a fault of the authors, since they were leaning more toward internal validity, but it is a limitation inherent in the study design.
4) How much time did you honestly spend looking through this article? Do you feel that it would be realistic to spend the same amount of time on a similar article while in clinic with a challenging patient? What are some perceived barriers (beyond physical time) that make it harder to translate research findings into clinical practice?
30 minutes due to getting Eric’s reminder e-mail this morning and having work at 9. Even though I steamrolled through the article, I still wouldn’t have 30 minutes of time within clinic as I start to see 14-16 patients a day. I think my biggest concerns with applying research in general can be found in this email I sent to Aaron a few weeks ago:
“Reading research takes a long time and to dive this deeply and analytically takes an even longer amount of time. At the end of research, most of it isn’t applicable or has too many problems and then because of the nature of N =1, none of what we read may be beneficial to the actual patient in front of us. I sometimes wonder about bang for the buck. I want to stay up to date on research, but after all the effort to avoid getting hoodwinked or to really understand patient application, it could be a small return on investment.”
October 16, 2019 at 10:59 am · #7981 · awilson12 (Participant)
1) I think that the methods and results they stated are in line with their conclusion that mobilization + “routine” PT is better than just “routine” PT. The outcome measures they used seemed appropriate for what they wanted to measure.
2) I am always interested in how various studies define “routine” therapy, and I feel like the definition is never actually what would be done in the clinic… I hope. I can see, from a study design standpoint, that having a more passive “active” comparison treatment makes it easier to isolate the effects of mobilization than if some other manual technique had been performed as part of routine PT. Adding in some other cervical technique (ROM, STM, etc.) could potentially introduce more variability in the treatment received than ultrasound, thermal therapy, or TENS. I think in this situation you just have to take the evidence with an understanding of this limitation and of how difficult it is to design research that controls for variables and increases internal validity.
Aside from that, dosing mobilizations based on patient presentation is in line with what you would do in the clinic, so that was a strong point of the study in that respect.

3) One limitation of the study is that there was no long-term follow-up. It would be interesting to see any changes in the effect over the long term. I feel like the exclusion criteria also may have excluded the type of patients that would come into the clinic; I would argue it is rarely just mechanical neck pain with no other contributions. Not necessarily a limitation of the research design, but I feel like they got away from their focus in the discussion and brought in various topics that weren’t addressed previously or particularly relevant to discuss.
4) I would say it took me about 45 minutes to read and critically analyze the article in terms of limitations, strengths, benefits, applicability, and validity. Personally, this is always where I struggle with research: I have so many questions, but my efficiency in searching and reading is a big limiting factor in being able to look up everything I have questions about. Another thing I have a hard time with is determining applicability in the clinic when my patient doesn’t exactly fit the population, or to what degree I can use the methods in a way that is reasonable in the clinic.