Many painful problems are surprisingly mysterious, and there are many theories about why people hurt. Debate can rage for years about whether or not a problem even exists. For instance, chiropractic “subluxations” have been a hot topic for decades now: are these little spinal dislocations actually real? What if five different chiropractors all looked at you, but each diagnosed different spots in your spine that were supposedly “out” and in need of adjustment?
That’s a reliability study.
Reliability studies are emotionally compelling. Evidence of unreliable diagnosis can make further debate pointless. If chiropractors can’t agree on where subluxations are in the same patient — and some studies have shown that they can’t1 — then the debate about whether or not subluxations actually exist gets much less interesting. A reliability study with a negative result doesn’t necessarily prove anything,2 but it is strongly suggestive, and can be a handy shortcut for consumers. Who wants a diagnosis that will probably be contradicted by each of five other therapists? No one, that’s who.
In reliability science, we talk about “raters.” A rater is a judge … of anything. One who rates. The person who makes the call. All health care professionals are raters whenever they are assessing and diagnosing.
Reliability studies are studies of “inter-rater” reliability, agreement, or concordance. In other words, how much do raters agree with each other? Not in a meeting about it later, but on their own. Do they come to similar conclusions when they assess the same patient independently?
There are formulas that express reliability as a score, such as a “concordance correlation coefficient.” For the non-statistician, that boils down to: how often are health care professionals going to come to the same or similar conclusions about the same patient? Every time? Half the time? One in ten?
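To make “reliability as a score” a little more concrete, here’s a minimal sketch of Cohen’s kappa, one common chance-corrected agreement statistic (a cousin of the concordance correlation coefficient). The two “clinicians” and their diagnoses below are invented purely for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters judging the same cases.

    1.0 means perfect agreement; 0.0 means no better than guessing.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: the fraction of cases where the raters match.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Two hypothetical clinicians diagnosing which joint is "out" in the
# same ten patients (made-up data):
a = ["L4", "L5", "L4", "L3", "L5", "L4", "L5", "L3", "L4", "L5"]
b = ["L4", "L3", "L4", "L5", "L5", "L3", "L5", "L3", "L5", "L4"]
print(round(cohens_kappa(a, b), 2))  # prints 0.24
```

They match on half the cases, but once chance agreement is subtracted out, the kappa is a feeble 0.24 — which is why raw “percent agreement” alone can flatter raters who are mostly guessing.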
This reliability thing is not subtle: you don’t need a second opinion for a gunshot wound. Ten out of ten doctors will agree: “Yep, that’s definitely a gunshot wound!” Well, almost.3
That’s high inter-rater reliability.
Lots of diagnostic challenges are harder, of course. Humans are complex. It’s not always obvious what’s wrong with them. This is why you need second and third opinions sometimes. And it’s perfectly fine to have low reliability in difficult medical situations. Patients are pretty forgiving of low diagnostic reliability when professionals are candid about it. All a doctor has to say is, “I’m not sure. I don’t know. Maybe it’s this, and maybe it isn’t.”
What you have to watch out for is low reliability combined with high confidence: the professionals who claim to know, but can’t agree with each other when tested. Unfortunately, this is a common pattern in alternative medicine. And it is a strong argument that it’s actually alternative medicine practitioners who are “arrogant,” not doctors.
True story: a patient of mine, a young woman with chronic neck pain and nausea, went to a “body work” clinic for her problem. Three deeply spiritual massage therapists hovered over her for three hours, charging $100/hour — each, for a total of $900 — and provided (among some other things) a running commentary/translation of what her stomach was “trying to tell her” about her psychological issues.
True story: my eyes rolled out of their sockets. And my patient was absolutely horrified.
Obviously, if she’d gone to another gurgle-interpreter down the road, her gastric messages would have been interpreted differently.
That’s low inter-rater reliability.
There are numerous common diagnoses and theories of pain that suffer from lousy inter-rater reliability. Here are some good examples:
And so on and on. Over the months and years, I’ll add other nice examples to this list as they occur to me.
I do enjoy reliability studies, and this is one of my favourites. Three chiropractors were given twenty patients with chronic low back pain to assess, using a complete range of common chiropractic diagnostic techniques, the works. Incredibly, assessing only a handful of lumbar joints, the chiropractors agreed which joints needed adjustment only about a quarter of the time (just barely better than guessing). That’s an oversimplification, but true in spirit: they couldn’t agree much, and researchers concluded that all of these chiropractic diagnostic procedures “should not be seen … to provide reliable information concerning where to direct a manipulative procedure.”
“Palpation of a cranial rhythmic impulse (CRI) is a fundamental clinical skill used in diagnosis and treatment” in craniosacral therapy. So researchers put it to the test: “two registered osteopaths, both with postgraduate training in diagnosis and treatment, using cranial techniques, palpated 11 normal healthy subjects.” They concluded that “interexaminer reliability for simultaneous palpation at the head and the sacrum was poor to nonexistent.” Emphasis mine.
This is one of those fun studies that catches clinicians in their inability to come up with the same assessment of a structural problem. Three doctors were asked to “rate forefoot alignment,” but they didn’t agree with each other much about it. From the abstract: “… the commonplace method of visually rating forefoot frontal plane deformities is unreliable and of questionable clinical value.”
Diagnosis by acupuncturists may be unreliable. In this study, “six TCM acupuncturists evaluated the same six patients on the same day” and found that “consistency across acupuncturists regarding diagnostic details and other acupoints was poor.” The study concludes: “TCM diagnoses and treatment recommendations for specific patients with chronic low back pain vary widely across practitioners.”
This paper is a survey of the state of the art of trigger point diagnosis: can therapists be trusted to find trigger points? How much science has been done so far? It’s a confusing mess, unfortunately. This paper explains that past research has not “reported the reliability of trigger point diagnosis according to the currently proposed criteria.” The authors also explain that “there is no accepted reference standard for the diagnosis of trigger points, and data on the reliability of physical examination for trigger points are conflicting.” Given these conditions, it’s hardly surprising that the conclusion of the study was disappointing: “Physical examination cannot currently be recommended as a reliable test for the diagnosis of trigger points.”