Assessing Evidence in CPD: Meeting Standards and Missing Goals
This is a summary of the talk "Assessing Evidence in CPD: Meeting Standards and Missing Goals" (aka Baffled, Befuddled and Bemused) by Michael Allen, at the 10th National CPD Accreditation Conference.
In the media, we often see conflicting information about science, especially nutrition studies. Why?
Podcast: Best Science
1. Surrogates and Mechanisms
2. Designing & Seeing What We Want
3. Tricky Numbers
4. Perspective is Everything
1. Surrogates and Mechanisms
We get very distracted by things that don't matter, and miss the things that do. A story: it was noticed that abnormal heartbeats followed heart attacks, and that more extra beats meant more deaths. So we tried to suppress the extra beats with drugs. All seemed well until someone asked: are we actually saving lives? So we gave the medicine to some patients and a placebo to others. It turned out the drugs were killing more people than the placebo. This is a classic example, and it was a turning point.
So ask: can the patient feel the outcome? If not, it's a surrogate outcome.
Imagine two treatments: one reduces bad cholesterol and increases good cholesterol; the other does nothing to cholesterol. The first is a medicine, and it actually increases deaths. The second is the Mediterranean diet, which actually reduces deaths. We have to focus on the outcomes that really matter.
More examples re: diabetes and lipids.
These have to do with mechanisms. How often have we seen those diagrams showing all the mechanisms? But they have no outcome data. They show how something works without telling us whether it works. This is 'the beguiling power of a beautiful mechanism'.
This is where suppositories fit in. Which end do you put in first? The big end - a classic description of a mechanism. But more people found the 'wrong' way a lot easier, and it worked better (less expulsion).
Some examples:
- hormone replacement therapy for women - using it to reduce heart attacks made sense to us. It didn't.
- beta blockers for heart failure - we stopped patients' beta blockers because we believed they were harming them. Then we found out the drugs were actually saving lives.
- antioxidants for everything - a beautiful mechanism of action, but supplementing them actually increased mortality
Reflect on our nature: how we love to know what's going on, how we love an explanation. But what we need to know is: does it work?
2. Designing & Seeing What We Want
So: coffee is good for you, coffee is bad for you. Chocolate is good for you, chocolate is bad, etc...
Chocolate: a big study followed 157,089 people for 12 years and found chocolate reduced heart disease by 37%. Coffee: same thing - 400,000 people for 14 years - coffee appears to increase mortality, but adjust the data and there's a 10-15% reduction in dying.
But what we do is compare only those who don't drink coffee at all with those who can't put it down. These people differ in other ways as well. So we adjust for those other factors - but once we start tinkering with the data, we tinker until we find exactly what we want.
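The coffee problem can be sketched numerically. The figures below are invented purely to show the arithmetic of confounding - a made-up 'smoking' variable that travels with heavy coffee drinking and actually drives the deaths - not data from any real coffee study:

```python
# Invented numbers illustrating confounding: the hypothetical heavy coffee
# drinkers here smoke more, and smoking (not coffee) drives the death rate.

# (coffee group, smoking status) -> (deaths, people)
data = {
    ("heavy coffee", "smoker"):     (30, 100),
    ("heavy coffee", "non-smoker"): (5,  100),
    ("no coffee",    "smoker"):     (6,  20),
    ("no coffee",    "non-smoker"): (9,  180),
}

# Crude comparison, ignoring smoking: coffee looks dangerous (17.5% vs 7.5%).
for group in ("heavy coffee", "no coffee"):
    deaths = sum(d for (g, _), (d, _n) in data.items() if g == group)
    people = sum(n for (g, _), (_d, n) in data.items() if g == group)
    print(f"{group:12} crude death rate: {deaths / people:.1%}")

# Stratified comparison: within each smoking stratum the rates are equal,
# so the apparent 'coffee effect' was really a smoking effect.
for stratum in ("smoker", "non-smoker"):
    for group in ("heavy coffee", "no coffee"):
        d, n = data[(group, stratum)]
        print(f"{group:12} among {stratum}s: {d / n:.1%}")
```

The same arithmetic cuts both ways: adjusting can remove a spurious effect, or, chosen selectively, can manufacture the effect we wanted to see.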
Consider eggs: they're good, they're bad, etc. But we begin with a belief - eggs are good, or eggs are bad - and then we design research to show it. E.g., one study showed people being harmed by eggs, but 8 better studies showed nothing. Here's the simple prescription: if you like eggs, they're good for you - enjoy them. If you don't enjoy eggs, they're dangerous - don't eat them.
It's the same with the anti-salt and no-effect camps. As Paul Simon sang: "Still, a man hears what he wants to hear and disregards the rest."
A great example: a man who won a George Clooney look-alike contest (we couldn't guess who he resembled until we were told).
Example: a country-wide initiative to reduce salt consumption. Mortality declined over the same period - but so did red-tailed hawk migration. Correlation isn't causation: do Popsicles cause drowning?
This comes into play with guidelines a lot. Guidelines are like sausages: we don't want to see how they're made. All the research problems we've talked about percolate up into them.
Evidence-based guidelines: only 10-15% of all recommendations in guidelines rest on Level 1 evidence; about 50% are based on opinion only. Using guidelines is not as good as chasing down the original evidence where it exists.
3. Tricky Numbers
"Statistics show that teen pregnancy drops off significantly after age 25."
Evening news, 2003: a 41% increase in stroke, 29% in heart attacks, double the leg clots, and a 26% increase in breast cancer, all from hormone replacement. But the absolute changes were tiny - e.g., stroke risk going from 2.1% to 2.2%. A lot of the numbers cited in the media are relative numbers. 3% makes a big difference when you're buying a car, not so much when you're buying a stick of gum.
Another example: the shingles vaccine reduces shingles by 60%. In absolute terms, that means dropping from about 3 in 100 to about 1 in 100. Always ask: what is the risk if we do nothing, and what is the risk if we do something?
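A minimal sketch of the same change expressed three ways, using the talk's round numbers (note the round 3-in-100 to 1-in-100 figures work out to roughly 67%, close to the quoted 60%):

```python
# Relative vs. absolute risk, using the talk's approximate shingles numbers
# (3 in 100 if we do nothing, 1 in 100 if we vaccinate - illustrative only).

risk_if_nothing = 3 / 100    # risk if we do nothing
risk_if_treated = 1 / 100    # risk if we do something

arr = risk_if_nothing - risk_if_treated   # absolute risk reduction
rrr = arr / risk_if_nothing               # relative risk reduction
nnt = 1 / arr                             # number needed to treat

print(f"Absolute risk reduction: {arr:.1%}")   # 2.0%
print(f"Relative risk reduction: {rrr:.0%}")   # 67%
print(f"Number needed to treat:  {nnt:.0f}")   # 50
```

The headline-friendly relative number looks dramatic; the absolute numbers are what the patient actually experiences.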
This is part of why 'most published research findings are false'. Every time we run a test at the conventional 5% significance level, there's a 5% chance of finding something falsely. So if we do enough tests, we will find things falsely.
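A sketch of how fast that compounds - the chance of at least one false positive across n independent tests at the conventional alpha of 0.05 (an illustration of the multiple-comparisons problem, not a model of any particular study):

```python
# Probability of at least one false positive across n independent tests,
# each run at the conventional alpha = 0.05 significance level.

alpha = 0.05
for n in (1, 5, 10, 20, 50):
    p_any = 1 - (1 - alpha) ** n
    print(f"{n:3} tests -> {p_any:.0%} chance of a spurious 'finding'")

#  1 test  ->  5%
# 20 tests -> 64%
# 50 tests -> 92%
```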
Funny things happen by chance. A woman saw Donald Trump's face in a tub of vegan butter. It doesn't mean anything. Likewise, as you do more and more studies, the number of false results will increase. That proves only that random chance happens.
How many ways are there to measure a runny nose? An example from Cochrane: weight of used Kleenex, amount of air flowing through the nose, etc. With enough outcome measures, you can always find the numbers you want.
4. Perspective is Everything
We assume best = new and expensive. Examples: butter on a stick; hairy-leg leggings (marketed as 'anti-pervert stockings').
Do you get what you pay for? In one test, people were given the same pill at different prices - all placebos. 85% in the expensive group, vs. 61% in the discounted group, said the pain was relieved - a statistically significant difference. So apparently, if it's more expensive, it's better.
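The talk doesn't give the group sizes, so the sketch below assumes hypothetical groups of 40 people each, purely to show how a significance claim about two proportions gets checked:

```python
# Two-proportion z-test for the placebo price experiment.
# The 85% and 61% response rates are from the talk; the group sizes
# (n = 40 each) are hypothetical, chosen only to make the sketch runnable.
from math import erf, sqrt

n1, p1 = 40, 0.85   # 'expensive' placebo group (n assumed)
n2, p2 = 40, 0.61   # 'discounted' placebo group (n assumed)

pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.3f}")  # roughly z = 2.42, p = 0.016
```

With these assumed sizes the difference clears the conventional 0.05 threshold; with much smaller groups it would not, which is exactly why the raw numbers matter.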
A similar test used a 'new' puffer (an old puffer in a new case). About 50% said it was better (25% said it was worse).
Confusing messages: what do people think when we say a side effect is "uncommon"? The EU-assigned meaning is 0.1-1%; "very rare" means less than 0.01%. But patients have very different understandings: to them, "uncommon" suggests about 18%, and "very rare" about 2%.
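A quick comparison of the two readings. The EU bands are the standard labelling convention ('uncommon' is 1 in 1,000 to 1 in 100; 'very rare' is under 1 in 10,000); the perceived figures are the ones cited in the talk:

```python
# EU verbal descriptors for side-effect frequency vs. how patients read them.
# EU bands are the standard labelling convention; perceived figures are the
# ones cited in the talk ('uncommon' heard as ~18%, 'very rare' as ~2%).

terms = {
    # term:       (EU upper bound, patient-perceived frequency)
    "uncommon":  (0.01,   0.18),
    "very rare": (0.0001, 0.02),
}

for term, (eu_max, perceived) in terms.items():
    factor = perceived / eu_max
    print(f"'{term}': EU meaning <= {eu_max:.2%}, "
          f"heard as {perceived:.0%} ({factor:.0f}x overestimate)")
```

Even against the most generous reading of the EU bands, patients overestimate 'uncommon' roughly 18-fold and 'very rare' by a factor of about 200.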
When we use general language like this, we distract people from the truth, and we don't help their understanding at all.
A 1966 study called a blood pressure of 200/100 'mild hypertension' - we had only just begun to look at hypertension then. Our standards change over time: blood pressure, blood sugar, BMI.
Questions
Q. So - how do we solve this?
We look at industry, and they are definitely culprits. But recipients of CIHR funding are also likely to spin the results of their studies to show better outcomes.
It requires us to continue what we're doing, but to be honest about this predisposition bias and the need we feel to find something positive. And we need pictographs that show what happens when we do nothing vs. when we do something.
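A rough, text-only sketch of such a pictograph - an icon array contrasting 'do nothing' with 'do something', reusing the illustrative 3-in-100 vs. 1-in-100 figures from earlier (a real version would use graphics):

```python
# A text-only icon array: 100 icons per scenario, 'X' marking affected
# people, '.' marking unaffected. Figures reuse the earlier 3-in-100 vs.
# 1-in-100 illustration; they are not from any specific study.

def icon_array(label: str, affected: int, total: int = 100, per_row: int = 20) -> None:
    icons = "X" * affected + "." * (total - affected)
    print(f"{label} ({affected} in {total}):")
    for i in range(0, total, per_row):
        print("  " + " ".join(icons[i:i + per_row]))

icon_array("Do nothing  ", 3)
icon_array("Do something", 1)
```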
Q. What if we reduced our allowable error rate? Then we'd have a lot fewer things we could say are 'true'.
It frustrates me that we have turned to dichotomous answers. We don't have 'truth'; we have probabilities. I would like us to gravitate toward stating what the probability of something is. The confidence interval actually speaks to what the probabilities are. If I just say "this works", that sounds like truth, but there's deception in it.