5 Comments

I agree strongly with your overall point. However, I have to wonder if this sentence:

"Cassava Sciences, with whom readers are no doubt familiar, published results from their CMS study with the headline that stated the drug “slowed cognitive decline” for the group that withdrew from treatment versus the one that continued."

is really the right way around: is Cassava claiming that the drug had a benefit for the group that withdrew from treatment, and not for the group that continued to take it?

author

Yes, good catch! I fixed it.

Yes, the percentages mean nothing, but a cognitive gain in the drug arm vs. degradation in the placebo arm is very significant, and that wasn't shown in the recently granted approval.

You want to be careful about that word, "significant". If one arm gains and the other declines, it looks impressive, but without error bars we just don't know whether it's real or measurement error on this notoriously noisy measure.

The last graph in this post looks underpowered, just from how much the lines (both treatment and control) zig and zag.
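
To put a number on how easily that pattern can arise by chance, here's a minimal simulation sketch (the arm size, SD, and zero true effect are all assumptions for illustration, not numbers from any trial):

```python
# Minimal sketch (made-up numbers): simulate two arms with ZERO true
# treatment effect on a noisy cognitive score, and count how often one
# arm appears to "gain" while the other "declines" purely by chance.
import numpy as np

rng = np.random.default_rng(0)
n_per_arm = 25        # assumed small arm size
sd_change = 3.0       # assumed SD of score change (a noisy measure)
true_effect = 0.0     # no real difference between arms

flips = 0
n_sims = 10_000
for _ in range(n_sims):
    treat = rng.normal(true_effect, sd_change, n_per_arm)
    placebo = rng.normal(0.0, sd_change, n_per_arm)
    # one arm's mean goes up while the other goes down, with no real effect
    if treat.mean() > 0 > placebo.mean() or placebo.mean() > 0 > treat.mean():
        flips += 1

print(f"Arms point in opposite directions in {flips / n_sims:.0%} of null simulations")
```

With no true effect at all, the two arms move in opposite directions about half the time, which is exactly why the gain-vs-decline picture means little without error bars.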

author

I think maybe Stewart was referring to the lecanemab trial (since it was recently approved)? That one did show statistical significance, but it showed it for a very small effect size—the 27% number was a subtle way of inflating that effect size.

For that last graph, however, it's even worse than what you see! It's very much underpowered, but the 200% claim is regarding the treatment effect on a subgroup, so it's underpowered again by another order of magnitude (https://statmodeling.stat.columbia.edu/2018/03/15/need16/).
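
To spell out both pieces of arithmetic, here's a rough sketch; the CDR-SB point values are approximations of the published lecanemab figures and are used here only for illustration:

```python
# How a small absolute difference becomes a "27% slowing" headline.
# Point values below approximate the reported lecanemab (CLARITY AD)
# CDR-SB results; treat them as illustrative, not exact.
placebo_decline = 1.66   # ~mean 18-month worsening on CDR-SB (0-18 scale), placebo
drug_decline = 1.21      # ~mean 18-month worsening, treatment arm

absolute_diff = placebo_decline - drug_decline        # ~0.45 points on an 18-point scale
relative_slowing = absolute_diff / placebo_decline    # ~0.27 -> the "27%" headline

print(f"Absolute difference: {absolute_diff:.2f} CDR-SB points")
print(f"Relative 'slowing':  {relative_slowing:.0%}")

# Gelman's "you need 16 times the sample size" point for subgroups:
# an interaction is often ~half the main effect and its standard error is
# ~twice as large, so matching power needs roughly (2 * 2)**2 = 16x the n.
required_n_multiplier = (2 * 2) ** 2
print(f"Rough sample-size multiplier for a subgroup/interaction effect: {required_n_multiplier}x")
```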
