“To imagine what it’s like to be a whistleblower in the science community, imagine you are trying to report a Ponzi scheme, but instead of receiving help you are told, nonchalantly, to call Bernie Madoff, if you wish.” -@mumumouse, in For Better Science
Note: this is a long post and will be clipped in your email.
It must’ve been late 2013—my labmates and I, along with our advisor, were debating the upshot of an article in Science called Who's Afraid of Peer Review? If you aren’t familiar with the article and don’t have the energy to read it, I’ll briefly sum it up for you: John Bohannon, the article’s author, had submitted an egregiously fake manuscript to 304 journals in an attempt to probe the state of peer review in open-access publishing. What Bohannon found—surprise, surprise—was that most of the journals had no appreciable signs of peer review (which could actually be good or bad, depending on whether the editor simply rejected the manuscript outright) and that even among the journals that did undertake peer review, the paper was still accepted for publication an overwhelming portion of the time. All accounted for, “only 36 of the 304 submissions generated review comments recognizing any of the paper's scientific problems. And 16 of those papers were accepted by the editors despite the damning reviews.” Bohannon later repeated a variation of the experiment targeting media outlets, coaxing them to report on his dubious study claiming that chocolate aids weight loss.
I believe the prevailing sentiment was that the journals had been gotten pretty good by Bohannon, and that the experiment was overall a legitimate one. But someone offered a counterargument, which went something along the lines of, “science depends on people submitting their work in good faith. Sure, errors will get overlooked, but the process is not designed to suss out wholesale fraud. It’s a bit unfair to criticize journals for failing to catch onto something that was in essence designed to fool them.” That argument stuck with me, and I think it’s a perfectly fair one. I’m not sure it lets publishers off the hook, per se, but it feels reasonable to presume that some may have been caught off guard and had their trust taken advantage of. Derek Lowe echoes this sentiment in a recent blog post in Science (we’ll come back to this one later—you’ll no doubt recognize some names if you read the whole post):
There is a good-faith assumption behind all these questions: you are starting by accepting the results as shown. But if someone comes in with data that have in fact been outright faked, and what’s more, faked in such a way as to answer or forestall just those sorts of reviewing tasks, there’s a good chance that these things will go through, unfortunately. That is changing, slowly, in no small part due to sites like PubPeer and a realization of how many times people are willing to engage in such fakery.
Bohannon’s Science experiment specifically targeted open-access journals—which charge publication fees but make their papers publicly accessible instead of billing for subscriptions—with the basic premise being that many (most, it turns out) of these journals would be more than happy to collect your publication tithes but then fail to actually uphold their end of the bargain—namely, ensuring that your science enters the canon alongside carefully vetted peers. There was some pushback among the bamboozled editors, among them the Editor-in-Chief of the Journal of International Medical Research who contended that “an element of trust must necessarily exist in research" and that the sting operation “detract[ed] from that trust.” Others reacted quite differently to Bohannon’s operation, with one publisher informing him that they had "taken immediate action and closed down the Journal of Natural Pharmaceuticals." Yeesh.
There was also perhaps just a hint of aristocracy about the piece. After all, it was published in a highly regarded subscription-based journal and targeted only open-access counterparts, though in fairness the author did confer with advocates of open-access publishing and even relayed the suggestion that the results of the experiment would very possibly be similar among traditional journals. But the subtext is certainly there—open-access journals, broadly speaking, simply aren’t up to the task of refereeing the literature, and some of them might even be outright scams! And with the explosion in the raw number of papers being published alongside commensurate growth in the percentage of open-access papers as a share of the whole, maybe it was time to entertain the idea that papers published in open-access journals were in danger of becoming de facto second-class citizens, so to speak.
The Clergy and the Congregation
The conceit of this particular worldview, of course, is that there exists an “upper echelon” of journals with large editorial boards and enough resources to carefully appraise submissions. I don’t mean that antagonistically—the resources that are required in order to thoroughly assess brand new science are sparse and in high demand, and a hierarchy among journals is probably inevitable. As an aside, this dynamic can get flipped on its head sometimes. I recall some minor intrigue around the decision to publish the first observation of gravitational waves from a black hole merger in Physical Review Letters, spurning Nature and Science. When you know your publication is going to make waves (pun intended; but also, it has about ten thousand citations), you get to work outside the system. For everyone else, a Nature paper can make your career. Nobody in their right mind would pass one up.
There’s another bugbear in the publishing ecosystem, though, and it’s been getting more attention over the last five or ten years—replication crises. I’m sure the roots go back further, but my personal awareness of it comes primarily from Andrew Gelman (see the last two links) and his coverage of the crisis in psychology and social science. To unbury the lede for a moment, the main point of this essay is to express my belief that the replication crisis not only extends to biomedical research—particularly in neuroscience—but that the enormous financial incentives behind this research make the crisis all the more insidious. What has risen to the fore over the last few years in these fields is not only a replication crisis, but also a fabrication crisis. It’s a crisis that peer review—or editorial review, for that matter—currently appears utterly incapable of surmounting.
If I could force everyone to sit down and read Gelman’s incisive missive about the history of the replication crisis—a post entitled What has happened down here is the winds have changed—I would. It’s jarring and informative. The impetus for the post was a screed published in the APS Observer by one of its former presidents, at the invitation of the then-president in 2016, rebuking social media criticism of published research (reproduced in the blog post in its entirety; it has since been deleted from APS). The article described a veritable insurgency of “online vigilantes” that were “encouraging uncurated, unfiltered trash-talk” which amounted to “methodological terrorism.” The brutes had stormed the temple, and if the clergy didn’t stop them then the discourse might deviate from the more palatable “peer-reviewed critiques, which serve science without destroying lives.” The public criticism from mere laypeople had hit a nerve inside psychology and social science circles, and that nerve betrayed a culture of pretension that adorned a framework of privacy, deference, peer-only review, and surreptitious regulation of scientific dialogue. Gelman went so far as to assert that the author was following “what I’ve sometimes called the research incumbency rule: that, once an article is published in some approved venue, it should be taken as truth.”
New Evangelists
If this is starting to sound a bit familiar, well, there’s very good reason. The recent editorial by the Editor-in-Chief of the Journal of Clinical Investigation about unruly whistleblowers (and to be clear, these people are whistleblowers) may have lacked the zest and pomp of its APS counterpart, but it more than made up for it in heavy-handedness. The JCI editorial warned that the Editor-in-Chief may report this round of online vigilantes to the authorities for conspiracy to, uh—question the science? The details of the transgressions weren’t exactly explained, but what was explained was the notion that “fundamentally, journals must trust that investigators are truthful about what is represented in their figures.” Good faith. But again, a very particular kind of good faith. Specifically, the kind that holds the honor of the researcher above the reliability of the science. As for the aspersions cast on the rabble-rousing critics, the JCI and APS editorials are again spiritual brethren—as Gelman notes, the APS editorial “provides zero evidence but instead gives some uncheckable half-anecdotes.” JCI, too.
I could’ve copied Gelman’s post nearly word for word and claimed it was about the JCI editorial and I’m not sure anyone would be the wiser. I’m hoping that by now I’ve convinced you to read it, but here are a few selections just in case I haven’t been persuasive enough. About the author of the APS piece, Susan Fiske, Gelman notes:
She’s seeing her professional world collapsing—not at a personal level, I assume she’ll keep her title as the Eugene Higgins Professor of Psychology and Professor of Public Affairs at Princeton University for as long as she wants—but her work and the work of her friends and colleagues is being questioned in a way that no one could’ve imagined ten years ago. It’s scary, and it’s gotta be a lot easier for her to blame some unnamed “terrorists” than to confront the gaps in her own understanding of research methods.
Confronted with the possibility of mistakes in the research, leaders and oversight bodies begin to feel threatened and lash out. Why? Well, because there is a level of control that the occupants of these positions have over how scientific discourse unfolds, and the prospect of that grasp weakening is unnerving:
Fiske is annoyed with social media, and I can understand that. She’s sitting at the top of traditional media. She can publish an article in the APS Observer and get all this discussion without having to go through peer review; she has the power to approve articles for the prestigious Proceedings of the National Academy of Sciences; work by herself and her colleagues is featured in national newspapers, TV, radio, and even Ted talks, or so I’ve heard. Top-down media are Susan Fiske’s friend. Social media, though, she has no control over.
The uncomfortable truth, however, is that open dialogue is vital to the self-correcting mechanisms that scientists frequently take for granted:
When it comes to pointing out errors in published work, social media have been necessary. There just has been no reasonable alternative. Yes, it’s sometimes possible to publish peer-reviewed letters in journals criticizing published work, but it can be a huge amount of effort. Journals and authors often apply massive resistance to bury criticisms.
This all might even seem too on the nose in relation to the JCI editorial except for the fact that it was written six years ago! More recent blog posts by Gelman—who is not some run-of-the-mill social media agitator, by the way—suggest that it’s been slow going, this changing of the tides.
Faith, Questioned
I mentioned a “fabrication crisis” earlier, and what I was presaging was not only the incident with JCI, but also a number of other recent high-profile catastrophes. There was Charles Piller’s exposé about the misdeeds of Sylvain Lesné; and the resurfacing of years-old issues in the publications of Stanford president Marc Tessier-Lavigne; and, of course, there's the whole imbroglio around Cassava Sciences, which is basically the entire reason that I stumbled into this sphere to begin with. I don’t want to spend too much time rehashing things that I’ve covered in more detail elsewhere, but it’s worth touching on a few of the more salient aspects of how these scandals have unfolded. In Lesné’s case, we quickly learned that apparently everyone was skeptical of his highly-cited and influential papers on the role of certain oligomers in the progression of Alzheimer’s disease. If you’re confused about how a Nature paper with a couple thousand citations jibes with science that was widely understood to be bunk, well, I’m right there with you.
The Tessier-Lavigne debacle is a sort of microcosm of the larger problem. Serious issues were documented in his papers on PubPeer more than five years ago, but nobody really seemed to pay much attention. Recently, as the problems in Tessier-Lavigne’s papers mounted, there appears to have been a renewed effort to bring them to the attention of editors and the media, but to no avail—until the Stanford student newspaper, of all places, finally took the issues seriously and sparked a firestorm of media attention and misconduct investigations. Suddenly, expressions of concern began rolling in, colleagues started calling for Tessier-Lavigne to step down, and Science—yes, the very same publication where Bohannon had published his peer-review critique—admitted in a cryptic Twitter thread that it had bungled its editorial response to the initial concerns. Meanwhile, Cell apparently had “closed” its investigation into a Tessier-Lavigne paper only to abruptly reopen it when the Stanford Daily article hit. Yikes, yikes, and yikes.
These are just a couple of the most prominent examples of major scandals in biomedical research this year. And their theater is not some collection of obscure open-access journals. Nature, Science, Cell—the best of the best are just as mired in the swamps as their fledgling competitors. We could keep going, too, if we had the stomach for it. A five-year investigation into the lab of cancer researcher Carlo Croce at OSU finally concluded this year, leaving a trail of shattered science in its wake (Croce himself was not deemed “guilty” of misconduct, but many of his subordinates were, and he was stripped of his endowed chair). Gregg Semenza, a recipient of the 2019 Nobel prize in medicine, has found himself beset by retractions and inquiries into dozens of papers in the last couple years after sleuths on PubPeer started noticing irregularities in his published data. In reporting swaths of highly suspicious mouse data across a handful of papers by Domenico Pratico, an anonymous whistleblower actually enlisted the help of the original inventor of the experiment, yet found no journal willing to take action. The list goes on.
Faith, Lost
I don’t think that I’d be writing this essay if it weren’t for my own experience with the scientific publishing apparatus over the last year and a half. When the allegations of data fabrication levied against Cassava Sciences became public in August 2021, it struck me how muted the response was from journals and the CUNY School of Medicine. So I dove in, figuring there was a decent chance that I’d missed something or that some of these institutions would begin to clarify the situation in the meantime. When it became apparent that that wasn’t happening, I started to write to the journals. Somewhat to my surprise, they often wrote back.
I’ve been putting off writing up this next section for some time—it’s not exactly pleasant subject matter to tackle. Plus, there’s that other thing, which is that many of these emails have been shared directly with federal investigatory agencies. That’s part of the reason that I’m going to stick to selective quotations and synopses rather than sharing my communications in their entirety. If there are questions about the accuracy or authenticity of these snippets, all that I can really offer for the time being is the fact that there are members of the media and aforementioned regulatory bodies that can attest to their veracity.
The Journal of Neuroscience
I contacted the Journal of Neuroscience shortly after Cassava Sciences issued a press release claiming that the journal had cleared one of Cassava’s foundational papers of any data manipulation. I voiced my disappointment about the journal’s decision, given that there was no actual evidence made available by the authors or journal, and asked if the journal itself would be addressing the concerns raised by forensic experts. The Editor-in-Chief responded:
We are also very concerned about fraud and request raw data whenever allegations of data manipulation are raised. Examination of raw data is the only way to determine whether the cropped images provided in the published manuscript are based on faulty or manipulated data.
This seemed to suggest that the journal had in fact received raw data! Well then, maybe all was well and the data explained the inconsistencies. So I pressed a bit and asked whether the raw data had been provided. They responded, confirming that they did in fact receive the raw data:
We did receive raw data for the Western blots in the 2012 Journal of Neuroscience paper referenced here and did not find evidence of data manipulation. We did corroborate the finding that there was a duplicated panel and we requested raw data from that experiment as well. The duplicated panel has been acknowledged and the correct panel will be provided in a Corrigendum that should be published soon.
Ok, great! We’re getting somewhere. I asked if the journal would be making the data available to the public, or at least to forensic experts who could verify their authenticity. Across a couple emails, here was the response:
I would suggest requesting the raw data from the authors and company if you have additional concerns.
Cassava sent us the blots, so we communicated back to them after our investigation was complete. We do not have the authority to send out the raw data without permission.
I suppose if they aren’t allowed to share the data and the authors won’t cooperate, then their hands are tied? That feels like it runs counter to the spirit of their data availability policy, but so be it. The journal declined to comment any further, but we know what happened next—the raw data was printed in an erratum, which did not pass muster upon further evaluation, and an Expression of Concern was issued shortly thereafter. We learned later via FOIL documents that the authors had provided PowerPoint slides, not actual raw data, and that the Editor-in-Chief signed off on them in less than an hour. My follow-up questions to the Editor-in-Chief and current president of the Society for Neuroscience as to why these questionable “raw” data do not warrant a retraction have gone unanswered.
I should also note here that the data that was published in the erratum is, according to what the Editor-in-Chief told me, the raw data that was provided to them by Cassava Sciences. There has been some confusion on this front, but unless the editors misspoke then there is no additional raw data that remains unpublished in the hands of The Journal of Neuroscience. What you see is all there is.
Neuroscience
I submitted an inquiry about another paper linked to the development of Cassava’s experimental drug to the Editor-in-Chief of Neuroscience, the flagship journal of IBRO, using their online editor contact form. I asked whether the journal was aware of image manipulation concerns documented on PubPeer and if they planned to investigate them, and what I received back was the full text of a pending Expression of Concern for the article, complete with a DOI and everything. Well okay, that settles that I suppose! Not so fast—the notice never actually made it to publication. Instead, a short time later, an editorial note appeared on the paper with some purported “raw” data and the following remarks:
After careful examination of these original material, Neuroscience found no evidence of manipulation of the Western blot data or other figures of this publication. For transparency, the illustrations containing the uncropped Western blot images used for the assembly of these figures that were subjected to scrutiny can be found below. If any subsequent information arises from an institutional investigation, this will be considered once available.
It was apparent almost immediately that these “raw” images were not, in fact, what they were made out to be by the authors. The Editor-in-Chief has not commented about these issues, either publicly or privately, to my knowledge. So, I reached out to the president of IBRO to find out if they were aware of the issue at Neuroscience and whether they planned to take any action—after all, it’s printed in their flagship journal! The reply from IBRO essentially stated that they had no real oversight of the journal, and claimed that “journals are not supposed to (allowed to, really) make decisions on fraud without final decisions from the authors’ institution integrity office.” Perhaps we can split hairs about whether retractions are “decisions on fraud,” but journals absolutely can make decisions about whether or not to retract papers that they have lost confidence in. I would argue that this is doubly true if authors made false representations about the data during the course of a journal’s investigation. After peer journals retracted a number of Cassava-linked papers, I asked one last time if IBRO planned to take any action. They reminded me that they do not have any input into what happens in their flagship journal and directed me to the Editor-in-Chief and the journal’s publisher, Elsevier. Nothing has happened in the approximately eight months since.
Behavioural Pharmacology
I sent a similar inquiry to Behavioural Pharmacology about an article by Dr. Wang which had no obvious link (that I’m aware of) to Cassava. It had just a couple of fairly routine duplication concerns. After a follow-up I received word that the journal had received a response from the authors and considered the matter closed. The Editor-in-Chief copied a few members of the editorial team in the email to me, so I responded to everyone asking if the journal planned to publish the raw data that they received. I sent one additional follow-up but never heard anything back, and to my knowledge no data has been provided to anyone outside of the editorial team at Behavioural Pharmacology. It’s still a mystery what they received and how they reviewed it.
The Journal of Prevention of Alzheimer’s Disease
This was a bit of a wacky one. I initially reached out to the journal and was referred by the Editor-in-Chief to an editor at Springer Nature, the journal’s publisher. The Springer Nature editor responded to let me know that an investigation was ongoing. Pretty straightforward so far. I left it there until about six months later, after other journals began taking action or making statements one way or the other, at which point I checked back in and asked about the status of the investigation. In the meantime, however, I learned that the Editor-in-Chief was actually a co-author with Dr. Wang in a different paper, which has since been retracted by the editors at Alzheimer’s Research & Therapy! Funny, there was no mention of that before. I copied both the Editor-in-Chief and the editor at Springer Nature.
The Editor-in-Chief responded and suggested that all of the editors (and me? It wasn’t actually clear to me whether or not I was invited) get together in Toulouse. Then, another email, unprompted, asking for clarification about my reference to papers by the author which had been retracted or issued Expressions of Concern in other journals—they wanted to know, from me apparently, how many journals felt positively or negatively about this author, as if it were a popularity contest rather than an investigation. They also mentioned that a JPAD editor and “leader in the field” is “convinced that the paper is ok.” Still no mention of the fact that this author had a paper with Dr. Wang under investigation in a separate journal.
Eventually, the second JPAD editor communicated to me that their “investigation included review of the published blots (as well as others provided by the authors) with experts familiar with the techniques used (self-poured gels and film imaging). We examined blot images from other sources showing similar apparent artifacts though there had been no manipulation.” There was no response to my email asking whether the authors had actually provided any raw data or whether the data would be published. It’s still a mystery what they chose to review and how they made the determination that the signs of tampering were perfectly acceptable.
The Journal of Clinical Investigation
I’ve covered this one in detail elsewhere, but I’ll just add a bit of additional background here. When I contacted the journal with the usual questions—are you aware of the concerns and do you plan to investigate?—I received a response from the Executive Editor stating that the journal’s “analysis of the high resolution images provided by the author at the time of submission does not suggest that manipulation occurred. The editors have decided not to move forward with a further inquiry.” So basically, they trusted that their evaluation of the data back in 2012 was plenty good enough, thank you very much, and no additional scrutiny was required. I initially wondered why I was hearing back from the Executive Editor and not the Editor-in-Chief, but then someone anonymously tipped me off to the fact that the Editor-in-Chief at the time of my email was also a co-author with Dr. Wang! I wrote back to the Executive Editor, who confirmed that the Editor-in-Chief had recused themselves. The infamous JCI editorial came almost a year later. It’s still a mystery what kind of analysis they performed which convinced them not to undertake an investigation.
Neurobiology of Aging
I emailed the Editor-in-Chief of Neurobiology of Aging with the same basic set of questions, for which I initially received a boilerplate response. However, when Cassava added an erratum for the paper to their publication list I decided to check and see if the Editor-in-Chief was aware of this and whether I’d missed Neurobiology of Aging publishing something. They confirmed that no erratum had been published or was pending publication, but seemed unconcerned about Cassava’s intimation—in fairness, Cassava did not specifically claim that Neurobiology of Aging had actually published an erratum, but I felt that the use of the technical term was pretty suggestive. Despite some back-and-forth, there was very little of actual substance discussed. In the end, Neurobiology of Aging issued a remarkably difficult-to-locate Expression of Concern (it is not actually attached to the paper) stating that although almost every aspect of the experiment was erroneous, the editors found no evidence of intentional manipulation and may allow the authors to correct everything post-hoc once the investigations at CUNY conclude. It’s still a mystery what they received and how they reviewed it.
Molecular Neurodegeneration
When the lead author of a 2021 paper published with Dr. Wang in Molecular Neurodegeneration responded to concerns on PubPeer, informing us that they could not explain the irregularities in Dr. Wang’s section of the paper, I emailed the editor asking for clarification. The response was stunning—a straightforward explanation of what transpired, stating that the editors were not satisfied with the data they received and that the authors had thus agreed to retract the paper. Doesn’t sound very stunning? Well, this was the first time that I had gotten an informative response which detailed even the smallest bit of due diligence. I had been referred to other people; told that it wasn’t the journal’s or society’s responsibility; told to ask the company for their data; told that the data could not be provided; ignored; informed that matters were closed and no data would be published; pontificated to; asked to compile feedback for all the other journals and then meet the editors in Toulouse; and on and on. But a simple answer describing an investigation in which the editors carefully appraised the raw data and took concrete action? I could hardly believe it.
Changing the Paradigm
The public dialogue about how to fix scientific publishing is mostly centered on the question of peer review, and for perfectly good reason—it’s incredibly costly from a time perspective, has questionable efficacy and, as John Bohannon showed us, is often simply cast aside without any obvious indication by the publisher. That said, I think we need to consider the possibility that derelict editorial review is a problem of a whole new magnitude. Why? Failed peer review permits, at worst, the publication of flawed or fabricated science, but failed editorial review sets that faulty science in stone. There will always be flawed papers and bad actors, and expecting peer review to weed them out may well be a lost cause, but as long as editorial review is functioning properly then it’s still a tractable problem; science can still self-correct. After all, in a world of arbitrary-but-finite p-values we’d expect a certain percentage of published research to be incorrect anyway! In our modern age, with a near limitless flow of information, self-correction requires putting as many eyes on the science as possible. It requires conducting reviews in the open and dispensing with the stigma that a researcher’s reputation will be irreparably damaged by subjecting their work to additional scrutiny. It does not require catching every faulty paper before it makes it to publication; it simply requires that the post-publication review process be robust and allowed to function out in the open.
As scientists, we have to ask ourselves a difficult question: who is the current system of clandestine editorial review and self-appointed, internal university ethics inquiries serving? Is it serving the public good and those honest researchers who strive to push our understanding of the world forward? Or is it serving the honor and good name of editors, reviewers, universities, and established scientific luminaries? What may well feel like ecclesiastical benevolence from the inside has started to resemble a cabal from the outside. That editors and their society hangers-on defer to institutions at every possible juncture only worsens both the self-correction problem and the way scientific culture is perceived by outsiders.
And honestly, how else could an outsider even be expected to interpret the current state of things? I had almost zero experience communicating with journal editors—this despite having a (lone) first-author publication—prior to this fiasco, and no familiarity with most of the scientific methods under scrutiny. I ventured into this exercise about as free of expectations as anyone reasonably could, but I left with my confidence in the stewardship of neuroscience and biomedical research in tatters. Two journals printed data that was clearly problematic and have yet to take any responsibility for it. Another journal appears willing to allow almost every aspect of the science to be rewritten post-hoc after it was admitted to be in error. One journal flatly refuses to investigate. Two others closed their inquiries with zero published data or analysis. Even the one that did take concrete action made no public communication regarding what transpired—I had to write to them personally in order to get an explanation, and even then they never addressed it publicly. In an ironic twist, it was the open-access PLoS ONE that appears to have taken its investigation most seriously, pulling five papers by Dr. Wang.
The quote from Derek Lowe at the start of this essay mentions that he believes peer review is “changing, slowly, in no small part due to sites like PubPeer and a realization of how many times people are willing to engage in such fakery.” I hope so, but I’m not sure that his optimism should be extended to editorial review. Let’s take a look at another passage from his article to get a sense of what I mean:
Now Cassava is a story of their own, and I have frankly been steering clear of it, despite some requests. To me, it’s an excellent example of a biotech stock with a passionate (and often flat-out irrational) fan club. In such cases, if you say bad things about a beloved stock then plenty of helpful strangers will point out that you are an idiot, a shill, an evil agent of the moneyed interests, and much, much more. There’s a detailed sidebar in the Science article on Cassava and on simufilam, which I recommend to anyone who wants to catch up on that aspect. That compound is supposed to restore the function of the protein Filamin A, which is supposed to be beneficial in Alzheimer's, and my own opinion is that neither the published work on this compound nor the conduct of the company inspires my trust. There’s an ongoing investigation into the work at CUNY, and perhaps I’ll return to the subject once it concludes.
I can hardly blame Dr. Lowe for deciding to steer clear of a motivated group of investors whose ringleader has repeatedly characterized whistleblowers as perpetrating a genocide against Alzheimer’s patients, but herein lies the problem. In the recent lawsuit that Cassava filed, the statements and actions by the editors of The Journal of Neuroscience, Neurobiology of Aging, The Journal of Prevention of Alzheimer’s Disease, Neuroscience, and even Molecular Neurodegeneration—yes, the same journal that informed me that their editors had found Cassava’s raw data unacceptable—are used as exhibits supporting a claim of defamation against the whistleblowing scientists. The secrecy and deference by journal editors cultivates a culture of hesitation and silence; why risk upsetting the nasty investors and putting yourself at risk of lawsuit when there’s no discernible evidence that anything will come of it?
As always, we reap what we sow. Alzheimer’s continues to evade our best efforts to find a treatment, and yet another high-profile disaster in Biogen’s aducanumab approval combined with the lackluster sequel in lecanemab appears to be sending a strong signal that the traditional approach is unlikely to make much of an impact. But it sent another concerning signal as well—that money and clout are having their way with the science, at the expense of people suffering from neurodegenerative disease. Can we turn the tide? Of course we can, but it won’t be easy, and progress will be made all the more difficult if we continue to gamble on a system of good faith designed to protect the delicate sensibilities of the researchers ahead of the integrity of the research.