Discussion about this post

notacc:

Personally, I think none of the arguments under "Journals are the Problem" actually supports that proposition. Let's walk through them:

1. "Illusion of truth and finality: Publication is treated as a stamp of approval. Mistakes are rarely corrected. " But the same is true of other sources. Blog posts and news paper articles are routinely shared and referred to as truth, when they are actually much less likely to be so! Have you tried getting a journalist to print a correction on their work? It is all but impossible. And even harder for bloggers. Unfortunately, people do not like admitting to mistakes. It is a cultural and social issue to fix across the board: fixing it is entirely orthogonal to the journal system. Journals actually provide a *bit* of accountability here with editorial review of claims, where-as preprints and blogs authors have no obligation or incentive to admit mistakes.

2. "Artificial scarcity: Journals want to be first to publish, fueling secrecy and fear of being “scooped.”": Not only is this a human trait not limited to the journal system, it is almost famously not limited to the journal system? When people have new ideas, or new results, they want to be first. This is true in almost any context. The first blog post on an issue is important, the first book, the first newspaper article. I mean, journalists *invented* the concept of a "scoop" - it's a word that we scientists have merely borrowed from them!

We should absolutely convince scientists to consider the importance of a work regardless of its novelty, and journals should value high-quality independent reproductions of other works. But we need to convince scientists that this matters *either way*: the cultural shift itself has nothing to do with the journal system.

3. "Insufficient review that doesn’t scale": The alternative we have now is even worse. Most peer reviewed publications get 2-4 reviewers: but the median preprint gets 0. Written review of other works simply isn't valued by the scientific community currently as it should be. Again, this is a cultural value that needs to change. Right now, one of the only reasons scientists value it at all is because they are placed within the journal system that requests it of them: remove that, and there will be no incentives for it, and people won't do it just because.

Fewer than 1% of scientists (like me) ever share public reviews of other scientific work. Most scientists do not like doing this. I like doing this. You might like doing this. Even we don't do it as often as we should, though. This is a cultural shift that needs to happen, and we need structural answers for how to bring it about, instead of just killing the existing system and hoping that people will suddenly start liking to do it.

4. "Narrow formats: Journals often seek splashy, linear stories with novel mechanistic insights. A lot of useful stuff doesn’t make it into public view, e.g. null findings, methods, raw data, untested ideas, true underlying rationale": this is true of all communication, especially in the sciences. Scientists do this with preprints.

Personally, I have never felt this pressure. I have never felt like I needed to omit analyses or experiments for social reasons. I omit analyses and experiments that are irrelevant, unrelated, or highly caveated, but only to communicate the scientific points as accurately as possible and to avoid confusing the reader. Cases like this have become even less common over the last decade with the rise of supplementary text and materials, where you can share additional details to your heart's content.

If this were true, wouldn't all scientists just load up the preprint form of their article with all of this great stuff, or publish short preprints on it? But nobody does that, even though there are currently exactly zero barriers to doing so. People need *structural incentives* to change the kinds of work they share, and structural incentives are largely upstream of scientific culture. We can try to change the culture, but we should try to change the incentives too.

5. "Incomplete information: Key components of publications, such as data or code, often aren’t shared to allow full review, reuse, and replication": But many journals do enforce this. They also enforce this way, way better than the alternative, where there is of course no enforcement! We see this all of the time: companies that simply post white papers without code or data, because they cannot publish it in a journal without the code. Or, we see preprints that are posted without the data, and without the code, all of the time. There are reverse exceptions in journals (like the AlphaFold paper), which is why it is crucial for our journals to be non-profit and society run, so that they can be democratically beholden to scientists who can make sure the journal rules are actually enforced.

So what is the answer, then? My answers are: preprints, rigorous open peer review, and non-profit journals run by scientists and scientific societies. These journals need to clearly convey the editors' and reviewers' perceived quality of an article (either by journal name, as is currently done, or by stamps). They need to be institutionally supported, without Article Processing Charges. We can get there - a lot of journal systems, like the ASM journals, are close. It can be done correctly.

Most importantly, the answers need to be democratic: they need to be run by scientists and beholden to their members, as scientific societies and their journals are. Blogs are not democratic. Blogs 2.0 are not democratic. AI sure as fuck isn't democratic.

"AI" is not the answer - dear lord. Blogs are not the answer. Social media is not the answer. All of these are systems that are even more biased, more uneven, more distorted, and worse at communicating science than the current peer reviewed journal system. But the journals we can create in the future would be far better than either of our current options.

Signed,

Someone who has walked the walk and written a lot of biorxiv comments

Austin Cole:

Are y'all creating some metric of intellectual output to compare against federally funded projects?

Papers are the currency of the realm within federal funding institutions, but the ultimate source of research capital is the public, and *they* seem to collectively agree that we're not getting what we pay for at the moment (and who knows, maybe we're not: are we really generating $80B of progress a year from the NIH?).

I think we (the benefactors of science) would be better served by more competition among funding institutions on thoughtful performance metrics.

I don't know what those metrics are, but things like model success as measured by peer adoption, successful tech commercialization, revealed disease mechanisms, and discovered therapeutics... seem right.
