49 Comments
notacc

Personally, I think all of the arguments under "Journals are the Problem" do not actually support that proposition. Let's walk through them:

1. "Illusion of truth and finality: Publication is treated as a stamp of approval. Mistakes are rarely corrected. " But the same is true of other sources. Blog posts and news paper articles are routinely shared and referred to as truth, when they are actually much less likely to be so! Have you tried getting a journalist to print a correction on their work? It is all but impossible. And even harder for bloggers. Unfortunately, people do not like admitting to mistakes. It is a cultural and social issue to fix across the board: fixing it is entirely orthogonal to the journal system. Journals actually provide a *bit* of accountability here with editorial review of claims, where-as preprints and blogs authors have no obligation or incentive to admit mistakes.

2. "Artificial scarcity: Journals want to be first to publish, fueling secrecy and fear of being “scooped.”": Not only is this a human trait not limited to the journal system, it is almost famously not limited to the journal system? When people have new ideas, or new results, they want to be first. This is true in almost any context. The first blog post on an issue is important, the first book, the first newspaper article. I mean, journalists *invented* the concept of a "scoop" - it's a word that we scientists have merely borrowed from them!

We should absolutely convince scientists to consider the importance of a work regardless of its novelty, and journals should value high-quality independent reproductions of other works. But we need to convince scientists of that *either way*: this cultural shift itself has nothing to do with the journal system.

3. "Insufficient review that doesn’t scale": The alternative we have now is even worse. Most peer reviewed publications get 2-4 reviewers: but the median preprint gets 0. Written review of other works simply isn't valued by the scientific community currently as it should be. Again, this is a cultural value that needs to change. Right now, one of the only reasons scientists value it at all is because they are placed within the journal system that requests it of them: remove that, and there will be no incentives for it, and people won't do it just because.

It is <1% of scientists (like me) who ever share public reviews of other scientific works. Most scientists do not like doing this. I like doing this. You might like doing this. Even we don't do it as often as we should, though. This is a culture that needs to change, and we need to have structural answers for how to change it, instead of just killing the existing system and hoping that people will suddenly start liking to do it.

4. "Narrow formats: Journals often seek splashy, linear stories with novel mechanistic insights. A lot of useful stuff doesn’t make it into public view, e.g. null findings, methods, raw data, untested ideas, true underlying rationale": this is true of all communication, especially in the sciences. Scientists do this with preprints.

Personally, I have never felt this pressure. I have never really felt like I needed to omit analyses or experiments for social reasons. I omit analyses and experiments that are irrelevant, or unrelated, or highly caveated, but for the purpose of accurately communicating the scientific points as best as possible, and for the purpose of not confusing the reader. Cases like this are even less common in the last decade with the rise of supplementary text and supplementary material, where you can share additional details to your heart's content.

If this were true, wouldn't all scientists just load up the preprint form of their article with all of this great stuff, or publish short preprints on it? But nobody does that, even though there are currently exactly 0 barriers to doing so. People need *structural incentives* to change the kinds of work they share. Structural incentives are largely upstream of scientific culture. We can try to change the culture, but we also have to change the incentives.

5. "Incomplete information: Key components of publications, such as data or code, often aren’t shared to allow full review, reuse, and replication": But many journals do enforce this. They also enforce this way, way better than the alternative, where there is of course no enforcement! We see this all of the time: companies that simply post white papers without code or data, because they cannot publish it in a journal without the code. Or, we see preprints that are posted without the data, and without the code, all of the time. There are reverse exceptions in journals (like the AlphaFold paper), which is why it is crucial for our journals to be non-profit and society run, so that they can be democratically beholden to scientists who can make sure the journal rules are actually enforced.

So what is the answer, then? My answers are: preprints, rigorous open peer review, and non-profit journals run by scientists and scientific societies. These journals need to be able to clearly convey the perceived quality of the article by the editors and reviewers (either by journal name, as is currently done, or by stamps). They need to be institutionally supported, and not have Article Processing Charges. We can get there - a lot of journal systems, like the ASM journals, are close. It can be done correctly.

Most importantly, the answers need to be democratic: they need to be run by scientists and beholden to their members, as scientific societies and their journals are. Blogs are not democratic. Blogs 2.0 are not democratic. AI sure as fuck isn't democratic.

"AI" is not the answer - dear lord. Blogs are not the answer. Social media is not the answer. All of these are systems that are even more biased, more uneven, more distorted, and worse at communicating science than the current peer reviewed journal system. But the journals we can create in the future would be far better than either of our current options.

Signed,

Someone who has walked the walk and written a lot of biorxiv comments

Jessica Polka

Thanks so much for this thoughtful reply. While we’re betting that starting fresh (vs revising the existing journal system) is the best path forward, no one has enough evidence to say conclusively what the right path is, and we’re hoping our efforts here help to illuminate that. A few reactions to your specific points:

1. Illusion of truth and finality

You’re right that text presented without context is often implicitly trusted, and it’s compounded by deferring judgement to gatekeepers who create an illusion of truth and finality. The solution is to focus on the dialog instead of editorial decisions reached in private with limited information. Surfacing and collecting additional opinions (whether supportive or contradictory) along with the original content is the way - your comment here is one example of how design choices can support this.

2. Artificial scarcity

You’re right that humans want to be first, and that’s probably a good thing from the perspective of promoting information sharing (after all, many people post preprints to avoid being scooped). Journals just raise the stakes by keeping information secret during a protracted review process and by creating a winner-take-all culture in which novelty is a prerequisite for publication.

We should measure what matters: If we want to change the values system (which I agree is necessary), we first have to find ways of surfacing things like reproducibility so that scientists who put in the effort can receive the benefits of doing so. If it’s not visible, it can’t be rewarded.

3. Insufficient review that doesn’t scale

I think we agree that journal review isn’t scaling, and I think it’s better to be upfront about that as opposed to the situation we have now, where peer review rings and predatory journals benefit from a false sense of trust and security that journal content has been properly reviewed. I don’t have a problem with the median preprint (or other research output, for that matter) getting 0 reviews. Reviews are most needed when the work is likely to be viewed or potentially re-used by non-experts. For some articles, for example those with immediate policy or public health implications, we need more reviews than 2-4, and open dialog is the best way to get that. For others about niche topics, I think it’s fine for them to exist without reviews until someone decides to follow up in the course of their own work. After all, reproduction and reuse are superior trust indicators. That said, at the point that such close reading does happen, it would be really beneficial to actually collect that expert’s opinion and share it publicly, too.

4. Narrow formats

Totally agree about the need for structural incentives to reinforce a culture of publishing more than what fits a narrow narrative, and I wish more people were immune to that pressure. At Astera we are changing the incentives by making future funding contingent on open sharing of the kind of information you’re describing.

5. Incomplete information

I too am grateful that some journals do this! But not all journals are equal, and they don’t always make it clear to readers what has and hasn’t been checked (and they have little incentive to highlight what they *aren’t* doing). Other ways of assessing reproducibility exist - including the Automated Screening Working Group, which rapidly ran automated checks of COVID-19 preprints. Post-publication methods like this can be run independently and create common expectations for what materials can and should be shared.
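
To make that concrete, here is a minimal sketch of what one such automated check might look like, e.g. a screen for transparency statements in a manuscript's text. The check names and keyword patterns are purely illustrative assumptions of mine, not the Working Group's actual tooling:

```python
# Minimal, illustrative sketch of an automated post-publication screen.
# The check names and keyword patterns below are assumptions for illustration,
# not the Automated Screening Working Group's actual tooling.
import re

def screen_manuscript(text: str) -> dict:
    """Flag whether common transparency statements appear in a manuscript's text."""
    checks = {
        "data_availability": r"\bdata (are|is) available\b|\bdata availability\b",
        "code_availability": r"\bcode (is|are) available\b|github\.com/",
        "ethics_statement": r"\bethics (approval|committee|statement)\b|\bIRB\b",
        "limitations": r"\blimitations?\b",
    }
    return {name: bool(re.search(pattern, text, re.IGNORECASE))
            for name, pattern in checks.items()}

example = ("Data are available on request. Code is available at "
           "https://github.com/example/repo. Limitations are discussed.")
print(screen_manuscript(example))
# {'data_availability': True, 'code_availability': True,
#  'ethics_statement': False, 'limitations': True}
```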

Re: your answer, scientific societies have historically been some of the most vocal opponents of open science (Kling et al. 2001). I’m also not sure that science should be strictly "democratic." Scientists, like all people, can fall victim to groupthink. Democratic processes (depending on how they are structured) can sometimes suppress all but the prevailing viewpoint, and as I think you and I both agree, getting to the truth is often a process that requires time. Suppressing discourse by excessive gatekeeping - even if it’s democratic - isn’t the answer. We need tools that help us to see, compare, digest, and substantiate that discourse, instead of hiding it behind closed doors.

notacc

Thanks for the thoughtful response!

Of course, I want all endeavours in scientific publishing and communication to succeed, so I wanted to give some perspectives to "bridge the divide"....

Have you read much of Rapid Reviews Infectious Diseases? https://rrid.mitpress.mit.edu/

This is a fantastic novel science publishing / communication platform that shares many features of what seems to also be in your vision: (a) it has no accept/reject decision, (b) it has an emphasis on open and citable peer review, (c) it has no conscious or overt emphasis on impact per se (although of course, impact very much underlies the editorial decisions to recruit reviewers for a paper).

I love RRID, and I think pretty much everyone who comes across it loves RRID!

I think it may be useful to you as a sounding board. Many scientists haven't heard of RRID. You could send it to someone who is struggling with your vision, ask if they like it. They'll almost certainly really like it. You can then describe each of the additional aspects to your vision, and ask which of those they don't like, to get further in your conversation.

For example, it's worth thinking about why many scientists who are totally supportive of RRID have strong reservations about the new eLife model.

Is it because they hate removing editorial accept/reject? Is it because they hate open peer review? Or overlay journals? Well, we know the answer to all of those is NO, because RRID has all of those.

Finally, right now we are in a period where science is under unprecedented attacks from anti-science political movements, whether it is on climate change, or vaccines, conservation, GMOs, or virology. Currently, AI is making this problem worse, not better, and the scales of publicity and communication will likely continue to tip in favor of pseudoscience and charlatans: https://www.scientificamerican.com/article/elon-musks-ai-chatbot-grok-is-reciting-climate-denial-talking-points/

This represents an unprecedented and unique opportunity for any new efforts in science communication. Will your model give a platform to, and enable, scientific critiques and responses to pseudoscience? RRID did so admirably right out of the gate, in ways that traditional journals were slow or reluctant to: https://rrid.mitpress.mit.edu/pub/r94z275c/release/2

I think it is likely that the primary metrics by which any new science publishing approach will be judged is in its ability to boost high quality science, while also boosting critiques of low-quality science and even pseudoscience. I hope you can make strides here.

Just as a final hint there.... here is a recent (fairly rare!) failure of the traditional journal system: https://www.nejm.org/doi/full/10.1056/NEJMsb2506929

It presents quite an opportunity - and a test - for science publishing models!

Edit: I just wanted to say I'm using a plural "you" here, to refer to anyone working on novel publishing systems, to which I think these thoughts would be relevant, honestly not trying to single any one approach out. I think this is a broader conversation relevant to many sci publishing reformers.

Seemay Chou

Really appreciate you engaging with us on this. There's so much to figure out. Will check out the stuff you shared!

Austin Cole

Are y'all creating some metric of intellectual output to compare against federally funded projects?

Papers are the currency of the realm within federal funding institutions, but the ultimate source of research capital is the public, and *they* seem to collectively agree that we're not getting what we pay for atm (and who knows, maybe we're not, are we really generating $80B of progress a year from the NIH?)

I think we (benefactors of science) would be better served with more competition between funding institutions for performance on thoughtful metrics

I don't know what those metrics are, but things like: model success as measured by peer adoption, successful tech commercialization, revealed disease mechanisms, discovered therapeutics... seem right

Seemay Chou

Great question. This is an unsolved challenge, and a very hard one: figuring out the right proxies that represent such a wide range of value without creating yet another gameable metric that doesn't serve us. I don’t know the answers, but I want so much for scientists to reflect on how they think about the value of their own work first. How do THEY know if they're providing value and doing what they hope to achieve? It's definitely not a one-size-fits-all solution. I like the direction you’re going in with your ideas - have you tried any of them for your own work?

Austin Cole

Present bias is super strong :) In a better world we could create test institutions that allocate funding based on several of these new stats and then find out who does best. Like DARPA & the internet / Bell & the transistor, or Florentine patronage & dome construction / Ibn Sina's experimental method.

I think we (the collective we) forget the cost of counterfactuals. $80B is a ton of money and we could experiment with new allocation mechanisms using only a fraction of that

"Do other scientists use my model" could probably be measured empirically / semantically

"Do people pay for this enabled tech" is definitely measurable

Internally, we work under the same incentive structures as everyone else. Sell useful tech, make a medicine, communicate that to funders & other cooperative teams. The fashion / hype / detachment from impact in Uni drove most of my team to the company (fashion matters but less so)

Seems like the right sort of question for an irl working group :)

Seemay Chou

your company could be a "working group"? :) let me know if yall ever want help publishing any lessons yall have.

but i think i've definitely converged on a broader point you're making, which is that utility is the highest bar for both rigor and impact. the more you can point your laser towards that, align with that, the clearer it becomes. i understand that's hard for basic science because it can be long timescales, but we have a lot of improvement for how we think about that. we shouldn't be totally divorced from that convo

Seemay Chou

one of our mantras internally is "the highest bar is utility" (broadly defined)

Maia

I'm not sure exactly what you mean when you say "the highest bar is utility," but I potentially disagree. I strongly believe that curiosity has intrinsic value, and that curiosity-driven science requires freedom from justifying its utility.

Eswar Krishnan, MD

Love this essay

Alexander MacInnis

Thank you, Seemay, for doing and writing this.

I'm an independent epidemiologist specializing in autism.

I have spent a great deal of time analyzing papers in top journals and found an almost unbelievable set of problems with many famous papers.

The big journals have demonstrated that they try very hard to avoid constructive criticism of their articles. They publish some letters that point out very minor flaws, but a well written, well researched letter that points out major flaws? Not a chance!!! That would show that the editor and reviewers did not do their jobs properly. And yes, I have the receipts.

As a result we are doomed to believe some completely bogus stories. Especially regarding autism.

Any new and better system must inherently avoid the problem wherein editors avoid publishing comments that might make the journal look bad. The new system must relish well-thought-out, evidence-based constructive criticism.

Seemay Chou

Overcoming widely-accepted models in science is super hard. I think even scientists are sometimes understandably nervous about this, since it can sometimes be the case that the loudest critics are not necessarily presenting sound evidence-based arguments. But, in my view, those are situations in which we should lean even more into opening up debate -- not discourage it. It's scary, yes, but the only path to truth is by running towards it in the open. Shutting down discourse does the opposite of what's intended and actually breeds distrust in the long run. I think a lot of people disagree with me on this point, and we will just have to find out by talking about it more and running actual experiments :)

Ewan

You're not wrong! Anyone who's had experience of trying to overturn a corpus of blatantly erroneous research has run into a Kafka-esque series of gatekeepers, from uncooperative authors to cynical editors. There are some journals trying to change things (e.g. eLife) at least. Full transparency is one thing that would help: all peer review comments and author responses, including revision steps, should be published, and all editorial decision making should be reported along with statements of justification.

Seemay Chou

I agree, I really wish that some of the reviews for even my own past published manuscripts could have been shared. Sometimes I got critiques that were very valid and echoed some of my own open questions about the work. But once the pubs got accepted, a lot of that super valuable dialog got lost. I tried to bring it up at conferences and whatnot, but I would have preferred that the journals present it as part of the public record. You can of course revise the manuscript, but it hits differently to elaborate on a million caveats as opposed to pointing out the reviews from different experts. The last thing I would want is to overstate claims, and I think public reviews (and the underlying back and forth as you suggest) would be a step in the right direction.

ELHS Institute - AJ

Great thoughts! Particularly in healthcare and medicine, there is an urgent need for a more efficient publishing model and ecosystem. The promise of generative AI to transform healthcare depends on the generation, publication, and utilization of evidence for every clinical task. However, this process remains so inefficient that only a small fraction—perhaps 5%—of healthcare GenAI/LLM studies published to date have used data from real clinical settings.

Of course, patient data privacy and security must be protected, and more broadly, responsible AI is essential to prevent harm to humanity. But the requirements for responsible AI significantly raise the bar for clinical AI research and the publication of resulting data and evidence, turning knowledge generation and dissemination into a bottleneck in the transformation of healthcare through GenAI.

To overcome this challenge, innovative approaches are needed to facilitate open science in healthcare—so that ordinary clinicians can efficiently generate robust evidence from real-world data and rapidly publish and share that knowledge. I hope Astera’s open science experiments will soon bring good news to health services research and healthcare delivery.

Seemay Chou

such a good point, i think the community often forgets about how much science never makes it out for public consumption because there aren't really publishing norms to begin with. providing new avenues isn’t just about replacing options for traditional academics, it’s also for going from 0-->1 for sectors that don’t typically publish to begin with. this will fundamentally add to the pie. hope we can help spark some change for clinical work.

ELHS Institute - AJ

Yes, that would be great. One low-hanging fruit would be creating a new ecosystem for the creation, publication, updating, and dissemination of GenAI-based medical guidelines across all clinical areas. The current process cannot keep up with the pace of GenAI, making change inevitable.

Tobias Kuhn

Thank you for this very interesting and important post.

The problems you describe are definitely real, and we need to address them. It's great to see big entities like Astera and Arcadia taking such a clear stance because solving these problems is difficult. People don't like to let go of their habits, particularly if those habits made them "successful" researchers. It will also be a big transition, and those are difficult and take time, particularly when collective incentives are involved, where individuals are punished by the system if they deviate.

I hope this helps gain momentum and that we can all work together to eliminate these vast inefficiencies in current science communication (and thereby science itself).

As a specific remark, I would add one point to your "structural features" list. You allude to this point when you mention "linked data" and "narratives", but I think it would deserve to feature as a separate point in this list:

- Unstructured information: The scientific findings as well as their context are not represented in a structured manner that would allow for things like automatic and precise aggregations, updates, and question answering. For example, when evidence (or counter-evidence) for a particular relation between a given gene and a certain disease is presented, the entities (gene and disease) and the relation (e.g. "tends to contribute to") should be formally expressed with community-agreed identifiers, and this statement should be linked in an equally structured way to the equipment, scientific method, raw data, significance values, and researchers involved in arriving at that finding. Only then can we reliably aggregate all these findings (like live literature reviews!), have all researchers interested in these topics be updated in real-time, provide these data as input for future neuro-symbolic AIs, and generally build reliable and provenance-aware tools on top of this (for researchers as well as the general public).
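
To sketch the shape of what I mean, here is a small, purely illustrative example using Python's rdflib library. The predicate names and identifiers are placeholders of my own; in practice they would come from community-agreed vocabularies (e.g. gene and disease ontologies, and PROV for provenance):

```python
# Sketch of one finding expressed as structured, provenance-aware statements
# (nanopublication-style). All identifiers and predicate names are placeholders,
# not community-agreed vocabularies.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("https://example.org/")          # placeholder vocabulary
PROV = Namespace("http://www.w3.org/ns/prov#")  # W3C provenance ontology

g = Graph()
g.bind("ex", EX)
g.bind("prov", PROV)

assertion = EX["finding-001"]
gene = EX["GENE-X"]        # in practice: a community gene identifier
disease = EX["DISEASE-Y"]  # in practice: a community disease identifier

# The assertion itself: gene -- tends to contribute to --> disease
g.add((assertion, RDF.type, EX.GeneDiseaseAssociation))
g.add((assertion, EX.subjectGene, gene))
g.add((assertion, EX.relation, EX.tendsToContributeTo))
g.add((assertion, EX.objectDisease, disease))

# Context and provenance: significance, method, raw data, researcher
g.add((assertion, EX.pValue, Literal(0.003, datatype=XSD.double)))
g.add((assertion, EX.method, EX.genomeWideAssociationStudy))
g.add((assertion, PROV.wasDerivedFrom, URIRef("https://example.org/dataset/raw-42")))
g.add((assertion, PROV.wasAttributedTo, URIRef("https://orcid.org/0000-0000-0000-0000")))

print(g.serialize(format="turtle"))
```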

Seemay Chou

Thanks for highlighting this; I totally agree. I think every scientist has experienced that moment -- even with their own work -- where they step back and see different patterns that their initial assumptions caused them to miss. Unstructuring information could really enable more of this at a broader scale. We should make it easier for ourselves to deconflate data from interpretations, since we want to allow the interpretations to evolve with more information and/or different perspectives.

It's one of the (many) things I enjoyed about doing X-ray crystallography at the start of my career. I think there was more built-in deconflation of the data (diffraction) from models (the submitted structure) from interpretations (any follow-up experiments or interpretations around it), and different ways to share these distinct assets. Of course we can still do a better job of having clarity around these aspects.

But anyway, I always feel pretty nervous about drawing any mechanistic conclusions with partial information, so the ability to think about these scientific processes separately was helpful for me. Thanks for your comment!

Tobias Kuhn

Yes! Deconflation of data/models/interpretations/etc. is just part of the story though. These should also be structured, both individually and in relation to each other, in some formalism ultimately grounded in formal logic. And if we do this for the whole relevant context, including provenance, uncertainty levels, etc., and then apply solid (symbolic) AI, then we don't need to be nervous anymore about "mechanistic conclusions", or at least much less than we should be now with human reasoning and LLMs doing guesswork at interpreting narratives :)

And we now have most of the technology to achieve this: Semantic Web and knowledge graph technology, general ontologies and vocabularies like Schema.org and PROV, fast and mature graph/SPARQL query engines, and approaches such as nanopublications for sharing such results. There is still quite a bit of technical work to be done, but the biggest challenges left are social and organizational ones.
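
And as a rough illustration of the aggregation this enables (a "live literature review" in miniature), here is a SPARQL query over two findings expressed in the same placeholder vocabulary as my sketch above, with rdflib's built-in query engine standing in for a real triple store:

```python
# Illustrative aggregation over nanopublication-style findings, using the same
# placeholder vocabulary as the sketch above and rdflib's built-in SPARQL engine
# (a production setup would use a dedicated triple store).
from rdflib import Graph

TURTLE = """
@prefix ex: <https://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:finding-001 a ex:GeneDiseaseAssociation ;
    ex:subjectGene ex:GENE-X ; ex:objectDisease ex:DISEASE-Y ;
    ex:pValue "0.003"^^xsd:double .

ex:finding-002 a ex:GeneDiseaseAssociation ;
    ex:subjectGene ex:GENE-X ; ex:objectDisease ex:DISEASE-Y ;
    ex:pValue "0.04"^^xsd:double .
"""

QUERY = """
PREFIX ex: <https://example.org/>
SELECT ?gene ?disease (COUNT(?finding) AS ?n)
WHERE {
    ?finding a ex:GeneDiseaseAssociation ;
             ex:subjectGene ?gene ;
             ex:objectDisease ?disease .
}
GROUP BY ?gene ?disease
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")
for gene, disease, n in g.query(QUERY):
    # e.g. https://example.org/GENE-X https://example.org/DISEASE-Y 2
    print(gene, disease, n)
```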

Maia

Hi Seemay, thanks for this article! I appreciate the bold steps you are willing to explore here -- a scientific approach to solving scientific publishing :)

My name is Maia. I am an independent scientist and the founder of Cosimo Research (https://www.cosimoresearch.com/). I am also exploring different ways of publishing results, funding, and executing science.

We are currently running the Big Taping Truth Trial, the first large study about the efficacy of mouth taping. We have been publishing ongoing results on a live dashboard as we gather data (https://www.cosimoresearch.com/active-studies). New participants can join at any time and the data is continuously analyzed. We are also in the process of making a public-facing video that will summarize initial findings once we reach a certain data threshold.

Without any institutional affiliation, I have been able to receive high-quality peer feedback on a pre-registration on ResearchHub, incentivizing the reviews with ResearchCoin (https://www.researchhub.com/post/3938/review-the-pre-registration-for-the-big-taping-truth-trial).

I have published my work on various online platforms such as the DeSci Journal, ResearchHub, Seeds of Science newsletter, and Cosimo's "Journal of Non-Institutional Science".

These are some of the alternative methods I have explored so far, and I am excited to see what other opportunities develop in the near future!

Seemay Chou

So awesome for you to be figuring out the solutions that best fit your science needs. Curious -- as you've been doing this, what's the top thing you wish were available now that isn't? Or aspect of any of the tools above that you wish were easier?

Maia

Thank you!

One difficult thing that comes to mind is: How do we evaluate/defend the quality of a study beyond relying on credentialism? What would be a better filtering method that is based on the merit of the research itself?

I was surprised to struggle to find an existing, comprehensive science rating system (please do let me know if you know of any). So for a while now, we have been working on developing a new study evaluation framework. It is based on three dimensions (reproducibility, methodological rigor, and statistical power). It's still a draft right now but it definitely needs feedback and I'd be happy to share more info.

One of the key ideas that I think should be emphasized is that the value of a given study should be considered within the context of the existing research in the field. E.g. if there have been no past studies on a topic, a small/flawed study may still represent a valuable contribution to existing knowledge.
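
Just to make the idea tangible, here is a rough, purely hypothetical sketch of how such a rubric could be encoded. The weights, scales, and context adjustment are placeholders I made up for this comment, not our actual draft framework:

```python
# Purely hypothetical sketch of encoding such a rubric; the dimension names follow
# the comment above, but the weights, scales, and context adjustment are my own
# placeholders, not Cosimo's actual draft framework.
from dataclasses import dataclass

@dataclass
class StudyScore:
    reproducibility: float        # 0-1: data/code/materials shared, protocol clarity
    methodological_rigor: float   # 0-1: design, controls, blinding, pre-registration
    statistical_power: float      # 0-1: sample size relative to expected effect
    prior_studies_on_topic: int   # how much evidence the field already has

    def composite(self) -> float:
        """Average the three quality dimensions, then weight by context:
        a small or flawed study counts for more when the topic is unstudied."""
        base = (self.reproducibility + self.methodological_rigor + self.statistical_power) / 3
        context_weight = 1.0 / (1.0 + 0.1 * self.prior_studies_on_topic)
        return base * (0.5 + 0.5 * context_weight)

# The same modest pilot study scores higher in an unstudied area than in a crowded field.
print(StudyScore(0.6, 0.5, 0.4, prior_studies_on_topic=0).composite())   # 0.5
print(StudyScore(0.6, 0.5, 0.4, prior_studies_on_topic=50).composite())  # ~0.29
```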

So to answer your question, I would love a better system for judging study quality (one that allows non-institutional scientists to participate).

Premal Shah

This really resonates with me, especially your point about how "the currency of value in science has become journal articles" and how this has created such perverse incentives. You've nailed the structural problems perfectly.

But I keep coming back to something that I think is central to why this is so hard to fix: prestige is actually a key feature of the current academic model, and journals are currently exploiting this. That exploitation is what's led to all these problems you've highlighted so well. But here's the thing - any attempt to revise or replace the current system WITHOUT addressing the core function it serves in providing prestige is going to run into serious headwinds.

I think about my own work, and honestly, not all papers are the same. Some of my stuff ranges from a fun idea I cooked up in the back of a van traveling to a conference that got published a couple months later as an interesting hypothesis to explore new biology, all the way to multi-year collaborations across multiple labs with hundreds of thousands of dollars of investment to uncover genuinely new biology. Even I value them quite differently, and I want others to as well. Journals serve this purpose of layering prestige - they help signal these differences.

So prestige per se isn't the issue, but how it's currently utilized, exploited, and weaponized is, especially via journals. While I'm totally on board with making the whole academic publishing ecosystem more nimble, fairer, and democratized, doing so without replacing this key function they currently serve seems really hard.

Your experiments at Arcadia and Astera are fascinating test cases for exactly this challenge. How do you think about preserving some form of meaningful differentiation while escaping the current system's grip?

Seemay Chou

Hey Premal, I get what you're saying. The way I think about prestige is that it's a proxy we use for measuring our impact, which is actually the thing most scientists care about (and what downstream decisions and career stuff should be based on). But prestige is a very corruptible signal for that. So I agree that we HAVE to figure out how we as a community interact with the world to measure our impact in a wide variety of ways, which requires ways to evaluate the quality/rigor of work and how it moves the field forward. This is a big, important problem to figure out. I have no answers for you right now because scientists, as a community, need to help figure it out! Very much appreciate the question and hope it helps catalyze more ideas on this front.

Emin Orhan

I don't get this. When you solve a small problem, you get small prestige. When you solve a big problem, you get big prestige. Are you worried that your colleagues aren't smart or competent enough to appreciate that you solved a big problem without a "Nature" stamp?

Akhil Jalan

You are absolutely correct and I am very excited to see what new structures for science you all come up with!

Alex

This is an issue I’ve also been thinking about for many years. I came back to academia after a career in the media and recognised that science had adopted the same metrics as the media - readership, sharing etc. If you are essentially creating entertainment (as in the media) that is not an unreasonable set of metrics. If you are a scientist, with the object being to communicate your work in a way that allows others to assess it and build on it, it is a terrible set of metrics.

My conclusion was to design octopus.ac - a publishing platform designed to create the incentives to encourage what we DO want in science (things like removing the pressure to create a nice, neat narrative; making it as easy to share any digital object - such as code or data - as it is to share text; making peer review transparent and post-publication so that it is no longer an antagonistic relationship; removing cues that create bias in assessment, such as the gender or institution of authors, etc.). As you say, it makes the unit of publication much smaller; as others have said, it helps people find things, as everything is in one place with digital-first infrastructure. And it is fast to publish to, and free to use.

However, I don’t think of Octopus as a replacement for journals. Not all of them, anyway. Because journals, with their short narrative summaries, do a job of dissemination. They pick the impactful findings and summarise them for those whose practice might change, or who may want to then go and find out more. Octopus is a parallel system - the primary research record where the full details of the work are recorded, where review and research assessment can (and should) happen. But it’s never going to be a ‘good read’! It’s like the organised supplementary information for papers. The crucial thing is that it should exist and it should be where research assessment is done, because we need to be assessing research on its intrinsic quality, not on how good a story it makes (and we need to be assessing every part of the research separately - peer reviewing methods, data collection, analysis, etc.).
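
To give a feel for what a 'smaller unit of publication' means in practice, here is a rough conceptual sketch (my own illustration for this comment, not Octopus's actual data model): each element of the work is its own publication, linked to what it builds on, and each can be reviewed on its own merits.

```python
# Illustrative sketch only (not Octopus's actual data model): smaller, linked
# publication units that can each be reviewed on their own merits rather than
# as one narrative paper.
from dataclasses import dataclass, field

@dataclass
class ResearchUnit:
    uid: str
    kind: str                 # e.g. "problem", "hypothesis", "method", "data", "analysis"
    title: str
    authors: list[str]
    links_to: list[str] = field(default_factory=list)   # uids of earlier units this builds on
    reviews: list[str] = field(default_factory=list)    # open, post-publication reviews

# A small chain: a method published against a problem, then data against the method.
problem = ResearchUnit("p1", "problem", "Why are X populations declining?", ["A. Researcher"])
method = ResearchUnit("m1", "method", "Field survey protocol v2", ["B. Technician"], links_to=["p1"])
data = ResearchUnit("d1", "data", "2024 survey counts", ["B. Technician"], links_to=["m1"])
data.reviews.append("Counts are consistent with the stated protocol; raw files are complete.")
```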

I also think that the Octopus model can change grant funding models and will change professional careers - allowing specialisation much more easily (no more ‘middle author’ roles that discourage people from specialising in technical areas etc).

As others have said, though, the barriers to adoption are entirely structural. My work now that Octopus is built and growing is (apart from financial sustainability - and it’s fantastic to read about your funding) talking to institutions, funders, and others about how they can help shift the incentive structure that researchers feel is imposed on them, which means that many/most feel they haven’t got the resources to experiment and ‘take a risk’ by publishing in any way other than the strictly traditional and accepted one.

Seemay Chou

Thanks for this. I've checked out octopus before and really appreciate your POV around building something distinct, not trying to replace. I think this is the way in general -- I'm hoping at least at our organizations we can really move away from the limiting framework of replacing or refining. There's so much that's fundamentally not served by journals even at their best. Great place to start!

Jessica Sacher, PhD

This was a great read! The case for abolishing journals is such a well discussed topic (at least in my corner of the internet) that I wasn’t expecting a lot of new ideas, but I was wrong - this had tons of new food for thought, and I especially enjoyed the thought exercise of bringing AI tools into the picture at various points! Realizing I haven’t really updated my science publishing worldview given all these new tools yet. Would love to read more on this.

I agree with mandating over encouraging in this context - otherwise no one will change practices. I also think only funders can make this happen, and that it’s the most important thing a funder could do. So I applaud that you are doing it!

I still get that it would be a big risk for scientists to trust that they’ll always be funded by Astera/Arcadia. Going permanently all in on any funder would be risky...

(On the other hand if orgs like Astera/Arcadia end up being the only funders left, then the journal issue will be solved quickly!) (and I mean this only a little jokingly... feels more likely each day)

Still, I don’t think it’s actually that risky to not publish in journals for a few years. I’ve personally gained far more professionally from a few years’ blogging about my lab work and publishing in no journals during that time. And it literally made me better in the lab, almost immediately, as I suddenly was writing about my decisions in real time - so they needed to be backed by real reasons. Now that I’m back to a somewhat traditional academic role, where there can be months of work we don’t ever say anything about, I feel myself losing/missing that edge. And I kind of can’t believe we can get away with this ‘style’ of science...

Anyway, what a dream for us all to work on a whole team doing science as you describe, communicating the reasoning in real time - or better, to work in a whole scientific ecosystem doing it that way. Much harder work in my view. But much more rewarding and better for progress and public trust. I think many scientists will get on board with this over time - just need to set the example.

JMC

You wrote this using ChatGPT lol

Kadubu Kadubu

"I’m a scientist. Over the past five years, I’ve experimented with science outside traditional institutes. From this vantage point, one truth has become inescapable." - AI-generated text. If you are a scientist, you should have edited this text. Most of the above commentary appears to be AI-generated.

Toby Green

"It's bringing up lots of stuff that I hadn't found on Medline". This recent comment from a senior professor at a UK university shows that there is already a lot of valuable content outside journals because she was looking at a relatively new database that only indexes non-traditional content. Her comment illustrates how journal-centric the whole *ecosystem* is, i.e. not just the publishing bit, but also the vital discovery part. Medline, like most traditional discovery services, is dominated by journals to the point where few users even realise that there's more stuff out there. Their sheer size of the *ecosystem* means users are likely to be so overwhelmed with what they find there, few even think to look elsewhere - with the result they miss out on 'lots of stuff'. So, what is all this non-journal stuff? Well, it's papers and reports published on the websites of institutions which sit outside the academy, like intergovernmental and non-governmental organizations, think tanks, research centres, governments, cities, companies and even individuals who are publishing under their own name in blogs and podcasts. Outside the academy careers do not depend on one's bibliography so there's the freedom to publish in non-journal formats - which digital tools and the web has made both quick, cheap and simple to do. How much is there? Lots. The database this professor was looking at has indexed 19M items since it was launched in 2020. Is it all peer-reviewed? Most of it has gone through some form of review process, yes, but I always think another question is better. Can you trust it? And here's where non-journal content has a unique characteristic - it carries the logo of the institution that published it. If you trust the logo, you can trust the content. If you don't know the logo, check out their website. (Much the same, I always think, can be said of journals.) Bottom line? There are thousands of research organizations successfully publishing their findings without the need for journals - it's great that Asteria has now joined them - so there is a well-trodden, journal-free, publishing path that the academy can take. And it's wider than you think.

Toby Green

Coherent Digital

ps. the database the professor looked at is called Policy Commons. It now has a newly-launched twin, Applied Science Commons. Full disclosure: I'm their publisher.
