When we created this site to write about things in science that need to be fixed, we chose not to start with publishing, even though it has been our preoccupation for decades. The scientific community has made it clear that they do not share our view that we would be better off if we dispensed with journals.
But it turns out the world isn’t willing to wait for scientists to come around. The journal system is on the verge of collapse - not because scientists are abandoning it, but because funders have lost patience with subsidizing a bloated and ineffective publishing apparatus that new technologies are rapidly rendering irrelevant.
To the vast majority of scientists who have never thought about what science would look like without journals, it is likely to be unnerving to enter this new reality without knowing how the many aspects of science currently dependent on journals can function well without them.
But as two scientists who have spent most of our professional careers thinking about this challenge and trying to operate in our own post-journal worlds, we can reassure you that we are ready for it, that it will be fairly easy to navigate as individual researchers, and that we will soon find we are far better off once we do.
Assuming the concerns people raised when we proposed getting rid of journals voluntarily are the same concerns they have now, we offer the following practical advice.
Without journals, where will I publish my work?
Publishing in a post-journal world is a solved problem. The basic infrastructure already exists on the preprint servers (arXiv, bioRxiv and the like) that have been operating as repositories for scientific manuscripts for decades. While barebones in their features, they are stable and robust, easy to use, and inexpensive to operate. Neither authors nor readers have to pay to use them, and there are already funded projects underway to improve the underlying technology and to add new functionality.
The inevitable move to make these sites the point of primary publication that will accompany the demise of publishers will dramatically accelerate science by removing the artificial and unnecessary delays in communicating results inherent to journals. And freeing scientists up to write only for the people who want to understand and build on their work - rather than just evaluate it - will inevitably make papers more accessible, useful and enjoyable to write.
Who will decide when my work is good enough to "publish"?
You and only you. As it should be.
The defining - and best - feature of preprint servers is that they do not try to litigate the rigor of the science in a paper, or try to decide whether it will ultimately prove important, before posting it. They just post it. We should embrace this, and fight the temptation to reinstate gatekeeping criteria and systems that disempower authors without providing any value.
Pre-publication peer review is the worst feature of journals, and we should not allow it to survive in any form. Rather than facilitating the discovery of work that stands the test of time, it actively prevents it by prioritizing consensus over innovation and conformity over creativity. No process, no matter how well intentioned or structured, can determine in advance what works will prove important. Our pretending to the world and to ourselves that it can has done immense damage to science and how it is perceived in the world.
So there won’t be any peer review? How will I know if the work I'm reading is any good?
We have leaned far too much on our current system of peer review to govern our own use of the literature. If someone else’s work is important to yours, there is no alternative to reading it and coming to your own conclusion about whether you think their data are reliable and their claims valid. There is not, has never been, and never will be, a system that can absolve you of the need to read papers, use data, conduct experiments, think about results, and share insights you find valuable. Doing anything else is an abdication of your responsibility as a scientist.
This doesn’t mean you shouldn’t make use of other people’s assessments - these can be incredibly valuable. But we’ve been denying ourselves the full benefit of our collective wisdom by pretending that only the opinions of three often arbitrarily selected peer reviewers matter, and that the only evaluations of a paper that count are the ones carried out before most people have had a chance to read it.
Peer review in a post-journal world won’t look anything like it does at journals. Instead of reviewing papers only on demand, you should read the papers you choose and, when you have something you’d like to share about one - whether an in-depth review, an observation, a short comment, an experience with the data or methods, or anything else conceivably of use to other scientists - share it.
bioRxiv already has a decent system for collecting reviews and comments. We both routinely read papers with public reviews available on bioRxiv, and it is an infinitely superior experience to blindly trusting peer reviews that you never see. It is also eye opening to see how the views on a paper evolve as more and more people read and think about it, and as the context around it changes with time. Once you see how much we have been crippling not just science but ourselves with the stupid system we have today, you will not be able to imagine going back.
If you’re worried about being overwhelmed with information - don’t be. There are already efforts to manually synthesize collections of public reviews of papers and capture the range of views of the people who have reviewed them (rather than reducing them to a publish/don’t-publish decision), and this is something existing AIs can now reliably automate at scale.
Baseline checks of papers can also already make heavy use of AI - for example, checking for the inclusion of relevant data, metadata, and code, for the appropriateness and completeness of citations, for proper statistical reporting, and more. Importantly, these are tools authors can use to improve the quality of their own preprints rather than relying solely on centralized screening. And the breadth and quality of this kind of assisted evaluation will only continue to improve.
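To make the idea concrete, here is a minimal sketch of what an author-side baseline check might look like. The specific patterns and check names are illustrative assumptions for demonstration, not an established standard, and a real tool would use far richer models than regular expressions.

```python
import re

# Illustrative patterns for a baseline manuscript self-check.
# These heuristics are assumptions for the sketch, not a standard.
CHECKS = {
    "data availability": re.compile(r"data (are|is) available|data availability", re.I),
    "code availability": re.compile(r"code (is|are) available|github\.com|zenodo", re.I),
    "exact p-values": re.compile(r"p\s*[<=]\s*0?\.\d+", re.I),
}

def baseline_report(manuscript_text: str) -> dict:
    """Return which baseline elements a manuscript appears to contain."""
    return {name: bool(pat.search(manuscript_text)) for name, pat in CHECKS.items()}

example = (
    "We compared the two groups (p = 0.03). "
    "All code is available at github.com/example/analysis."
)
print(baseline_report(example))
# → {'data availability': False, 'code availability': True, 'exact p-values': True}
```

An author running something like this before posting would immediately see, for instance, that a data availability statement is missing - exactly the kind of quality improvement that doesn’t require a gatekeeper.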
How will I decide what to read?
This is a manufactured problem.
Scientists managed to navigate the literature for centuries without journal editors doing it for them. While they didn’t have to deal with the volume of papers and other forms of scientific output that we have today, they also didn’t have the technologies to search, organize and synthesize the literature that we have today, and which are getting better and better by the hour.
We all have to rebuild the intellectual muscles that extensive filtering has allowed to atrophy. Evaluating and navigating the literature independently is an essential skill for scientists at any level, and if you need journal editors to do it for you, you probably shouldn't be in science.
How will we decide which scientists we should hire, fund, promote, and select?
The fact that this question always comes up is the most compelling evidence for why the existing system needed to be destroyed, and why we are rejoicing at its imminent demise.
Instead of asking how you’re going to decide who should be considered a good and worthy scientist without journal titles, you should really ask yourself why you ever thought this was a good idea.
To identify scientists who stand out and should receive scarce resources or opportunities in any context, we need to maintain sensitivity to exceptionalism. But what sounds reasonable, agreeable, and comfortable to a large group is unlikely to identify outliers. The current system doesn't optimize for this value - it systematically destroys it by forcing every paper through a single cookie-cutter form of highly structured peer review. There is nothing worse in science than the idea that it’s ok to outsource decision making to journals, the epitome of consensus thinking. And if the death of journals makes that harder to do, it’s a welcome change.
Removing the distorting effects of journal-based selection will also make evaluating other scientists more accurate: without the crutch of journal titles, we will once again have to understand how other scientists think, how they integrate new ideas, and how self-critical they are about their own theories - qualities that matter infinitely more than the opinions of peripheral experts on past work.
We have both been systematically excluding proxies from our evaluation of other scientists across academia and industry in all the contexts in which they are typically used - hiring, funding, promotion, awards, and selection for leadership positions - and find it easier to identify the people best suited for a role when our judgment is not clouded by coarse filters.
TL;DR: How to be a scientist in a post-journal world
Preprint your work. Write for the scientists who will build on your research, not for editors and reviewers who will judge it.
Read broadly and think critically. Develop your ability to discover (including via next generation tools) and evaluate work independently rather than relying on others' filtering.
Share your insights publicly. When you find valuable work, make useful connections, or develop critiques that others might benefit from, share them openly—not through formal peer review, but as part of the ongoing scientific conversation.
Trust your scientific instincts. Use your own judgment about what should be shared or built upon to maximally advance your field.
Evaluate people richly. Take advantage of the disappearance of journal titles and other crude assessments of people’s past work to interrogate scientists with open-ended questions relevant to the tasks at hand, and pay attention to emergent signals more predictive of future outcomes.
Resist the urge to go backwards. The only way we can screw this up is if we try to recreate what’s been lost. We all know the old system wasn’t good, and that it’s time to try something new. We may have failed to make the change happen ourselves, but we can seize the opportunity now that it is here.



I have been in higher ed since 1980 and a faculty member since 1991. I have never been told in all that time that what matters most is not scientific excellence, but the color of one's skin and the contours of one's ancestry. Sure, there have been national organizations that have set quotas for steering committees, and I've seen a small number of papers where the authors battled a bit internally about whether it was ok to say something regarding race, even though it was supported by data, or whether they had to interpret a negative association between race and outcomes as an indication of bias. Those were all worked out for the best by the time of publication. I also had one grant get a really bad review because one reviewer thought we were not taking equity seriously, even though we did everything we could in the area we were researching. That was probably the worst of my experience. Now that we are seeing anti-DEI laws, I am seeing far more distortion of science and restrictions on what gets done.
In my experience, what is most distorting and weakening science is the set of incentives around papers and grants. Academia has now put papers and grants above excellence in science, at least in the healthcare fields I work in. When the incentive is to get as many grants as possible and publish as many papers as possible, science gets distorted. Why refine a paper when you can publish it and then publish a follow-up later? Why take the time to do a careful causal analysis when you can just do an "associational" study - it is easier, faster, and still accepted at even top journals. I also see a lot of faculty doing stuff and then asking "How can we turn this into a paper?", which is ok sometimes, but not at the frequency with which it is done. I personally have 3-4 student-led papers that have been in the works for over a year that could have been published already, except that unlike most of the faculty I know, I look over all of the code and analysis myself... and I find major problems, problems that would never be noticed by reviewers, because they would not get the data or the code. Without both, plus the complete clinical database from which the queries extracted the data, they have no way of finding the issues. They will look at Table 1 and Table 2, succumb to the Table 2 fallacy, make a few suggestions for revisions, and move on.
This is a compelling manifesto for a post-journal scientific world. The argument is clear and I agree with all of it. Thinking aloud, it might be worth addressing 'transitional safeguards': how early-career researchers are evaluated without journal proxies, how fields with regulatory constraints (clinical, dual-use, or human-subjects research) adapt, and how permanence, prestige, credit, and discoverability are guaranteed. Including a minimal economic model and practical milestones for the transition would make this feel less like a manifesto and more like an operational guide. I am aware that a single post cannot address all concerns, and I am sure there will be more posts providing clear thoughts on how to operationalize beyond the compelling and provocative stance of the first manifesto.