Do you seriously not get how telling scientists "You don't understand what you're doing" is attacking us? Here's a bit of advice: if you're not a member of the field (and I have little doubt that you're an outsider looking in), you don't get to tell us how we operate.
I am a statistician who works closely with scientists. I know very well how science operates, at least in the fields I am familiar with, which, admittedly, do not include geology, which I take it is your field. Indeed, in fields that depend heavily on statistics, such as experimental psych, in some ways I understand certain crucial aspects of the field's methods better than many of its practitioners do. I'm currently in the process of writing a paper critical of certain practices in experimental psych.
We DO get the issue. What YOU fail to understand is the myriad of OTHER issues associated with this topic. Science isn't cheap, and no one--NO ONE--pays for replication.
Funding problems don't change the fact that unreplicated research findings should, in general, not be considered proven. If medicine and experimental psych don't do what needs to be done to change whatever faulty practices they are using, then they will continue to produce many (arguably, mostly) invalid results. Sorry if you find this an inconvenient truth.
Secondly, not all experiments CAN be repeated--at least, not exactly.
You should try actually reading what I write. I have already said that replications don't have to be exact.
Furthermore, the demand that we replicate experiments before accepting their results flies in the face of logic. I invite you to Google Strong Inference and read the wiki page or the PDF that comes up to gain an understanding of why. Your idea has been considered. It's been rejected, for numerous reasons that you haven't given due consideration.
First of all, I am not "demanding" that anybody do anything. That would be silly. All I am doing is pointing out that the scientific practices of some fields have led those fields to publish a great deal of invalid results. One reason for this is their reliance on frequentist significance testing, which does not permit inferences about the truth of a hypothesis to be drawn from a single experiment alone. Frequentist significance testing requires replication. Other methods of statistical inference do not. Perhaps this "strong inference" you mention is one of them, but whatever it actually is, it has not been widely (or at all) adopted in any scientific field I am familiar with.
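To make the point about single experiments concrete, here is a quick toy simulation. The numbers in it (the fraction of tested hypotheses that are true, the effect size, the sample sizes) are assumptions I made up purely for illustration, not estimates for any real field; the qualitative point is what matters.

```python
# Toy simulation: when few tested hypotheses are true and studies are
# underpowered, a large share of single "significant" results are false
# positives. All numbers below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 20_000   # number of simulated independent studies
prior_true = 0.10    # assumed fraction of tested hypotheses that are true
effect_size = 0.3    # assumed standardized effect when the hypothesis is true
n_per_group = 30     # assumed sample size per group
alpha = 0.05         # conventional significance threshold

true_pos = false_pos = 0
for _ in range(n_studies):
    h_true = rng.random() < prior_true
    shift = effect_size if h_true else 0.0
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(shift, 1.0, n_per_group)
    p = stats.ttest_ind(treatment, control).pvalue
    if p < alpha:
        if h_true:
            true_pos += 1
        else:
            false_pos += 1

significant = true_pos + false_pos
print(f"'significant' results: {significant}")
print(f"fraction that are false positives: {false_pos / significant:.2f}")
```

With these made-up numbers, roughly two thirds of the results that clear p < 0.05 come from hypotheses that are false, which is exactly why a single significant result, absent replication, tells you very little about whether the hypothesis is true.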
Peer review is not a substitute for rational evaluation of the report, which is what your proposal boils down to.
Put words in other people's mouths much?
This worship of the peer review process that you're perpetuating is actually damaging to educating people about science...
Um, yeah, I guess you do. I hardly worship the peer-review process. I think it is badly broken, and I have said so.
Peer review means that the paper is good enough.
It doesn't even mean that. Arsenic bacteria, anyone?
When you actually publish something, you'll learn your error.
Um, I've published plenty, and continue to. But thanks. It's always fun to read condescending comments from ignoramuses. And by the way, I am occasionally asked to referee papers, which, if I'm not mistaken, you have stated you are not asked to do. So watch the unwarranted assumptions, eh?
I'm not discussing hypotheticals here; I've been through the process. Yes, the reviewers often comment on the validity of certain arguments--but they DO NOT determine whether the paper is true or not. They CAN'T; for one thing, reviewers are professional scientists and don't have that kind of time. For another, new data can always overthrow even our most cherished ideas.
Sorry, but you don't know what you're talking about. If a reviewer believes the conclusions are invalid, they will recommend that the paper be rejected or substantially revised. My girlfriend, who is on the editorial board of several scientific journals and reviews hundreds of papers a year, estimates that she has in the past recommended rejection of 20% of manuscripts because the authors' conclusions were not supported by the work they performed (she seems to be sent better papers lately). For an example in which the reviewers failed to do just that, see the aforementioned arsenic bacteria paper, which should have been rejected but was actually published in Science.
It's trivially obvious. They can ONLY provide the most recent data (and the evaluation thereof). They can't provide data more recent than the most recent data; time travel doesn't exist.
That is indeed trivially obvious, so you should stop writing it, as no one has suggested that it is anything other than trivially obvious.